Reliability is a fundamental part of the design calculation for both wired and wireless networks. Yet traditional wired reliability testing methodologies are generally applied to both, despite a key and impactful difference between them: wired links are effectively time-invariant and wireless links are not.
What does that mean for wireless? There is no viable way to capture degradation in testing, so reliability goals end up being met by building a large margin for error into reliability assurance, prediction and validation.
Wireless reliability is based on budgeting power; however, in determining reliability assurance, more power is often used than required.
For example, power in a Wireless Personal Area Network (WPAN) is described in terms of decibels (dB) and decibel-milliwatts (dBm). The simplified relationship between transmitted power (Pt) and received power (Pr) is expressed, in terms of antenna gains (Gt, Gr), frequency (f) and distance (d), by the log-distance path loss model (all terms in dB, d in meters):

Pr = Pt + Gt + Gr − 20 log10(4πf/c) − 10n log10(d)
Traditionally, the path loss exponent (n) is taken from empirical tables; typical published values include:

- Free space: 2
- Urban area cellular: 2.7 to 3.5
- In-building, line-of-sight: 1.6 to 1.8
- In-building, obstructed: 4 to 6
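As a rough illustration, the relationship above can be sketched in a few lines of Python. The function name and the 1 m reference distance are assumptions for this example, not part of any standard API:

```python
import math

def received_power_dbm(pt_dbm, gt_dbi, gr_dbi, freq_hz, dist_m, n=2.0):
    """Estimate received power with the log-distance path loss model.

    Path loss (dB) = 20*log10(4*pi*f/c) + 10*n*log10(d), where the first
    term is the free-space loss at a 1 m reference distance.
    """
    c = 3.0e8  # speed of light, m/s
    pl_ref = 20 * math.log10(4 * math.pi * freq_hz / c)
    path_loss_db = pl_ref + 10 * n * math.log10(dist_m)
    return pt_dbm + gt_dbi + gr_dbi - path_loss_db

# Example: 0 dBm transmitter, unity-gain antennas, 2.4 GHz, 10 m indoors (n = 3)
print(round(received_power_dbm(0, 0, 0, 2.4e9, 10, n=3.0), 1))
```

Note how sensitive the result is to n: the same 10 m link loses an extra 10 dB for every unit increase in the exponent, which is why the empirical values above matter so much.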
However, this formula doesn't factor in the practical ways greater reliability is attained, such as effectively increasing radiated power by designing a better antenna or placing the antenna higher, which is especially important for wearable electronics. For example, a head-mounted device like Google Glass transmits better than a shoe-mounted sensor like Nike+.
Increasing radio power sounds like a simple workaround to implement, and it is; however, it is constrained by battery life during average and peak power periods, and by regulatory limits on transmitter power, antenna gain and cable losses, which together determine effective isotropic radiated power (EIRP).
The Federal Communications Commission (FCC) allows antenna field strength up to 12,500 µV/m at 3 m if transmitting a simple control signal, or 5,000 µV/m at 3 m if the transmission is shorter than 1 second and the silent period is either 30x the transmission period or 10 seconds, whichever is longer. Even more power is allowed if the transmission is less than 0.1 second.
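Field strength limits like these can be translated into an equivalent EIRP using the standard far-field relation EIRP (watts) = (E × d)² / 30, with E in V/m and d in meters. A minimal sketch (the function name is illustrative, not a real API):

```python
import math

def field_strength_to_eirp_dbm(e_uv_per_m, dist_m=3.0):
    """Convert a far-field strength limit to equivalent EIRP in dBm.

    Uses the free-space relation E = sqrt(30 * EIRP) / d, so
    EIRP (watts) = (E * d)^2 / 30.
    """
    e_v = e_uv_per_m * 1e-6          # microvolts/m -> volts/m
    eirp_w = (e_v * dist_m) ** 2 / 30.0
    return 10 * math.log10(eirp_w * 1000)  # watts -> dBm

# The two FCC limits above, expressed as EIRP:
print(round(field_strength_to_eirp_dbm(12500), 1))  # control signals
print(round(field_strength_to_eirp_dbm(5000), 1))   # short periodic transmissions
```

Both work out to well under a milliwatt of EIRP, which puts the regulatory ceiling in perspective against the link budget numbers below.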
Doubling the antenna height for more power is a possibility in some wireless applications and adds roughly 6 dB to the link budget, but the path must fall within the appropriate Fresnel zone (such as 32 x h1 x h2 for 2.4 GHz). Doubling the range requires an additional 9 to 12 dB of link budget. These demands could impinge upon the recommended wireless minimums of a 25 dB signal-to-noise ratio and a 10 dB system operating margin (20 dB preferred).
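The 9 to 12 dB figure for doubled range follows directly from the path loss exponent: the added loss is 10·n·log10(2) ≈ 3n dB, so exponents of 3 to 4 give 9 to 12 dB. A quick check:

```python
import math

def extra_loss_for_doubled_range(n):
    """Added path loss (dB) when range doubles, for path loss exponent n.

    From the log-distance model: 10*n*log10(2*d) - 10*n*log10(d)
    = 10*n*log10(2), roughly 3 dB per unit of n.
    """
    return 10 * n * math.log10(2)

# Exponents between 3 and 4 cost roughly 9 to 12 dB per doubling:
for n in (3, 4):
    print(n, round(extra_loss_for_doubled_range(n), 1))
```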
There are other ways to optimize reliability assurance, including:
- Increasing the degree of error-checking, though you are limited by the protocol you choose
- Reducing the packet size and slowing the data rate, but this will slow down transmission speed and likely dissatisfy customers
- Using a polled protocol, in which packets are expected at specific intervals
- Switching master channels after a number of failed attempts
- Employing Dynamic Spectrum Access
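The channel-switching strategy in the list above can be sketched as a simple retry loop. Everything here is hypothetical scaffolding; in particular, `send` stands in for whatever transmit-and-wait-for-ACK primitive a real radio stack provides:

```python
def send_with_channel_hop(send, channels, max_retries=3):
    """Deliver a packet via the first working master channel.

    `send` is a hypothetical callback: send(channel) -> bool (True on ACK).
    After max_retries consecutive failures on one channel, hop to the
    next; return the channel that worked, or None if all failed.
    """
    for channel in channels:
        for _ in range(max_retries):
            if send(channel):
                return channel
    return None

# Example: a link where only channel 37 currently gets through
working = send_with_channel_hop(lambda ch: ch == 37, [11, 26, 37])
print(working)
```

The point of the sketch is the trade-off: each hop costs retries (latency and battery), which is exactly the kind of degradation a time-invariant reliability model never sees.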
Sophisticated wireless reliability prediction tools are available, but most are time-invariant and focused on hard-wired systems. Degradation cannot be captured because:
- Each sub-system is calculated individually, often using mean time between failures (MTBF)
- Overall system-level reliability is calculated from the series/parallel configuration and dependencies, with no attention paid to the connections themselves
Wireless network performance fluctuates in both the short- and long-term. Connections could, indeed, be the weak point. Connection error rates could hold the key to predicting reliability:
- Wired networks: Bit Error Rate (BER) of 10^-12
- Wireless networks (1st gen): Bit Error Rate (BER) of 10^-3
- Wireless networks (Current): Bit Error Rate (BER) of 10^-8
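To see why those BER figures matter at the system level, consider the chance that a whole packet arrives intact. Assuming independent bit errors (a simplification) and an illustrative 1,500-byte packet:

```python
def packet_error_rate(ber, bits_per_packet):
    """Probability that at least one bit in a packet is corrupted,
    assuming independent bit errors."""
    return 1 - (1 - ber) ** bits_per_packet

# A 1,500-byte (12,000-bit) packet at each BER from the list above:
for ber in (1e-12, 1e-3, 1e-8):
    print(ber, packet_error_rate(ber, 12000))
```

At a first-generation BER of 10^-3, essentially every packet is corrupted and the protocol survives only through retransmission; at 10^-8, roughly one packet in ten thousand is hit, which is why the gap between wired and wireless error rates still dominates reliability prediction.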
While BERs may suggest a basis for reliability prediction, experience with existing wireless technology does not necessarily confirm it, for reasons including:
- Cell phone manufacturers do not have to deal with Industrial, Scientific and Medical (ISM) radio frequencies
- Bluetooth/Wi-Fi performance is not consistent
- Military and industrial applications depend on extensive field testing
Mesh networks dramatically improve reliability by providing parallel paths. Even with only 50% path reliability, 10 parallel paths yield roughly 99.9% reliability (1 − 0.5^10 ≈ 0.9990).
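The parallel-path arithmetic is simple enough to verify directly, assuming the paths fail independently:

```python
def mesh_reliability(path_reliability, num_paths):
    """Probability that at least one of num_paths independent
    parallel paths succeeds."""
    return 1 - (1 - path_reliability) ** num_paths

# Ten parallel paths at 50% reliability each:
print(round(100 * mesh_reliability(0.5, 10), 2))  # ~99.9%
```

Because the failure probabilities multiply, each added path cuts the residual failure rate by the same factor, which is the exponential improvement meshes are valued for.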
Validating reliability is generally accomplished through testing. The two prevalent testing methodologies, however, present unique challenges for wireless:
Testing standards ensure uniformity, but that uniformity is predicated on rigorous protocols designed to be consistent and repeatable rather than to replicate the real-world conditions wireless applications actually face.
Mandated compliance testing is required by governmental bodies for all radiation-emitting technology to ensure it meets regulations and does not unduly interfere with other devices. These mandates have little to do with reliability, yet most companies stop testing once compliance is reached. Wireless applications rely on new technology in new locations that will not be captured through compliance testing.
The wireless landscape is increasingly complex, but Sherlock Automated Design Analysis™ software can help simplify your approach to reliability assurance, prediction and validation. Find out what Sherlock can do for you! Request your free trial now by clicking the button below.