Wireless Sensor Network Topologies

The development of network technologies has prompted sensor folks to consider alternatives that reduce costs and complexity and improve reliability. Early sensor networks used simple twisted shielded–pair (TSP) implementations for each sensor. Later, the industry adopted multidrop buses (e.g., Ethernet). Now we’re starting to see true web-based networks (e.g., the World Wide Web) implemented on the factory floor.

Figure 1. In point-to-point network topologies, each sensor node requires a separate twisted shielded–pair wire connection. The cost is high, configuration management is difficult, and nearly all the information processing is done by the host.

As wireless sensors become real commodities on the market, new options or new arguments for old options are causing professionals to consider network strategies once ruled out. Let’s look at the three classic network topologies (point-to-point, multidrop, and web), assess their strengths and weaknesses, and look at how the rules have changed now that wireless systems are coming online.

In addition, to build functional sensor networks, you’ll probably have to integrate hardware and software from multiple vendors (see the sidebar “Network Questions”). So along with everything else, you have to come to terms with standards and protocols—those that exist, those that are emerging, and those needed to ensure interoperability on the factory floor.

Point-to-Point Networks
Theoretically, these systems are the most reliable because there is only one point of failure in the topology—the host itself (see Figure 1). You can improve the system by adding redundant hosts, but wiring two hosts can be a problem. The 4–20 mA standard allows multiple readout circuits if standard loads are used at each readout. Problems can arise if readout devices load the circuit beyond its capability, but most designers are familiar with the limitations and are sufficiently careful.
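
To see what “loading the circuit beyond its capability” means in practice, here is a minimal sketch in Python, using assumed but typical numbers (a 24 V loop supply, roughly 10 V of transmitter headroom, 250 Ω per readout), that checks whether the loop can still drive full-scale current once extra readouts are added.

```python
# Minimal sketch: checking whether extra readout devices overload a 4-20 mA loop.
# The supply voltage, transmitter headroom, and 250-ohm readout resistance are
# assumed typical values, not taken from any particular standard.

def loop_ok(readout_resistances_ohms, wire_resistance_ohms=10.0,
            supply_v=24.0, transmitter_min_v=10.0, full_scale_ma=20.0):
    """Return True if the loop can still drive full-scale (20 mA) current."""
    total_r = sum(readout_resistances_ohms) + wire_resistance_ohms
    # Voltage dropped across readouts and wiring at full-scale current.
    drop_v = total_r * (full_scale_ma / 1000.0)
    # The transmitter must keep at least transmitter_min_v across itself.
    return drop_v <= (supply_v - transmitter_min_v)

# Two standard 250-ohm readouts fit comfortably; a third pushes the drop
# past the 14 V the loop has to spare.
print(loop_ok([250.0, 250.0]))          # True:  10.2 V dropped, 14 V available
print(loop_ok([250.0, 250.0, 250.0]))   # False: 15.2 V dropped
```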

Figure 2. In a multidrop network, each sensor node puts its information onto a common medium. This requires careful attention to protocols in hardware and software. The single-wire connection represents a potential single-point failure. But some vendors supply redundant connections to mitigate this potential problem.

Some networks provide frequency-modulated (FM) signals on the wires to carry multiple sensor readings on separate FM channels. Some standards (e.g., the HART bus) support multiplexing of digital signals on the existing analog wiring in older plants. These architectures blur the distinction between point-to-point and multidrop networks.

Early wireless networks were simple radio-frequency (RF) implementations of this topology. These networks used RF modems to convert the RS-232 signal to a radio signal and back again. Fluke (Everett, Washington) developed a digital voltmeter that could be configured to accept a voltage signal and transmit the signal over a dedicated radio frequency channel. The reliability of these implementations was sometimes suspect because most were designed with simple FM coding. Interference and multipath propagation effects caused significant degradation in factory environments, so many networks proved to be unreliable unless designers were particularly careful. The Federal Communications Commission licensed companies and devices to operate at the allocated frequencies.

Complete wireless local area networks (LANs) were implemented using this technique. These were successful in the office environment but didn’t fare as well in factories. Many designers implemented remote data acquisition systems with this topology by using a data concentrator in the field to feed the data to a radio transmitter for transmission to the host, where the signals were demultiplexed into the original sensor signals.

Multidrop Networks

Figure 3. In a web topology, all nodes are potentially connected to all other nodes. Connectivity among a large collection of sensors gets complex because all nodes must have a connection to all other nodes. Some connections can be eliminated by using repeaters and routers to make virtual connections. The World Wide Web is a good example of this topology.
Multidrop buses began to appear in the late ’70s and early ’80s. One of these, Modbus from Modicon (Schneider Automation, North Andover, Massachusetts), led the way into the industrial sphere, followed by several proprietary and open buses (e.g., the Manufacturing Automation Protocol, QBus, and VME Bus).

The emergence of intelligent sensors and microcomputers capable of operating in industrial environments irrevocably changed the sensor network landscape. Multidrop networks (buses) reduced the number of wires required to connect field devices to the host, but they also introduced another single point of failure—the cable. Several suppliers of industrial-grade products offered redundant cabling designs, but these came with an increase in complexity (see Figure 2).

Once the industry began the migration to multidrop buses, problems associated with digitization began to emerge. With the previous point-to-point systems, digitization occurred in the host, where a single clock could be used to time stamp when the analog signals from multiple sensors were acquired. With the distributed intelligence required to implement a multidrop network, synchronization of clocks became a critical issue in some applications. This remains an important design parameter for any distributed digital system.
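
As a toy illustration of the issue (assuming each node’s clock offset relative to the host has already been estimated by some separate exchange), the snippet below maps two raw node timestamps that appear 19 ms apart back onto the host’s timebase, where they turn out to describe the same instant.

```python
# Toy illustration (not any particular protocol): aligning samples that were
# time-stamped by independent node clocks. Each node's offset relative to the
# host is assumed to have been estimated elsewhere; here it is simply given.

estimated_offsets_s = {"node_a": +0.012, "node_b": -0.007}  # node clock minus host clock

samples = [
    ("node_a", 100.512, 3.1),   # (node, local timestamp in seconds, reading)
    ("node_b", 100.493, 3.4),
]

def to_host_time(node, local_t):
    """Convert a node-local timestamp to the host's timebase."""
    return local_t - estimated_offsets_s[node]

for node, t, value in samples:
    print(f"{node}: host time {to_host_time(node, t):.3f} s, reading {value}")
# Both readings refer to host time 100.500 s, so they can be treated as
# simultaneous even though the raw timestamps differ by 19 ms.
```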

Figure 4. An architecture consisting of a decoder for each channel and a direct-sequence spread-spectrum receiver can perform simultaneous sampling because the same baseband signal goes to each decoder. But the decoders represent a significant cost, power, and size limitation.

The introduction of Ethernet in the mid-80s was a landmark in standardization, if not technological innovation. A group of large companies agreed that the future of computer networking required an open interconnect standard that would allow multiple-vendor systems to work together with minimal difficulty.

Researchers looked closely at the carrier sense multiple access with collision detection (CSMA/CD) protocol when they investigated the behavior of networks under stress, but they considered most industrial applications too time critical for such a nondeterministic protocol. Now, fifteen years later, most factories have converted their shop floor networks to Ethernet because it is the best compromise between cost and performance. Many companies now offer solutions that use Ethernet to implement suitably robust industrial networks.
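
To make the nondeterminism concrete, here is a simplified sketch of the CSMA/CD cycle: sense the carrier, transmit, and on a collision wait a random number of slot times drawn from an exponentially growing range. The slot time and retry limit echo classic 10 Mb/s Ethernet, but the channel functions are stand-ins invented for the example.

```python
import random

# Simplified CSMA/CD sketch: the random backoff is what makes delivery time
# nondeterministic. channel_busy() and collided() stand in for the physical layer.

SLOT_TIME_US = 51.2   # classic 10 Mb/s Ethernet slot time
MAX_ATTEMPTS = 16

def send_frame(channel_busy, collided):
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while channel_busy():          # carrier sense: wait for an idle medium
            pass
        if not collided():             # transmit; collisions are detected during the send
            return attempt             # success: report how many tries it took
        # Binary exponential backoff: wait 0..2^k - 1 slot times (k capped at 10).
        k = min(attempt, 10)
        delay_us = random.randrange(2 ** k) * SLOT_TIME_US
        # A real MAC would pause here for delay_us microseconds before retrying.
    raise RuntimeError("frame dropped after 16 attempts")

# Example: the medium is idle, the first transmission collides, the second succeeds.
outcomes = iter([True, False])
print(send_frame(lambda: False, lambda: next(outcomes)))   # prints 2
```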

Wireless systems use the same types of protocols to implement multidrop topologies, simulating hard-wired connections with RF links. The IEEE-802.11 standard was the first to promise Ethernet-style interoperability for wireless networks. Many 802.11-based products, however, are not compatible at the over-the-air level.

Network Questions


Many issues impact network performance besides the hardware topology. To identify them, you have to ask the right questions.

Q Full duplex vs. half duplex vs. simplex—Can the nodes talk and listen simultaneously?

Devices that can talk and listen at the same time (e.g., telephones) are full-duplex devices. Citizens Band (CB) and other radio formats are normally half duplex—meaning they can talk and listen, but not simultaneously. In this case, some indication is usually required to let the other party know it’s okay to talk (e.g., by saying “over”). RS-232 provides separate transmit and receive lines and can operate full duplex; classic shared-medium Ethernet, by contrast, is half duplex, although modern switched Ethernet links run full duplex. Many modern devices can simulate full-duplex performance by switching between transmit and receive fast enough (in milliseconds) that humans can’t perceive the delay. Most cell phones are implemented with this fast switching strategy. Simplex systems communicate in one direction only. Half-duplex systems operate like simplex systems and then reverse the roles of transmitter and receiver.

Q Analog vs. digital—In what form does the signal enter the hardware medium?

In analog systems, the signal placed on the medium varies continuously (e.g., voice). Digital systems use an A/D converter to digitize the signal and send a data packet that uses 1s and 0s to represent the analog value. Digital transmissions offer such advantages as greater immunity to fading and noise and increased throughput. Analog systems potentially have better resolution, though, because no digitization error is involved. In the old analog telephone systems, the voice modulated the electrical resistance, and thus the voltage, across the carbon granules packed in the mouthpiece. Digital systems replaced the old analog phone circuits many years ago, except for the line from your phone to the first switching station. The 4–20 mA system is an analog channel, and RS-232 and Ethernet are digital.
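
The digitization error mentioned above is easy to quantify. The short example below assumes a 12-bit converter spanning 0–5 V (a typical but arbitrary choice) and shows that the reconstructed value can never be guaranteed closer than half a step to the original analog value.

```python
# Quantization error for an assumed 12-bit, 0-5 V converter: the host can never
# recover the analog value more precisely than about half of one step.

full_scale_v = 5.0
bits = 12
step_v = full_scale_v / (2 ** bits)          # about 1.22 mV per code

def quantize(v):
    """Return the analog value the host reconstructs from the digital code."""
    code = round(v / step_v)
    return code * step_v

original = 2.3456
reconstructed = quantize(original)
print(f"step = {step_v * 1000:.3f} mV, error = {(reconstructed - original) * 1e6:.1f} uV")
# The error is bounded by +/- step/2 (about 0.61 mV here); an analog channel has
# no such floor, which is the resolution advantage the text refers to.
```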

Q Baseband vs. broadband—Should you use a carrier to increase the number of channels that can be put on the network medium?

If the signal containing information is placed directly on the physical medium, the channel is called baseband. If the signal is placed on a carrier (modulation), the channel is broadband. Because many carriers can be placed on the same medium at different frequencies, a given hardware channel can carry many logical channels. Cable TV is a broadband network, with each TV channel on a different carrier frequency; the channel number simply designates that frequency. Optical fibers now carry many more channels than once thought possible by modulating information onto different wavelengths (colors) of light. Broadband systems are usually more complicated, so the tradeoff must be made with care. Standard Ethernet is a baseband bus, but wireless (radio) buses are usually broadband.
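
To make the distinction concrete, the sketch below (using NumPy and arbitrary illustration frequencies, not values tied to any real bus) places two tones on one medium by modulating each onto its own carrier and then inspects the spectrum: two logical channels, one physical channel.

```python
import numpy as np

# Broadband sketch: two independent baseband messages share one medium by
# riding on different carrier frequencies. All frequencies are illustrative.

fs = 100_000                     # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)   # 10 ms of signal

signal_a = np.sin(2 * np.pi * 500 * t)      # baseband message A (500 Hz tone)
signal_b = np.sin(2 * np.pi * 800 * t)      # baseband message B (800 Hz tone)

carrier_a, carrier_b = 10_000, 20_000       # two carriers on the same medium
medium = (signal_a * np.cos(2 * np.pi * carrier_a * t)
          + signal_b * np.cos(2 * np.pi * carrier_b * t))

# The spectrum shows energy only near 10 kHz and 20 kHz (the carriers and
# their sidebands): two logical channels on one physical channel.
spectrum = np.abs(np.fft.rfft(medium))
freqs = np.fft.rfftfreq(len(medium), 1 / fs)
for peak in freqs[spectrum > spectrum.max() * 0.5]:
    print(f"energy near {peak:.0f} Hz")
```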

Q Master-slave vs. peer-to-peer vs. broadcast—How should the nodes interact with each other and with the host?

In master-slave protocols, one node gives the commands, and another node or collection of nodes executes them. The host is usually the master, and the sensors and actuators are usually slaves. This protocol allows tight traffic control because no node is allowed to speak unless requested by the master, and no communication is allowed between slaves except through the master.

In a peer-to-peer network, all nodes are created equal. A node can be a master one moment and then be reconfigured at another time. Peer-to-peer configurations offer the greatest flexibility, but they’re the most difficult to control. Any node can communicate directly with any other node.

Broadcast networks are much like master-slave configurations, but the master can send commands to more than one slave at a time. Many industrial standards (e.g., IEEE-1451) are based on master-slave protocols with broadcast capability. Wireless systems can be implemented with any of these protocols.
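
A minimal sketch of a master-slave bus with broadcast might look like the following; the node names and the two-command vocabulary are invented purely for illustration, not drawn from any real protocol.

```python
# Master-slave sketch: slaves speak only when addressed by the master, and a
# broadcast delivers one command to every slave. Names and commands are made up.

class Slave:
    def __init__(self, name, reading):
        self.name, self.reading = name, reading

    def handle(self, command):
        if command == "READ":
            return (self.name, self.reading)
        if command == "RESET":
            self.reading = 0.0
            return (self.name, "ack")

class Master:
    def __init__(self, slaves):
        self.slaves = {s.name: s for s in slaves}

    def poll(self, name, command):
        # Only the addressed slave responds; slaves never talk to each other.
        return self.slaves[name].handle(command)

    def broadcast(self, command):
        # The same command reaches every slave in one operation.
        return [s.handle(command) for s in self.slaves.values()]

bus = Master([Slave("temp_1", 21.5), Slave("press_1", 101.3)])
print(bus.poll("temp_1", "READ"))    # ('temp_1', 21.5)
print(bus.broadcast("RESET"))        # every slave acknowledges the reset
```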

Q Circuit switched vs. packet switched—How long should a node own a communications channel?

Again, the old analog phone system is used as an example. These systems were circuit-switched networks. You dialed the number, and a circuit was established between the sender and the receiver. The circuit stayed connected as long as the phone call continued. When the parties hung up, the circuit was released and available for another connection.

Packet-switched networks route digital packets of information along different paths through the network. Each packet contains routing information so that the receiver can reassemble the packets into a complete message when they arrive. Complexity is high, but the potential for flexibility and improved channel use is also high. The Internet and World Wide Web are based on packet-switched networks.
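
As a small illustration of the idea, the sketch below gives each packet a made-up header (message ID, sequence number, and total count, not any real protocol’s format) and shows the receiver reassembling the message even though the packets arrive out of order.

```python
# Packet-switching sketch: the header carries enough information for the
# receiver to put out-of-order packets back together. Fields are illustrative.

packets = [
    {"msg": 7, "seq": 2, "total": 3, "payload": "URE"},
    {"msg": 7, "seq": 0, "total": 3, "payload": "TEMP"},
    {"msg": 7, "seq": 1, "total": 3, "payload": "ERAT"},
]

def reassemble(packets, msg_id):
    parts = {p["seq"]: p["payload"] for p in packets if p["msg"] == msg_id}
    total = next(p["total"] for p in packets if p["msg"] == msg_id)
    if len(parts) < total:
        return None                      # still waiting for missing packets
    return "".join(parts[i] for i in range(total))

print(reassemble(packets, 7))            # prints "TEMPERATURE"
```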

Web Networks

Figure 5. Simultaneous sampling is more difficult with this receiver architecture. The selected channel codes can be stored and stepped through so that each channel’s data gets to the data system bus.
The promise of the web topology (i.e., all nodes connected all the time) had to wait until vendors developed a way to interconnect nodes without dedicated wiring. A network of any appreciable size becomes infeasible if every connection requires its own wire run (see Figure 3). Early star topologies were successful as long as the star wasn’t too large. The World Wide Web illustrates what is possible, though, if you can use wiring that is already in place. The telephone network provides that connectivity in most parts of the country, although at less than suitable speeds in many locations.

The advantages of web connectivity for sensor networks become clear as the level of intelligence in each sensor increases. Cooperating sensors can form a temporary configuration that provides sufficient capacity to replace the host. Self-hosting networks then become self-configuring and finally, years from now, perhaps even self-aware. But several problems remain and are the topic of significant research, such as size and power consumption reduction, throughput and performance during transmissions, and algorithms for allocating priorities and authority.

In a wireless web network, individual nodes can potentially stay connected (over the RF medium) to many other nodes at once. How the network is configured at any instant then becomes purely a matter of software. In a code division multiple access (CDMA) network, the radios can receive all channels at once. Figures 4 and 5 illustrate the two simplest alternatives for implementing a CDMA-based data receiver.

The architecture suggested in Figure 4 requires a separate decoder for each channel, which means dedicating hardware to channels that may not be currently important but could be required later. Figure 5 eliminates the need for dedicated hardware but introduces the problem of simultaneous sampling. Each decoder samples the data stream looking for a particular channel code embedded in the chip stream. A single decoder must therefore either acquire a new data stream for each channel or store the baseband stream and decode it repeatedly with a different candidate code for each channel. Both implementations represent a compromise, and the right choice depends on the application.
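
Here is a toy rendering of the two alternatives, using four-chip orthogonal codes and a single data bit per channel (deliberately tiny, made-up values). The single-decoder path of Figure 5 stores the baseband chip stream once and correlates it against one candidate code after another; a Figure 4 style receiver would run the same correlations in parallel, one hardware decoder per channel.

```python
# CDMA despreading sketch with toy values: capture the summed baseband chips
# once, then step a single decoder (correlator) through the candidate codes.

codes = {                    # orthogonal spreading codes, one per channel
    "ch_a": [+1, +1, +1, +1],
    "ch_b": [+1, -1, +1, -1],
}
data_bits = {"ch_a": +1, "ch_b": -1}

# Transmit side: each channel spreads its bit with its code; the air sums them.
baseband = [sum(data_bits[ch] * codes[ch][i] for ch in codes) for i in range(4)]

def despread(stored_chips, code):
    """One decoder pass: correlate the stored chips against a candidate code."""
    correlation = sum(chip * c for chip, c in zip(stored_chips, code))
    return +1 if correlation > 0 else -1

# Stepping the single decoder through the codes recovers every channel's bit
# from the same stored samples, which is what preserves simultaneous sampling.
for ch, code in codes.items():
    print(ch, despread(baseband, code))   # ch_a +1, ch_b -1
```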

Network routing is a serious concern in web architectures. Because all nodes can’t reach all other nodes in a single hop, a repeating mechanism is required. The assigned input and output channels dictate to each node which signals are meant for its own use and which should be passed on to the next node. Routing is one of the things that makes web architectures more complicated to implement than the other topologies.
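
A node’s forwarding decision can be sketched as follows; the node IDs and the next-hop table are invented for illustration, and a real network would build and update such a table dynamically.

```python
# Repeating/routing sketch: consume packets addressed to this node, forward the
# rest toward their destination, drop anything with no known route.

NODE_ID = "B"
NEXT_HOP = {"C": "C", "D": "C"}     # from B, both C and D are reached via C

def handle(packet):
    """packet = {'dst': destination node ID, 'payload': sensor data}"""
    if packet["dst"] == NODE_ID:
        return ("consume", packet["payload"])   # meant for this node
    hop = NEXT_HOP.get(packet["dst"])
    if hop is None:
        return ("drop", None)                   # no known route
    return ("forward_to", hop)                  # repeat toward the next node

print(handle({"dst": "B", "payload": 22.4}))    # ('consume', 22.4)
print(handle({"dst": "D", "payload": 19.8}))    # ('forward_to', 'C')
```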

In sensor or mobile phone networks, nodes can come and go frequently. How the network responds to this reconfiguration has a major impact on performance and reliability. Mobile ad hoc networking is a hot topic in the research community because the ability to reconfigure on the fly improves the performance and robustness of any network. Without this technology, sensor networks will be severely limited in harsh environments, where connections can change quickly as the RF environment changes.

So What?
Network topologies usually work best when they map closely to the topology of the application. If the application looks hierarchical, then a hierarchical (point-to-point or multidrop) topology might be most suitable. But if the application looks like a collection of peers interacting and cooperating, then a web architecture might work best.

The potential for web connectivity in the sensor world seems most tempting. The dynamic nature provides the opportunity for cooperating sensors to form smart clusters that can work together to solve a problem, then reorganize to solve the next one. As the hardware and software technologies mature, you’ll see more and more web implementations showing up on factory floors. Watch for them.