Sensor Agents--When Engineering Emulates Human Behavior

Imagine a future where sensors form a virtual society to monitor the operations of a manufacturing or processing facility, vigilantly watching for conditions that will take the plant's systems offline, forming a collective that makes decisions that keep the facility running cost effectively. These sensor agents would discuss among themselves the merits and capabilities of each operational component and the contributions that each can make in keeping the plant running. The sensor agent community would understand the goals and impacts of any decision it makes.

Web Sites of Interest
OMG Agent Platform Special Interests Group
The Foundation for Intelligent Physical Agents (FIPA)
Control of Agent Based Systems (CoABS)

If the sensor agent community sensed uncertainty and ambiguity in its understanding of plant operations because of insufficient data, it would send a gatherer, or runner, to seek information that would increase its understanding of the plant and the decisions that had to be made. This would not only increase confidence in the system's ability to make correct decisions but also reduce information bandwidth by seeking only the information that would resolve the ambiguity. This would be an example of mining the plant (in real time) and not just the database. Figure 1 shows the major issues of system reliability that sensor agents could address in complex environments.

Figure 1. Sensor agents can address and resolve issues that impact system reliability in terms of interconnectivity, failure recovery, economics, data bandwidth, complexity, controls, dynamics, and representation.

Development of this behavior in a sensor system will require more than a reduced instruction set and a fast computer. We'll need new perspectives and insights into biological reasoning and form, and into the attributes they share. Only then can we begin to understand the cognitive processes that elicit intelligence and consciousness and the ways we can invoke them in groups of sensors.

Can such a system be built, or is this just a dream? With recent advances in artificial intelligence (AI) and smart sensors, it's not unreasonable to think that in the next two to five years we could develop sensor agents that could exhibit complex intelligent reasoning and behavior.

An AI Perspective
Today, many researchers are trying to understand and mimic the decision-making processes of biological systems. These researchers are trying to understand the biological form and structure that permit intelligent reasoning. Their intent is to blur the line between the mechanistic view of computational speed and the irreducible form of biology. The payoff will be a computationally efficient mechanism that could one day emulate certain aspects of human cognition and intelligence.

Why is this important? As researchers develop a greater understanding of biological systems, they'll begin to appreciate the biological implications of survivability in changing environments. Emulating this behavior in sensor systems will revolutionize manufacturing by providing agility and flexibility in the delivery of goods and services. Sensor agents would be self-aware, understanding the context in which they exist, their responsibility to the whole (society), and the consequence of their actions and decisions.

Can individual sensor agents or groups of sensor agents develop cognitive skills similar to those of biological systems and exhibit levels of intelligent behavior? The answer can be found by examining the following series of related questions.

  • What is an intelligent system?
  • What human capabilities are required by an intelligent system?
  • What technologies can be applied to develop an intelligent system?
  • What form would such a system take?

What Is a Sensor Agent?
From our perspective, a sensor agent combines the qualities of two distinct elements: an intelligent agent and a smart sensor.

An intelligent agent is an entity that can operate alone or as a member of a group; has goals and metrics; understands the consequences of its decisions; has a language (ontology) that provides effective communication with its environment and other agents; and has a model of itself and its environment and the impact that each has on the other.

A smart sensor is a measurement system that has sufficient computational capacity to support the data acquisition, memory, and decision-making necessary to respond to algorithmic instructions. The embedded intelligence allows the sensor to be programmed to respond to calibration verification requests, react to interrogations about its health and status, and provide error estimations. The key is that smart sensors are not goal oriented. They can respond to their environment, but only in the if-then-else sense of conventional programming (see Figure 2).

Figure 2. A smart sensor contains a transducer interface, signal conditioning and filtering, digitization, data storage, data manipulation, and communications, which allow it to solve a defined problem. Notice that dumb sensors have the communications interface at the data level and require more bandwidth. Smart sensors, on the other hand, require significantly lower bandwidth. Adding the agent layer offers the opportunity for further bandwidth reductions.
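
To make the contrast concrete, a smart sensor's if-then-else behavior can be sketched in a few lines; the thresholds and return codes here are hypothetical, not drawn from any particular device:

```python
def smart_sensor_step(reading_c, alarm_c=85.0, warn_c=70.0):
    """Classify a temperature reading with fixed if-then-else rules.

    The sensor has no goals: it only executes responses its
    programmer anticipated (thresholds here are hypothetical).
    """
    if reading_c >= alarm_c:
        return "ALARM"   # off-normal condition, pre-programmed response
    elif reading_c >= warn_c:
        return "WARN"
    return "OK"

print(smart_sensor_step(92.0))  # ALARM
print(smart_sensor_step(75.0))  # WARN
```

However sophisticated the branching becomes, the device never weighs alternatives against a goal; it only follows the paths laid down for it.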

Smart sensors aren't new, but breakthroughs in size reduction and power generation and consumption permit smart sensors to be built on a single chip, making the hardware more accessible and economically viable. The value added to these devices most often comes from the software that provides the intelligence. But even using state-of-the-art technology, you have to understand the goal, translate it into specific steps, and then program the responses to off-normal conditions that might be encountered.

A key function of a smart sensor is to eliminate the delay incurred between the sensor and the data processor. By integrating these functions, you can use a smart sensor to solve problems that would be impossible if you had to wait for the sensed information to be transmitted to a central processor and the desired action transmitted back.

For example, smart sensors can provide online temperature compensation, data reduction (e.g., averaging and peak detection), and data fusion. The more intelligence incorporated in the device, the easier it is for the programmer to adapt the sensor to the needs of the application. As technology progresses, computational power grows, size decreases, and costs drop. Adding the agent layer to the architecture represents the next logical step in the smart sensor?s evolution.
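
As a sketch of the kind of on-chip data reduction described above (the window values and summary fields are illustrative), a sensor might transmit a three-number summary instead of the raw sample stream:

```python
def reduce_window(samples):
    """Summarize a raw sample window on the sensor itself, so only
    three numbers cross the network instead of the whole stream
    (a hypothetical reduction scheme)."""
    return {
        "mean": sum(samples) / len(samples),
        "peak": max(samples),
        "min": min(samples),
    }

window = [20.1, 20.3, 20.2, 23.9, 20.4]  # raw readings stay on-chip
summary = reduce_window(window)
print(summary["peak"])  # 23.9
```

The bandwidth saving grows with the window length: a thousand raw readings collapse to the same three numbers.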

Therefore, sensor agents can be represented as the continuum between smart sensors and intelligent agents. In this definition, smart sensors are the firmware basis for the physical makeup of the sensor agent, and they provide the form and structure for the system. The intelligent agents encapsulate the complex reasoning and behavior that would be present in the learning, adapting (emergent) system.

How Sensor Agents Behave
What human attributes must a sensor agent or system of sensor agents have? Current research indicates that ten basic characteristics are needed to describe a sensor agent's behavior. This may not be an exhaustive list in the formal sense, but it's believed that some level of intelligent behavior in a sensor agent or a system of sensor agents can be described by single or joint occurrences of any of the following attributes.

Surviving in a Dynamic Environment. Much like humans, a sensor agent should be able to make informed and reliable decisions in the face of a dynamically changing environment. To deliver this reliability, the device would have to compensate for such things as reduced operational capabilities, a sliding economic scale, a changing context, and reduced support and inventory. In short, it should be able to exist and function in a reduced order system and still maintain its ability to support the mission of the plant.

Autonomous Behavior. A sensor agent must be able to make decisions on its own or as a member of a collective without outside intervention. The decisions could be based on logic as simple as if-then statements or as complex as Bayesian or other inductive techniques.
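
A minimal sketch of the Bayesian end of that spectrum, with hypothetical probabilities: the agent updates its belief that a component is faulty after observing a symptom, then decides on its own whether to act:

```python
def fault_posterior(prior, p_sym_given_fault, p_sym_given_ok):
    """Bayes' rule: revise the belief that a component is faulty
    after a symptom is observed (all probabilities hypothetical)."""
    p_sym = p_sym_given_fault * prior + p_sym_given_ok * (1.0 - prior)
    return p_sym_given_fault * prior / p_sym

belief = 0.05                                # prior fault probability
belief = fault_posterior(belief, 0.9, 0.1)   # vibration spike observed
act = belief > 0.3                           # agent's own decision threshold
print(round(belief, 2))                      # belief rises to about 0.32
```

No outside intervention is required: the evidence, the update rule, and the action threshold all live in the agent.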

Learning and Developing Associative and Nonassociative Behaviors. A sensor agent should be able to respond to those things that it knows (associative behaviors), which have been learned over a period of time, and be able to react to unexpected conditions (nonassociative behaviors), which are exhibited as complex emergent behaviors that may evolve over protracted periods of time.

Internal Model of Self, Environment, and Their Effects on Each Other. For sensor agents to be effective tools of change, they must have consistent internal models of self (their goals and acceptable costs), the environment (environmental influences on the sensor agent and the influence of the sensor agent on the environment), and the effects that a single sensor agent has on the collective.

Dynamic Social Organizational Skills and Social Responsibility. For a sensor agent to be effective, it must be reliable. The system or society must have confidence that the agent can be trusted to behave well. Behaving well means that the agent doesn't exhibit sociopathic behavior or tendencies to sabotage the plant. As with humans, the sensor agent must have social skills that allow it to take charge when necessary, communicate effectively, reason and explain, listen, and act as a team member. It must understand the concept of self-sacrifice.

Context-Dependent Sliding Scale of Economy. A sensor agent must be agile in its ability to sense context changes in its operational domain and make adjustments in its goals and costs. There has to be a high degree of confidence in decisions made in context.

Suspend Beliefs and Extract Motives and Intent. The sensor agent's reasoning skill set must include an ability to suspend current beliefs about a situation and consider alternatives that may contradict current thinking. An agent must also be able to extract motive and intent from external influences to ensure that there is no conflict between its goals and those of the collaborating resource.

Competition, Self-Sacrifice, and Democracy. A sensor agent must understand competition from the standpoint that it may or may not be the best at what is being requested. It will also have to understand that although it can perform a task better than any other agent, it may be more beneficial for the group as a whole if it allows another agent to complete the task and direct its attention to something else. Democracy enters in when the collective makes a decision that a sensor agent disagrees with but abides by anyway.
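
One simple way to sketch competition and self-sacrifice (agent names and capability scores are invented) is a bidding scheme in which the best-qualified agent can withdraw for the good of the group:

```python
def allocate(task, bids):
    """Award a task to the highest bidder; all agents, winners and
    losers alike, abide by the outcome (names and scores invented)."""
    return max(bids, key=bids.get)

bids = {"agent_a": 0.92, "agent_b": 0.88}
assert allocate("inspect_pump", bids) == "agent_a"

# agent_a is the most capable, but it is already committed elsewhere;
# for the good of the group it withdraws and lets agent_b proceed:
del bids["agent_a"]
assert allocate("inspect_pump", bids) == "agent_b"
```

The democratic element is the final line of behavior: once the allocation is made, even a dissenting agent acts on it.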

A Consistent Set of Truths/Models that Are Invariant (Ontology). The sensor agents must have a consistent set of truths/models that they can use in making decisions. This provides a consistent bound on reasoning and helps in establishing points of reference from which logical decisions can be made. Examples of accepted truths would be the presence of gravity, energy, or momentum. Of course, these truths are currently defined at the macro level.

A Dissipative System. Sensor agents and sensor agent systems must define a dissipative network. This system of reasoning and emergent properties that may develop from the interaction of the sensor agents must break any infinite reasoning that develops. This attribute is critical in the development of any system that's going to make control decisions in the presence of uncertainty and ambiguity.
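
A dissipative check can be sketched as a bounded, convergence-tested iteration (the update rule and limits here are hypothetical): whatever reasoning loop develops among the agents, it must eventually settle or be cut off so that a decision is still made:

```python
def negotiate(initial, update, max_rounds=50, tol=1e-3):
    """Iterate a belief-revision step, but dissipate: stop when the
    change falls below tol or the round budget runs out, so the
    agents can never argue forever (limits are hypothetical)."""
    belief = initial
    for _ in range(max_rounds):
        new = update(belief)
        if abs(new - belief) < tol:
            return new, True      # reasoning settled
        belief = new
    return belief, False          # budget exhausted; decide anyway

# a damped update rule: each round halves the disputed quantity
value, converged = negotiate(1.0, lambda b: 0.5 * b)
```

Either exit path yields an answer; without the tolerance and the round budget, an oscillating update rule could keep the collective reasoning indefinitely.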

Current Application Areas
An early use of intelligent agents was a software package that helped you prepare your income tax returns. Instead of having you enter all the data and then figuring out what you owed, the software asked you what you wanted to pay and then filled out the form accordingly. When you corrected one of its entries, the software would learn from its mistake so it could avoid it the next time. And if you didn't give much to charity but had a high property tax, it would learn from that information and provide an improved solution the next time. Even though it was a lighthearted application, it was the proving ground for the basic concepts.

The latest agents are more sophisticated than earlier ones, but it's still difficult to find software that can handle real-time systems, such as manufacturing. A workshop on agent-based manufacturing held in Minneapolis, MN, in 1998 drew substantial interest. Manufacturing Agents in a Knowledge-Based Environment Driven by Internet Technologies (MAKE-IT), a research project at the University of Genoa (Italy), is defining and implementing small software architectures, called MAKE-IT agents, for knowledge-based workflow management in manufacturing. To date, intelligent software agents have been successfully used in such applications as data collection and filtering, pattern recognition, event notification, data presentation, planning and optimization, and rapid response implementation.

Research Issues
Intelligent agent research has been going on for about 30 years. Smart sensor concepts have just come into focus in the last five to seven years. Despite the progress that's been made, we still have to resolve major issues before a true sensor agent can be deployed.

Sensor Agent Language. There must be agreement on the sensor agent language syntax. Sensor agents must have a construct that makes presenting and understanding data clear and concise. There can't be any ambiguity in the discourse between collaborating sensor agents. Assigning these constructs will require the development of standards and protocols. This will be critical in the formulation of agile and/or decentralized manufacturing processes.

Sensor Agent Ontology. Syntax alone will not guarantee clear communication and consistent reasoning. We need a broadly accepted set of terms and definitions to ensure that sensor agents use the same language to express identical concepts.

Behavioral Protocols. To be effective, sensor agents require a consistent set of behaviors identified as interaction protocols. This will provide a basis for interaction among sensor agents in a group and an effective means for groups to interact among each other.
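
As an illustration of such an interaction protocol (the field names are invented, loosely in the spirit of FIPA's request/agree performatives), each message could carry an explicit performative so that its intent is never ambiguous:

```python
VALID_PERFORMATIVES = {"request", "agree", "refuse", "inform"}

def make_message(performative, sender, receiver, content):
    """Build a protocol message whose intent is carried by an
    explicit performative (field names are illustrative)."""
    if performative not in VALID_PERFORMATIVES:
        raise ValueError("unknown performative: %s" % performative)
    return {"performative": performative, "sender": sender,
            "receiver": receiver, "content": content}

req = make_message("request", "pressure_01", "valve_07",
                   {"action": "report_status"})
reply = make_message("agree", "valve_07", "pressure_01",
                     {"action": "report_status"})
```

Because every agent interprets the same small set of performatives the same way, a group of agents, or two separate groups, can interact without negotiating meaning case by case.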

Codifying Complex Behavior. One of the real challenges facing the successful deployment of sensor agents is the development of complexity in a system of sensors at such a level that intelligent behavior becomes a part of the collection. Understanding this in sufficient detail is a major research thrust.

Sensor Agent Security. Another major barrier to deployment of a sensor agent network is security. Users must believe that a system of sensor agents can share data among its constituents without being compromised. It's imperative that the agents be able to determine the validity of data and establish confidence in any information secured from outside sources. Also, the design of the agents must guarantee that they cannot be changed in a manner that would be detrimental to the manufacturing process.

Sensor Agent Architecture and Mobility. The architecture for multi-agent systems is currently an area of intense research. Distributing agents into small footprint devices (e.g., sensors) holds tremendous potential if a robust, scalable architecture can be developed that can work in real time. Current systems rely heavily on research from distributed systems, hierarchical systems, and flexible or dynamic architectures.

The Foundation for Intelligent Physical Agents (FIPA) was formed in 1996 to produce software standards for heterogeneous and interacting agents and agent-based systems. FIPA gathers input from and collaborates with its membership and those in the field in general to build specifications that can be used to achieve interoperability among agent-based systems developed by different companies and organizations.

What Will Tomorrow's Sensor Agents Look Like?
Figure 3 provides a hierarchical view of a sensor agent model. Based on this design, you need five internal models (or forms) to develop a sensor agent paradigm.

Figure 3. This proposed sensor agent model formulation details specific attributes and needs and shows links to the outside environment.

Physical Model. The physical model contains a description of how the process (and the process equipment or system) operates on input materials. The description includes mechanical, thermodynamic, and other physical interactions. The physical model takes inputs from sensors, material databases, and product specifications and requirements to generate the input for the other modules of the enterprise model.

Environmental Model. The environmental model includes not only the effects of environmental changes on the process and product (e.g., humidity and temperature) but the effects of the process on the environment as well (e.g., waste streams and heat).

Self-Reference Model. The self-reference model takes inputs from the other models as well as from the historical database to anticipate changes and generate corrective recommendations before errors are manifested in the process or the product. This is based on an internal model of the environment and the effect that the system has on it.

Economic Model. This model integrates the business aspects of the enterprise with the production process to determine if the system can perform its task at an acceptable cost. If it cannot, the model decides what actions must be taken to bring the process into spec.

Decision Model. The decision model provides expert assistance to the system by considering resource availability, cost of conducting business, and priority of needs. The system shown in Figure 3 must be able to sense dynamic changes in sensors' or the system's physical attributes that lead to failure and state changes. From this, the system can deduce operational degradation and determine the impact of the sensors/elements on system performance and, in some cases, the system's impact on the element (attrition). In addition, the system should be able to anticipate operational limitations and restrictions and then dynamically allocate resources, as needed, to continue operations under the mission profile. In the event that continuation is impractical or too restricted, the monitoring system can recommend a safe operational degradation while preventing catastrophic failure. Under these conditions, a system would be 100% operational while having less than 100% functional capability.
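
A skeletal composition of the five internal models might look like the following; this is a sketch only, with each field standing in for what would be a substantial module in practice:

```python
from dataclasses import dataclass, field

@dataclass
class SensorAgentModel:
    """Hypothetical composition of the five internal models of
    Figure 3; each dict is a placeholder for a full module."""
    physical: dict = field(default_factory=dict)        # process/material interactions
    environmental: dict = field(default_factory=dict)   # plant <-> environment effects
    self_reference: dict = field(default_factory=dict)  # anticipatory self-model
    economic: dict = field(default_factory=dict)        # cost/benefit of decisions
    decision: dict = field(default_factory=dict)        # resources and priorities

agent = SensorAgentModel(economic={"max_cost": 1500.0})
```

The design point is the coupling: the self-reference and decision models consume the outputs of the other three, so the five must share one consistent representation of the plant.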

A system such as the one described here would possess certain attributes characteristic of its functional requirements and unique to its real-time environment. The baseline functional requirements of the sensor agent model would include sensor-driven analysis and self-validation, sensor and component structural and material models, a distributed database with inherent communication capabilities (internal to the net), intranode/internode communications, performance modeling, signal validation, and self-validation (self-referential). Figure 4 shows an implementation scheme that could be used to deploy a sensor agent model in a manufacturing facility.

Figure 4. Shown here is a sensor agent implementation that could be used in a manufacturing environment. It highlights the need for economic evaluation of decisions, review of mission needs, and inductive learning for associative and nonassociative behaviors.

In Closing
To have sensor agents exhibit such complex behavior would be remarkable. To achieve this, though, the scientific community will have to resolve significant technological and philosophical differences that now permeate the community. In the scientific arena, a different view of science will have to be adopted. Physicists, biologists, behavioral scientists, and the like will have to put their differences aside and meet on common ground to make such devices a reality. In doing so, researchers working together will develop a greater understanding of mathematical biology, which will spawn an alternative mathematics of intelligence.

This will lead to a major breakthrough in the way we think about systems. We'll have to develop a general language for sensors that will allow one sensor to talk to another. This is critical in forming relationships. In addition, there will have to be common constructs (descriptors) that go beyond simple data transforms to common, acceptable truths that each sensor agent understands and reasons with.

Only when these basic truths are developed can a sensor hope to reason with other sensors in a way that provides meaningful constructs and information. When this is achieved, sensors will move away from being to becoming.

For Further Reading
Walter J. Freeman. 1999. How Brains Make Up Their Minds. Weidenfeld & Nicolson, London.

John H. Holland. 1995. Hidden Order: How Adaptation Builds Complexity. Perseus Books, Reading, MA.
