One of the most important steps in the process of selecting a suitable sensor for an application is the identification and assessment of factors influencing measurement and calibration uncertainty. Skip this step, and you may end up with a device that fails to deliver the level of accuracy required to do the job. Here are two approaches to tackling this critical task. Whichever path you choose, the process begins with a series of questions.
Knowing a sensor's calibration uncertainty is only the first step in sizing up the situation. The next is answering the question: What happens to measurement performance in terms of overall measurement uncertainty in the real world? This question, in turn, gives rise to three additional questions:
- What will influence the sensor's performance?
- How much will it influence it?
- How does the measurement uncertainty combine with the basic calibration uncertainty?
The last one is the easiest to answer. Random errors most often simply combine in quadrature: the square of the overall uncertainty is the sum of the squares of the individual component uncertainties. For more details, see NIST's "Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results".
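The quadrature rule is easy to apply in practice. The sketch below, using hypothetical uncertainty figures chosen only for illustration, combines a calibration uncertainty with two environmental contributions:

```python
import math

def combined_uncertainty(components):
    """Combine independent uncertainty components in quadrature:
    the overall uncertainty is the square root of the sum of the
    squares of the individual components."""
    return math.sqrt(sum(u ** 2 for u in components))

# Hypothetical example: a 0.10 deg C calibration uncertainty plus
# two environmental contributions of 0.05 and 0.12 deg C.
total = combined_uncertainty([0.10, 0.05, 0.12])
print(round(total, 3))  # 0.164
```

Note that the largest component dominates: the 0.12 deg C environmental term contributes far more to the total than the 0.05 deg C term, which is why identifying the principal influences matters so much.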
Answering the first two questions, however, is no trivial matter. Given that many manufacturing and process operations cover large areas in buildings that are neither heated nor cooled, it can be a challenge.
The Systematic Approach
When there is no obvious solution, you have to do the work up front, before installation, characterizing the likely errors and then seeking a workable, optimized approach that has minimal continuing costs. This means identifying the principal influencing errors and estimating and/or verifying their impact through testing.
Usually, the primary influence is ambient temperature because it can affect the sensitivity of many components in analog and digital circuits. Most manufacturers list the allowed ambient-temperature limits for operation of their devices, but unless the device is covered by a standardized calibration table—such as ASTM E230 for thermocouples and ASTM E1137/E1137M-03 for RTDs—all bets are off as to how a sensor will perform outside its temperature limits. Fortunately, some manufacturers also specify a residual ambient temperature coefficient.
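When a residual ambient temperature coefficient is specified, you can bound the expected drift directly. This sketch assumes a hypothetical transmitter spec of 0.005 deg C of reading error per deg C of ambient deviation from the calibration temperature; the numbers are illustrative, not from any real datasheet:

```python
def ambient_temp_error(coefficient, ambient, reference=23.0):
    """Estimate worst-case measurement error from ambient-temperature
    drift, given a manufacturer's residual coefficient.

    coefficient: residual ambient temperature coefficient
                 (reading error per deg C of ambient deviation)
    ambient:     actual ambient temperature (deg C)
    reference:   ambient temperature at calibration (deg C)
    """
    return coefficient * abs(ambient - reference)

# Hypothetical case: device calibrated at 23 deg C, installed in an
# unconditioned plant area that reaches 40 deg C in summer.
error = ambient_temp_error(0.005, 40.0)
print(round(error, 3))  # 0.085
```

An error estimated this way is an additional uncertainty component, so it would then be combined in quadrature with the basic calibration uncertainty.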
There are many other influencing factors to consider, such as radiant heat load (which directly affects sensor temperature, not necessarily the ambient surroundings), humidity, moisture, vibration, atmospheric pressure, nuclear or ultraviolet radiation, and the presence of gases. Many of these influences are nearly unique in their effect on a particular sensor. One of the best ways to be certain is to do your own testing.
An alternative to doing the testing yourself is to hire experienced instrument evaluators, but you'll have to go to Europe to have it done. I suggest the International Instrument Users' Association in The Netherlands, SIREP/EI in the U.K., or EXERA in France.
The Brute Force Approach
Sometimes the best way to deal with an ambient-temperature problem is to avoid it completely. Simply create artificial calibration conditions around the sensor so there is no discernible influence from the operating environment.
You often see this with sensitive online devices (e.g., optical sensors) in industrial plants. These devices are mounted inside protective enclosures that are cooled (and sometimes heated), with air purges.
This brute-force approach enables you to get the best possible measurement performance from the sensor. In some applications—such as petrochemical plants, power plants, and high-temperature processing lines—it's the only viable solution for many types of sensors.
There is, however, a price for this level of control. In addition to the extra equipment and installation costs, the process maintenance staff is faced with ongoing preventive work and costs to keep the protective gear, as well as the sensitive sensors, operating within allowed limits.