Sensors and transducers are used throughout the worlds of science, technology, and industry to measure and control physical events. They range from simple devices such as thermocouples to sophisticated sensors used in aerospace applications. Most sensors output their data in an electronic form of one sort or another, and it is this signal that forms the analog of the physical quantity being monitored.

When specifying a sensor, accuracy and precision are paramount among a multitude of other parameters. These terms are often used interchangeably, but it is critical to recognize the fundamental difference between the two. Accuracy, a qualitative concept, indicates the proximity of measurement results to the true value, while precision reflects the repeatability or reproducibility of the measurement.

ISO 3534-1:2006 defines precision as the closeness of agreement between independent test results obtained under stipulated conditions, and views the concept of precision as encompassing both repeatability and reproducibility. The standard defines repeatability as precision under repeatability conditions, and reproducibility as precision under reproducibility conditions.

Nevertheless, precision is often taken to mean repeatability. The terms precision, accuracy, repeatability, reproducibility, variability, and uncertainty represent qualitative concepts and thus should be applied with care. The precision of an instrument reflects the number of significant digits in a reading; the accuracy of an instrument reflects how close the reading is to the true value being measured.

To the Decimal Point
It is common in science and engineering to express the precision of a measurement in terms of significant figures, but this convention is often misused. Armed with an inexpensive calculator, you can produce results to 16 decimal places or more. For example, an electronic calculator or spreadsheet may yield an answer of, say, 6.1058948, implying that we are confident of the precision of the measurement to 1 part in 61,058,948.

Similarly, stating a figure of 6 implies that we know the answer to a precision of 1, whereas we may really know it to a precision of three decimal places, in which case it would be written 6.000. Neither answer may be accurate, because the true value may well be 5, measured inaccurately as 6 in both instances. An accurate instrument is not necessarily precise, and instruments are often precise but far from accurate.
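
To make the point concrete, the short Python sketch below formats the same calculator result to different numbers of significant figures and notes the precision each form implies. The helper name is ours, purely illustrative.

# Illustrative sketch: the number of significant figures we report
# implies a claim about the precision of the measurement.

def to_sig_figs(value: float, figures: int) -> str:
    """Format a value to a given number of significant figures."""
    return f"{value:.{figures}g}"

raw = 6.1058948  # a calculator result carrying implied 8-figure precision

for figs in (1, 4, 8):
    print(f"{figs} significant figures: {to_sig_figs(raw, figs)}")

# 1 -> "6"          implies we know the value only to within about 1
# 4 -> "6.106"      implies roughly 1 part in 6,106
# 8 -> "6.1058948"  implies roughly 1 part in 61,058,948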

The chart in Figure 1 illustrates the difference between accuracy and precision pictorially and shows that the precision of the measurement may not be constant but may instead vary in proportion to signal level.

Figure 1. Precision vs. accuracy

Concepts of Accuracy
Sensor manufacturers and users employ one of two basic methods to specify sensor performance:

  • Parameter specification
  • The Total Error Band envelope

Parameter specification quantifies individual sensor characteristics without any attempt to combine them. The Total Error Band envelope yields a result much nearer to that expected in practice: sensor errors are expressed in the form of a Total Error Band, or error envelope, into which all data points must fit regardless of their origin. As long as the sensor operates within the conditions specified in the data sheet, the sensor data can be relied on, giving the user confidence that all acquired data will be accurate within the stated error band and avoiding the need for lengthy and error-prone data analysis. The diagram in Figure 2 illustrates the total error band concept.

Figure 2. Total error band

However, many manufacturers specify individual error parameters unless legislative pressures compel them to state the total error band of their sensors. In the weighing industry, for instance, if products or services are sold by weight, the weighing equipment is subject to legal metrology legislation and comes under the scrutiny of weights and measures authorities around the world. The Organisation Internationale de Métrologie Légale (OIML) requires that load cells used in weighing equipment be accuracy-controlled through strict adherence to an error-band performance specification. Typically, such an error band will include parameters such as nonlinearity, hysteresis, nonrepeatability, creep under load, and thermal effects on both zero and sensitivity. The user of such a sensor can rest assured that its measurement accuracy will be within the total error band specified, provided all the parameters of interest are included.
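
As a sketch of how such an error-band specification might be applied in software, the following Python fragment checks each reading against the ideal straight line plus a fixed envelope. The band width and full-range output figures are invented for illustration, not taken from any real data sheet.

# Hedged sketch: accept a reading only if it falls within the total
# error band around the ideal straight-line response. All numbers are
# illustrative assumptions, not real specifications.

FULL_RANGE_OUTPUT = 10.0   # output at full scale, in volts (assumed)
ERROR_BAND_FRO = 0.005     # total error band, +/-0.5% of FRO (assumed)

def within_error_band(fraction_of_range: float, output: float) -> bool:
    """True if a reading fits inside the total error band envelope."""
    ideal = fraction_of_range * FULL_RANGE_OUTPUT
    return abs(output - ideal) <= ERROR_BAND_FRO * FULL_RANGE_OUTPUT

print(within_error_band(0.50, 5.03))   # True: within +/-0.05 V of 5.0 V
print(within_error_band(0.50, 5.08))   # False: outside the envelope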

Unless there is external pressure to comply, manufacturers do not generally specify their products using the error band method, even though it yields results more representative of how the product will respond during real-world use. Instead, deep-rooted commercial pressures result in manufacturers portraying their sensors in the most favorable light when compared to those of their competitors. The commonly used parameter method allows you to make a direct comparison between competing products by examining their specifications as detailed in the product data sheets. If you are selecting a sensor, you must carefully examine all performance parameters with respect to the intended application to ensure that the sensor you ultimately choose is suitable for its specific end use.

Predictable Error Sources
A typical sensor data sheet will list a number of individual error sources, not all of which affect the device in a given situation. Given the plethora of data provided, you may find it difficult to decide whether a given sensor is sufficiently accurate for your desired application. Ideally, the mathematical relationship between a change in the measurand and the output of a sensor over the entire compensated temperature and operational range should include all errors due to parameters such as zero offset, span rationalization, nonlinearity, hysteresis, repeatability, thermal effects on zero and span, thermal hysteresis, and long-term stability. Typically, users will focus on just one or two of these parameters, using them as benchmarks with which to compare other products.

One of the most commonly selected parameters is nonlinearity, which describes the degree to which the sensor's output (in response to changes in the measured parameter) departs from a straight-line correlation. A polynomial expression describing the true performance of the sensor would, if manufacturers provided it, yield accuracy improvements of perhaps an order of magnitude. Many sensors do, in fact, have a quadratic relationship between sensor output and measured value, with a response that is linear only to a first-order approximation. Thus, if you substitute the quadratic equation y = ax² + bx + c for the manufacturer's advertised sensitivity data, supplied in the form y = ax + b, you can improve the accuracy. In another example, although many gravity-referenced inertial angular sensors have a sine transfer function (the output varies as the sine of the measured angle), the manufacturer's data sheet will still list a linear expression, because the output is linearly related to the sine of the angle rather than to the angle itself.
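
A minimal Python sketch of the quadratic-calibration idea follows, using NumPy and a hypothetical sensor whose true response carries a small quadratic term. The coefficients are invented for illustration; in practice they would come from the manufacturer or from your own calibration run.

# Hedged sketch: fitting a quadratic instead of a straight line to a
# mildly nonlinear response. All coefficients are illustrative only.
import numpy as np

x = np.linspace(0.0, 1.0, 11)           # measurand, as fraction of range
y = 0.02 * x**2 + 0.98 * x + 0.001      # assumed true sensor output

linear_fit = np.polyfit(x, y, 1)        # data sheet style: y = ax + b
quad_fit = np.polyfit(x, y, 2)          # improved model: y = ax^2 + bx + c

worst_linear = np.max(np.abs(np.polyval(linear_fit, x) - y))
worst_quad = np.max(np.abs(np.polyval(quad_fit, x) - y))
print(f"worst-case linear model error:    {worst_linear:.6f}")
print(f"worst-case quadratic model error: {worst_quad:.6f}")

Because the simulated data here are noise-free and exactly quadratic, the second fit is essentially perfect; a real sensor would not improve that dramatically, but the order-of-magnitude gain noted above is plausible.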

If the specific thermal effects contributing to both zero and sensitivity errors are stated, then measurement errors may be minimized by considering the actual errors over the actual temperature range encountered in the application, rather than the global errors quoted on the sensor specification or data sheet. Often, both errors are quoted as a percentage of Full Range Output (FRO); in reality, sensitivity errors are normally a function of a percentage of reading. Thermal errors may be further minimized by actively compensating for temperature, using a reference temperature sensor installed on or near the sensor in use. Some manufacturers provide an onboard temperature sensor expressly for this purpose.
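
The Python fragment below sketches this compensation, assuming the data sheet quotes a zero coefficient as a percentage of FRO and a sensitivity coefficient as a percentage of reading, with a co-located reference temperature sensor supplying the temperature. Every coefficient is an invented example.

# Hedged sketch of active thermal compensation. All values are
# illustrative assumptions, not taken from any real data sheet.

T_CAL = 20.0       # calibration temperature, deg C (assumed)
ZERO_TC = 0.0002   # zero shift per deg C, fraction of FRO (assumed)
SPAN_TC = 0.0001   # sensitivity shift per deg C, fraction of reading (assumed)
FRO = 10.0         # full-range output, in volts (assumed)

def compensate(raw_output: float, temperature: float) -> float:
    """Remove predictable thermal zero and span errors from a reading."""
    dt = temperature - T_CAL
    zero_corrected = raw_output - ZERO_TC * dt * FRO  # zero error: % of FRO
    return zero_corrected / (1.0 + SPAN_TC * dt)      # span error: % of reading

print(compensate(5.012, 35.0))  # a raw 5.012 V reading taken at 35 deg C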

It is important to distinguish between the contribution of zero-based and sensitivity errors. Thermal zero errors are absolute errors and are generally quoted as a percentage of full scale (F.S.). In most cases, sensors are not used to their full-scale capacity and therefore, when expressed as a percentage of reading, errors can become very large indeed. For example, a sensor used at 25% F.S. will have a thermal zero error of four times its data sheet value as a percentage of reading. A similar mistake occurs when users specify sensors with an operating range much higher than that which will be encountered in practice "just to be safe."
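
The arithmetic behind the 25% example is simple enough to verify directly, as in this short sketch (the 0.1% figure is an invented data sheet value):

# Quick check: a zero error fixed as a percentage of full scale grows
# as a percentage of reading when only part of the range is used.

zero_error_pct_fs = 0.1   # thermal zero error, % of full scale (assumed)

for used in (1.0, 0.5, 0.25):
    print(f"used at {used:.0%} of range: "
          f"{zero_error_pct_fs / used:.1f}% of reading")

# 100% -> 0.1%, 50% -> 0.2%, 25% -> 0.4% (four times the data sheet value)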

These examples illustrate that you can improve both accuracy and precision by minimizing predictable errors mathematically. Stability errors, and errors that are unpredictable and nonrepeatable, present the largest obstacle to achievable accuracy.

Unpredictable Errors
Unpredictable errors—such as long-term stability, thermal hysteresis, and nonrepeatability—cannot be treated mathematically to improve accuracy or precision and are far more difficult to deal with. While thermal hysteresis and nonrepeatability can be quantified at the point of manufacture under controlled conditions, long-term stability cannot.

Various statistical tools are available to help define long-term stability, but ultimately you have to make a decision that will depend in part on how critical the measurement is. Routine recalibration may be the only reliable way of eliminating the consequences of long-term deterioration in the sensor's performance.

Top Tips for the Specifier
  • Repeatability is the single most important sensor performance parameter; without it no amount of compensation or result correction is going to be meaningful.
  • Consider the environmental temperature range within which the sensor will operate. Thermal errors, particularly those associated with the zero output of the sensor, will dominate.
  • Do not overspecify the operating range of the sensor "just to be safe." Manufacturers state the sensor's safe over-range limits, and these should be sufficient in themselves. By overspecifying your sensor, you will reduce its signal magnitude, and zero-based errors will increase as a percentage of the measurement range.
  • Do not confuse resolution with accuracy—they have no relation to one another.
  • If the sensor is to be used long-term, consider the effect of the sensor's long-term stability. Progressive deterioration in sensor characteristics can have disastrous consequences and this emphasizes the need for periodic recalibration. Typically, 12 months is an acceptable recalibration period, but you will have to consider both the operating environment and the consequence of the sensor reporting inaccurate data.
  • For any given application, calculate the total error that can be expected from the sensor by referring to the data sheet performance parameters, being careful to include only those that are pertinent to the specific application (see the sketch following this list).
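
As a sketch of that last tip, the Python fragment below combines a set of pertinent data sheet errors into a single expected-error figure. Both the parameter values and the choice between a worst-case sum and a root-sum-square combination are our own illustrative assumptions, not something the data sheet prescribes.

# Hedged sketch: combine only the pertinent error parameters. The
# values and both combination rules are illustrative assumptions.
import math

errors_pct_fro = {
    "nonlinearity":       0.05,
    "hysteresis":         0.03,
    "nonrepeatability":   0.02,
    "thermal zero shift": 0.08,  # over the application's temperature range
    "thermal span shift": 0.04,
}

worst_case = sum(errors_pct_fro.values())
root_sum_square = math.sqrt(sum(e**2 for e in errors_pct_fro.values()))

print(f"worst-case sum:  {worst_case:.3f}% FRO")
print(f"root-sum-square: {root_sum_square:.3f}% FRO")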

Many sensor users hold quantitative data in awe, particularly when the data are associated with computer-based data acquisition systems. After all, the computer provides numbers that appear, and are commonly assumed, to be unquestionably correct. To avoid costly errors, carefully study the accuracy parameters pertinent to your particular application before you select the sensor. An error or misjudgment at the outset may prove very costly indeed.

ABOUT THE AUTHOR
Mike Baker is Managing Director at Sherborne Sensors Inc., Wyckoff, NJ. He can be reached at [email protected].