Measurement Accuracy and Uncertainty

Ray Peacock

In my last essay, I began a series on the step-by-step process careful engineers often follow to select a suitable sensor for an application. In that essay, I explained how to establish measurement span, or range, requirements. This month, I'll look at accuracy and measurement uncertainty.

Definitions
While most automation engineers have a clear idea of what accuracy means, it's worth making sure you also understand the term uncertainty as it applies to measurements. This is particularly important given the prominent role measurement uncertainty plays in statistical analytical methods.

It may be enough to say that accuracy is qualitative and measurement uncertainty is quantitative, but for clarification, consider another source. The Eurachem guide Quantifying Uncertainty in Analytical Measurement, available on the Eurachem Web site, defines both terms as they are used by ISO in its standards. A look at these definitions will help you understand the distinction between the two.

Eurachem defines accuracy of measurement as "the closeness of the agreement between the result of a measurement and a true value of the measurand." It further states that accuracy is a qualitative concept and that the term precision should not be used for accuracy.

On the other hand, it defines uncertainty (of measurement) as the "parameter associated with the result of a measurement that characterizes the dispersion of the values that could reasonably be attributed to the measurand." It goes on to add that "the parameter may be, for example, a standard deviation (or a given multiple of it), or the width of a confidence interval," and then states that "uncertainty of measurement comprises, in general, many components. Some of these components may be evaluated from the statistical distribution of the results of a series of measurements and can be characterized by experimental standard deviations. The other components, which can also be characterized by standard deviations, are evaluated from assumed probability distributions based on experience or other information. It is understood that the result of the measurement is the best estimate of the value of the measurand and that all components of uncertainty, including those arising from systematic effects, such as components associated with corrections and reference standards, contribute to the dispersion."
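
To make the quoted definition concrete, here is a minimal sketch in Python of how those components are often handled: a Type A component estimated from the scatter of repeated readings, a Type B component taken from an assumed rectangular distribution (here a hypothetical ±0.5 manufacturer's spec), the two combined in quadrature, and the result expanded with a coverage factor of 2. All of the numbers are illustrative, not values from any real sensor.

```python
import math
import statistics

# Repeated readings of the same quantity (illustrative numbers only).
readings = [100.2, 100.4, 100.1, 100.3, 100.2, 100.5]

# Type A: evaluated statistically from the series of measurements.
mean = statistics.mean(readings)
s = statistics.stdev(readings)          # experimental standard deviation
u_a = s / math.sqrt(len(readings))      # standard uncertainty of the mean

# Type B: evaluated from other information, e.g. a hypothetical +/-0.5
# manufacturer's spec treated as a rectangular distribution.
spec_half_width = 0.5
u_b = spec_half_width / math.sqrt(3)

# Combine independent components in quadrature, then expand with k = 2
# (roughly a 95% coverage interval for a normal distribution).
u_c = math.sqrt(u_a**2 + u_b**2)
U = 2 * u_c

print(f"result = {mean:.2f} +/- {U:.2f} (k = 2)")
```

The quadrature combination assumes the components are independent; correlated components call for the fuller treatment described in the ISO guidance.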

If you're unclear about any aspect of measurement uncertainty, check out the National Institute of Standards and Technology's (NIST's) Essentials of Expressing Measurement Uncertainty Web site or the agency's guidelines.

Uncertainty
Measurement uncertainty is one of the parameters you have to deal with, often in depth, to develop a working specification for a measurement device. Unfortunately, the term has not yet fully taken hold in the engineering lexicon; accuracy is what most people think about and are certain they understand.

On the plus side, measurement device manufacturers are increasingly using the term. It is related to calibration because calibration uncertainty is the smallest measurement uncertainty you should expect from a device, and only under carefully controlled calibration conditions. Using a device in the field or in a plant introduces additional measurement-influencing parameters that increase the uncertainty you can actually achieve.
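
To see why, consider the arithmetic: if the field-influence effects are independent of the calibration uncertainty, the components combine in quadrature, so the in-use uncertainty can only grow. The Python sketch below illustrates this with hypothetical component values; substitute whatever influence effects apply to your own installation.

```python
import math

# Standard uncertainty established under controlled calibration conditions
# (illustrative value, in the measurement unit of interest).
u_cal = 2.5

# Additional standard-uncertainty components introduced by field conditions,
# e.g. ambient drift, installation effects, readout resolution (all hypothetical).
field_components = [1.5, 2.0, 0.5]

# Independent components combine in quadrature; the result is never smaller
# than the calibration uncertainty alone.
u_in_use = math.sqrt(u_cal**2 + sum(u**2 for u in field_components))

print(f"calibration uncertainty: {u_cal:.1f}")
print(f"in-use uncertainty:      {u_in_use:.1f}")
```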

Lessons Learned
I recall the time I asked about the accuracy requirements for temperature measurements in a hot steel rolling process, which covered a span from 1000°F to 2400°F. It was amazing how many people were seeking ±1°F or ±2°F accuracy for process line temperatures, where the tightest metallurgical tolerances needed (at the coolest temperatures) were actually closer to ±20°F.

The best commercial instruments of the day had traceable calibration uncertainties (coverage factor k = 2, roughly 2 sigma) typically in the ±5°F region. There were research devices that could perform near the ±2°F range, but they were relatively fragile, and costs were an order of magnitude higher for the ones with lower uncertainty.
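
Those two figures lend themselves to a quick reality check, sketched below in Python: compare the tolerance the process actually needs with the expanded uncertainty the instrument can deliver. The 4:1 ratio used as a yardstick is a common rule of thumb, not something specific to that project.

```python
# Reality check: is the instrument good enough for the process tolerance?
# Numbers are from the rolling-mill example above; the 4:1 rule of thumb
# is a general guideline, used here only as an illustration.

process_tolerance = 20.0       # +/- deg F needed by the metallurgy
instrument_uncertainty = 5.0   # +/- deg F expanded uncertainty (k = 2)

ratio = process_tolerance / instrument_uncertainty
print(f"tolerance-to-uncertainty ratio: {ratio:.1f}:1")

if ratio >= 4:
    print("The instrument comfortably supports the requirement.")
else:
    print("The requirement is tighter than the instrument can support.")

# A +/-1 deg F requirement against the same +/-5 deg F instrument gives
# a 0.2:1 ratio -- a specification no commercial sensor of the day could meet.
```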

So, based on my experience, here are two lessons on measurement uncertainty:

  • Be a little skeptical and develop a realistic uncertainty requirement, but make sure all those participating in the project agree with it. You have to sell what you develop.

  • Be sure you have a good handle on the typical traceable measurement uncertainty you can obtain from high-end and mid-range sensors you are likely to specify. It's a great reality check and the rock on which you can build realistic measurement expectations.

In Closing
I know that this is a lot to chew on, especially if it's new to you. Start the learning process by reviewing NIST's online resources.

In my next essay, I'll discuss factors that influence measurement conditions. As mentioned above, they can seriously impact the overall measurement uncertainty of the sensor.

See you next month!