Measurement uncertainty is one of the parameters you must address when developing a working specification for a measurement device. Unfortunately, the term is relatively new in the engineering lexicon. So let's get a clearer understanding of what the concept means.
Start by understanding that measurement uncertainty is quantitative while accuracy is qualitative. Now take the next step and flesh out a definition. Eurachem's Quantifying Uncertainty in Analytical Measurement Web site defines the term as the "parameter associated with the result of a measurement that characterizes the dispersion of the values that could reasonably be attributed to the measurand." It goes on to add that "the parameter may be, for example, a standard deviation (or a given multiple of it), or the width of a confidence interval." It then states that "uncertainty of measurement comprises, in general, many components. Some of these components may be evaluated from the statistical distribution of the results of a series of measurements and can be characterized by experimental standard deviations. The other components, which can also be characterized by standard deviations, are evaluated from assumed probability distributions based on experience or other information. It is understood that the result of the measurement is the best estimate of the value of the measurand and that all components of uncertainty, including those arising from systematic effects, such as components associated with corrections and reference standards, contribute to the dispersion."
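To make the definition concrete, here is a minimal sketch of how those components might combine in practice, using only Python's standard library. The readings and the ±0.01 calibration-certificate figure are invented for illustration; the statistical evaluation of repeated readings and the evaluation from an assumed probability distribution correspond to the two kinds of components the definition describes.

```python
import math
import statistics

# Hypothetical repeated readings from a measurement device (illustrative values)
readings = [10.02, 9.98, 10.05, 10.01, 9.97, 10.03]

n = len(readings)
mean = statistics.mean(readings)  # best estimate of the measurand

# Component evaluated from the statistical distribution of the results:
# the experimental standard deviation of the mean
s = statistics.stdev(readings)    # sample standard deviation
u_a = s / math.sqrt(n)

# Component evaluated from an assumed probability distribution:
# e.g., a calibration certificate quoting +/-0.01, treated as a
# rectangular (uniform) distribution
u_b = 0.01 / math.sqrt(3)

# Combined standard uncertainty: components add in quadrature
u_c = math.sqrt(u_a**2 + u_b**2)

# Expanded uncertainty: a multiple of the standard uncertainty,
# here with coverage factor k = 2 (roughly a 95% confidence interval)
U = 2 * u_c

print(f"result: {mean:.3f} +/- {U:.3f}")
```

Reporting the result as the mean plus or minus the expanded uncertainty is exactly the "dispersion of the values that could reasonably be attributed to the measurand" that the definition talks about.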
This definition may be hazy to some. If you're still unclear about measurement uncertainty, check out the National Institute of Standards and Technology's (NIST's) Essentials of Expressing Measurement Uncertainty Web site or the agency's guidelines.
Moving into the Mainstream
While the concept of measurement uncertainty is unfamiliar to some, it is well defined and understood in science and metrology, and it is moving into broader use among the engineering community. The primary reason for this shift is that measurement uncertainty plays a prominent role in statistical process quality control and analytical methods.
One sure sign that it has hit the mainstream is that measurement device manufacturers are increasingly using the term. It is closely tied to calibration, because calibration uncertainty is the best measurement capability you can reasonably expect from a measurement device under carefully controlled calibration conditions.
As a broader spectrum of the engineering community embraces statistical process control, look for measurement uncertainty to play a bigger role.
I know that this is a lot to absorb, especially if it's new to you. The thing to do is start the learning process by reviewing NIST's online resources.