Although control stability is easy to understand on its own, it is often a confusing specification when evaluating overall system accuracy. The user of a precision pressure-controlling calibrator needs to know exactly how close the pressure is to a setpoint, and how much fluctuation to expect over time. With advances in both control algorithms and pressure regulation components, the degree of fluctuation, or stability, observed around a set point has greatly improved. The stability specification quantifies this expected fluctuation.
Control stability is a measure of the degree of fluctuation around a controlled pressure set point.
The Mensor CPC6050 Modular Pressure Controller has a control stability specification of 0.003% FS of the active range (typically 0.001% FS after 10 seconds), which is common in modern controllers. A typical accuracy specification for the CPC6050 is 0.01% of full scale, so the stability is roughly three to ten times tighter than the accuracy. The question arises: should the stability and the accuracy be added together to get the total uncertainty of the reading, or is the stability insignificant?
To answer this question, let’s first examine when control stability is not part of the accuracy specification of the instrument. The accuracy specification is often shorthand for total measurement accuracy. The key term is “measurement”: measurement is independent of control.
Most pressure controllers have three modes of operation: measure, control and vent. As their names suggest, measure mode deactivates the pressure controller’s regulation feature and displays the values sensed by the internal transducers. Control mode actively engages the controller to drive to a pre-defined set point, using the internal transducer’s reading for feedback. Vent mode opens the system, including the device under test, to atmospheric pressure.
In measure mode, the internal sensor of the controller is directly connected to the device under test (DUT), bypassing the control regulator completely. Therefore, the stability of the controller does not influence the pressure measured by either sensor. In this mode, you can think of the controller as a digital gauge, as it is only measuring and displaying the pressure on the system.
In control mode, there are situations where the control stability may or may not be significant to the accuracy of the measurement.
In a common setup, the pressure controller is connected to the DUT via the Measure/Control port. The controller will drive the system to a few cardinal points in the DUT range, typically 2 to 11 points. Once the controller indicates a stable reading at a particular point, the output of the controller is recorded as the “reference value” and the DUT value is recorded as well. The deviation between these readings represents the “measurement error” in the calibration process.
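The bookkeeping described above can be sketched in a few lines. This is a minimal illustration, not from the article: the setpoints and readings below are made-up values for a 100 psi range.

```python
# Sketch of the calibration record described above. All values here are
# hypothetical, chosen only to illustrate the bookkeeping.
cardinal_points = [0.0, 25.0, 50.0, 75.0, 100.0]  # psi setpoints across the DUT range

# Simulated stable readings at each point: (controller "reference value", DUT reading)
readings = {
    0.0:   (0.001,  0.004),
    25.0:  (25.002, 24.996),
    50.0:  (49.999, 50.008),
    75.0:  (75.001, 74.994),
    100.0: (99.998, 100.006),
}

for setpoint in cardinal_points:
    reference, dut = readings[setpoint]
    error = dut - reference  # the "measurement error" at this cardinal point
    print(f"{setpoint:6.1f} psi: reference={reference:8.3f}  DUT={dut:8.3f}  error={error:+.3f}")
```

The deviation column is what ends up on the calibration certificate for each cardinal point.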
In certain applications that are not as time sensitive, many readings are taken at each point while the controller indicates a stable value. This is primarily done to avoid any false positives and to overcome any manual recording errors.
Consider a controller like the CPC6050. For discussion’s sake, let’s pick one with a measurement accuracy of 0.01% FS and a control stability of 0.003% FS. For a 100 psi internal transducer, the accuracy is +/- 0.01 psi and the control stability is +/- 0.003 psi. The controller will indicate that it is stable when the reading stays within +/- 0.003 psi of the setpoint for a pre-determined, user-selectable period of time, usually about 2 to 3 seconds. The controller will indicate that it has achieved stability (the pressure indication turns green) even if the pressure fluctuates around the 100 psi setpoint from 99.997 to 100.003 psi. However, any controlled point still carries a measurement uncertainty of +/- 0.01 psi: the true value has a normal probability distribution between 99.99 and 100.01 psi, which is much wider than the stability window. The controller will continually indicate its current reading regardless of the small fluctuation within the stability window. This pressure is sensed simultaneously by the internal sensor of the controller and the device under test. Therefore, if the readings from the controller and the DUT are polled simultaneously by an automated recording device, the stability will not have an effect on the accuracy of the measurement.
On the other hand, if the readings are taken by an operator, the time between reading the controller and reading the DUT may be long enough for the fluctuation within the stability window to become significant. In this situation, both specifications must be considered: the control stability and the transducer accuracy should be root-sum-squared (RSS) to obtain the overall system measurement accuracy. To minimize this effect, the operator can take several readings and average them.
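For the 100 psi example used earlier, the RSS combination works out as follows (a straightforward arithmetic sketch, using the specs quoted above):

```python
import math

# RSS combination of transducer accuracy and control stability for the
# operator-reading case, using the 100 psi range example from the text.
accuracy = 0.01    # psi (0.01% FS of 100 psi)
stability = 0.003  # psi (0.003% FS of 100 psi)

combined = math.sqrt(accuracy**2 + stability**2)
print(f"combined uncertainty: +/- {combined:.4f} psi")  # ~0.0104 psi
```

Because the stability term enters in quadrature, it inflates the 0.01 psi accuracy only slightly here; the tighter the stability relative to the accuracy, the less it matters.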
In conclusion, control stability is just one factor contributing to the overall measurement accuracy of the instrument. When only measuring pressure, the accuracy of the reference standard is the sole consideration. When actively controlling to a setpoint, control stability should be considered in those instances where the fluctuation it defines can affect the recorded readings.