Pressure controllers are fast becoming the most popular secondary calibration standard because of their overall efficiency and ease of use. A pressure controller contains a measuring element (transducer) that accurately measures pressure and a controlling element (regulator) that accurately generates and sustains pressure.
Control stability is a measure of how much the controlled pressure fluctuates around a set point. A manufacturer's specification for this is often expressed as a "% of range" or "% of full scale." For example, a controller specified at 0.003% of range stability on a 100 psi measurement range would consider any reading stable within +/- 0.003% of 100 psi, i.e., within +/- 0.003 psi of the desired set point. The narrower the control stability window, the better the control performance of the instrument.
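To make the arithmetic concrete, the short Python sketch below converts a "% of range" stability specification into an absolute tolerance and checks whether a reading falls inside that window. The function and variable names are illustrative only, not part of any vendor API; the values mirror the 0.003% of 100 psi example above.

```python
def stability_window(full_scale: float, spec_percent_of_range: float) -> float:
    """Return the +/- tolerance (in the same units as full_scale) implied by a % of range spec."""
    return full_scale * spec_percent_of_range / 100.0

def is_stable(reading: float, set_point: float, tolerance: float) -> bool:
    """A reading counts as stable if it stays within +/- tolerance of the set point."""
    return abs(reading - set_point) <= tolerance

tol = stability_window(full_scale=100.0, spec_percent_of_range=0.003)
print(tol)                            # 0.003 psi
print(is_stable(50.0021, 50.0, tol))  # True: within the +/- 0.003 psi window
```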
In a typical calibration process, measurements are taken at cardinal points across the range of the device under test (DUT). To reach these cardinal points with a pressure controller, the operator enters each set point on the instrument and lets it control the pressure to that point. The instrument's control stability determines how close the output comes to the commanded cardinal point.
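The sketch below outlines how such a sequence might be automated. The `controller` and `dut` objects and their `set_pressure`/`read_pressure` methods are hypothetical placeholders for whatever instrument driver is actually in use; the cardinal points and tolerance follow the 0-100 psi example above.

```python
import time

CARDINAL_POINTS = [0, 20, 40, 60, 80, 100]   # psi, example cardinal points for a 0-100 psi DUT
TOLERANCE = 0.003                            # psi, from the 0.003 % of range stability spec

def wait_until_stable(controller, set_point, tolerance, settle_readings=5, period_s=1.0):
    """Block until `settle_readings` consecutive readings fall inside the stability window."""
    consecutive = 0
    while consecutive < settle_readings:
        reading = controller.read_pressure()
        consecutive = consecutive + 1 if abs(reading - set_point) <= tolerance else 0
        time.sleep(period_s)

def run_calibration(controller, dut):
    """Step through the cardinal points, settle at each one, then record both instruments."""
    results = []
    for point in CARDINAL_POINTS:
        controller.set_pressure(point)                 # command the set point
        wait_until_stable(controller, point, TOLERANCE)
        results.append((point, controller.read_pressure(), dut.read_pressure()))
    return results
```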
A wide stability window also increases the risk of unreliable measurements: the operator has to take numerous measurements within the stability window and then aggregate them into a mean value. This can greatly extend the time and resources needed to perform accurate calibrations, because it requires an elaborate, automated setup to capture simultaneous static readings from the pressure controller and the DUT.
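As a rough illustration of that aggregation step, the sketch below averages several simultaneously captured controller/DUT reading pairs; the numeric values are invented for the example.

```python
from statistics import mean

# Each pair is (controller reading, DUT reading) captured at the same instant.
reading_pairs = [
    (49.996, 50.021),
    (50.004, 50.029),
    (49.999, 50.025),
    (50.002, 50.027),
]

reference_mean = mean(ref for ref, _ in reading_pairs)
dut_mean = mean(dut for _, dut in reading_pairs)
print(f"reference mean: {reference_mean:.4f} psi, DUT mean: {dut_mean:.4f} psi")
```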
One of the most desirable features of a pressure controller is its ability to act as a stable pressure generator that is unaffected by supply transients. This ability is directly tied to the instrument's control stability: the narrower the stability window, the closer the generated pressure stays to the commanded set point.