Calibrating HART Transmitters - Pressure Calibration
For a conventional 4-20 mA instrument, a multiple-point test that simulates the input and measures the output is sufficient to characterise the overall accuracy of the transmitter. The normal calibration adjustment involves setting only the zero value and the span value, since there is effectively only one adjustable operation between the input and output. This procedure is often referred to as a zero and span calibration. If the relationship between the input and output range of the instrument is not linear, then the transfer function must be known before expected outputs can be calculated for each input value. Without knowing the expected output values, you cannot calculate the performance errors.
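The arithmetic behind such a test can be sketched as follows. The test points and measured values are made up for illustration, and a linear transfer function is assumed:

```python
def expected_ma(applied, lrv, urv):
    """Expected 4-20 mA output for an applied input, linear transfer function."""
    return 4.0 + 16.0 * (applied - lrv) / (urv - lrv)

# Hypothetical multiple-point test of a 0-100 inH2O transmitter
for applied, measured in [(0.0, 4.02), (50.0, 12.01), (100.0, 19.97)]:
    error_pct = (measured - expected_ma(applied, 0.0, 100.0)) / 16.0 * 100.0
    print(f"{applied:5.1f} inH2O  expected {expected_ma(applied, 0.0, 100.0):5.2f} mA  "
          f"measured {measured:5.2f} mA  error {error_pct:+.2f}% of span")
```

Without the transfer function (here, the linear formula inside `expected_ma`), the expected column cannot be filled in and no error can be computed.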
Calibrating a HART instrument
For a HART instrument, a multiple point test between input and output does not provide an accurate representation of the transmitter's operation. Just like a conventional transmitter, the measurement process begins with a technology that converts a physical quantity into an electrical signal. However, the similarity ends there.
Instead of a purely mechanical or electrical path between the input and the resulting 4-20 mA output signal, a HART transmitter has a microprocessor that manipulates the input data. There are typically three calculation sections involved, and each of these sections may be individually tested and adjusted.
At the beginning, the instrument's microprocessor measures some electrical property that is affected by the process variable of interest. The measured value may be millivolts, capacitance, reluctance, inductance, frequency, or some other property. However, before it can be used by the microprocessor, it must be transformed to a digital count by an analog-to-digital (A/D) converter. The microprocessor must rely upon some form of equation or table to relate the raw count value of the electrical measurement to the actual property (PV) of interest, such as temperature, pressure, or flow. The principal form of this table is usually established by the manufacturer, but most HART instruments include commands to perform field adjustments. This is often referred to as a sensor trim. The output of the first block is a digital representation of the process variable. When you read the process variable using a field communicator, this is the value that you see.
Next is a mathematical conversion from the process variable to the equivalent milliamp representation. The range values of the instrument (related to the zero and span values) are used in conjunction with the transfer function to calculate this value. Although a linear transfer function is the most common, pressure transmitters often have a square root option. Other special instruments may implement common mathematical transformations or user-defined break-point tables. The output of the second block is a digital representation of the desired instrument output. When you read the loop current using a communicator, this is the value that you see. Many HART instruments support a command that puts the instrument into a fixed output test mode. This overrides the normal output of the second block and substitutes a specified output value.
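The second block's arithmetic can be sketched as below, assuming the common linear and square-root transfer functions. The function name is illustrative, not a vendor API:

```python
import math

def pv_to_ma(pv, lrv, urv, transfer="linear"):
    """Convert the digital PV to its milliamp representation (second block)."""
    x = (pv - lrv) / (urv - lrv)        # fraction of span
    if transfer == "sqrt":              # square-root option, e.g. DP flow
        x = math.sqrt(max(x, 0.0))
    return 4.0 + 16.0 * x

# pv_to_ma(50, 0, 100)          -> 12.0 mA (linear: 50% of span)
# pv_to_ma(25, 0, 100, "sqrt")  -> 12.0 mA (25% DP is 50% flow)
```

With the square-root option, 25% of the differential pressure span already corresponds to 50% of flow, hence the same 12 mA as the linear case at 50%.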
Last is the output section, where the calculated output value is converted to a count value that can be loaded into a digital-to-analog (D/A) converter. This produces the actual analog electrical signal. Once again, the microprocessor must rely on some internal calibration factors to get the output correct. Adjusting these factors is often referred to as a current loop trim or 4-20 mA trim.
Differs from conventional
Based on this analysis, you can see why a proper calibration procedure for a HART instrument is significantly different from that for a conventional instrument. The specific calibration requirements depend upon the application.
If the application uses the digital representation of the process variable for monitoring or control, then the sensor input section must be explicitly tested and adjusted. This reading is completely independent of the milliamp output and has nothing to do with the zero or span settings. The PV as read via HART communication continues to be accurate even when it is outside the assigned output range. For example, a range 2 Rosemount 3051C has sensor limits of -250 to +250 inches of water. If you set the range to 0-100 inches of water, and then apply a pressure of 150 inches of water, the analog output will saturate at just above 20 milliamps. However, a HART communicator can still read the correct pressure.
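The saturation behaviour can be sketched like this. The 3.9 and 20.8 mA saturation limits are assumed typical values; the actual limits vary by manufacturer and configuration:

```python
def loop_current(pv, lrv, urv, sat_low=3.9, sat_high=20.8):
    """Analog output clamps at saturation limits; the digital PV does not."""
    ma = 4.0 + 16.0 * (pv - lrv) / (urv - lrv)
    return min(max(ma, sat_low), sat_high)

# Range set to 0-100 inH2O, 150 inH2O applied:
# loop_current(150, 0, 100) -> 20.8 mA (saturated),
# while the PV read over HART still reports 150 inH2O.
```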
If the current loop output is not used (that is, the transmitter is used as a digital only device), then input section calibration is all that is required. If the application uses the milliamp output, then the output section must be explicitly tested and calibrated. Note that this calibration is independent of the input section, and, again, has nothing to do with the zero and span settings.
Calibrating input, output sections
To calibrate the input section, the same basic multiple-point test and adjust technique is employed, but with a new definition for output. To run a test, use a calibrator to measure the applied input, but read the associated output (PV) with a communicator. Error calculations are simpler since there is always a linear relationship between the input and output, and both are recorded in the same engineering units. In general, the desired accuracy for this test will be the manufacturer's accuracy specification.
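A sketch of the error check for an input-section test; the 0.075%-of-span specification and the test data are illustrative:

```python
def input_section_passes(points, spec, span):
    """points: (applied, pv_read) pairs in the same engineering units.
    spec: allowed error as a fraction of span (e.g. 0.00075 for 0.075%)."""
    return all(abs(pv - applied) <= spec * span for applied, pv in points)

# 0-100 inH2O test against an assumed 0.075%-of-span specification
points = [(0.0, 0.01), (50.0, 50.03), (100.0, 99.98)]
print(input_section_passes(points, 0.00075, 100.0))  # True: all within 0.075 inH2O
```

Because applied input and PV are in the same units, the error at each point is a simple subtraction, with no mA conversion involved.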
If it does not pass the test, then follow the manufacturer's recommended procedure for trimming the input section. This may be called a sensor trim and typically involves one or more trim points. Pressure transmitters also often have a zero trim, where the input calculation is adjusted to read exactly zero (not low range). Do not confuse a trim with any form of re-ranging or any procedure that involves using zero and span buttons.
To calibrate the output section, the same basic multiple-point test and adjust technique is employed, but with a new definition for input. To run a test, use a communicator to put the transmitter into a fixed current output mode. The input value for the test is the mA value that you instruct the transmitter to produce. Obtain the output value by using a calibrator to measure the resulting current. This test also implies a linear relationship between the input and output, and both are recorded in the same engineering units (milliamps). The desired accuracy for this test should also reflect the manufacturer's accuracy specification.
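The output-section check follows the same pattern; commanded and measured values are both in milliamps, and the tolerance shown is an assumed figure:

```python
def output_section_passes(points, tol_ma=0.008):
    """points: (commanded_ma, measured_ma) pairs from fixed-output test mode."""
    return all(abs(measured - commanded) <= tol_ma
               for commanded, measured in points)

# Fixed-output test at the two ends of the range
print(output_section_passes([(4.000, 4.002), (20.000, 19.995)]))  # True
```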
If it does not pass the test, then follow the manufacturer's recommended procedure for trimming the output section. This may be called a 4-20 mA trim, a current loop trim or a D/A trim. The trim procedure should require two trim points close to or just outside of 4 and 20 mA. Do not confuse this with re-ranging or a procedure that involves using zero and span buttons.
After both the input and output sections are calibrated, a HART transmitter should operate correctly. The middle block only involves computations. That is why you can change the range, units and transfer function without necessarily affecting the calibration. Also, even if the instrument has an unusual transfer function, it operates only in the conversion of the input value to a milliamp output value and, therefore, is not involved in the testing or calibration of either the input or output sections.
To validate the overall performance of a HART transmitter, run a zero and span test just like a conventional instrument. As you will soon see, however, passing this test does not necessarily indicate that the transmitter is operating correctly.
Many HART instruments support a parameter called damping. If damping is not set to zero, it can adversely affect tests and adjustments. Damping introduces a delay between a change at the instrument input and the appearance of that change in both the digital PV reading and the corresponding instrument output value.
This damping-induced delay may exceed the settling time used in the test or calibration. The settling time is the amount of time the test or calibration waits between setting the input and reading the resulting output. Adjust the instrument's damping value to zero prior to performing tests or adjustments. After calibration, be sure to return the damping constant to its required value.
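The interaction between damping and settling time can be estimated as below. Modelling damping as a simple first-order (exponential) lag is an assumption about the instrument's filter:

```python
import math

def settled_fraction(wait, damping):
    """Fraction of a step change visible after `wait` seconds of settling,
    modelling damping as a first-order lag with the given time constant."""
    return 1.0 if damping <= 0 else 1.0 - math.exp(-wait / damping)

# With a 2 s damping constant and a 5 s settling time, only ~92% of a
# step change has appeared in the reading; the rest shows up as error.
print(round(settled_fraction(5.0, 2.0), 3))  # 0.918
```

This is why setting damping to zero before testing is safer than trying to stretch the settling time to outlast the lag.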
While there are many benefits to using HART transmitters, they should be calibrated using a procedure appropriate to their function. If the transmitter is part of an application that retrieves digital process values for monitoring or control, then the standard calibration procedures for conventional instruments are inadequate. At a minimum, the sensor input section of each instrument must be calibrated. If the application also uses the current loop output, then the output section must be calibrated.
Digital Range Change
There is a common misconception that changing the range of a HART instrument by using a communicator somehow calibrates the instrument. Remember that a true calibration requires a reference standard, usually in the form of calibration equipment, to provide an input and measure the output.
A range change does not reference any external calibration standards, so it is a configuration change, not a calibration. Changing the range only affects the second block.
Zero and span adjustment
Using only the zero and span adjustments to calibrate a HART transmitter (the standard practice with conventional transmitters) often corrupts the internal digital readings. There is more than one output to consider: the digital PV and milliamp values read by a communicator are also outputs, just like the analog current loop.
Consider what happens when using the external zero and span buttons to adjust a HART instrument. Suppose that an instrument technician installs and tests a differential pressure transmitter that was set at the factory for a range of 0 to 100 inches of water. Testing the transmitter reveals that it now has a 1 inch of water zero shift. Thus with both ports vented (zero), its output is 4.16 mA instead of 4.00 mA, and when applying 100 inches of water, the output is 20.16 instead of 20.00 mA. To fix this, the technician vents both ports and presses the zero button on the transmitter. The output goes to 4.00 mA, so it appears that the adjustment was successful. However, if the technician now checks the transmitter with a communicator, the range will be 1 to 101 inches of water, and the PV will read 1 inch of water instead of 0.
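The button's effect can be modelled as a pure re-range; `press_zero_button` is a hypothetical name for illustration:

```python
def press_zero_button(lrv, urv, pv_at_press):
    """Model of the zero button: re-range so the current (uncorrected) PV
    maps to 4 mA. The span is preserved; the sensor is NOT trimmed."""
    span = urv - lrv
    return pv_at_press, pv_at_press + span   # new (LRV, URV)

# Factory range 0-100 inH2O with an uncorrected 1 inH2O zero shift:
# the PV reads 1 with both ports vented, so pressing zero yields
print(press_zero_button(0.0, 100.0, 1.0))   # (1.0, 101.0)
```

The shifted range 1-101 makes the milliamp output look correct while the underlying sensor error remains.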
Zero and span buttons only change the range because the instrument does not know the actual value of the reference input. Only a digital command that conveys the reference value enables the instrument to make appropriate internal adjustments.
The proper way to correct a zero shift condition is to use a zero trim. This adjusts the instrument input block so the digital PV agrees with the calibration standard. When using the digital process values for trending, statistical calculations etc., disable the external zero and span buttons and avoid using them entirely.
Loop Current Adjustment
Another observed practice among instrument technicians is to use a hand-held communicator to adjust the current loop so that an accurate input to the instrument agrees with some display device on the loop.
Refer again to the zero shift example. Suppose there is a digital indicator in the loop that displays 0.0 at 4 mA and 100.0 at 20 mA. During testing, it read 1.0 with both ports vented, and 101.0 with 100 inches of water applied. Using the communicator, the technician performs a current loop trim so that the display reads correctly at 0 and 100.
While this appears to be successful, there is a fundamental problem. The communicator will show that the PV still reads 1 and 101 inches of water at the test points, and the digital reading of the mA output still reads 4.16 and 20.16 mA, even though the actual output is 4 and 20 mA. The calibration problem in the input section has been hidden by introducing a compensating error in the output section, so that neither of the digital readings agrees with the calibration standards.
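The compensating-error situation can be traced numerically. The 1 inH2O sensor offset and -0.16 mA trim offset are chosen to reproduce the example above:

```python
def digital_pv(applied, sensor_offset=1.0):
    """Input section with an uncorrected 1 inH2O zero shift."""
    return applied + sensor_offset

def digital_ma(pv, lrv=0.0, urv=100.0):
    """Second block: PV to digital milliamp representation."""
    return 4.0 + 16.0 * (pv - lrv) / (urv - lrv)

def actual_ma(digital, trim_offset=-0.16):
    """Output section after the compensating current loop trim."""
    return digital + trim_offset

for applied in (0.0, 100.0):
    pv = digital_pv(applied)      # reads 1.0 and 101.0 inH2O
    d = digital_ma(pv)            # reads 4.16 and 20.16 mA
    a = actual_ma(d)              # measures 4.00 and 20.00 mA
    print(f"applied {applied:5.1f}: PV {pv:6.2f}  digital {d:5.2f} mA  actual {a:5.2f} mA")
```

The measured loop current is exactly right at both test points, yet both digital readings remain wrong, which is the hidden failure the article describes.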