An RCIM sensor is a sensor whose electrical resistance, capacitance, inductance, electric field, or magnetic field changes when it is exposed to a physical, chemical, or biological stimulus. Its output may be a voltage, current, electrical charge, time, or frequency that reflects these changes. The most important characteristics of RCIM sensors are the following:
Characteristics of RCIM Sensors
Transfer Function
The transfer function of an RCIM sensor describes the relationship between its input and its output, and is often expressed by an equation, a graph, or a table.
For example, the Infineon Technologies KP125 capacitive pressure sensor has a transfer function of
Vout = (aPin + b)Vdd
This transfer function contains important information about the sensor: the sensitivity a, the offset b, and the relationship between the input pressure Pin and its output voltage Vout under a constant power supply Vdd. This input–output relationship can also be described graphically (Figure 1).
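As a minimal sketch of how such a linear transfer function is used in practice, the Python snippet below evaluates the forward function and inverts it to recover pressure from a measured output voltage. The coefficient values a, b, and Vdd are illustrative assumptions, not actual KP125 datasheet figures.

    # Linear transfer function Vout = (a*Pin + b)*Vdd and its inverse.
    # Coefficients are illustrative assumptions, not KP125 datasheet values.
    A = 0.004    # sensitivity a, in 1/kPa (assumed)
    B = 0.10     # offset b, dimensionless (assumed)
    VDD = 5.0    # supply voltage Vdd, in volts

    def output_voltage(p_in_kpa):
        # Forward transfer function: input pressure -> output voltage.
        return (A * p_in_kpa + B) * VDD

    def pressure_from_voltage(v_out):
        # Inverse transfer function: output voltage -> input pressure.
        return (v_out / VDD - B) / A

    print(output_voltage(100.0))        # 2.5 (V)
    print(pressure_from_voltage(2.5))   # 100.0 (kPa)

The inverse function is what a data-acquisition system actually applies to raw readings to report a pressure value.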
A sensor’s transfer function is also used for calibration and performance prediction. Ideally, a sensor should have a linear transfer function; that is, the sensor’s output should be linearly proportional to its input.
In reality, most sensors display some degree of nonlinearity; thus, linearization of the transfer function at an operating point is often involved in sensor design, modeling, and control.
Sensitivity
Sensitivity, the slope of a sensor’s transfer function, is defined as the ratio of a small change in the sensor’s output to a small change in its input. In some sensors, the sensitivity is expressed as the input parameter change required to produce a standardized output change.
In others, sensitivity is described by the output change for a given input change under the same excitation condition, such as a constant-voltage power supply.
Sensitivity error or sensitivity drift is the actual sensitivity deviation from the ideal sensitivity, usually expressed as a percentage:
Sensitivity error or drift = (actual sensitivity – ideal sensitivity) x 100/ideal sensitivity
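This formula translates directly into code. In the short sketch below, the ideal and actual sensitivity values are hypothetical.

    # Sensitivity error/drift as a percentage of the ideal sensitivity.
    def sensitivity_error_pct(actual, ideal):
        return (actual - ideal) * 100.0 / ideal

    # Hypothetical: ideal sensitivity 20 mV/kPa, actual 19.4 mV/kPa.
    print(sensitivity_error_pct(19.4, 20.0))   # -3.0 (%)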
For many sensors, a change in sensitivity is often caused by temperature fluctuation. A curve that shows how sensitivity changes as the temperature varies is often found in a sensor’s datasheet. If such a sensor is used at different temperatures, a calibration at each of these temperatures must be performed.
Offset
When the measured property is zero but the sensor’s output is not, the sensor has an offset. Offset is also called zero or null offset, zero or null drift, offset error, bias, DC offset, or DC component.
An offset can be described by either a sensor’s output value at its zero input (Figure 2a) or the difference between the sensor’s actual output value and a specified ideal output value (Figure 2b).
In Figure 2a, the sensor’s offset is indicated by the value b measured from the zero input point (origin) along the vertical (output) axis. In Figure 2b, the sensor’s offset b is indicated by the difference between the sensor’s actual output and its ideal output on the vertical (output) axis.
Offset occurs because of calibration errors, sensor degradation over time, or environmental changes. Among these, temperature change is the primary cause of drift.
Usually, the transfer function curve at room temperature (e.g., 25°C) is used as the reference or ideal curve, while curves at other temperatures are considered as actual curves. Offset error can be easily removed from a sensor’s outputs by subtracting the constant b from the sensor’s actual outputs or through a calibration process.
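A minimal sketch of this offset removal, assuming the offset b is estimated by averaging several readings taken with zero input applied (all values hypothetical):

    # Estimate offset b from readings taken at zero input, then subtract it.
    zero_input = [0.52, 0.61, 0.55]          # hypothetical outputs at zero input
    b = sum(zero_input) / len(zero_input)    # offset estimate (0.56 here)
    readings = [1.32, 1.75, 2.10]            # hypothetical measurement outputs
    corrected = [v - b for v in readings]
    print(corrected)                         # offset-free outputs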
Full Span and Full-Span Output
Full span or full scale (FS), or span for short, is a term used to describe a sensor’s measurement range or limitation. It represents the broadest range, from minimum to maximum, of an input parameter that can be measured by a sensor without causing unacceptable inaccuracies.
Full-span output (FSO) is a term used to describe a sensor’s dynamic range or output range—the difference between a sensor’s outputs measured with the minimum input stimulus and the maximum input stimulus.
Figure 3, taking a piezoresistive pressure sensor as an example, illustrates FS and FSO in an ideal transfer function.
FS is the range between the minimum pressure Pmin and the maximum pressure Pmax that can be measured by the sensor; FSO is the difference between the output voltages Vmin and Vmax corresponding to Pmin and Pmax, respectively.
For an actual transfer function, FSO should include all deviations from the ideal transfer function as shown in Figure 4. The FS of a sensor can be unipolar (either positive or negative values of the measurand) or bipolar. Bipolar ranges may be symmetric or asymmetric.
Some sensor datasheets also provide overload in addition to FS and FSO ranges. The overload can be specified as a value or as a percent of the span. If a sensor is overloaded, it will no longer perform within the specified tolerances or accuracy.
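As a numeric illustration, the following sketch computes FS and FSO for the linear pressure-sensor model used earlier; the coefficients and pressure range are assumed, not taken from any datasheet.

    # FS and FSO for a linear sensor Vout = (a*Pin + b)*Vdd (values assumed).
    A, B, VDD = 0.004, 0.10, 5.0
    P_MIN, P_MAX = 0.0, 250.0                    # measurable pressure range, kPa
    v_min = (A * P_MIN + B) * VDD                # output at minimum stimulus
    v_max = (A * P_MAX + B) * VDD                # output at maximum stimulus
    print(f"FS  = {P_MAX - P_MIN} kPa")          # full span: 250.0 kPa
    print(f"FSO = {v_max - v_min:.2f} V")        # full-span output: 5.00 V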
Accuracy
Accuracy is defined as the maximum deviation of a value measured by the sensor from its true value. It can be represented in one of the following forms:
- Absolute accuracy: in terms of the measured parameter (e.g., pressure or acceleration), or in terms of the output parameter (e.g., voltage or resistance).
- Relative accuracy: in terms of a percentage of the maximum measurement error versus the true value or in terms of a percentage of the maximum measurement error versus the full span.
For example, the accuracy of a piezoresistive pressure sensor may be specified as
- ±0.5 kPa (in terms of measured parameter),
- ±0.05 Ω (in terms of the output resistance value), or
- ±0.5% (relative accuracy with respect to the true value).
Many factors can affect the accuracy of a sensor, including temperature fluctuation, linearity, hysteresis, repeatability, stability, zero offset, A/D (analog-to-digital) conversion error, and display resolution. The overall accuracy of a sensor can be expressed by the following equation:
Overall accuracy = ±√(e1² + e2² + e3² + …)
where e1, e2, e3, … represent the individual contributing components of accuracy.
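A small helper implementing this root-sum-square combination is shown below; the three error components are hypothetical values in %FS.

    import math

    # Root-sum-square combination of independent error components.
    def overall_accuracy(components):
        return math.sqrt(sum(e * e for e in components))

    # Hypothetical components in %FS: linearity, hysteresis, repeatability.
    print(f"{overall_accuracy([0.25, 0.15, 0.10]):.2f} %FS")   # ~0.31 %FS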
Hysteresis
A sensor may produce different outputs when measuring the same quantity depending on the “direction” in which the value has been approached.
The maximum difference, in terms of the measured quantity, is defined as the hysteresis error, denoted δh: the maximum deviation between the increasing and decreasing cycles measured at the same input point (see Figure 5).
The percentage hysteresis error is determined by taking the maximum deviation and dividing it by FS:
δh = Max(Δh) x 100/FS
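The following sketch applies this formula to paired up-sweep and down-sweep readings taken at the same input points; the data and FS value are hypothetical.

    # Percentage hysteresis error from up/down sweeps at the same input points.
    up   = [0.00, 1.02, 2.05, 3.01, 4.00]   # outputs with input increasing
    down = [0.04, 1.10, 2.12, 3.05, 4.00]   # outputs with input decreasing
    FS = 4.0                                # full span, output units (assumed)
    delta_h = max(abs(u - d) for u, d in zip(up, down))
    print(f"{delta_h * 100.0 / FS:.2f} %FS")   # 2.00 %FS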
Hysteresis errors are typically caused by friction, sensor structure, sensor material properties, or temperature and humidity changes.
Nonlinearity
Nonlinearity describes the “straightness” of a sensor’s transfer function. It can be expressed as the maximum deviation of a real transfer function from its best-fit straight line, Δl, or as a percentage of Δl over the full span:
Nonlinearity (%) = Max(Δl) x 100/FS
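A short sketch of this calculation, using a least-squares line as the best-fit reference; the input-output data are hypothetical.

    import numpy as np

    # Nonlinearity as max deviation from the least-squares best-fit line, in %FS.
    x = np.array([0.0, 25.0, 50.0, 75.0, 100.0])   # input stimulus (hypothetical)
    y = np.array([0.02, 1.08, 2.10, 3.05, 4.01])   # measured output (hypothetical)
    slope, intercept = np.polyfit(x, y, 1)         # best-fit straight line
    delta_l = np.max(np.abs(y - (slope * x + intercept)))
    fs = y.max() - y.min()                         # full span of the output
    print(f"Nonlinearity = {delta_l * 100.0 / fs:.2f} %FS")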
Nonlinearity should not be confused with accuracy. The latter indicates how close a measured value is to its true value, but not how straight the transfer function is.
Noise and Signal-to-Noise Ratio
All sensors generate noise in addition to their output signals. Sensor noise can be defined as any deviation or fluctuation of the output from its expected value. To describe these fluctuations, a simple average is meaningless, since the average of the random variations is zero. Instead, the root mean square (RMS) of the deviations from Vaverage over a time interval T is used:
Vn² = (1/T) ∫₀ᵀ (V − Vaverage)² dt
where Vn² is the mean-square noise voltage and √(Vn²) is the RMS noise voltage.
The extent to which noise becomes significant in a sensing or measurement process depends on the amplitude of the signal of interest relative to the unwanted noise, quantified by the signal-to-noise ratio (SNR or S/N ratio). If the noise is small compared to the signal level, the SNR is large and the noise becomes unimportant. In sensor design, one always tries to maximize the SNR.
The SNR can be calculated by
SNR = Ps/Pn = (Vs/Vn)²
where Ps and Pn are the average powers of the signal and noise, respectively, and Vs and Vn are the RMS voltages of the signal and noise, respectively.
Another quantitative measurement of noise is noise factor, Fn, defined as
Fn = SNR at input/SNR at output
The noise factor Fn specifies how much additional noise the sensor contributes beyond the noise already received from the source. Ideally, Fn = 1.
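The sketch below computes the RMS noise and SNR from a handful of sampled output voltages and then evaluates a noise factor from assumed input and output SNRs; all numbers are hypothetical.

    import math

    # RMS noise and SNR from sampled output voltages (hypothetical data).
    samples = [2.501, 2.498, 2.503, 2.499, 2.500]
    v_avg = sum(samples) / len(samples)
    v_noise = math.sqrt(sum((v - v_avg) ** 2 for v in samples) / len(samples))
    snr_db = 20.0 * math.log10(v_avg / v_noise)     # signal taken as the mean
    print(f"RMS noise = {v_noise * 1e3:.3f} mV, SNR = {snr_db:.1f} dB")

    # Noise factor: SNR at input over SNR at output (power ratios); ideally 1.
    snr_in, snr_out = 1000.0, 800.0                 # assumed power-ratio SNRs
    print(f"Fn = {snr_in / snr_out:.2f}")           # >1: the sensor adds noise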
Resolution
The resolution of a sensor specifies the smallest change of input parameter that can be detected and reflected in the sensor’s output.
In some sensors, when an input parameter continuously changes, the sensor’s output may not change smoothly but in small steps. This typically happens in potentiometric sensors and infrared occupancy detectors with grid masks. Resolution can be described either in absolute terms or relative terms (in percentage of FS).
For example, a Honeywell HMC100 magnetic sensor has a resolution of 27 μgauss (microgauss) in absolute terms, meaning that a minimum change of 27 μgauss can be detected by the sensor.
An Analog Devices ADT7301 digital temperature sensor has a resolution of 0.03°C/LSB (least significant bit), meaning that one bit distinguishes a 0.03°C temperature change; a SENSIRION SHT1 humidity sensor has a resolution of 0.05% relative humidity (RH), meaning that the sensor’s resolution is 0.05% of its span.
The resolution of a sensor must be finer than the accuracy required by the measurement. For instance, if a measurement requires accuracy within 0.02 μm, then the resolution of the sensor must be better than 0.02 μm.
Factors that affect resolution vary from sensor to sensor. For most capacitive sensors, the primary factor is electrical noise. Taking a capacitive displacement sensor as an example, even when the distance between the sensor and the target is constant, the voltage output still fluctuates slightly due to the “white” noise of the system. Assuming no signal conditioning, one cannot detect a shift in the voltage output that is smaller than the peak-to-peak noise voltage.
Because of this, the resolution of most capacitive sensors is evaluated as the peak-to-peak noise divided by the sensitivity:
Resolution = peak-to-peak value of noise/sensitivity
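Assuming the noise samples and sensitivity below (both hypothetical), the resolution estimate follows directly:

    # Resolution = peak-to-peak noise / sensitivity (values hypothetical).
    noise_v = [2.5001, 2.4998, 2.5003, 2.4997, 2.5002]   # output, static target
    vpp = max(noise_v) - min(noise_v)                    # peak-to-peak noise, V
    sensitivity = 0.010                                  # V per micrometer
    print(f"Resolution ~ {vpp / sensitivity:.3f} um")    # ~0.060 um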
Precision and Repeatability Error
Precision refers to the degree of repeatability or reproducibility of a sensor. That is, if exactly the same value is measured a number of times, an ideal sensor would produce exactly the same output every time. In reality, sensors output a range of values distributed in some manner relative to the actual correct value. Precision can be expressed mathematically as the standard deviation of the measurements:
σ = √[Σ(xn − x̄)²/(N − 1)]
where xn is the value of the nth measurement and x̄ is the average of the set of N measurements.
Repeatability error describes the inability of a sensor to reproduce the same output when measuring the same value under identical conditions. It may be caused by thermal noise, charge buildup, material plasticity, and friction. Usually, repeatability error is expressed as a percentage of full scale:
δr = Max (Δr) x 100/FS
Figure 6 shows two runs of the same measurement performed by sensor A and sensor B. Clearly, sensor B has a smaller repeatability error than sensor A.
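A minimal sketch computing both quantities from one run of repeated measurements; the readings and full-scale value are hypothetical.

    import statistics

    # Precision (sample standard deviation) and repeatability error in %FS.
    run = [10.02, 9.98, 10.05, 9.97, 10.01]   # repeated readings, same input
    sigma = statistics.stdev(run)             # precision, (N-1) in denominator
    delta_r = max(run) - min(run)             # maximum spread between repeats
    FS = 20.0                                 # full scale, same units (assumed)
    print(f"precision = {sigma:.4f}")
    print(f"repeatability error = {delta_r * 100.0 / FS:.2f} %FS")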
Calibration and Calibration Error
Many sensors require calibration prior to use. For some sensors, calibration determines the coefficients in their transfer functions; for others, it converts the sensor’s electrical output to a measured value.
Although detailed calibration procedures vary from sensor to sensor, the general procedure involves taking a number of measurements over the full operating range, comparing the measured values with actual values (or plotting the sensor’s input–output curve against a standard/reference curve), and then adjusting the sensor’s parameters to match the actual values (or the standard curve).
The actual values (or the standard curve) may be obtained by using a more accurate sensor as the calibration reference or provided by the sensor’s manufacturer. Smart or advanced sensors may have self-calibration and adaptive features to adjust themselves to different operating conditions.
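As an illustration of this general procedure for a linear sensor, the sketch below fits the transfer-function coefficients against reference values by least squares; the reference stimuli and measured outputs are hypothetical.

    import numpy as np

    # Least-squares calibration of a linear sensor Vout = a*x + b against
    # reference stimuli (all values hypothetical).
    x_ref  = np.array([0.0, 50.0, 100.0, 150.0, 200.0])   # reference inputs
    v_meas = np.array([0.51, 1.49, 2.52, 3.50, 4.49])     # sensor outputs, V
    a, b = np.polyfit(x_ref, v_meas, 1)                   # fitted coefficients

    def measurand(v):
        # Convert a calibrated output voltage back to the measured quantity.
        return (v - b) / a

    print(f"a = {a:.4f} V/unit, b = {b:.3f} V")
    print(measurand(2.0))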
Calibration error is the amount of inaccuracy permitted when a sensor is calibrated. This error is often systematic, meaning that it appears in all real transfer functions. It may be constant or may vary over the measurement range, depending on the type of error in the calibration.
Response Time and Bandwidth
Strictly speaking, sensors do not immediately respond to an input stimulus. Rather, they have a finite response time or rise time tr to an instantaneous change in stimulus. The response time can be defined as the “time required for a sensor’s output to change from its previous state to a final settled value within a tolerance band of the correct new value”.
Usually, tr is determined as the time required for a sensor’s signal to change from a specified low value to a specified high value of its step response. Typically, these values are 10% and 90% of the final value for an overdamped response, 5% and 95% for a critically damped response, and 0% and 100% for an underdamped step response.
The time constant τ represents the time it takes for the sensor’s step response to reach 1 − 1/e ≈ 0.632 or 63.2% of its final value.
Figure 7 indicates a sensor’s response/rise time tr and its time constant τ under a positive step stimulus in an overdamped case.
Overshoot is the amount by which a sensor’s maximum output exceeds its final steady output in response to a step input (see Figure 8). In this case, tr is the time required for the sensor’s output to rise from 0% to 100% of its steady-state value; peak time tp is the time required for the sensor to reach the first peak of its step response; settling time ts is the time required for the sensor’s output to reach and stay within ±2% of its final value.
Steady-state error is the error between the desired and actual output value.
The overshoot in percentage (called percentage overshoot) can be obtained by
PO (percentage overshoot) = (maximum peak value – final value) x 100/final value
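The sketch below extracts these timing metrics from a sampled step response. A synthetic first-order (overdamped) response is used, so the computed percentage overshoot is zero; the time constant and sampling rate are assumed.

    import math

    # Rise time (10-90%), time constant, and overshoot from a sampled step
    # response; synthetic first-order (overdamped) data, tau assumed.
    tau = 0.5                                     # time constant, seconds
    t = [i * 0.01 for i in range(400)]            # 4 s of samples at 100 Hz
    y = [1.0 - math.exp(-ti / tau) for ti in t]   # step response
    y_final = y[-1]

    t10 = next(ti for ti, yi in zip(t, y) if yi >= 0.10 * y_final)
    t90 = next(ti for ti, yi in zip(t, y) if yi >= 0.90 * y_final)
    t63 = next(ti for ti, yi in zip(t, y) if yi >= 0.632 * y_final)
    po = (max(y) - y_final) * 100.0 / y_final     # 0 here: no overshoot

    print(f"tr(10-90%) = {t90 - t10:.2f} s")      # ~2.2*tau for first order
    print(f"tau ~= {t63:.2f} s, PO = {po:.1f} %")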
Sensor Lifespan
All sensors have a finite life, indicated by operating/service life, cycling life, continuous rating, intermittent rating, storage life, or expiration date. Several factors may affect a sensor’s life: its type, design, material, frequency and duration of use, concentration levels to be measured, manufacturing process, maintenance efforts, application, storage, and environmental conditions.
Sensors that consume internal materials during the sensing process (e.g., certain glucose or oxygen sensors) can be used only once or just a few times. Some gas sensors (e.g., Biosystems’ CO and H2S sensors), although nonconsumptive, have only a 2–4 year life limit due to factors such as evaporation (drying out), leakage, and catalyst contamination.
Most mechanical sensors have a long lifespan under normal operating conditions. Many temperature sensors have a service life of over 10 years.
Aging also affects some sensors’ accuracy and causes them to slowly lose sensitivity over time. The accuracy of most chemical sensors and biosensors depends on their age.
In addition, rough handling of a sensor could also shorten its useful life. A sensor that is repeatedly installed and removed will have a shorter life than a sensor that is installed and left in place.
The best way to extend sensors’ lives is to store them properly, regularly test and verify their accuracy, and recalibrate them whenever necessary.
More advanced sensors today are equipped with sensor life monitoring systems to remind the users when these sensors need to be replaced. Sensors should be replaced when they can no longer be calibrated or zeroed easily.
Other Sensor Characteristics
Many sensors have unique characteristics and terminology to describe their performance. We will discuss these specific characteristics when using those sensors.