Calibration is the topic I have been working on for the last few years. In synthetic aperture radar (SAR), one of the calibration tasks is radiometric calibration, which ensures that grayscale values in SAR images are comparable and compatible across image acquisitions and missions.
Good calibration requires accuracy, both in measurement and in terminology. This post is about the latter, specifically about the (in my view incorrect) usage of the term internal calibration in SAR.
tl;dr: Go to the wrap-up if you just want to know my suggestion for improving the terminology currently in use.
Let’s turn to definitions. Perhaps the best-known technical standard in the world of metrology is the Guide to the Expression of Uncertainty in Measurement (GUM), which is the only internationally accepted standard on measurement uncertainties. In the accompanying International Vocabulary of Metrology (VIM), calibration is defined as follows:
operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication
The definition of calibration might be a bit wordy, but it stresses the importance of a measurement standard, a reference with which the calibration is conducted.
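To make the two-step nature of the definition concrete, here is a minimal sketch in Python. The numbers and the linear instrument model are made up for illustration; real calibration functions and uncertainty budgets are of course more involved.

```python
import numpy as np

# Step 1: establish a relation between the quantity values provided by
# measurement standards and the corresponding indications of the instrument.
standard_values = np.array([1.0, 2.0, 5.0, 10.0])  # known quantity values
indications = np.array([0.98, 2.03, 5.10, 10.21])  # instrument readings

# A linear model (indication = a * value + b) is an illustrative assumption.
a, b = np.polyfit(standard_values, indications, deg=1)

# Step 2: use this relation to obtain a measurement result from an indication.
def measurement_result(indication: float) -> float:
    return (indication - b) / a

print(measurement_result(3.07))  # measurement result for a new indication
```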
So how are the terms internal and external calibration currently used in SAR?
Internal calibration refers to the process of estimating and correcting for amplitude and phase drifts of the SAR instrument. The drifts are mostly caused by temperature changes, but component aging and mechanical movement (e.g., of cables) are further causes.
The process of internal calibration is specific to a single SAR instrument. Internal calibration results (i.e., corrections) are never carried over to another SAR instrument.
For external calibration, (calibrated) measurement standards like corner reflectors or transponders are deployed and aligned on the ground. After image acquisition by a SAR instrument, the responses of these known radar targets are used to estimate SAR system calibration factors (most notably the radiometric calibration constant).
After external calibration, metrological traceability is established; measurement results from different SAR instruments become comparable.
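As a rough illustration of what external calibration computes, consider the following Python sketch. The variable names, numbers, and the simple power-over-RCS model are assumptions made for this post, not the processing chain of any particular mission.

```python
import numpy as np

# Known radar cross sections (RCS) of the deployed reference targets, in m².
rcs_reference = np.array([1234.0, 1234.0, 980.0])  # e.g., corner reflectors

# Integrated point-target powers extracted from the SAR image (in squared
# digital number units), one per reference target.
measured_power = np.array([4.1e9, 3.9e9, 3.2e9])

# Each target yields one estimate of the radiometric calibration constant K.
k_estimates = measured_power / rcs_reference

# Combine the individual estimates; averaging in dB is a common choice.
k_db = 10 * np.log10(k_estimates)
print(f"Calibration constant: {k_db.mean():.2f} dB")
```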
The problem is that the term internal calibration is used for a procedure which does not involve any measurement standard. No calibration is actually happening. In contrast, the term external calibration describes an actual calibration in accordance with the GUM definition given above.
By the way: A transponder is an active device (it has an amplifier loop) and, like a SAR instrument, it suffers from gain and phase drifts due to temperature changes or component aging. It also requires what is currently called an internal calibration loop to monitor and correct these drifts. So whenever I talk about internal calibration in this post, I mean the internal calibration of SAR satellites and SAR calibration transponders alike.
What is currently known as internal calibration is a correction for a systematic effect (temperature drift or component aging). The VIM defines correction as:
compensation for an estimated systematic effect
This is exactly what is going on in a SAR instrument or a transponder: A systematic effect is estimated by routing signals with a known amplitude through the RF loop. Subsequent analysis reveals by how much the amplitude or phase has drifted in comparison to an earlier measurement. The estimated drift is then either compensated directly (in the case of a transponder) or during later data processing (for SAR satellites, where drifts are currently only corrected on-ground after the data has been downloaded).
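The following Python sketch shows what such a drift estimate can look like. The function names and the single complex-gain model are my own illustrative assumptions, not the algorithm of any specific SAR instrument or transponder.

```python
import numpy as np

def estimate_drift(reference: np.ndarray, loop: np.ndarray) -> complex:
    """Estimate the complex gain of the RF chain from a calibration loop
    measurement, assuming loop = gain * reference (least-squares solution)."""
    return np.vdot(reference, loop) / np.vdot(reference, reference)

def apply_drift_correction(data: np.ndarray, gain: complex) -> np.ndarray:
    # Compensation for the estimated systematic effect: divide the drift out.
    return data / gain

# Example: a known reference signal and a loop measurement with 0.5 dB
# amplitude drift and 3° phase drift (synthetic numbers for illustration).
t = np.linspace(0.0, 1.0, 1000)
reference = np.exp(2j * np.pi * 5.0 * t)
true_gain = 10 ** (0.5 / 20) * np.exp(1j * np.deg2rad(3.0))
loop = true_gain * reference

gain = estimate_drift(reference, loop)
print(f"Amplitude drift: {20 * np.log10(abs(gain)):.2f} dB, "
      f"phase drift: {np.degrees(np.angle(gain)):.2f}°")
```

The comparison to an earlier measurement is implicit here: tracking the estimated gain over time (e.g., its ratio between two loop measurements) gives the drift since the last reference point.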
Consequently, what we currently call internal calibration should actually be called (internal) drift correction and the internal calibration loop should rather be called (internal) drift correction loop.
Some measurement instruments (other than SAR instruments or calibration transponders) actually have a measurement standard built in. The process of internal calibration then refers to measuring this internal standard in an automated fashion in order to update the relationship between a quantity value of a measurement standard and a measurement result.
An example of such a measurement instrument is a precision weighing scale like the GP-20K built by A&D Weighing. An internal measurement standard and a motor allow automated calibrations.
For SAR, including an internal measurement standard is not an option: the measurand is by necessity in the far field of the instrument, so a measurement standard cannot be internal to it.
Practically speaking though, the terms internal and external calibration are used similarly for scales and SAR instruments: in both domains, internal calibration is performed frequently and external calibration only periodically to ensure compatible measurement results with low measurement uncertainties.
In the context of synthetic aperture radar:

- What is currently called internal calibration should be called (internal) drift correction.
- The internal calibration loop should accordingly be called (internal) drift correction loop.
- External calibration is an actual calibration in accordance with the GUM and VIM definitions.
Adopting accurate terminology avoids confusion and ensures compatibility with other fields of metrology. From experience, it certainly helps in not misleading students who are new to the field of SAR calibration.