Using calibration to reduce the cost of satellite design and test
June 19, 2017
What is now known as "NewSpace" introduces business challenges that older satellite programs did not encounter. One example is the creation of constellations of low Earth orbit (LEO) satellites, which are much smaller and lighter than traditional geosynchronous spacecraft.
Traditionally, satellite programs have been high-cost ventures with very low tolerance for risk. This is a direct consequence of the nature of geosynchronous (geo) satellites. These are large (typically 1,000 to 10,000 kg), expensive to produce, and challenging to transport to space. Once in orbit, typical lifetimes are eight to 10 years, limited by onboard power. Changing or repairing a design while in orbit is generally not feasible due to the twin difficulties of getting there and then doing the work in a vacuum. The need for sufficiently thorough and accurate preflight testing strains program schedules and budgets.
Geo satellites have been used for communications, broadcast, and weather monitoring. In recent years, however, opportunities have grown from the advent of newer imaging technologies, advancements in communications technologies, and the drive for mass internet connectivity. To realize these opportunities, engineers are using low Earth orbit and medium Earth orbit satellites. These small orbiters can tolerate shorter lifespans because reusable rockets and lower overall costs make them cheaper to build and replace.
The entrepreneurs pursuing these NewSpace opportunities are willing to balance greater risk with lower cost of development and shorter time to market. However, because NewSpace is still about launching complex devices into space and relying upon their proper functioning, cost and time reduction at minimal risk will remain highly beneficial.
Reducing costs with instrument calibration
Testing is an essential part of any satellite program. One way to reduce total program cost is to reduce the cost of test. Eliminating tests may seem tempting, but doing so significantly increases the risk of device failure. Maintaining instrument and system accuracy reduces the cost of test and shortens the program schedule in multiple ways. For example, the acceptable margin for a satellite payload system is defined by the allowed margin for each of its components. If test-system uncertainty drifts, then each component must be held to a tighter acceptable margin, a move that incurs additional time and cost during component development.
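To make this guardbanding effect concrete, here is a minimal Python sketch using invented numbers (not drawn from any actual satellite program): the component's usable test limit is tightened by the test system's measurement uncertainty, so as that uncertainty grows, the margin left to the component designer shrinks.

```python
# Illustrative guardbanding model with invented numbers: the test limit
# applied to a component is the specified limit minus the test system's
# measurement uncertainty.

def guardbanded_limit(spec_limit_db: float, uncertainty_db: float) -> float:
    """Return the tightened test limit after subtracting measurement
    uncertainty from the specified limit (a simple guardband model)."""
    return spec_limit_db - uncertainty_db

spec_limit_db = 3.0  # hypothetical allowed insertion loss for a payload component, dB

for uncertainty_db in (0.1, 0.3, 0.5):
    limit = guardbanded_limit(spec_limit_db, uncertainty_db)
    print(f"uncertainty {uncertainty_db:.1f} dB -> component must test below {limit:.1f} dB")
```

As the drifting test system's uncertainty grows from 0.1 to 0.5 dB, the limit the component must actually meet tightens from 2.9 to 2.5 dB, even though the system-level requirement never changed.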
Maintaining system performance using traceable standards and calibration minimizes the time spent validating test results and troubleshooting instrument inconsistencies at each stage of the design-and-test life cycle.
Traceability is in fact the primary way to ensure instrument accuracy. During calibration, instrument accuracy and performance are verified using traceable standards, which are instruments with known measurement uncertainties that can be traced directly back to the International System of Units (via national metrology institutes). Calibration is typically performed at regular intervals recommended by the manufacturer. Maintaining those intervals is crucial to ensuring that the instruments are performing with traceable accuracy.
One advantage of using the original equipment manufacturer (OEM) to calibrate the instrument is ready access to OEM procedures, such as the automatic replacement of internal components known to wear out quickly. Also, if an instrument fails verification testing against its specifications or guardbands, the OEM can make proprietary adjustments to bring performance back within test limits.
Examining the benefits of calibration
As an example, consider the measurement of gain, amplitude, and phase linearity through the high-power amplifier in a satellite transponder. A vector network analyzer (VNA) is often used to characterize the nonlinear behavior of these amplifiers. The accuracy of this measurement is paramount to the design of the amplifier and of the inline components that depend upon its performance. (Note: When making VNA measurements on high-power signals, consider using external attenuators, couplers, or preamplifiers to keep signal power levels under control.)
It may be necessary to account for the additional noise, drift, and complexity that such external components introduce. During in situ normalization procedures for gain and linearity measurements, accuracy is improved by estimating and mathematically removing some systematic measurement errors, including the linearity, or "dynamic accuracy," of the VNA's receiver.
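As a rough illustration of what such an amplifier characterization yields, the sketch below estimates a 1 dB gain-compression point from a swept-power gain measurement. The sample data is invented; a real workflow would read these values from the VNA.

```python
# Hypothetical sketch: locating the ~1 dB gain-compression point of an
# amplifier from a swept-power measurement (invented sample data).

pin_dbm = [-20, -15, -10, -5, 0, 5, 10]                 # input power sweep, dBm
pout_dbm = [10.0, 15.0, 20.0, 24.9, 29.5, 33.2, 35.8]   # measured output, dBm

gains = [po - pi for pi, po in zip(pin_dbm, pout_dbm)]
small_signal_gain = gains[0]  # assume the first point is well below compression

# Find the first sweep point where gain has dropped 1 dB from small-signal gain
for pi, g in zip(pin_dbm, gains):
    if small_signal_gain - g >= 1.0:
        print(f"~1 dB compression near Pin = {pi} dBm (gain {g:.1f} dB)")
        break
```

With these made-up values the gain drops from 30 dB to 28.2 dB at an input of 5 dBm, so the 1 dB compression point lies near that drive level. Any error in the VNA's receiver linearity shifts this estimate directly, which is why the dynamic-accuracy correction above matters.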
The traceable standard used to normalize to the device under test is either a mechanical calibration kit or an electronic calibration module (ECal). These modules are often assumed to be stable over time. To test that assumption, Keysight led a case study examining the performance of 2,000 ECal modules, including units from customers testing high-power components. The results: ECal modules that had not been calibrated for three years were three times as likely to be out of tolerance as those calibrated annually.
Figure 1 shows an example test result with a Smith chart displaying complex impedance. The blue line is a newly manufactured ECal module and represents the impedance used for normalization procedures; the orange line is the actual result for an out-of-tolerance module. Performing any normalization with the latter module would add significant error to the measurement, ultimately impacting cost of test through system failures, rework, and retesting.
Figure 1: Smith chart showing ECal modules within tolerance and out of tolerance.
Adhering to recommended calibration intervals
Consider another example: a passive intermodulation (PIM) measurement of the satellite-payload system. PIM is a problem because it limits payload receiver sensitivity and creates interfering signals in channels adjacent to the downlink. Nonlinear mixing of two high-powered signals at junctions between passive components (e.g., cables, connectors, or filters) generates intermodulation distortion (IMD). This IMD is difficult to remove because the signals are often generated after the signal-conditioning elements in the payload receiver chain.
The test for PIM is performed using two signal generators combined through a PIM-free power combiner to create the potential for PIM in the uplink receiver channels, and a signal analyzer to monitor the resulting PIM in the downlink. Varying the frequencies and tone spacing of the signal generators enables the signal analyzer to detect PIM. The sensitivity and nonlinear performance of the signal analyzer must not inhibit the ability to see the PIM results.
Third-order intercept (TOI), a measure of the analyzer’s linearity, must be monitored through regular instrument calibration to ensure that the analyzer’s internally generated distortion does not mask the generated PIM. Regular calibration will minimize the impact of analyzer sensitivity on measurement results, especially if the service provider verifies the noise floor of the analyzer at all payload downlink frequencies.
To minimize the risk of these specifications drifting out of tolerance, a one-year calibration cycle is recommended for high-performance signal analyzers. Additionally, because PIM measurements on satellite transponders require a high-performance signal analyzer, the use of two signal generators to verify the TOI is essential. The reason: a single signal generator producing a two-tone signal introduces IMD products that are much larger than the internal distortion of the high-performance signal analyzer. (Note: if the calibration report lists only a single signal generator used for TOI verification testing, then TOI was either not verified or was verified incorrectly.)
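For reference, TOI is commonly estimated from a two-tone measurement as the tone power plus half the suppression of the third-order products below the tones. The sketch below applies that rule of thumb to invented numbers; actual verification values come from the calibrated two-generator setup described above.

```python
# Common two-tone TOI estimate (invented example values):
#   TOI (dBm) ~= P_tone + delta_im3 / 2
# where delta_im3 is how far the third-order products sit below the tones.

def toi_dbm(tone_power_dbm: float, delta_im3_db: float) -> float:
    """Estimate third-order intercept from tone power and IM3 suppression."""
    return tone_power_dbm + delta_im3_db / 2.0

tone_power = -10.0  # power of each tone at the analyzer input, dBm
delta_im3 = 90.0    # third-order products measured 90 dB below the tones

print(f"Estimated TOI: {toi_dbm(tone_power, delta_im3):.1f} dBm")  # -> +35.0 dBm
```

The halving of delta_im3 reflects the 3:1 slope of third-order products: each 1 dB increase in tone power raises the IM3 products by 3 dB, so any single-generator IMD contaminating the stimulus corrupts the estimate directly.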
Figure 2 shows the proper setup for verification testing of signal analyzer TOI. The signal generators and power sensor/power meter should be within their calibration cycles and have traceable measurement uncertainties. This ensures consistent results, especially when using multiple signal analyzers in multiple test stations.
Figure 2: TOI distortion verification test setup for carrier frequencies greater than 3.6 GHz.
Reducing cost and risk
Keeping pace with new satellite applications requires modernization of the processes used to design and launch satellites. Reducing the cost of test can significantly improve the total cost envelope of a satellite program, enabling profitable ventures in these new applications. Ensuring accuracy by maintaining regular calibration of the key parameters, as in the preceding examples, helps reduce risk in NewSpace ventures.
Scott Leithem is with Keysight Technologies’ Services Integration and Business Development Service Solutions Group. Scott has worked for Keysight for more than five years and is responsible for application engineering and services solution development for the defense and aerospace industry. He previously was a product marketing engineer with emphasis on midrange analyzers, leading product launches, driving order growth, and owning product life cycles. He has also worked as a support engineer, focusing on signal sources and analyzers and application software.
Keysight Technologies www.keysight.com