Computers for Interstellar Missions – Part 1
by Dimos Homatas
In this article, we aim to open up a discussion on hardware electronics and computers for the Icarus and interstellar space missions in general.

Introduction

In today's society, everybody is witness to the enormous advances in computer technology from ENIAC's day until today. Home computers routinely implement features which a few decades ago were found only in mainframe systems. Virtualization, multi-core processing and even the existence of personal computers are but a few examples.

To briefly quantify these advances, let's consider IBM's classic mainframe computer, the S/360, which was a great commercial success from its introduction in 1964. After 10 years of continuous technological evolution, its benchmarks peaked at approximately 1.7 MIPS (Million Instructions Per Second). Compare this to a modern commercial home CPU, such as Intel's Core i7, capable of 147,000 MIPS, and we see roughly five orders of magnitude of improvement! Note that we are comparing the performance of a mainframe with a commercial CPU.

The main drive behind this evolution is the integrated circuit. The continuing miniaturization of ICs led to the coining of various terms that quantify transistor density, such as Small-Scale Integration (SSI), Medium-Scale Integration (MSI), Large-Scale Integration (LSI), Very-Large-Scale Integration (VLSI) and finally Ultra-Large-Scale Integration (ULSI). An empirical observation, which turned into a trend and was finally branded as a Law, is Moore's Law: "The number of transistors that can be placed inexpensively on an integrated circuit has doubled approximately every two years". Now, let's be clear that the term "Law" is indeed a law in a sense: it has become an industry standard, dictating the targets for microprocessor development. To that end, Moore's Law serves as the most credible tool to help us predict the advancement of complex electronics.

In the original Project Daedalus papers, factors such as processor speed, memory and storage are taken into account using the standards of that time. A cursory comparison reveals the limited computing resources they had available. In turn, these led to significant concerns relating to issues such as (a) problems arising from the time needed to compile a program, (b) the need for working memory and (c) the cost of software development, to mention but a few.

Do those problems still apply today? Are they relevant? Well, the answer seems to be both yes and no. For example, the same code that used to put a computer in an endless loop 50 years ago can still be used today to produce the same situation; the only difference is the processing speed. An endless recursion will produce a stack overflow error all the same; in a modern computing environment the error will simply surface much faster. On the other hand, higher processing speeds, larger and faster working memory and huge amounts of storage have provided the flexibility to write progressively more complex software, using programming techniques such as object-oriented programming and adding more and more levels of abstraction.

It seems that with modern computers, many problems that were present in Daedalus (but still faced with ingenuity) are of no relevance today. Still, the task of creating a mostly autonomous system that must endure decades of travel in the hostile environment of space, reach a target star, and gather and transmit back information is not solved by raw processing speed and huge storage. Icarus is not a simple communication satellite. The complexity and duration of its mission raise the bar for the computing needs. In other words, we need to pack in a so far unprecedented amount of computing resources.
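To make the scale of these figures concrete, here is a minimal sketch in Python that reproduces the mainframe-versus-desktop comparison above and projects transistor counts under the two-year doubling stated in Moore's Law. The numbers are the ones quoted in the text; the function and its parameters are purely illustrative assumptions, not project baselines.

```python
import math

# Figures quoted in the text (illustrative, not benchmark-grade numbers).
S360_MIPS = 1.7           # IBM S/360 family, roughly a decade after its 1964 debut
MODERN_I7_MIPS = 147_000  # a commercial Intel Core i7 class CPU

improvement = MODERN_I7_MIPS / S360_MIPS
print(f"Speed-up: {improvement:,.0f}x "
      f"(~{math.log10(improvement):.1f} orders of magnitude)")

# Moore's Law as stated in the text: transistor count doubles roughly every 2 years.
def moores_law_projection(base_count, base_year, target_year, doubling_years=2.0):
    """Transistor count expected at target_year under a fixed doubling period."""
    return base_count * 2 ** ((target_year - base_year) / doubling_years)

# e.g. how much denser chips might become after another 20 years of doubling
print(f"Density growth over 20 years: x{moores_law_projection(1, 0, 20):,.0f}")
```

Run as written, the speed-up comes out at roughly 86,000 times, i.e. just under five orders of magnitude, and twenty years of doubling every two years gives a factor of about 1,000 in transistor density.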
Hardware considerations – The Radiation Hazard

Of all the hazards a computer may face in space, the biggest threat is radiation. Radiation comes in various flavors as well (take your pick): protons, electrons, heavy ions and cosmic rays, encountered in a variety of space environments, from Low Earth Orbit to interstellar space. There are two main categories of effects on electronics: (a) Total Ionizing Dose effects and (b) Single Event Effects.

As an example, let's take the Single Event Effect (SEE). A common error is the bit flip (the change of state of a single bit from 0 to 1 or vice versa). A SEE can be caused by a single charged particle passing through a circuit. Now, a bit flip in allocated memory space can be catastrophic for the execution of a program, or could significantly distort data. There are several techniques for handling this issue, such as cyclic redundancy checks, cross-referencing memory and any number of "safe" digital encoding/decoding methods which monitor data integrity. All of these have limitations, which must be scrutinized by the Icarus Computer and Data Management design team.

One solution for this and other kinds of problems is the technique of Radiation Hardening (RH). RH is simply this: use circuits with a scale of integration closer to the low end rather than the high end, thus reducing the possibility of a particle flipping a bit. So, the general principle so far is that the more low-tech we go, the safer we are. This principle seems to be in conflict with the Icarus design philosophy, which calls for advanced solutions resulting from projections of the current state of the art. We really do want all the high-tech goodies we can fit in. For example, we could easily design computers that use cheap, commercially available Error-Correcting Code (ECC) memory. The situation is different regarding processors, but there are solutions there as well. For instance, three identical systems could run the same programs concurrently and vote on the accepted result. Or, even better, a distributed design with no single point of failure. By the way, both have their uses – smaller voting systems could be placed in robotic equipment, while the main computing facility would work using a distributed model. We can count several other examples of possible failures due to different kinds of radiation and their effects on electronics, such as channel degradation and transistor over-doping, to name a few.

So how might Icarus protect itself during its almost century-long journey? Some early thoughts point towards Icarus employing some type of magnetic shielding. Much like the Earth's or the Sun's magnetosphere, such a system would be able to deflect dangerous cosmic rays before they can damage the electronics of Icarus. Would this be enough? Well, perhaps it is sufficient for the duration of the voyage. But what happens when Icarus reaches the target star and possibly deploys multiple probes, orbiters or even landers? Suddenly the task of radiation shielding becomes more crucial, since several new factors make their appearance. For example, is the target star's stellar wind sufficient to deflect cosmic rays? What about an orbiter passing through a planet's equivalent of the Van Allen belts? It seems that radiation shielding in one form or another for the electronics is inevitable.
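To illustrate the voting idea in the simplest possible terms, here is a minimal sketch assuming three redundant channels that each produce a copy of the same data word. The function names and bit patterns are hypothetical; a flight implementation would live much closer to the hardware (or in the hardware itself), but the logic of a 2-out-of-3 vote is the same.

```python
from collections import Counter

def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-out-of-3 vote across three redundant copies of a data word.

    A single-event upset that flips bits in only one copy is masked,
    because each output bit follows the majority of the three inputs.
    """
    return (a & b) | (a & c) | (b & c)

def vote_result(results):
    """Whole-result vote: accept the value at least two channels agree on."""
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("No majority - all three channels disagree")
    return value

# Example: channel B suffers a bit flip in bit 3; the vote masks it.
clean = 0b1010_1100
upset = clean ^ 0b0000_1000
assert majority_vote(clean, upset, clean) == clean
assert vote_result([clean, upset, clean]) == clean
```

A voter like this only masks an upset in a single channel at a time; handling the outright loss of a whole channel is where the distributed, no-single-point-of-failure architecture mentioned above comes in.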
NASA has performed some important research on "Commercial Microelectronics Technologies for Applications in the Satellite Radiation Environment", essentially looking for ways to use commercial off-the-shelf (COTS) technologies for satellite applications. A notable example is the 1997 Mars Pathfinder mission, which used a COTS Motorola radio modem for communications between the Sojourner microrover and the lander. The NASA taskforce proposed a methodology for addressing design specifications regarding radiation hazards for hardware (a rough sketch of this loop, in code, follows the list):
- Define the hazard
- Evaluate the hazard
- Define requirements
- Evaluate device usage
- Engineer with designers
- Iterate process as necessary
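Purely as an illustration of the "iterate process as necessary" step, the sketch below loops over the steps above until some project-defined acceptance test is met. Only the step names come from the list; the callbacks and data structure are hypothetical placeholders.

```python
STEPS = [
    "Define the hazard",
    "Evaluate the hazard",
    "Define requirements",
    "Evaluate device usage",
    "Engineer with designers",
]

def rha_iteration(design, apply_step, requirements_met, max_passes=10):
    """Run the RHA steps repeatedly until the design meets requirements.

    `apply_step(design, step)` and `requirements_met(design)` are hypothetical
    project-specific callbacks; the loop simply encodes "iterate as necessary".
    """
    for _ in range(max_passes):
        for step in STEPS:
            design = apply_step(design, step)
        if requirements_met(design):
            return design
    raise RuntimeError("Requirements still not met after max_passes iterations")

# Trivial usage with placeholder callbacks (purely for illustration):
final = rha_iteration(
    design={"passes": 0},
    apply_step=lambda d, step: {**d, "passes": d["passes"] + 1},
    requirements_met=lambda d: d["passes"] >= len(STEPS) * 2,
)
```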
The methodology in itself is pretty solid. In fact, it shares elements with modern software development methodologies. The main problem with applying this methodology to Icarus comes with the first step: define the hazard. Can we do this accurately? To what extent can we safely ascertain the characteristics of a radiation environment that is light years away? These are all considerations that are being addressed as part of Project Icarus – an extremely complex and multi-faceted spacecraft systems engineering challenge.

Our objective is to rigorously inform the Icarus designers, allowing for the latest state-of-the-art technology that may become available through reasonable projections. However, having current computing technology on board Icarus is a goal in its own right: so far, most spacecraft use technology much older than any of us have at home. As the mission's 2nd Term of Reference states: "The spacecraft must use current or near future technology and be designed to be launched as soon as is credibly determined". It would be an irony if we had to use older technology for Icarus. After several modifications (some small, some not so small) we should be able to give Icarus as much computing power as needed in order to provide the facilities required by every mission module. The fact that this technology will be obsolete upon arrival should be considered irrelevant.

References
- Gordon E. Moore, "Cramming more components onto integrated circuits", 1965.
- T.J. Grant, "Project Daedalus: The Computers", 1978.
- Kenneth A. LaBel, Allan H. Johnston, Janet L. Barth, Robert A. Reed, Charles E. Barnes, "Emerging Radiation Hardness Assurance (RHA) issues: A NASA approach for space flight programs", 1998.
- Andreas C. Tziolas, "Distributive Computing Architectures for Icarus Daedalus-Like Interstellar Missions", presentation at BIS Symposium: Daedalus 30 Years Later, September 2009.
- Adam Crowl, "Shields for Icarus: Part 2 – Navigational Deflectors for Real", Project Icarus Blog Article, 2010.
- Kenneth A. LaBel, Michele M. Gates, Amy K. Moran, Paul W. Marshall, Janet Barth, E.G. Stassinopoulos, Christina M. Seidleck, Cheryl J. Dale, "Commercial Microelectronics Technologies for Applications in the Satellite Radiation Environment", 1997.
- Scot Stride, "Microrover Radio Modem", NASA-JPL, 2007.
- Project Icarus: Terms of Reference, 2009.