Interstellar Maintenance – part 2

posted by Pat Galea on January 14, 2011

Introduction – Software maintenance

Following on from the recent blogs by Philipp Reiss and Robert Freeland II, I thought I would add a little of my own perspective and experience as a deep space software engineer. Let’s remember: the only item of a spacecraft that is routinely subject to maintenance after launch is the on-board software. This can also be used to work around other system problems. For example, when the Voyager scan platform suffered a failure, the on-board software was reprogrammed to roll the whole spacecraft to achieve an equivalent scan of the camera pointing direction. Similarly, following the failure of the upper stage of Hipparcos, the on-board software was substantially reconfigured: first for fault investigation and recovery attempts, and then for a revised mission approach. In the end this successfully recovered the mission from a completely different orbit.

Furthermore, many spacecraft have been characterised so completely in space that progressive refinements of the software have provided ever greater operating capabilities. This approach to software updates has now become so sophisticated that some spacecraft are launched with nothing other than their basic safety and cruise phase software loaded, allowing subsequent development and refinement of the mission software post launch. It is therefore highly likely that the bulk of the on-board software for Icarus, and for its scientific payloads, will be written post launch.

We must expect this software to be extensively tested in a simulated environment on Earth, running many thousands of mission scenarios. Given the autonomy requirements, this testing will probably approach the training and simulation regime used in current ground segment operational work-ups. This means injecting multiple faults into the operational scenarios and checking that the operational approach recovers the mission.
Without human beings in the loop we must aim to use “fail operational” or fault tolerant systems, with fault protection algorithms and possibly some artificial intelligence elements. However, the largest challenges for maintenance will not be software. I think it is worth looking at three categories of components that may need repair:

  • Electronics
  • Mechanicals
  • Energy
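
Before turning to those, the fault-injection testing described above can be sketched in a few lines. This is purely an illustrative toy, not any real ground-segment tool: the fault names, the recovery table and the pass/fail model are all my own invention. The point it makes is simple: running thousands of randomised scenarios quickly exposes fault classes the operational approach cannot yet recover from.

```python
import random

# Hypothetical fault-injection harness; fault classes and recovery
# procedures are illustrative, not from any real mission toolchain.
FAULTS = ["sensor_stuck", "bit_flip", "thruster_loss"]

def run_scenario(scenario_id, fault, recovery_table):
    """Simulate one mission scenario with a single injected fault.

    In this toy model, recovery succeeds iff a procedure exists
    for the injected fault class."""
    return recovery_table.get(fault) is not None

def campaign(n_scenarios, recovery_table, seed=42):
    """Run many scenarios with random fault injection and report
    the fraction from which the mission was recovered."""
    rng = random.Random(seed)  # seeded, so the campaign is repeatable
    recovered = 0
    for sid in range(n_scenarios):
        fault = rng.choice(FAULTS)
        if run_scenario(sid, fault, recovery_table):
            recovered += 1
    return recovered / n_scenarios

# A recovery table covering only two of the three fault classes:
table = {"sensor_stuck": "switch_to_backup", "bit_flip": "scrub_memory"}
print(campaign(1000, table))  # fraction below 1.0 exposes the gap
```

A real campaign would of course inject faults into a high-fidelity simulator rather than a lookup table, but the structure, thousands of seeded scenarios scored on mission recovery, is the same.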


Integrated circuits have a history of less than a hundred years, so how can we be confident that we can build any circuits which will last that long? The truth is that at the launch of Icarus it is unlikely we will know for sure the true probability of the electronics functioning successfully. We currently have no models of long term electronic failure in the way we have characterised materials, stress factors and metal fatigue. It may be that crystallisation, or doping migration, or some other mechanism limits the life of the electronics we produce, leading to a “bathtub” curve similar to that seen in mechanical systems. At present we know neither the shape of the failure curve after 50 years nor the time to reach the “tipping point”. However, there are techniques, such as accelerated ageing, which can help us predict. Research in this area will be key to the Icarus design.

This general problem suggests that “conservative” designs should be used. As with military devices, the actual hardware will be at least a generation behind the latest consumer electronics, as that time is needed to characterise devices and build to a robust specification. It is for this reason that I doubt either quantum computers or nanotechnology will be making a significant contribution to the Icarus electronics.

In selecting devices and producing designs there are a number of key items to remember. Firstly, operating (powered) electronics are subject to many stresses and strains that items switched off do not experience; the lifetime of electronics is measured mainly in terms of the “operating lifetime”. Secondly, there are technologies which are inherently more suited to the space environment: devices such as core memory, silicon-on-sapphire chips, and larger dimensioned devices are all less susceptible to radiation effects. Thirdly, storage conditions are also relevant.
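
Accelerated ageing for thermally driven failure mechanisms is usually modelled with the Arrhenius relationship. As a sketch of how the extrapolation works (the 0.7 eV activation energy here is a typical assumed value for silicon mechanisms, not an Icarus figure):

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_use_c, t_stress_c, ea_ev=0.7):
    """Arrhenius acceleration factor: how much faster ageing
    proceeds at the stress temperature than at the use temperature.
    Temperatures in Celsius; activation energy in eV (assumed value)."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1 / t_use - 1 / t_stress))

# Burn-in at 125 C to predict life at 25 C: each test year stands in
# for several hundred years of use, under these assumptions.
print(acceleration_factor(25, 125))
```

The obvious caveat, and the reason such research matters for Icarus, is that a 50 year extrapolation from months of testing assumes the dominant failure mechanism does not change along the way.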
I personally have seen the lifetime of equipment dramatically reduced by high temperatures, or possibly temperature fluctuations, during storage. This happened when a set of workstations and servers were left unprotected in their boxes in the desert; the building to house them was not finished when they were delivered. The most shocking aspect was not that some of the units failed, but that they ALL failed, within two months of each other, after a year of service. Icarus must therefore be built with these factors considered. Each piece of equipment must operate for a period that does not exceed its realistic lifetime, so spares or additional systems are required. Furthermore, the non-operating equipment should be kept in a controlled environment: certainly for temperature and atmosphere, and ideally for radiation and vibration as well, though that may not be possible for this mission.


The biggest challenges in spacecraft design are the things that move. This is such a challenge that large amounts of effort go into eliminating moving parts. Most spacecraft parts move only once, at deployment. Of the parts that move in the space environment (scan platforms, antenna and solar panel pointing mechanisms, and rover parts), the current best lifetimes vary from a year (for the rovers) to maybe fifteen years (for reaction wheels and gyroscopes). Lubrication is a difficulty if you are surrounded by vacuum and exposed to deep space temperatures of only a few degrees. Mechanical stresses are a problem if thermal cycling occurs, for example orbiting into light and shade, or receiving thermal radiation from an engine when operating.

For Icarus to work, some mechanical devices are unavoidable. Certainly the payloads will be radically limited if reaction wheel assemblies cannot be produced that can lie dormant for c.60 years and then work reliably for 10-20 years. Some of the approaches described for electronics also apply here. The biggest challenge, however, may be deploying the payload after 60 years. Will explosive bolts or knife cutters or mechanical springs function after so long, or will all elements be “vacuum welded” into a single mass? Remember that devices must be firmly attached during the acceleration and deceleration phases.

In my own experience I know that the entry-descent-landing (EDL) systems of the Huygens probe were some of the most uncertain during analysis. Not even the experts knew how the gunpowder in the parachute mortar would behave after only seven years in space, and there was real concern that the fabric of the parachute might decompose under the radiation environment and thermal cycling. Separation from the Cassini orbiter relied on a mechanical spring and short screw thread to impart just the right velocity and spin. The spring was the best understood component, as the human race has significant long term experience with clockwork systems. Even this, however, will be challenged by a 60+ year timeframe.


Energy relies on electronics, and often mechanics, for effective production and distribution, and here again the timescales and environment of interstellar exploration push the boundaries of technology and understanding. While the engines of Icarus will provide abundant energy during operation, the mission will require non-engine-based primary power, energy storage, and fuel and energy systems for payload probes.

The current assumption is that primary energy during coast flight will be provided by fission reactors. In this respect I was heartened by recent descriptions of British nuclear submarine production: the current generation of attack submarines are built with reactors fuelled for a 25 year lifetime. It is conceivable to extend this to 60+ years. However, these submarine reactors rely on convection, of little use in zero gravity, so that exact design will be inappropriate. I must admit to concern that the control rods of any reactor will require functioning mechanical systems, unless a “self moderating, self adjusting” design can be found (perhaps one where the fuel burns from one end of the reactor to the other, catalysed or moderated by neutrons from the main reaction locus). Perhaps surprisingly, nuclear fission appears an easier thing to expect to last for 100 years than either electronics or mechanical systems.

For energy on the payload probes the best solution would appear to be Radioisotope Thermoelectric Generators (RTGs) [1], which use radioactive decay to generate heat and thermocouples to convert that heat to electricity. This results in a stable, long term, but relatively low energy density power source. RTGs have powered the Voyager probes for over thirty years, and the energy output of the nuclear material is well understood and can be calculated. The usual plutonium fuel has a half life of about 90 years, meaning it will sink to half its output in that time.
However, the thermoelectric elements also degrade, and their performance after 50 years is an unknown. One advantage of RTGs is that the heat they produce, about 10 times their electrical output, can be used to keep spacecraft warm. In fact, small pieces of plutonium can be distributed in the interior of a spacecraft to provide an evenly distributed heat source without relying on electrical energy at all. This can dramatically reduce the electrical power requirements for deep space craft needing to keep warm. It is interesting to note that the suggested alternative fuel for RTGs (americium) has a half life of about 450 years but a quarter of the energy density; this translates to a better energy output only after about 220 years.

Primary energy can also be provided by solar panels. Within the solar system, existing scientific satellites for planetary exploration basically fall into two categories: “inner solar system craft”, which use solar panels (usually out to Mars), and “outer solar system craft”, which use RTGs. Solar panels can provide substantial power levels for a modest weight. However, solar panels are fragile, subject to many mechanical failure modes, rely on sun pointing, and degrade over time. In earth orbit this degradation is 25% over a 15 year GEO satellite lifetime [2]. In addition, solar panels tend to be optimised both for the wavelengths of light available and for the level of radiation expected. It may be that there are simply too many constraints to make a reliance on solar panels for any Icarus elements desirable.

In addition to primary power sources, energy storage is highly desirable. Energy storage allows the output of low powered primary sources to be accumulated for short bursts of high power operation: for example long distance or high bandwidth data transmissions, multiple simultaneous instrument operations during close approach or flybys, or specific experiment operating modes (e.g. high temperature heating).
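
The plutonium versus americium crossover figure quoted above can be checked with a short calculation. The half-lives (87.7 years for Pu-238, 432 years for Am-241) are published values; the 4:1 initial energy density ratio is taken from the text.

```python
import math

def relative_output(t_years, half_life, initial=1.0):
    """Decay heat at time t, as a fraction of the Pu-238 starting level."""
    return initial * 0.5 ** (t_years / half_life)

def crossover(pu_half=87.7, am_half=432.0, am_initial=0.25):
    """Time at which Am-241 output overtakes Pu-238 output.

    Solves am_initial * 2^(-t/am_half) = 2^(-t/pu_half) for t."""
    return math.log2(1 / am_initial) / (1 / pu_half - 1 / am_half)

print(round(crossover()))  # 220 (years), matching the figure above
```

So for a payload probe expected to operate within a few decades of deployment, plutonium remains the better fuel; americium only pays off on multi-century timescales.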
Modern battery designs have less history than electronics, so their long term properties are unknown; however, it seems that some technologies, such as lithium-ion batteries, can be stored indefinitely if discharged and cooled [3]. Finally, there is the issue of fuel. Volatile chemicals may well decompose over time, and cryogenic fuels may well “boil off”; less energetic but more stable fuels may be preferred. For this reason ion engines, rather than chemical thrusters, may be better for the Icarus payload probes. The low thrust may not be an issue given deep space deployment, and the stable monopropellant reaction mass may be easier to manage.

Strategies for maximizing reliability at target system

The long period of cruise means that at deployment of the payload solar system exploration probes, the hardware will be of the order of 60+ years old. In many cases this will have been a period of no operation, but a variety of degradations will likely have occurred. In seeking to minimise the impact of these, we have seen that non-operation and controlled storage may be key to providing components with sufficient remaining operational life.

There is, however, one further technique which may be applicable: on-board fabrication. Construction close to usage time means that the devices concerned will be relatively “pristine”. Simple elements may be relatively easy to create: fuel can be fabricated from less volatile or reactive precursors, for example by electrolysis of water. Batteries may be constructed fresh by mixing chemicals kept separate during the flight. RTGs may be refreshed, activated or even manufactured by using the output of the on-board fission reactor to irradiate precursor fuels (or even using waste products of the reactor itself). Probes can be constructed from modular components.

I feel, however, that the biggest leap will come with manufacturing technologies such as 3D printers [4]. Such devices can in theory produce any new mechanical parts required, and given the right scaling and feedstock might also produce electrical items. The way they keep themselves working is also novel: they can make their own replacement parts, or even copies of themselves. I think this is a key strategy for long term missions, as it should allow generic feedstock to be used in place of specific spare parts. The only problem I foresee is the many science fiction stories that have raised the fear of self-replicating machines…

References:

[1] F. Ritz and C. E. Peterson, “Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) Program Overview”.
[2] “Solar Panel Degradation”.
[3] S. J. Krause, “Method for powering a spacecraft with extended-life battery operation”, US patent 6027076, issued 22 Feb 2000.
[4] U. Hedquist, “Open source 3D printer copies itself”.


7 Responses to Interstellar Maintenance – part 2

  1. Mike W. says:

    Dear Mr. Barrington-Cook,

    You argue from analogy with contemporary and recent un-crewed planetary and scientific missions that: ‘it is therefore highly likely that the bulk of the on-board software for Icarus, and for its scientific payloads, will be written post launch’. Whilst your position has some validity in respect of the scientific payload I do not think that it can be applied to the bulk of the software required by the Icarus vessel.

    The most complex ‘traditional’ software on the vessel is almost certainly going to be the engine control software. If one thinks simply in terms of the old Daedalus vessel’s engines, we have a software control system that has to manage an Inertially Confined Fusion (ICF) process that involves extraordinarily violent and complicated physical phenomena. Some of the more recent proposals for fusion engines are much more complicated than the Daedalus engine and involve both ICF and Magnetically Confined Fusion (MCF), not to mention some even more exotic concepts. The software to control all of this will be horrendously complicated. The job of the control software will be made even more difficult by the far from benign environment in which the engine will operate: huge temperature gradients that will need to be managed; continual vibration that will cause alignments to change all the time, which will also have to be managed; and a likely radiation hazard that may produce frequent bit-flip errors in the data, or indeed the software itself.

    The control software for the engines will be mission critical. It would have to be written in such a way that a whole engine system (all stages, independently) could be certified in some way to the equivalent of IEC 61508 SIL (Safety Integrity Level) 4. Without such confidence in the vessel it would be inconceivable that anyone would fund it, even 89 years in the future (the end of the 21st century being the last specified Icarus launch date). There is a huge cost implication in certifying software to these levels; it is a very difficult exercise. I am aware of only two instances of such an exercise, one of which is for Sizewell B, although I am sure there are others.

    The first stage (there may or may not be others) Icarus engine would be needed at the very start of the mission*. Thus the control software would have to have been designed, written, tested (with the hardware), certified and in place for the start of the mission. The engine control software then has to operate, ideally continuously, for the duration of the boost phase. Updating that software during engine operation to enhance it is an unthinkable risk. Shutting the engine down and starting it again is a similarly unthinkable risk just for an update. Updating the control software to correct any errors found during engine operation would be a very high risk exercise which would have to be very carefully considered.

    It might be possible to update engine control software for a second or subsequent stage engine during flight, but would you really want to unless you found a critical error? How would you test the engine with the software unless you had a spare engine in the Solar System you could use? The level of fidelity needed to re-certify the system is such that running the software against a simulation of the engine would not be enough to assure oneself of its acceptability.

    As for the engine, so too for the communications systems, secondary power system controls, power distribution system controls, thermal management/ECS system controllers and similar systems: they all need to be ready from the start of the mission.

    The idea of writing the software, or at least updating it for the science packages/probes during the mission is sound but it does bring in some very dangerous risks which would have to be managed, and quite significant additional overall costs to the project.

    First, it makes communication from home TO the Icarus vessel much more important than it would be were the software all written prior to launch and the vessel was effectively autonomous. It almost compels the designers to build redundancy into the reception of messages, updates and commands from home in order to minimise the risk of not having a fully functioning science package/probes at the target. In other words you need two physical communication systems to minimise the risk if you are writing the science software as the vessel travels to the target destination.

    Second, having communications channels that allow the Icarus vessel to receive software updates opens up the opportunity for both accidental and malicious corruption of the software delivered. To mitigate this risk there have to be security and safety systems in place, possibly at home and certainly on the Icarus vessel, to manage the sending and reception of software. This implies either quite a large organisation at home, which has to be funded and supported for the duration of the Icarus vessel’s journey (not something I would have thought could be guaranteed), or, more likely, that the Icarus vessel itself needs sufficient autonomy to decide what software it will accept and which, if any, it wants to ignore.

    Third, there is a significant cost issue associated with writing software after the vessel has launched. From a cost point of view the earlier the software is written the better. The labour cost of writing software 20, 30 or 40 years after launch will be considerably more than writing it prior to launch.

    Turning now to ‘less traditional software’, by which I mean software that exhibits some level of autonomy and non-determinism in its decision making processes. There are two systems where some level of autonomy of this type might be desirable: first, as you have discussed to some extent, a Vehicle Health Management System (VHMS) or systems; and second, a supervisory system for the science packages and any probes. The need for the latter would depend upon the deceleration chosen for the Icarus vessel and hence the amount of scientific data it is intended, and would be possible, to collect. If the proposed collection were to be very sparse there would be little need for any autonomy; on the other hand, a thorough survey of the target system after a complete deceleration would likely require a very sophisticated level of autonomy capable of setting well reasoned priorities.

    Clearly, unlike the situation with contemporary planetary probes, it will not be realistic for the home project team to repair problems with the Icarus vessel by uploading software updates, except at very early stages of the voyage. There are, however, several approaches that could be taken with a VHMS to keeping the Icarus vessel operational. One is the ‘blueprint’ approach, where prior to launch the principal failure modes of the various physical systems have been investigated and modified operational processes developed for when physical redundancy as an approach fails. The limitation of this approach is that it is simply not possible for the engineering team devising the system to review all possible failures and develop modified operational processes for them, especially when control software is present**. An alternative approach is phenomenological, whereby the VHMS attempts to recover the function of a failed system by modifying the available inputs to optimise the required outputs. This is effectively the equivalent of an aircraft pilot with severely degraded control surfaces wiggling the controls in the hope of finding a setting that will keep the aircraft in the air.

    Both of the VHMS types described above would be capable of being upgraded in the course of a mission, and a case could be made that the benefits of upgrade would outweigh the risks. The big problem with all non-deterministic methods, however, is how one would go about certifying them.

    I hope my comments have been of some value to you, they are offered with a constructive intent. I hope to be able to comment on some of the hardware reliability issues that you have raised at a later date.

    Best regards,

    Mike W.

    * I can foresee certain missions/launch technologies where this may not be the case but they are less likely than simply firing up the engines on day one.
    ** Software errors being systematic rather than random, it is simply not possible to predict what they will be or how they will propagate. If the Icarus vessel ends up, as is likely, a highly distributed network of computers running software on any convenient node or nodes, the opportunities for software errors to propagate in unexpected ways will proliferate.

  2. Pat Galea says:

    Hi Mike.

    Many thanks for your comprehensive comment. I haven’t had time to read it thoroughly yet, but I just wanted to let you know that we (the team) are aware of your post, and will consider your points carefully.

  3. Jardine Barrington-Cook says:

    Hi Mike,

    Let me add to Pat’s thanks for your comments. I’m pretty much in agreement with all your sentiments, and quite happy to be wrong about the detail. However, I think the nature of deep space missions and our current experience are guides which should help us arrive at the best model. And I’m very keen on post-launch maintenance, from what I know of the current state of the art.

    Let me start by saying I would want Icarus launched with a full “default” software load, intended to complete the mission autonomously with the highest possible chance of success, if all communication from the Earth failed. However if you can improve, or deal with the unexpected then I would want to have that opportunity.

    Let’s consider the engine software. I’m working at one remove from the team so I don’t know the Icarus design, and the original Daedalus was designed to work with much less computer power than we now expect. I think the engines will be highly dependent upon software, and I doubt that a full-up test of the main engine for more than a day will have occurred before launch. Clearly we could go straight to a 2 year burn if there is no restart capability. However, given the extreme nature of the design I would expect something more like “order of magnitude” steps: burn for 24 hours, characterise the engines, optimise and adjust for real operating characteristics; burn for 10 days, repeat; burn for 100 days, repeat; then burn for the remainder of the mission. Lessons from the first stage should also be applied to the deceleration stage software.

    Even after 100 days the distance to Earth will be such that I would want the option of Earth based analysis and recoding. We are only talking light-days. Note I say “recoding” deliberately. We might get away with just parameter adjustment and optimisation if all goes well. However, if there is some effect we didn’t previously understand, or some major damage to the vessel, we want to be able to code round it. For example, if we have asymmetric thrust we may want to change ignition frequencies to reduce vibration, and rotate the craft while varying thrust to cancel the effects. We could try to write a goal seeking AI to run on board to solve this with genetic algorithms, but I would still want to have an option for ground uplink, and use it if available. The lesson of many previous missions (Voyager, Galileo, Hipparcos, SOHO, and Cassini/Huygens) is that you want to change things after launch. I really consider this a more reliable approach than a straight 2-year all or nothing burn. I’d even go with live software updates during the burn if there is no restart capability.
    (BTW I have also suggested either varying the burn, or adding chemicals with distinct spectral lines, as a (very low bandwidth) status telemetry channel. The primary lesson of many missions is that you always want to know what happened when things go wrong.)

    In terms of the stresses on software caused by the environment, this has to be designed around, but I would point out that the techniques of error correction and hardware resilience still exceed the state of the art in software design. It is relatively easy (though expensive, and with a power and mass penalty) to have multiple hardware devices which are inherently rad-hard and mathematically error correcting. You can even make the ALU error correcting for single bit flips. What you cannot prevent easily is a single point failure due to software design or implementation; the first flight of Ariane V is a case in point, where both of the redundant attitude controls failed, essentially simultaneously, due to a value overflow.

    My own experience with Huygens was that the software was tested more rigorously than anything else I have ever been involved in, but the very small size of the software made that possible. The design itself was inherently fault-tolerant, and critical elements were also specially protected in software. The test software was 10 times the size of the flight software, and the software itself was written in simulation while the hardware was designed. What I found both amazing and gratifying was that the very first time hardware and software met, the software ran, produced telemetry, and tried to run a (simulated) mission. It was designed like that: whenever it started it assumed it was “in mission”, and it had to be explicitly commanded out of that mode for test purposes. Nonetheless its boot sequence involved checking EEROMs for software updates, verifying them and patching them in. And these were used: the software was significantly patched after launch.
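
    As a toy illustration of the hardware-style bit-flip masking I mean, here is a triple modular redundancy (TMR) voter, written in Python purely for clarity (the real thing is done in logic, not software):

```python
def tmr_vote(a, b, c):
    """Bitwise majority vote over three redundant copies of a value.

    A single-event upset in any one copy is outvoted by the other
    two, so the voted result is unaffected by one bit flip."""
    return (a & b) | (b & c) | (a & c)

value = 0xDEADBEEF
corrupted = value ^ (1 << 7)  # single bit flip in one copy
assert tmr_vote(value, corrupted, value) == value  # masked
```

    Note the limitation this illustrates: TMR masks independent random upsets, but gives no protection at all if all three copies run the same flawed software, which is exactly the Ariane V failure mode.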

    In terms of testing, I think the power of simulation is sufficient for these tests, especially after you have calibrated the models with real data. ESA’s approach is to purchase a “full-up” simulator, available for the mission duration, for each scientific satellite launched. In the Huygens case this involved the flight spare equipment connected to the integration test equipment, which simulated all external sensors and was controlled by ground based computers with orders of magnitude more processing power than the on-board devices. Think of the “brain in a jar” thought experiments: the software literally doesn’t know if it is running the real mission or a simulation. It is even possible to build hardware to induce failures, and much of the integration and system testing involves failure modes. Operator training pre-launch (the simulation campaign) also involves simulated failures and recovery procedures.

    I particularly liked your example of the “blueprint” approach to the VHMS. This is a prime example of where continuous on-ground simulation after launch can provide more and more useful data to upload to an on-board autonomous decision making system. You launch with the VHMS and a basic set of blueprints, and then spend the next 30 years developing more characterisations, each one enhancing the vessel’s fault tolerance.
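
    In skeleton form (all names here are mine, invented for illustration), the blueprint approach is just a fault-signature lookup that can be grown by uplink over the years, with a safe mode fallback for anything unmodelled:

```python
class BlueprintStore:
    """Toy VHMS blueprint store: fault signatures map to
    pre-validated recovery procedures. Names are illustrative."""

    def __init__(self):
        self.blueprints = {}

    def uplink(self, fault_signature, procedure):
        """Add or replace a ground-validated recovery blueprint."""
        self.blueprints[fault_signature] = procedure

    def recover(self, fault_signature):
        """Return the recovery procedure for a detected fault,
        falling back to safe mode for anything unmodelled."""
        return self.blueprints.get(fault_signature, "enter_safe_mode")

store = BlueprintStore()
store.uplink("reaction_wheel_2_stiction", "spin_down_and_redistribute")
print(store.recover("reaction_wheel_2_stiction"))
print(store.recover("unmodelled_fault"))  # falls back to safe mode
```

    The appeal is that every entry is deterministic and was certified on the ground before uplink; the autonomy lies only in selecting from the table, which is far easier to certify than a phenomenological search.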

    Also, modular, hierarchical techniques can be used to prevent software errors propagating either horizontally (to other software on the same level) or vertically (to higher level oversight software). Such (relatively conventional, even traditional) software practices are, in my own view, more practical than non-deterministic AI approaches for this sort of mission. I hope time proves me wrong, because the world would be much more interesting, but general purpose AI seems no closer now than it did when Turing wrote his first papers. As mentioned above, the main problem I see with software failure is commonality: the lack of propagation is no comfort if every scientific sub-probe fails at the same point. Previous exercises in software engineering (including formal proofs using schemes like Z, or the multiple coding teams tried on the Airbus) have yet to provide a full solution for this, but I would expect to use every tool available for the coding of the Icarus software.

    You also mention other subsystems. Again I agree that the basic launch software must be sufficient, but I would want the option to upgrade. Consider the comms software: if a new decoder technique allows me to double the bandwidth received with a change to the transmission software, or if the attitude control is so good that I can increase the data rate because I’m getting the centre beam strength at all times, then I’d love to increase the science return. If some unknown deep space effect reduces signal strength then I want to tell the probe, so I can get half the data reliably rather than risk losing it all. I would expect multiple telemetry options for both upload and download; even in the solar system it is common to have high, medium and low bandwidth modes, and multiple antennae. This is not a cost or reliability penalty for the craft, as this subsystem is needed anyway. Again, the exact approach to comms for Icarus remains to be optimised, but I expect that all the planetary sub-probes will be using the main Icarus vessel as a comms relay. This implies that Icarus will need to be able to operate reception from perhaps 20 sources scattered around the target system, so it is likely to have multiple steerable high gain antennae for this purpose. What can pick up a few watts of transmitted power across a solar system should also serve for much higher power transmissions from Earth.

    The issue of satellite hijacking has been considered extensively by industry, and encryption and verification command approaches are already used. Actually I think this is less of an issue for Icarus than it is for something like a geosynchronous comms satellite. A denial of service attack on those is relatively easy, and blackmail has been used on the web to make denial of service threats into a profitable form of illegal business.

    There will need to be an organisation on Earth to manage Icarus throughout its flight and encounter phases. (I for one hope for cruise phase science as well as encounter science.) However, it is likely to be frighteningly small, if recent trends are any guide; quite possibly all the people involved will be part-time, with “day jobs”. One key aspect of after-launch software development is that it gives a team a set of structured activities that keep them fully familiar with the systems. This “knowledge maintenance” activity is significant with 10 year missions; it will be crucial with Icarus. In this respect I see post launch development and testing as a bonus, not a penalty to the overall mission costs. It also spreads the cost over years. The cost of writing software also need not necessarily go up, as additional computer power on the ground, and new tools and development approaches, may reduce it. As an extreme case we could look to open source models, with just management and oversight being funded, or a sponsorship model with companies funding in return for advertising. Exploration and exploitation rights could also be granted, though that may be undesirable, or politically unacceptable.

    In terms of the useful time window for software updates, there are a few scenarios I have considered. Firstly, if Icarus orbits the target star, I would expect a 10 year lifetime in that position – potentially allowing updates after the first local data return. This might include the NASA classic of selecting a “death target” for Icarus to crash into, but it also allows for a multitude of serendipitous science. Even if the encounter phase is much shorter than this, the relative velocities and sensing capabilities suggest update possibilities. Consider a target star at a range of 5 ly. At 0.1c that is a 50 year flight, but data sensed by Icarus at one light year out can still lead to updates arriving before the encounter. That implies useful maintenance on the spacecraft for around 80% of its lifetime.
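    The timing claim above can be checked with simple light-time arithmetic (units of light years and years, so c = 1). For the 5 ly / 0.1c case, data sensed one light year out reaches Earth at year 44, and a reply chasing the craft at c catches it at about year 48.9, just over a year before the 50-year encounter:

```python
def update_catch_year(d_total_ly: float, d_sense_ly: float, v: float) -> float:
    """Year after launch at which an Earth reply, sent in response to data
    sensed d_sense_ly short of the target, catches a craft cruising at v (in c)."""
    x_sense = d_total_ly - d_sense_ly   # craft distance from Earth when sensing
    t_earth = x_sense / v + x_sense     # cruise time so far + downlink light-time
    # Reply travels at c while the craft keeps moving at v, so the
    # catch-up condition t - t_earth = v * t gives:
    return t_earth / (1.0 - v)

t_catch = update_catch_year(5.0, 1.0, 0.1)
# t_catch ≈ 48.9 yr for a 50 yr flight: the update arrives ~1.1 yr before encounter.
```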

    Thank you again for your comments – it is good to debate these issues positively. I hope we all get the chance to see some of this in action!

    Jardine Barrington-Cook, FBIS

  4. Peter Popov says:

    I have one comment regarding the engine software control system. I cannot imagine a situation in which a unique fusion engine will be designed from the ground up just for the purpose of sending the first starcraft ever. If something like Icarus ever happens, it will be based on the continuous accumulation of engineering knowledge and practice over decades. By the time someone decides to build an interstellar craft, there will be decades of experience with fusion propulsion systems of every conceivable sort. So no one will actually write the software from the ground up. What will happen is that an existing design – qualified, tested and used in routine interplanetary travel, with hundreds of thousands of hours of operation – will be upgraded, hardware and software, for a new purpose. To make the point clear: all of the possibilities listed in “Fusion Energy in Space Propulsion, T. Kammash, AIAA, 1995”, and more, will be tried and tested, some adopted and then used for commercial purposes. Then one will be scaled up for interstellar travel.

    A more general remark is that throughout the blog I see this recurrent theme of designing a starcraft on the premise of an earth-bound civilization with no interplanetary experience. This is simply impossible. A necessary condition for even the simplest interstellar probe attempt is an established, earth-centered or otherwise, interplanetary industrial civilization.

  5. Pat Galea says:

    Peter, those are good comments.

    Regarding your final paragraph, this is actually one of the debates that we’re having in the team. Clearly the original Daedalus design of the 1970s required a very significant interplanetary civilization in order to implement the He3 mining of Jupiter to supply the fuel for the craft. One of the issues that we want to understand better is whether this is a necessary requirement for an interstellar probe, or merely a result of the particular design configuration that they came up with.

    I personally lean toward the former; that any interstellar probe design will require significant interplanetary infrastructure in place. But it is nevertheless interesting to push the design to establish how little would be required.

    I also agree that a real interstellar probe would probably evolve out of other craft. One of the things we want to establish is a plausible roadmap of precursor missions and technologies that would be needed before a real craft is launched. We’ll have more on this later in the project! Watch this space.

  6. Peter Popov says:

    Well, if you pose the design problem that way, then I would suggest considering a D-T fuelled system. In the above-mentioned volume there is a detailed engineering analysis of steady state (magnetically confined) D-T and D-He3 systems. The goal was to see which one is more viable as an enabling technology for interplanetary commerce. It turned out that the D-He3 engine is orders of magnitude more massive than the D-T one. If memory serves me right, the primary reason was the cooling requirements due to the much higher bremsstrahlung losses of D-He3 plasmas.

    I have to admit that I have no idea how expensive it is to breed tritium, but a D-T system would eliminate the need for a He3 mining operation. So theoretically, you could launch an interstellar probe in an Apollo-style program. I mention this because setting up a He3 mining operation has to make commercial sense; otherwise it will be horrendously expensive (cf. the 90-Day Mars study of 1990, with its huge infrastructure requirements and associated cost estimate).

    Likewise, from a performance point of view both D-T and D-He3 are good enough for interplanetary flight (though the second cannot realistically be built without existing space infrastructure). However, I do not know whether D-T is adequate for an interstellar probe.

  7. Adam says:

    Hi Peter
    The power balance issue for advanced fuels like D-He3 is certainly challenging, so we’re labouring over fusion ignition concepts to try to figure out what works best. Any star-probe that hopes to reach other star systems in 100 years or less needs a power/mass ratio of >1 MW/kg. But it also needs a fuel that can be stored for the long term, especially if we’re planning to use fusion propulsion to stop and explore the target system. With D-T one has to handle tritium’s 325 W/kg decay heat and its short half-life, neither of which is easy. There has been a lot of work on D-He3, and bremsstrahlung doesn’t seem intractable, but I think you might be right that a long development will be required for interstellar-capable designs using steady-burn fusion.
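    The 325 W/kg decay-heat figure can be sanity-checked from two standard tritium data points: a half-life of about 12.3 years and a mean beta energy of about 5.7 keV. A back-of-envelope sketch (the input constants are assumed values, rounded):

```python
import math

# Assumed tritium data: half-life ~12.32 yr, mean beta energy ~5.69 keV.
HALF_LIFE_S = 12.32 * 3.156e7            # half-life in seconds
MEAN_BETA_J = 5.69e3 * 1.602e-19         # mean beta energy in joules
ATOMS_PER_KG = 6.022e26 / 3.016          # Avogadro (per kmol) / molar mass of T

decay_const = math.log(2) / HALF_LIFE_S  # per-atom decay constant, 1/s
watts_per_kg = ATOMS_PER_KG * decay_const * MEAN_BETA_J
# Comes out around 3.2e2 W/kg, consistent with the 325 W/kg quoted above.
```

    The decay heat is large because every decay deposits its beta energy locally; over a 50 year cruise it also means most of the tritium is simply gone, which is the storage problem referred to above.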
