Why cutting-edge spacecraft use ancient computers

NASA illustration of the Moon landing
(Image credit: Getty Images)

It’s often said that our mobile phones are far more powerful than the computers used to put Neil Armstrong and Buzz Aldrin on the moon in 1969. In fairness, that’s to be expected. Half a century is a long time in technology.

More surprising is that the computer on board the US space agency’s New Horizons probe, which reached Pluto in 2015, is also less powerful than the best smartphones. So too is the computer on board NASA’s Orion spacecraft that will, one day, send humans to Mars.

It may appear rather strange. NASA has been at the forefront of technology for decades, giving us cordless vacuum cleaners, solar cells, wireless headsets and air purifiers among a host of other goodies. Yet it also tends to lean towards the tried and tested.

“For us, it’s all about reliability,” says Alan Stern, principal investigator of the New Horizons mission. “After all, you can’t fix technology on the way to Pluto, so reliability and long parts and operations experience far trumps the need to use the fastest, newest computers in the spacecraft we send off to the planets.”

The original 1994 Sony PlayStation

In the case of New Horizons, which launched in 2006, engineers made use of a 32-bit Mongoose-V RISC processor with a clock speed of just 12MHz. Created by Synova, it was based on the MIPS R3000 CPU introduced in 1988 – a CPU that went on to power the original PlayStation a few years later. To ensure the chip could withstand its long and arduous journey into space (the probe is now more than 50 times farther from the Sun than Earth is), the processor was radiation-hardened.

Modest as that specification sounds, the computer has been responsible for the bulk of the spacecraft’s processing, guided by intricate flight software. It collects and processes instrument data, distributes operating commands to each of the probe’s subsystems and runs algorithms that check for problems, correcting them autonomously if necessary. It has been nothing short of a success.
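
To picture what that flight software does, here is a minimal sketch in Python of the loop the description implies: gather readings, pass on commands, run health checks and react without waiting for Earth. Every name and threshold in it is hypothetical; this illustrates the pattern, not New Horizons code.

```python
# Hypothetical sketch of a spacecraft's main flight loop, illustrating the
# duties described above. Not New Horizons code; all names are invented.

SAFE_TEMP_RANGE = (-30.0, 50.0)  # illustrative limits, degrees Celsius

def health_check(telemetry):
    """Return the names of subsystems whose readings are out of range."""
    low, high = SAFE_TEMP_RANGE
    return [name for name, value in telemetry.items()
            if not low <= value <= high]

def flight_loop(instruments, subsystems, command_queue):
    while True:
        # 1. Collect and process instrument data
        telemetry = {name: sensor.read() for name, sensor in instruments.items()}

        # 2. Distribute operating commands to each subsystem
        while command_queue:
            target, command = command_queue.pop(0)
            subsystems[target].execute(command)

        # 3. Check for problems and correct them on the spot; a radio
        #    round trip to Pluto takes hours, so the craft cannot wait
        for name in health_check(telemetry):
            subsystems[name].enter_safe_mode()
```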

“When we plan a mission, there’s always an involved process of parts selection,” Stern says. “We set technology requirements and then find a flight computer that meets our needs, same for all missions.” And yet at one time, NASA did rely purely on cutting-edge tech for its spacecraft.

The birth of Silicon Valley

Although the New Horizons computer is far more powerful than the one used aboard Apollo 11, which landed the first humans on the moon, the Apollo Guidance Computer (AGC) was advanced for its time. Then again, computing was still in its infancy, so it had to be.

Developed by the Massachusetts Institute of Technology (MIT), the AGC contained thousands of silicon chips, yet it had only 74KB of ROM and 4KB of RAM, and it operated at 0.042MHz. The use of such chips was relatively new: the first working integrated circuit had only been demonstrated in July 1958 by American electrical engineer Jack Kilby while working for Texas Instruments, and a patent was filed the following year.

NASA’s Apollo Program seized the opportunity to make use of the technology, becoming the largest consumer of semiconductor chips between 1962 and 1967 – snapping up more than 60% of those made in the United States. It took 300 people seven years at a cost of $46 million to create the mission software, including the on-board programme Luminary, yet the move was pivotal.

NASA’s fierce appetite for integrated circuits led to the rapid rise of Silicon Valley and helped the price of the chips to plummet so much that they were selling for $15 by the time Neil Armstrong was making his one small leap.

The important thing is that the technology – while primitive by today’s standards and no better than a pocket calculator – was more than sufficient. It not only got humans to the Moon, it ensured they arrived back.

Astronauts would use a calculator-style keyboard to input commands as numerical codes (the digits representing verbs and nouns), and these would tell the spacecraft what to do. It was a method that worked, and the computer performed so well that it would be another eight years or so before consumers could enjoy similar technology in the likes of the Apple II and Commodore PET.
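
The verb-noun scheme paired an action code with a data code. Here is a rough Python sketch of the idea; the two codes shown (Verb 16 to monitor a value, Noun 36 for the mission clock) follow commonly documented DSKY conventions, but the code itself is purely illustrative, not AGC software.

```python
# Illustrative sketch of the Apollo DSKY's verb-noun command scheme.
# Not AGC software: a "verb" names an action, a "noun" names its data.

VERBS = {16: "monitor"}        # Verb 16: continuously display a value
NOUNS = {36: "mission clock"}  # Noun 36: elapsed mission time

def dsky_entry(verb, noun):
    """Decode a keyed-in verb-noun pair into its meaning."""
    action = VERBS.get(verb, "unknown verb")
    operand = NOUNS.get(noun, "unknown noun")
    return f"V{verb:02d} N{noun:02d}: {action} {operand}"

print(dsky_entry(16, 36))  # -> V16 N36: monitor mission clock
```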

Heavy-duty machinery

Apollo’s mission control computers on the ground were a tad more advanced, however. The engineers and flight technicians used five IBM System/360 Model 75 mainframes, released in 1965, among other systems; at the time, these were some of the most powerful computing systems in the world.

One of these machines was used to calculate the lift-off data used in Apollo 11 for the flight back to Earth, according to IBM. “We use up-to-date computers in our Mission Control, too, because they can be repaired or replaced easily,” says Stern.

But that’s not always the case. James Mason, who today works as a research scientist and engineer at the Applied Physics Laboratory at Johns Hopkins University in Baltimore, Maryland, US, recalls a visit to the US government’s White Sands Test Facility in New Mexico. The laboratory contains computers that support the sensors on board NASA’s Solar Dynamics Observatory, which launched in 2010.

A NASA control room

As reported in the journal Nature, data was streaming to a desktop computer dating back to the 1980s. “It was a TD-plus and I’ve spent a lot of time trying to find information about it,” Mason says. “But I’ve seen this kind of thing at NASA before, as well as at the US National Institute of Standards and Technology, because, well basically, if it ain’t broken, don’t fix it.

“If we had an abundance of time and money then we’d keep those systems up to date, but we’ve just got other things to do that are more directly related to the end-goal of why we are at such facilities in the first place – to get a new instrument calibrated or to launch a telescope into space to gather the data.

“Even without keeping all of the resources, like the computers, constantly up to date, and the whole team trained on what’s changed, we have months of work leading up to and after each of these events. So that’s why these systems get used.”

It suggests an underlying ethos: proven technology should be retained until it proves unreliable. Once a computer looks to be on its last legs, or there is a compelling reason to upgrade because something can do the job better and just as reliably, it will be replaced – assuming it isn’t millions of miles away.

As further proof, Commodore Amigas were crunching numbers and analysing images in NASA’s research offices from the 1980s until 2004, long after the range was discontinued in 1996. The computers proved to be a good way of processing launch vehicle telemetry until something better came along.

Is the old 1980s computer at White Sands, though, still in use? Alas no. It ended up being ditched in 2015 because a connector and ribbon cable – “one of those old, wide rainbow ones,” said Mason – were starting to fail. Yet it would have made life easier had it continued to work well.

“The connector in particular was not staying connected and we’d get intermittent dropouts,” he adds. “But I have to say that working with the replacement has been a mixed bag. Yes, it’s got a really cool display that looks a lot like the Hollywood depiction of NASA but it’s also vastly more complex and it took us multiple weeks just to get it configured correctly at our most recent launch in September last year.

“The TD-plus was super simple and it never changed year-to-year so it would take about five minutes to get it set up for each launch. It didn’t look like much but that time we didn’t spend fiddling with something new was time we could spend on more pressing parts of the rocket schedule.”

Space invaders

Any replacement would have been introduced only after much research into its suitability, although personal preference and cost are also taken into account. An insightful video from 1998, for example, shows Gary Jones, then NASA’s principal systems engineer, demonstrating Amiga computers at Hangar AE. The machines were supporting the launch of the Space Shuttle Endeavour to the Russian Mir space station, but Jones says the first choice had been a selection of Apple Macs, until NASA realised they were too much of a closed system.

Jones also reveals the space agency had wanted to use PCs running Windows 95 and NT instead, but the engineers kept insisting they weren’t fast enough. DEC Alphas, based on 64-bit RISC architecture, were proposed but deemed too expensive.

Oddly, it was felt the Amiga didn’t cost enough, but the machines made it into the building nevertheless. As if to show how widely they were used, one Commodore computer – an Amiga 2500 released in 1987 – was put up for sale on eBay 30 years later, having been used by NASA’s Telemetry Lab. It sold for more than $5,000.

But those are machines on the ground. The computers sent skywards as integral parts of spacecraft are generally tested much more thoroughly. “There’s an entire branch of NASA called the Space Technology Mission Directorate that, among other things, flies a lot of satellites specifically to see how new technologies will perform and ‘de-risk’ them for future science missions,” Mason says. “But even in those cases, most of the satellite will be heritage components so that we can be sure to get data back about the new technology in question.”

NASA is certainly more than happy for the Orion spacecraft to contain processors that are getting on for 20 years old. After all, it needs the craft to journey to Mars through the Van Allen belts above Earth where it will be bombarded with radiation. The Honeywell flight computer runs on two IBM PowerPC 750X single-core processors that have been around since 2002. Originally used in Boeing 787 jets, the processors are considered to be more than up to the job.

“In general, it takes about three years to build flight hardware and get it ready for flight,” says Matt Lemke, NASA’s Orion avionics, power and software manager. “We are limited in what processors we can select due to the need for extreme reliability, as well as the radiation environment that ground systems don’t need to be concerned with.

“Once we have gone through the expense of selecting a processor and building it into the flight hardware, we don’t change unless there is a compelling reason. The selected processor for Orion has all the computing power we need and it has been working great. So far it has been used in our pad abort flight test, the Exploration Flight Test-1 mission, the upcoming Artemis I flight, and we anticipate using this processor all the way through Artemis XII.”

Long-range photo of NASA's Artemis I test flight

Even so, precautions have been taken. “We build them into a ruggedised custom design,” says Lemke. Indeed, the computer has been placed within a larger case, and the hardware and circuit boards are thicker. Just in case the worst happens, though, three computers will be on board – the third acting as a backup should the other two go kaput, or at least end up having to be reset (if this happens, they’d be down for 20 seconds, and that can be a rather long time when travelling at great speed).
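
The arrangement Lemke describes can be sketched as a simple failover, shown below in hypothetical Python. This is an illustration of the redundancy pattern, not Orion’s avionics software; only the roughly 20-second reset window comes from the article.

```python
# Hypothetical sketch of Orion-style redundancy: two primary computers
# plus a backup that takes over while a reset computer is unavailable.
# Not actual Orion software; names and structure are illustrative.

RESET_TIME_S = 20  # per the article, a reset computer is down ~20 seconds

class FlightComputer:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def reset(self):
        """Simulate a radiation-induced upset forcing a reboot."""
        self.healthy = False  # unavailable for roughly RESET_TIME_S seconds

def active_computer(primary_a, primary_b, backup):
    """Return whichever computer is currently able to fly the vehicle."""
    for computer in (primary_a, primary_b, backup):
        if computer.healthy:
            return computer
    raise RuntimeError("all flight computers down")

# Both primaries upset at once: the third computer takes over.
a, b, spare = FlightComputer("A"), FlightComputer("B"), FlightComputer("backup")
a.reset()
b.reset()
print(active_computer(a, b, spare).name)  # -> backup
```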

Personal computing out of this world

Astronauts also use personal computers in space and these, too, are carefully selected. The first laptop to fly on the Space Shuttle was the GRiD Compass, which, after beginning development in 1979, was released three years later. Starting with model 1101, it used a clamshell case made of magnesium alloy and weighed 4.5kg, less than half the weight of its main competitor, the Osborne 1.

Packing a 16-bit Intel 8086 chip, 340KB of magnetic bubble memory and a 1,200bits/sec modem, the computer also had a 320 x 240 8.5in plasma screen. The bubble memory doubled as its storage system, and the lack of moving parts made it ideal for the rigours of space, which is why NASA sent it up on board the Space Shuttle Columbia on 28 November 1983.

Codenamed SPOC (short for Shuttle Portable On-Board Computer), it was used by astronauts for on-board navigation, tracking fuel and checking other data on board the shuttle, as well as for planning their time, including photo shoots. NASA’s faith in the GRiD Compass was proven in January 1986 when it survived the terrible Challenger shuttle disaster, which claimed the lives of all seven crew members aboard. Once recovered, it continued to work.

There have been many other computers on board spacecraft since, including a few Raspberry Pi devices. In 1997, Asus saw its P6300 laptop launched into space, and two P6100 laptops lasted the full 600 days aboard the Mir space station the following year. Asus marked the 25th anniversary of this moment in its history by launching the ZenBook 14X OLED Space Edition this year. It says the new machine could also withstand going into space, having been put through its paces by NASA, and that standards have improved greatly over the years.

“The testing procedure is now rigorous and includes being tested at high altitudes, extreme temperatures (high and low), extreme shock pulses and being exposed to sand, dust, fungus and solar radiation,” says Ciprian Donciu, Asus UK country manager, who adds that the device has to continue working throughout.

“But this was not the case 25 years ago. The requirements for space testing were completely different yet our laptops lasted. After the Mir station experienced electrical and some technical issues, the laptop itself was one of the few items onboard that was able to be successfully used by the Russian astronauts consistently. It was so reliable that the cosmonaut personally thanked Asus.”

It’s likely that space agencies will continue to mix old with new for some time to come, selecting tech entirely on its merit rather than age. Although NASA could afford the most cutting-edge computer components, there is simply too much at stake. “We have to work with a known quantity,” Mason says. “It simply poses a lower risk.”