Intelligent spacecraft are still a few years away, but robots and automated systems can meanwhile play a large part in extending space exploration

The spaceflight industry has just one year left to emulate Arthur C Clarke's HAL, the spacecraft computer that became too intelligent in 2001: A Space Odyssey. But an "intelligent spacecraft", even an unmanned one, is still the stuff of science fiction.

Advances have been made, however. One of the most publicised is the success of Remote Agent - the autonomous controller of the Deep Space 1 planetary spacecraft, built by Spectrum Astro for NASA's New Millennium programme. A precursor of future autonomous spacecraft, Deep Space 1 celebrated its first year in space in October.

But while we are a few years away from intelligent spacecraft, increasing automation is becoming a reality as designers seek to achieve more with less. The high cost of manned spaceflight means that the exploration of space is likely to be accomplished robotically for some time.

NASA is looking to robotic systems to reduce the costs and increase the safety of a manned mission to Mars. The US Department of Defense is studying automated vehicles that could manoeuvre in space to service and refuel - or inspect and disable - orbiting satellites. Commercial companies are proposing robot probes that could return marketable rock samples from the moon or near-Earth asteroids.

Probes are a good example of the benefits of autonomous spacecraft. Even within the confines of our solar system, it can take hours for signals to flow to and from the Earth and the spacecraft, making it difficult to control the vehicle - particularly if it encounters some unexpected problem or opportunity.

Artificial intelligence

An artificial intelligence software system like Remote Agent can extend the boundaries of space exploration, autonomously monitoring spacecraft health, troubleshooting problems and even replanning the mission to accommodate an instrument failure and still provide valuable science data. The system would be able to repair faulty payloads and even take advantage of unplanned science opportunities by reacting to unexpected phenomena, such as an asteroid emitting uncharacteristic radiation.

"Automated reasoning," as NASA terms it, promises production benefits, allowing "the control systems of spacecraft to be plugged together like Lego blocks". A planetary rover, for example, could be programmed by selecting from a library of software models for specific behaviours, like "find the next most interesting rock close by and move to it". The prolonged process of checking out a spacecraft before launch could also be streamlined, with the models being flexible enough to accommodate last-minute changes in a mission plan. These would normally require a complete rewrite and revalidation of the software.
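The "Lego block" idea can be illustrated with a small sketch. This is not NASA's actual system (Remote Agent and its successors were far more sophisticated); it is a minimal, hypothetical example of composing a mission from a library of reusable behaviour models, with all names invented for illustration:

```python
# Hypothetical sketch of "goal-directed, model-based" programming:
# a rover mission is assembled from a library of reusable behaviour
# models rather than hand-written control code. All names are
# illustrative, not drawn from any real NASA software.

BEHAVIOUR_LIBRARY = {
    "find_interesting_rock": lambda log: log.append("scanned terrain"),
    "move_to_target":        lambda log: log.append("drove to rock"),
    "analyse_sample":        lambda log: log.append("ran spectrometer"),
}

def build_mission(behaviour_names):
    """Compose a mission plan by plugging library models together."""
    return [BEHAVIOUR_LIBRARY[name] for name in behaviour_names]

def run_mission(plan):
    """Execute each behaviour in order, collecting an activity log."""
    log = []
    for behaviour in plan:
        behaviour(log)
    return log

mission = build_mission(["find_interesting_rock", "move_to_target",
                         "analyse_sample"])
print(run_mission(mission))
```

Because a mission is just a list of names drawn from the library, a last-minute change of plan means swapping entries in that list rather than rewriting and revalidating the underlying control code.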

NASA claims this method of "goal-directed, model-based" programming has demonstrated its usefulness on the Deep Space 1 mission. "Over three days last May, Remote Agent planned Deep Space 1 activities on board and carried out the plan without ground intervention," the agency says. Taking command of the probe, the artificial intelligence software "detected, diagnosed and fixed simulated problems, showing that it can make decisions to keep a mission on track", NASA says.

Remote Agent is made up of three components, each of which plays a significant part in controlling the spacecraft. The Planner and Scheduler (PS) component produces flexible plans, specifying the basic activities that must take place to accomplish mission goals. The Smart Executive (EXEC) carries out the planned activities, while the Mode Identification and Recovery (MIR) programme monitors the health of the spacecraft and attempts to correct problems.

Mission goals

All three parts work together to make sure the spacecraft accomplishes its mission goals. The EXEC requests a plan from the PS, which produces a plan for a given time period based on general mission goals and the current state of the spacecraft. The EXEC receives the plan, fills in the details - for example, what spacecraft system actions must take place to complete planned activities - and commands systems to act.

The MIR constantly monitors the state of the spacecraft, identifying failures and suggesting recovery actions. The EXEC executes the recovery actions or requests a new plan from the PS which takes into account the failure.

The parts of Remote Agent are constantly communicating, both with each other and with external components of the spacecraft. While the MIR receives information about the status of different systems from monitors located throughout the spacecraft, the PS must receive information from system "planning experts" to generate a plan of action.

For example, the navigation system reports to the PS on the probe's current position and the attitude control system tells the PS how long it will take to turn the spacecraft to a new position. Finally the EXEC, acting on the PS's plan, sends commands to other pieces of flight software, which in turn control the spacecraft's systems.
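The division of labour between the three components can be sketched in a few lines of code. This is a loose, hypothetical illustration of the PS/EXEC/MIR loop described above, not the real Remote Agent (which was written in Lisp and vastly more capable); every class and method name here is invented:

```python
# Illustrative sketch of the Remote Agent architecture: PS plans,
# EXEC executes, MIR monitors and suggests recoveries. All names
# are hypothetical.

class Planner:
    """PS: produces a plan of basic activities from mission goals."""
    def make_plan(self, goals):
        return [("activity", g) for g in goals]

class ModeIdentification:
    """MIR: monitors spacecraft state and suggests recovery actions."""
    def check(self, state):
        return ("recover", state["fault"]) if state.get("fault") else None

class Executive:
    """EXEC: requests a plan, fills in details, commands systems."""
    def __init__(self, planner, mir):
        self.planner, self.mir = planner, mir

    def run(self, goals, state):
        executed = []
        for step in self.planner.make_plan(goals):  # EXEC requests a plan
            fault = self.mir.check(state)           # MIR watches constantly
            if fault:
                executed.append(fault)              # run the recovery action
                state.pop("fault", None)            # assume recovery succeeds
            executed.append(step)                   # then continue the plan
        return executed

exec_ = Executive(Planner(), ModeIdentification())
print(exec_.run(["turn", "image"], {"fault": "thruster_stuck"}))
```

In the real system the EXEC could also go back to the PS for a completely new plan that takes the failure into account, rather than simply applying a recovery action and pressing on.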

There is little point in giving spacecraft "intelligence", however, unless they are equipped with capabilities, such as remote manipulation, that give them the flexibility to perform varied missions. So NASA is looking at ways to merge robotics and teleoperations to create telerobotics - the human-controlled supervision and remote operation of manipulator-equipped robots.

The agency's telerobotics programme ranges from basic scientific research to specific mission applications, such as remote sampling of planetary materials. NASA has been studying three main application areas: on-orbit assembly and servicing of space platforms using free-flying or attached robots; planetary surface exploration; and robotic tending of science payloads, which seems to have been put on the back burner.

NASA's goal for space telerobotics is "to develop, integrate and demonstrate the science and technology of remote manipulation such that by the year 2004, 50% of the spacewalking-required operations in orbit [and unmanned exploration of planetary surfaces] may be conducted telerobotically".

Teleoperation requires a strong interaction between the human operator and the controlled robot, necessitating vision systems and proximity sensors to supply the operator with sufficient information about the environment in which the robot is operating. On-orbit assembly and servicing will require virtual reality "telepresence", using advanced display technologies and sensor simulations to immerse the operator in the robot's environment.

NASA has identified a suite of development programmes using free-flying and attached robotic servicers, one of which is the Ranger Telerobotic Shuttle Experiment. Originally planned as a free-flying experiment, the Ranger is a four-armed telerobot which is set to fly in the Shuttle payload bay late this year, and will be used to demonstrate tasks required for robotic servicing of the ISS. The device will consist of two dextrous manipulators, each with wrist cameras, a permanently attached positioning leg and a mechanical arm that is equipped with stereo video cameras.

Claimed to be the most advanced space robot yet, the Ranger will be controlled from a ground station at the University of Maryland and will provide data for correlation with ground-based experiments conducted in a neutral buoyancy simulator. When the basic mission objectives have been met, NASA says, the Ranger will attempt to emulate the spacecraft servicing and work site preparation normally performed on spacewalks, to explore the limitations of telerobots.

Future shuttle and space station crews, meanwhile, could get their own robot companions. NASA's Ames Research Centre is developing the Personal Satellite Assistant (PSA), a softball-sized robot equipped with sensors to monitor the spacecraft's atmosphere, wireless communications connections to link the astronaut with the spacecraft's data network and a camera for video conferencing. The device will have its own navigation and propulsion systems and will be able to operate autonomously throughout the spacecraft.

"We're developing an intelligent robot that essentially can serve as another set of eyes, ears and nose for the crew and ground-support personnel," says NASA. It hopes to launch a PSA in about two years on the Space Shuttle and about a year later on the space station. "This will be an evolving prototype to test and evaluate different hardware, software and sensor suites," the agency says.

NASA foresees the robotic assistant taking on other tasks, including remotely monitoring payloads when crew members are unavailable. Multiple PSAs could collaborate to troubleshoot spacecraft environmental problems, flying in formation to locate a pressure leak, for example. The autonomous device could also be used for routine housekeeping chores, such as environmental sensor checks.

Beyond the PSA is Robonaut, a robot that could be operated outside the space station. It would be able to mimic operations being performed by an astronaut inside using virtual reality, and would be available for use within minutes of a problem occurring outside the spacecraft. This would eliminate the normal delay of several hours, including spacesuit donning and pre-breathing time, while astronauts prepare for a spacewalk.

Refuelling satellites

The US Defense Advanced Research Projects Agency plans to demonstrate the Advanced Space Transportation and Robotic Orbiter (ASTRO) in 2001. This would have a variety of uses, ranging from placing small payloads into orbit at short notice to refuelling and reboosting military satellites already in orbit. The ASTRO would be placed into a "cheap" orbit by a small launch vehicle and would manoeuvre to its required orbit.

On a larger scale, the US Air Force's Space Manoeuvre Vehicle programme is looking at a reusable spacecraft that could remain in orbit for up to a year before returning to Earth. Such a vehicle could manoeuvre in orbit to inspect other spacecraft, a task that is to be demonstrated in 2001 on the second shuttle deployment of the NASA/Boeing X-37 reusable space vehicle technology demonstrator.

NASA's main target of opportunity to make full use of robotics, however, is in the exploration of Mars. Robotic reconnaissance and surveying rovers will be needed for a proposed unmanned sample collection and return mission which could be staged as early as 2005. The agency says autonomous systems with automated reasoning will be required for the in situ production of propellant for the sample return mission and any "economically feasible" plan for the manned exploration of Mars.

Robotic surveyors will be required as a precursor to a manned landing. Long-range autonomous rovers will have to explore potential landing sites, place instruments on the surface, and take samples for analysis and return. They will require a high level of autonomy, including the ability to identify areas of potential scientific interest.

NASA's Jet Propulsion Laboratory is testing a prototype of a future mobile Martian explorer that will be able to travel for 50km over the surface of the Red Planet with an autonomous sensing, perception, navigation and manipulator system. Called Rocky 7, the vehicle is based on the Sojourner microrover that landed with the Mars Pathfinder in July 1997. A simpler version, called Marie Curie, will fly on the Mars Surveyor mission in 2001 to collect samples, which may eventually be returned to Earth later in the decade. A larger 60kg (132lb) rover will probably fly on a 2003 mission, equipped with autonomous traverse and navigation, remote visual designation and acquisition systems, and scientific instruments able to make intelligent selection and analysis of samples. An improved rover will fly in 2005 on the planned sample return mission.

Manned Mars landing

Looking further ahead, NASA is working with university researchers on advanced space robots which could further cut the costs and increase the safety of a manned landing on Mars. One concept is the University of Nebraska-Lincoln's Mars Surface Modular Robotic System. Recognising that the tasks facing a robot could range from handling scientific instruments to bulldozing Martian rocks, this proposed system consists of individual modules that can be assembled into different configurations, enabling the robot to be reconfigured in situ to perform different tasks.

Another concept is the Massachusetts Institute of Technology's (MIT) Self-Transforming Robotic Planetary Explorer. This would be able to change its configuration to overcome physical obstacles and perform a range of tasks. Conventional heavy physical components such as gears, motors and bearings would be replaced by lightweight compliant elements with embedded muscle-like actuation. Like many other advanced space robot concepts, MIT's Explorer imitates biological organisms in its design and operation. It seems that insects and animals still have a lot to teach us, even in space.

Source: Flight International