Imagine an interstellar probe that can pick its own orbit, take its own pictures, and send probes down to the surface of a far-away planet without human help. Or imagine a mission that hitchhikes on a comet, scanning the sky and picking out the most interesting targets among millions of locations without guidance from engineers in a control room back on Earth.

These are two examples of how NASA hopes to use artificial intelligence. As far-fetched as the concept sounds, the agency is already using AI in missions on both Earth and Mars. And there are other missions in the works that could see AI exploring icy moons in search of life.

This bot-friendly future stands counter to some of the fuss in the press this past week, after Facebook shut down an experiment because two artificially intelligent bots began communicating in a shorthand language instead of English. Many in the media portrayed the bots as coming up with their own language.

NASA Jet Propulsion Laboratory’s Steve Chien says that the reality is more subtle: The bots were not rewarded for using English, so they just sought out the most efficient route possible to communicate with one another. NASA, he added, takes robot safety very seriously. Space station astronauts occasionally work alongside Robonaut 2, a simple machine that can flip switches and do other menial tasks. In the future, he said, NASA astronauts could work with more intelligent robots on Mars, with the robots scouting sites and telling humans the most interesting locations to survey.

“NASA is very risk-averse [about crewed missions],” said Chien, who is technical group supervisor of the artificial intelligence group at JPL. “It’s a high-profile mission, and with a crewed program there’s even more of an obsession with safety than with robotic ones — as there should be.”

When thinking of autonomous robots working in space, a person might recall scary examples from the movies, such as HAL from 2001: A Space Odyssey. But AI robots are already working in space, and they are all helpful bots, more like GERTY in the 2009 film Moon, who works alongside astronaut Sam Bell at a lunar mining base.

NASA’s Mars rovers are already equipped with artificial intelligence that makes some decisions independently, a useful feature since a signal between a rover and Earth can take roughly 20 minutes each way because of the vast distance. The most famous example is the Curiosity rover, whose automated targeting system aims its cameras, and its laser, at rocks and other objects the system considers worthy of inspection. A more primitive version was installed on the older Opportunity and Spirit rovers, and Opportunity is still running 13 years after landing on Mars.

Closer to home, NASA used artificial intelligence on its Earth Observing-1 satellite, which completed its mission earlier this year. The software, called the Autonomous Sciencecraft Experiment (ASE), ran on board from 2003 and helped scientists spot interesting events on Earth’s surface, such as volcanic eruptions, meaning alerts could go out to the public faster than if humans had to review the data first.

There are also two ongoing experiments that scan for interesting events such as supernovas and select the “best of the best” data for scientists to evaluate. The first is V-FASTR, which searches radio astronomy data for fast transients such as pulsar pulses; the second is the Intermediate Palomar Transient Factory (iPTF), which looks for supernovas and other notable events at optical wavelengths. Work from iPTF even supported the first detection of gravitational waves, Chien said, by confirming that no supernova in the sky could have affected the signal the Laser Interferometer Gravitational-Wave Observatory (LIGO) announced in 2016.


The sky is a big place; peer into the night and you might see thousands of objects at any given moment. Before these experiments were available, Chien said, scientists arbitrarily picked 50 things to look at. Now they can look at the 50 objects the AI instruments determine to be the most interesting.
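The triage Chien describes, scoring every candidate event and keeping only the most promising few, can be pictured as a simple rank-and-truncate pass. This is only an illustrative sketch: the interest score and event fields here are invented stand-ins, while real pipelines such as iPTF weigh many features (brightness change, novelty, sky position, matches against known sources).

```python
# Illustrative "pick the 50 most interesting" triage.
# The interest score and event fields are hypothetical stand-ins,
# not the actual V-FASTR or iPTF scoring logic.

def interest_score(event):
    # Toy model: brighter, faster-changing events score higher.
    return event["brightness_change"] * event["rate"]

def select_targets(events, k=50):
    """Rank candidate events and keep the top k for follow-up."""
    return sorted(events, key=interest_score, reverse=True)[:k]

# A fake night of observations: a thousand candidate events.
events = [{"id": i, "brightness_change": i % 7, "rate": (i % 3) + 1}
          for i in range(1000)]
targets = select_targets(events, k=50)
print(len(targets))  # only 50 survive for scientists to evaluate
```

The point of the pattern is that scientists no longer pick 50 objects arbitrarily; the ranking function, however it is defined, decides which 50 are worth human attention.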

Here’s the exciting thing: These existing artificial intelligence bots are the technology of yesterday — particularly in the case of the teenaged Opportunity mission on Mars. As powerful as that technology was back in the early 2000s, researchers today can do so much more with computing.

The Mars 2020 rover is expected to depart for the Red Planet in three years. Multiple instruments on the rover will have autonomous imaging capability, Chien said, and the targeting will be a lot smarter. Not only will the rover be clever enough to choose an interesting target, it will also be able to pick the best approach for scanning it and obtaining the information researchers want. Mars 2020 could even reshuffle its schedule of tasks on the fly if it finishes something ahead of time, letting scientists squeeze the most they can out of the mission.
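Squeezing extra science out of spare time amounts to opportunistic scheduling: when a task finishes early, backfill the freed window with the most valuable bonus observation that still fits. A minimal greedy sketch, with task names, durations, and science values all invented for illustration:

```python
# Toy greedy backfill scheduler. When a planned task finishes early,
# fill the spare minutes with the most valuable bonus tasks that fit.
# Task names, durations, and science values are invented examples.

def backfill(spare_minutes, bonus_tasks):
    """Greedily pick bonus tasks, highest science value first."""
    chosen = []
    for task in sorted(bonus_tasks, key=lambda t: t["value"], reverse=True):
        if task["minutes"] <= spare_minutes:
            chosen.append(task["name"])
            spare_minutes -= task["minutes"]
    return chosen

bonus = [
    {"name": "extra_panorama", "minutes": 25, "value": 8},
    {"name": "soil_spectrum",  "minutes": 10, "value": 6},
    {"name": "sky_survey",     "minutes": 40, "value": 9},
]
print(backfill(30, bonus))  # → ['extra_panorama']
```

Real onboard planners solve a much harder version of this problem, with power, thermal, and data-volume constraints, but the greedy pass captures the basic idea of turning spare minutes into extra observations without waiting for ground approval.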

NASA’s planned Europa Clipper mission is scheduled to perform multiple flybys of an icy moon of Jupiter, one that has been observed spouting what appear to be water geysers, at least in the eyes of the Hubble Space Telescope. Clipper will operate in an extremely harsh radiation environment that is expected to reset or crash its computer several times in a single flyby. Because it takes hours to send instructions to and from Jupiter, Clipper will be equipped with a computer system that can diagnose problems and fix them before engineers back on Earth even know a problem has occurred.

Other projects remain in the proposal stage but are no less exciting when you consider the possibilities for artificial intelligence. NASA hopes to one day land a device on Europa, or perhaps Enceladus, another geyser-spouting moon, which orbits Saturn. (The agency is quite interested in these “ocean worlds,” as it calls them, since the moons could harbor microbial life.) Early studies suggest that space agencies could put a submarine in an ocean on one of these moons. But it would be a solo voyage because, Chien said, we might be able to communicate with the little submarine for only a month or so.

Artificial intelligence on board such a submarine would be tasked with figuring out where it could travel safely. Other considerations might include how to avoid obstacles, which targets have the most potential for observation, and at what temperatures it is safe to operate. Chien pointed out that on Earth, taking a typical robotic submersible from temperate waters to the Arctic always requires recalibration to adjust for the change in temperature.


The Comet Hitchhiker project, funded by the NASA Innovative Advanced Concepts program that gives early-stage funding to far-off mission concepts, could develop a spacecraft capable of catching a ride with a comet on its way to the outer solar system. With the craft operating so far from Earth, it would take hours to communicate back and forth with engineers, so an AI bot that picks targets by itself and sends the data back would be far more efficient. Self-selecting AI would also be useful for another mission concept that would send 100 small CubeSats to nearby asteroids; it would help the little spacecraft decide how to orbit the asteroids and what to image on their surfaces.

Breakthrough Starshot is a $100 million research and development program aiming to establish proof of concept for a “nanocraft,” a fully functional gram-scale space probe driven by a light beam.

Last year, the Breakthrough Starshot initiative, whose backers include billionaire Yuri Milner and physicist Stephen Hawking, proposed sending a tiny nanocraft to our nearest star system, Alpha Centauri, around the year 2038. The nanocraft would travel at an incredible 15 to 20 percent of the speed of light, allowing it to reach the star in only a couple of decades.

Chien pointed out that an interstellar mission would be a perfect use of AI. The probe could figure out by itself what types of planets are in a system, how to navigate into orbit, what data to collect, and where to deploy probes if a world looks habitable. Science observations of this kind would not work with Breakthrough Starshot’s current design, however, because the mission is not meant to slow down at its destination, though proposals from other groups suggest braking would be possible. Regardless of the mission architecture, an interstellar probe would be best served by AI because humans cannot anticipate everything, Chien said.

“When an interstellar probe gets there, it will have lots of information,” he said. “Let’s assume the planet has oceans and we have probes that we can drop from orbit to sample those oceans and take measurements.” The question then, he said, is where to deploy the probe, and AI could make that decision quickly.

The JPL scientist encouraged a healthy public respect for, and concern about, using AI in future missions, but added that fear of AI is irrational as long as people familiar with the technology are involved in its development and consult with the public.

“There was a time when to make a phone call, a human had to be involved. When you rode the elevator, a human had to be involved,” Chien said. “Now we would say that’s insane. These are the wheels of progress. It’s going to happen, we need to get used to it, and we need to do it in a reasonable and rational fashion.”