NASA will keep safety front of mind while harnessing the ever-growing power of artificial intelligence, agency officials stressed.
Artificial intelligence (AI) technology is advancing rapidly, as the emergence of tools such as ChatGPT shows. The burgeoning field could help NASA make transformative discoveries, agency officials say, but there are potential pitfalls as well.
"There's a lot of risk with AI, because if it's employed in ways that are not for the betterment of humankind, then it could be disastrous," NASA Administrator Bill Nelson said today (May 22) during an AI town hall the agency held with its employees.
"AI can make our work more efficient," he added during the livestreamed event. "But that's only if we approach these new tools in the right way, with the same pillars that have defined us since the beginning: safety, transparency and reliability."
NASA is no stranger to AI; the agency has been using the technology in various capacities for decades, Nelson stressed. But AI's capabilities are improving rapidly these days, so NASA is stepping up its efforts to understand the tech, as well as properly develop and deploy it.
Last week, for example, NASA announced the appointment of its first-ever AI chief — David Salvagnini, who had been serving as the agency's chief data officer. And he and his colleagues aim to get NASA's entire workforce more AI-literate soon.
"Part of what we'll be doing — and you'll see the announcement soon — is the 'Summer of AI,' which is a training initiative where everyone at NASA is going to have an opportunity to learn more about AI," Salvagnini said during today's town hall.
"It's literally a campaign," he added. "It's going to be kind of a surge, if you will, of training opportunity."
Salvagnini also discussed AI safety. Responsible use of the technology begins with a mindset that keeps humanity central and accountable, he said. Indeed, Salvagnini said he'd prefer the term "assistive intelligence" over "artificial intelligence," because that framing keeps humans in the driver's seat.
AI "is a resource that I now have access to that can help me in my decision process," Salvagnini said. "The AI is not accountable for the outcome. The person is; the human is."
He pointed to weather forecasters' modeling of possible hurricane tracks as an analogy for the responsible use of AI. Modelers present multiple potential tracks because they're aware of the limitations of the datasets they're analyzing; in other words, they apply their own judgment.
"So, then, how do we be safe about this?" Salvagnini said. "We understand our responsibility as the ultimate accountable person as it relates ... to our work products. And then if we happen to use AI as part of the generation of a work product, that's fine, but just understand its capabilities and limitations."
AI safety wasn't the only topic during today's town hall, however; agency officials spent a fair bit of time extolling the promise of the technology as well.
"AI is going to help us in so many areas," said NASA Deputy Administrator Pam Melroy.
She cited the technology's power to sift through huge amounts of information quickly and efficiently — a capability that could lead to big discoveries in heliophysics, Earth science and astronomy.
"We don't even know yet what new insights we're going to get by using these new techniques to look at old data in new ways," Melroy said.
Some of those insights could be an indirect benefit of the technology, she and other town hall speakers said: AI could increasingly take over mundane, labor-intensive data-analysis tasks, freeing up NASA employees to tackle more difficult and complex problems.
Melroy ended her prepared remarks today with a qualified endorsement of AI, striking a similar tone to that set by Nelson and Salvagnini.
"So, as I close, I just want to emphasize it is a powerful, ingenious and very exciting tool," she said. "But if we don't manage it responsibly, we're going to open ourselves up to a world of risk that jeopardizes our credibility and our mission."