A Generative AI Model Has Been Deployed in Space

By: Jordan Buck, 2024-08-04

Booz Allen Hamilton has announced that it has deployed a generative AI large language model (LLM) in space, running on the International Space Station aboard Hewlett Packard Enterprise's Spaceborne Computer-2. This supercomputer, launched in February 2021, supports advanced experiments by processing data in orbit before sending insights back to Earth, easing the station's bandwidth constraints. It has already been used across a range of research areas, including DNA sequencing, image processing, and additive manufacturing. The LLM, still only an experiment, began operating in July, and its operator hopes to expand its role in the future.

Astronauts aboard the ISS face a heavy daily workload of detailed repairs and experiments. A generative AI that understands the technical manuals for the station's everyday procedures and tools could assist them by providing rapid, relevant answers to their questions, potentially accelerating repairs and avoiding downtime when confusion strikes. Local processing also means that, if generative AI sees heavy use in the future, astronauts will not have to wait while their queries take a roundabout trip through ground systems. This deployment is a significant step toward leveraging AI for space missions.
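To illustrate how such an onboard assistant might work, the sketch below is purely hypothetical: Booz Allen has not published the system's design, and the manual snippets, function names, and retrieval method here are invented placeholders. The idea it shows is simple: find the manual passage most relevant to an astronaut's question, then assemble a prompt for a locally hosted model so that nothing has to leave the station.

```python
# Hypothetical sketch of an onboard manual-lookup assistant.
# The manual text below is placeholder content, not real ISS procedures.

MANUAL_SNIPPETS = {
    "water_recovery": "Placeholder text describing how to reset the water recovery system.",
    "air_filter": "Placeholder text describing how to replace an air filtration cartridge.",
    "treadmill": "Placeholder text describing monthly inspection of the exercise treadmill.",
}

def retrieve(question: str) -> str:
    """Return the manual snippet sharing the most words with the question."""
    q_words = set(question.lower().split())
    def overlap(text: str) -> int:
        return len(q_words & set(text.lower().split()))
    return max(MANUAL_SNIPPETS.values(), key=overlap)

def build_prompt(question: str) -> str:
    """Combine the retrieved passage and the question into a prompt for a local LLM."""
    context = retrieve(question)
    return (
        "Answer using only the manual excerpt below.\n"
        f"Excerpt: {context}\n"
        f"Question: {question}\n"
    )

if __name__ == "__main__":
    # In an on-orbit system this prompt would go to the locally hosted model;
    # printing it stands in for that call here.
    print(build_prompt("How do I reset the water recovery system?"))
```

Because both the manual index and the model would live on Spaceborne Computer-2, an exchange like this could complete without any round trip to the ground.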

Cybersecurity

With no mention of security in the announcement, the deployment of a generative AI large language model on the ISS raises several cybersecurity concerns. First, astronauts periodically experience communications blackouts. If an astronaut consults an on-orbit model during one of these windows, they must rely on its answer without being able to verify it against ground resources. Any error or fault the model propagates would leave the astronaut acting alone, cut off from the far greater processing power and expertise available on the ground.

Next, consider the model itself. LLMs are inherently large, using billions, if not trillions, of parameters to produce each response. They behave as black boxes: a clear chain of reasoning can rarely be traced from input to output. That makes it difficult for a user to judge how an answer was produced, and it gives an attacker room to exploit this opaque decision-making, since tampering or manipulation may go unnoticed. The sheer scale of these networks also obscures their behavior unintentionally, making thorough verification difficult. Employing such a model without proper consideration could have disastrous consequences for people already in high-stakes situations.
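To make that scale concrete, here is a rough back-of-envelope calculation. The parameter count and numeric precision of the deployed model have not been published, so the figures below are illustrative assumptions, using the standard storage cost of two bytes per parameter at fp16 and one byte at int8.

```python
# Illustrative estimate of memory needed to hold LLM weights alone
# (no activations or caches) at common numeric precisions.

BYTES_PER_PARAM = {"fp16": 2, "int8": 1}

def weight_memory_gb(n_params: float, precision: str) -> float:
    """Gigabytes required just to store the model's weights."""
    return n_params * BYTES_PER_PARAM[precision] / 1e9

# Hypothetical model sizes: 7B, 70B, and a notional 1T-parameter model.
for n in (7e9, 70e9, 1e12):
    for prec in ("fp16", "int8"):
        print(f"{n/1e9:>6.0f}B params @ {prec}: {weight_memory_gb(n, prec):6.0f} GB")
```

Even a mid-sized model carries tens to hundreds of gigabytes of weights, which underscores how much opaque state sits between an astronaut's question and the model's answer.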