U.S. Intelligence Officials Lack Trust in AI Models

The U.S. deputy director of national intelligence, Stacey Dixon, stated during the 2024 GEOINT Symposium that “people are going to try to steal your secrets” while calling for greater investment in cybersecurity. Dixon noted that as the U.S. continues to innovate, it must guard against adversaries obtaining that same technology. An adversary who acquires a newly developed technology gains new threat vectors and can turn that technology against the U.S. And worse than simply stealing the technology and using it themselves, a more troubling attack would exploit the innovation itself. That idea of exploitation now extends to the artificial intelligence realm.

As the AI industry continues to grow, U.S. agencies are exploring how it can be used in their workflows, particularly in the intelligence field, where it could free analysts from mundane and repetitive tasks. That gain in efficiency comes at a cost in the form of risk. The core of this risk lies in the data sets used to train, and continually retrain, the AI models: how is that data validated, and how is it verified?

The aerospace industry has a long-standing saying: “garbage in, garbage out.” Suppose you are running a basic script to calculate a desired parameter, but all of your input values are wrong; your results will be wrong as well. The same holds for AI models: if they are trained on false data, their answers will be inaccurate.
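A minimal sketch of the idea, using a hypothetical fuel calculation (the function and its parameters are illustrative, not from the source): the code runs without any error in both cases, yet a single corrupted input silently produces a uselessly wrong answer.

```python
def fuel_required(distance_km: float, burn_rate_kg_per_km: float) -> float:
    """Toy parameter calculation: fuel needed for one leg of a flight."""
    return distance_km * burn_rate_kg_per_km

# Correct inputs produce a correct result.
good = fuel_required(1200.0, 3.5)    # 4200.0 kg

# A misplaced decimal in the burn rate (bad data entry, sensor fault)
# raises no exception -- it just yields an answer that is 10x too large.
bad = fuel_required(1200.0, 35.0)    # 42000.0 kg

print(good, bad)
```

The script never fails; only validation of the inputs against known-good references would catch the error, which is exactly the point of the saying.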

Circling back to threat vectors, false data flowing into and out of AI models can itself be weaponized. A well-planned attack could feed biased, incorrect, or otherwise targeted data into a model, which would then surface information that steers end users down a more dangerous road if it is not validated and verified. This method can almost be described as a silent backdoor: the end users do not know they are being steered by attackers into opening the door to potentially more devastating attacks. Such an attack would most likely unfold in multiple stages before the attackers reach their end goal or target.
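This kind of training-data poisoning can be shown on a deliberately tiny model. The sketch below is a hypothetical illustration (the classifier, labels, and numbers are all invented for the example): a nearest-centroid classifier over one-dimensional features. Injecting a handful of mislabeled points near the decision boundary shifts the learned centroid enough to flip the model's answer for a borderline input, with no visible error anywhere.

```python
def train_centroids(data):
    """data: list of (feature, label) pairs; returns the mean feature per label."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Classify x by whichever label's centroid is nearest."""
    return min(centroids, key=lambda y: abs(x - centroids[y]))

# Clean training set: "benign" clusters low, "threat" clusters high.
clean = [(1.0, "benign"), (2.0, "benign"), (8.0, "threat"), (9.0, "threat")]

# Poisoned set: the attacker injects threat-like points mislabeled "benign",
# dragging the benign centroid toward the threat region.
poisoned = clean + [(6.0, "benign"), (7.0, "benign"), (7.5, "benign")]

query = 6.0  # a borderline input the attacker wants misclassified
print(predict(train_centroids(clean), query))     # "threat"
print(predict(train_centroids(poisoned), query))  # "benign"
```

A real attack would be far subtler and staged over time, but the mechanism is the same: the model behaves normally on most inputs while quietly giving the attacker's preferred answer on the inputs that matter.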

Dixon stresses traceability for AI models as a way to combat these potential vulnerabilities. More transparency into training and usage data will allow those who use the models to maintain a more defensible cybersecurity posture.
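One concrete building block for that kind of traceability, sketched below under stated assumptions (the record format and manifest structure are hypothetical, not a described government system), is a cryptographic fingerprint for each training record taken at ingestion time. Any later tampering with a record then fails verification against the audit manifest.

```python
import hashlib
import json

def fingerprint(record: dict) -> str:
    """Deterministic SHA-256 digest of one training record."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Audit manifest: record id -> digest recorded when the data was ingested.
manifest = {}

record = {"id": "r-001", "source": "sensor-A", "text": "example input"}
manifest[record["id"]] = fingerprint(record)

# Verification: the untouched record matches; a modified copy does not.
tampered = dict(record, text="altered input")
assert fingerprint(record) == manifest["r-001"]
assert fingerprint(tampered) != manifest["r-001"]
print("record verified; tampering detected")
```

Fingerprints alone do not prove the data was correct in the first place, but they make the provenance chain auditable, which is a precondition for the validation and verification the article calls for.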