AI Satellite Imagery Processing

Space companies are investing in AI to speed up satellite imagery processing. The goal is to analyze more data faster and use the quick turnaround to anticipate events with geopolitical consequences, such as droughts and war. Because AI has only recently entered this domain, its capabilities are still basic: it is straightforward for AI to identify objects like cars and ships, but much more difficult to identify trends across the large volumes of data being processed. Planet, a private satellite imaging company, is looking to use large language models to add these trend-finding capabilities. The idea is to use the data received by a sensor to inform future collection, or to combine it with open-source information about the observed area to produce new insight. Planet also aims to use satellite-to-satellite communication and mesh networking to maintain awareness of all the activity taking place. Another company, BlackSky, wants to use satellite imagery to track objects as they move. Both companies aim to reduce latency and human effort through onboard processing, in which the AI receives information directly from the satellite and processing is completed before the imagery is distributed to customers, so users get intelligence as quickly as possible.
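To make the onboard-processing idea concrete, here is a minimal, purely illustrative Python sketch of detections being produced on the spacecraft and downlinked ahead of (or instead of) the full scene. The function names, the Detection fields, and the canned detector output are all invented for illustration; they do not describe Planet's or BlackSky's actual systems.

```python
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class Detection:
    label: str          # e.g. "ship", "car"
    confidence: float   # model confidence in [0, 1]
    bbox: tuple         # (x_min, y_min, x_max, y_max) in pixel coordinates

def detect_objects(image_tile: bytes) -> List[Detection]:
    """Placeholder for an onboard object-detection model.

    A real system would run a trained detector on the raw sensor tile;
    here we return a canned result so the pipeline is runnable end to end.
    """
    return [Detection(label="ship", confidence=0.91, bbox=(120, 40, 180, 95))]

def onboard_pipeline(image_tile: bytes, min_confidence: float = 0.8) -> dict:
    """Run detection on the spacecraft, then downlink only the results.

    Because this runs onboard, customers can receive compact detections
    immediately, and the full scene is flagged for downlink only when
    something of interest was found.
    """
    detections = [d for d in detect_objects(image_tile)
                  if d.confidence >= min_confidence]
    return {
        "detections": [asdict(d) for d in detections],
        "downlink_full_scene": len(detections) > 0,  # send imagery only if worth a look
    }

if __name__ == "__main__":
    print(onboard_pipeline(b"raw-sensor-bytes"))
```

The key point is simply that detect_objects runs on the satellite, so the latency and human effort of ground-based analysis are removed from the path between collection and customer.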

There are several cybersecurity implications of incorporating AI into satellite imagery processing. Since the models need to be trained, bad actors could poison the training data, which could result in false positives. How do we know that the AI is not acting on open-source information that was itself generated by bad actors? Since the goal of using AI is to reduce human guidance, training the models on bad data would give the user wrong information that could then drive decisions with global impact. Likewise, if the AI model is following key objects through collaboration between satellites, a bad actor could cause the wrong object to be followed, and that skewed information could be passed to military personnel during decision-making. Additionally, if the user asks to be notified only when the object in question is doing something suspicious, a bad actor could ensure that the notification never arrives.
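As a rough illustration of the training-data poisoning concern, the following Python sketch uses a deliberately tiny stand-in for a detector: a single-feature threshold classifier. The feature, the numbers, and the attack are all invented; the point is only that mislabeled samples injected into a training set can shift the learned decision boundary and inflate false positives on clean data.

```python
import random

def fit_threshold(samples):
    """Toy 'training': pick the threshold on a single feature that maximizes
    accuracy on the training set. A stand-in for a real detector; the effect
    of poisoned training labels is the same idea at a much larger scale."""
    candidates = sorted(x for x, _ in samples)
    best_t, best_correct = candidates[0], -1
    for t in candidates:
        correct = sum((x >= t) == bool(y) for x, y in samples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

def false_positive_rate(threshold, negatives):
    return sum(x >= threshold for x in negatives) / len(negatives)

random.seed(0)

# Hypothetical single feature (e.g. object size in pixels); label 1 = "ship".
negatives = [random.gauss(2.0, 1.0) for _ in range(200)]   # clutter / not-ship
positives = [random.gauss(6.0, 1.0) for _ in range(200)]   # real ships
clean_train = [(x, 0) for x in negatives] + [(x, 1) for x in positives]

# Attacker injects clutter-like samples mislabeled as "ship" (poisoning by injection).
injected = [(random.uniform(2.5, 3.5), 1) for _ in range(100)]
poisoned_train = clean_train + injected

t_clean = fit_threshold(clean_train)
t_poisoned = fit_threshold(poisoned_train)

# Held-out clean clutter: the poisoned model flags far more of it as "ship".
test_negatives = [random.gauss(2.0, 1.0) for _ in range(1000)]
print(f"clean model:    threshold={t_clean:.2f}, "
      f"false-positive rate={false_positive_rate(t_clean, test_negatives):.1%}")
print(f"poisoned model: threshold={t_poisoned:.2f}, "
      f"false-positive rate={false_positive_rate(t_poisoned, test_negatives):.1%}")
```

Real detectors and real poisoning attacks are far more sophisticated, but the failure mode is the same: the model faithfully learns whatever the training data says, including the attacker's contributions, and with less human guidance in the loop the bad output is harder to catch.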

Source: https://www.defenseone.com/technology/2024/06/how-ai-turning-satellite-imagery-window-future/397520/