Intelligence analysts confront the reality of deepfakes

While this is a bit of an older article, it covers a topic I find interesting, and one that could become a dangerous weapon in the wrong hands. In the article, AI experts from the National Geospatial-Intelligence Agency discuss a deepfake, an AI-generated video, depicting what appears to be the Pentagon under attack. Of particular note, the article points out that deepfakes are not limited to video: the same techniques can produce fake still images, which could cause AI-driven image correlation software to fail when analyzing them.

Deepfakes and AI are interesting tools that can be useful under the right circumstances, but in the hands of the general public they are likely to stray in a dangerous direction. Deepfakes pose many threats to cybersecurity. The most dangerous is the ability to quite literally put words in someone else's mouth. Because these videos are convincingly realistic, they can then be used to confuse or trick viewers into doing whatever the video says.