‘Deep Fakes’ Could Soon Make It Into Geography

Concerns over ‘deep fakes’ are starting to expand into other areas, such as geographic information science (GIS). Researchers at Binghamton University are now beginning to address this potential problem.

The team includes Associate Professor of Geography Chengbin Deng and four colleagues: Bo Zhao and Yifan Sun at the University of Washington, and Shaozeng Zhang and Chunxue Xu at Oregon State University.

The new research, titled “Deep fake geography? When geospatial data encounter Artificial Intelligence,” was published in Cartography and Geographic Information Science.

In the paper, the team explores how false satellite images could be constructed and detected. 

“Honestly, we probably are the first to recognize this potential issue,” Deng said.

Geographic Information Science (GIS) and GeoAI 

Geographic information science (GIS) underpins a wide range of applications, from national defense to autonomous vehicles. Through the development of geospatial artificial intelligence (GeoAI), AI technology has made a significant impact on the field.

GeoAI uses machine learning to extract and analyze geospatial data. However, GeoAI could also be used to fake GPS signals, falsify locational information on social media, fabricate photographs of geographic environments, and enable a wide array of other dangerous applications.

“We need to keep all of this in accordance with ethics. But at the same time, we researchers also need to pay attention and find a way to differentiate or identify those fake images,” Deng said. “With a lot of data sets, these images can look real to the human eye.”

Constructing False Images

The first step to detecting an artificially constructed image is to construct one, so the team relied on Cycle-Consistent Adversarial Networks (CycleGAN), a technique commonly used to create deep fakes. CycleGAN is an unsupervised deep learning algorithm that learns to translate images between two domains without paired training examples, which makes it well suited to synthesizing media.

Generative adversarial networks (GANs), a class of AI models, require training samples of the kind of content they are meant to produce. Given such samples, a GAN can, for example, generate plausible content for an empty spot on a map by weighing the different possibilities.
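To make the CycleGAN idea concrete, here is a minimal sketch (not the authors' code): two toy generators translate between image domains A and B, and a cycle-consistency loss forces an image mapped from A to B and back again to match the original. The TinyGenerator class, its layer sizes, and the random stand-in tile are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """A deliberately small image-to-image generator, for illustration only."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

# Two generators map between domains A and B (e.g., imagery of two cities).
G_ab = TinyGenerator()  # translates A -> B
G_ba = TinyGenerator()  # translates B -> A

cycle_loss = nn.L1Loss()
real_a = torch.rand(1, 3, 64, 64) * 2 - 1  # random stand-in for a satellite tile

fake_b = G_ab(real_a)        # forge a "B-style" version of the A tile
recovered_a = G_ba(fake_b)   # translate it back to domain A
# The cycle-consistency loss penalizes failure to reconstruct the original,
# which is what lets CycleGAN train without paired A/B examples.
loss_cycle = cycle_loss(recovered_a, real_a)
print(f"cycle-consistency loss: {loss_cycle.item():.4f}")
```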

The researchers set out to alter a satellite image of Tacoma, Washington, interspersing elements of Seattle and Beijing while making the composite appear as realistic as possible. However, they warn against misusing such techniques.

“It's not about the technique; it's about how human beings are using the technology,” Deng said. “We want to use technology for the good, not for bad purposes.”

After creating the fake, the team compared 26 different image metrics to determine whether there were any statistical differences between the true and false images, and they registered such differences on 20 of the 26 indicators (about 77%).

The differences included the color of roofs: the colors in the real images were uniform, while those in the composite were mottled. The team also found that the fake satellite image was less colorful and dimmer, but had sharper edges. According to Deng, the differences depended on the inputs used to develop the fake.
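The paper defines its own 26 indicators; as a hedged sketch of the same flavor of analysis, the snippet below computes three simple statistics (brightness, a colorfulness proxy, and Laplacian-variance sharpness) on synthetic stand-in tiles. The specific formulas and the image_stats helper are illustrative assumptions, not the study's metrics.

```python
import numpy as np
from scipy import ndimage

def image_stats(img):
    """img: H x W x 3 float array in [0, 1]; returns three summary statistics."""
    gray = img.mean(axis=2)
    brightness = gray.mean()                      # overall brightness/dimness
    colorfulness = np.std(img - gray[..., None])  # channel spread around gray
    sharpness = ndimage.laplace(gray).var()       # Laplacian variance ~ edge sharpness
    return brightness, colorfulness, sharpness

# Synthetic stand-ins: a "real" tile and a dimmer "fake" composite.
rng = np.random.default_rng(0)
real_tile = rng.random((64, 64, 3))
fake_tile = rng.random((64, 64, 3)) * 0.8

for name, tile in (("real", real_tile), ("fake", fake_tile)):
    b, c, s = image_stats(tile)
    print(f"{name}: brightness={b:.3f} colorfulness={c:.3f} sharpness={s:.3f}")
```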

This research lays the foundation for further work that could enable geographers to track how different types of neural networks generate fake images, which in turn could improve detection. The team says that systematic methods will need to be developed to detect deep fakes and verify trustworthy information in this field.
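The paper does not prescribe a particular detector, but one possible systematic approach is to summarize each image as a vector of metrics like those above and train a baseline classifier on them. The feature values below are synthetic, and the shift between the two classes is an assumption made purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic feature vectors (brightness, colorfulness, sharpness) for
# 100 "real" and 100 "fake" images; the class shift is assumed for illustration.
rng = np.random.default_rng(1)
real_features = rng.normal(loc=0.6, scale=0.1, size=(100, 3))
fake_features = rng.normal(loc=0.4, scale=0.1, size=(100, 3))

X = np.vstack([real_features, fake_features])
y = np.array([0] * 100 + [1] * 100)  # 0 = real, 1 = fake

clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```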

“We all want the truth,” Deng said.


Alex McFarland is a tech writer who covers the latest developments in artificial intelligence. He has worked with AI startups and publications across the globe.