Artificial Intelligence Used to Identify Light Sources With Far Fewer Measurements

A group of researchers has used artificial intelligence (AI) to identify light sources. The new method requires drastically fewer measurements than traditional approaches.

Many photonic technologies, including lidar, remote sensing, and microscopy, depend in part on identifying sources of light such as sunlight, laser radiation, and molecular fluorescence. Identifying these sources normally requires millions of measurements, particularly in low-light environments, which makes it extremely difficult to implement quantum photonic technologies.

The work, titled “Identification of light sources using machine learning,” was published in Applied Physics Reviews, from AIP Publishing.

Artificial Neuron

Omar Magana-Loaiza is an author of the paper.

“We trained an artificial neuron with the statistical fluctuations that characterize coherent and thermal light,” said Magana-Loaiza.

Once trained on these light sources, the artificial neuron could identify the features associated with specific types of light.

Chenglong You is a fellow researcher and co-author of the paper. 

“A single neuron is enough to dramatically reduce the number of measurements needed to identify a light source from millions to less than a hundred,” said You.
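
To make the idea concrete, here is a minimal sketch of the general approach (an illustration, not the authors' code; the mean photon number and feature choices are assumptions): coherent laser light has Poissonian photon statistics, thermal light has Bose-Einstein statistics, and a single logistic-regression “neuron” can be trained on summary statistics of roughly one hundred photon counts.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
MEAN_PHOTONS = 2.0   # assumed mean photon number (illustrative choice)
N_COUNTS = 100       # measurements per sample -- far fewer than millions

def coherent_counts(n):
    # Coherent (laser) light: Poisson-distributed photon counts.
    return rng.poisson(MEAN_PHOTONS, n)

def thermal_counts(n):
    # Thermal light: Bose-Einstein (geometric) photon statistics.
    p = 1.0 / (1.0 + MEAN_PHOTONS)
    return rng.geometric(p, n) - 1  # shift support to start at 0

def features(counts):
    # Summarize the statistical fluctuations: mean, variance, g2(0).
    m, v = counts.mean(), counts.var()
    return [m, v, 1.0 + (v - m) / m**2]

X = np.array([features(coherent_counts(N_COUNTS)) for _ in range(1000)]
             + [features(thermal_counts(N_COUNTS)) for _ in range(1000)])
y = np.array([0] * 1000 + [1] * 1000)  # 0 = coherent, 1 = thermal

neuron = LogisticRegression().fit(X[::2], y[::2])   # train on half
print("test accuracy:", neuron.score(X[1::2], y[1::2]))
```

The second-order coherence g2(0) is approximately 1 for coherent light and 2 for thermal light, so even this tiny feature set, computed from about a hundred counts, separates the two sources well.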

Applications and Benefits

Because far fewer measurements are required, light sources can be identified much faster. The approach can also reduce light damage: in microscopy, for example, a sample no longer needs to be illuminated for as long as it would be if millions of measurements were taken.

Roberto de J. León-Montiel is another co-author of the paper. 

“If you were doing an imaging experiment with delicate fluorescent molecular complexes, for example, you could reduce the time the sample is exposed to light and minimize any photodamage,” said León-Montiel.

Another area that will benefit from this technology is cryptography, where millions of measurements are often required to generate keys to encrypt messages or emails. 

“We could speed up the generation of quantum keys for encryption using a similar neuron,” said Magana-Loaiza.

Remote sensing, which relies on laser light, could also benefit. A new family of smart lidar systems could be developed, capable of identifying intercepted or modified data reflected from a remote object. Lidar is a remote sensing method that illuminates a target with laser light and measures the reflected light with a sensor to determine the distance to the target.
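
The ranging principle itself is simple. A toy illustration (the round-trip time here is an assumed example value) of how distance follows from the speed of light:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_distance(round_trip_seconds):
    # The pulse travels to the target and back, so halve the path.
    return C * round_trip_seconds / 2.0

# A reflection arriving ~667 ns after emission implies a target ~100 m away.
print(f"{lidar_distance(667e-9):.1f} m")
```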

“The probability of jamming a smart quantum lidar system will be dramatically reduced with our technology,” Magana-Loaiza continued. In addition, the ability to discriminate lidar photons from environmental light such as sunlight will have important implications for remote sensing at low-light levels.


Researchers Use AI To Investigate How Reflections Differ From Original Images


Researchers at Cornell University recently utilized machine learning systems to investigate how reflections of images differ from the original images. As reported by ScienceDaily, the algorithms created by the team found telltale differences from the original image revealing that an image had been flipped or reflected.

Noah Snavely, associate professor of computer science at Cornell Tech, was the study’s senior author. According to Snavely, the project started when the researchers became intrigued by the obvious and subtle ways in which images differ from their reflections. Snavely explained that even scenes that appear very symmetrical at first glance can usually be distinguished from their reflections when studied closely. “I’m intrigued by the discoveries you can make with new ways of gleaning information,” said Snavely, according to ScienceDaily.

The researchers focused on images of people, using them to train their algorithms, because faces do not appear obviously asymmetrical. When trained on data that distinguished flipped images from originals, the AI reportedly achieved an accuracy of between 60% and 90% across various types of images.
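
The article does not include the Cornell team’s code, but the training setup it describes can be sketched as a simple binary classifier: each image appears once as-is (label 0) and once mirrored (label 1), and the network learns to predict the label. A minimal PyTorch sketch, with an ad hoc architecture chosen purely for illustration:

```python
import torch
import torch.nn as nn

class FlipDetector(nn.Module):
    # A tiny stand-in CNN, not the architecture from the study.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),  # one logit: > 0 means "flipped"
        )

    def forward(self, x):
        return self.net(x).squeeze(1)

def make_batch(images):
    # Pair each image with its mirror; label 0 = original, 1 = flipped.
    flipped = torch.flip(images, dims=[3])  # flip along the width axis
    x = torch.cat([images, flipped])
    y = torch.cat([torch.zeros(len(images)), torch.ones(len(images))])
    return x, y

model = FlipDetector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.rand(8, 3, 64, 64)  # stand-in for a real face dataset
x, y = make_batch(images)
loss = loss_fn(model(x), y)  # one illustrative training step
loss.backward()
opt.step()
```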

Many of the visual hallmarks of a flipped image that the AI learned are quite subtle and difficult for humans to discern. To better interpret the features the AI was using to distinguish flipped images from originals, the researchers created a heatmap showing the regions of the image the AI tended to focus on. One of the most common clues the AI used was text, which was unsurprising, so the researchers removed images containing text from their training data to get a better view of the subtler differences.
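
The article does not say how the heatmaps were produced; one common and easy-to-implement stand-in is occlusion sensitivity, sketched below (reusing the hypothetical FlipDetector above), which slides a gray patch over the image and records how much the classifier’s “flipped” score changes at each location:

```python
import torch

def occlusion_heatmap(model, image, patch=8):
    # image: a (3, H, W) tensor; model maps a batch of images to one
    # flipped-logit per image. Returns an (H//patch, W//patch) map of
    # how much each occluded region changes the score.
    model.eval()
    with torch.no_grad():
        base = model(image.unsqueeze(0)).item()
        _, H, W = image.shape
        heat = torch.zeros(H // patch, W // patch)
        for i in range(0, H - patch + 1, patch):
            for j in range(0, W - patch + 1, patch):
                masked = image.clone()
                masked[:, i:i + patch, j:j + patch] = 0.5  # gray patch
                score = model(masked.unsqueeze(0)).item()
                heat[i // patch, j // patch] = base - score
    return heat
```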

After images containing text were dropped from the training set, the researchers found that the AI classifier focused on features like shirt collars, cell phones, wristwatches, and faces. Some of these features have obvious, reliable patterns the AI can home in on, such as the fact that people often carry cell phones in their right hand and that the buttons on shirt collars are often on the left. Facial features, however, are typically highly symmetrical, with differences that are small and very hard for a human observer to detect.

The researchers created another heatmap highlighting the areas of faces the AI tended to focus on. The AI often used people’s eyes, hair, and beards to detect flipped images. For reasons that are unclear, people often look slightly to the left when they have photos taken of them. As for why hair and beards are indicators of flipped images, the researchers are unsure, but they theorize that a person’s handedness could be revealed by the way they shave or comb. While any one of these indicators can be unreliable, combining multiple indicators yields greater confidence and accuracy.

More research along these lines will need to be carried out, but if the findings are consistent and reliable then it could help researchers find more efficient ways of training machine learning algorithms. Computer vision AI is often trained using reflections of images, as it is an effective and quick way of increasing the amount of available training data. It’s possible that analyzing how the reflected images are different could help machine learning researchers gain a better understanding of the biases present in machine learning models that might cause them to inaccurately classify images.
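
For context, this is what the flip augmentation in question typically looks like in practice (a standard torchvision pipeline, not code from the study):

```python
from torchvision import transforms

# Mirror half of the training images at random -- cheap extra data,
# but potentially unsafe when orientation itself carries signal
# (text, watch wrists, shirt buttons, handedness cues).
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
])
```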

As Snavely was quoted by ScienceDaily:

“This leads to an open question for the computer vision community, which is, when is it OK to do this flipping to augment your dataset, and when is it not OK? I’m hoping this will get people to think more about these questions and start to develop tools to understand how it’s biasing the algorithm.”


Researchers Believe AI Can Be Used To Help Protect People’s Privacy


Two professors of information science have recently published a piece in The Conversation, arguing that AI could help preserve people’s privacy, rectifying some of the issues that it has created.

Zhiyuan Chen and Aryya Gangopadhyay argue that artificial intelligence algorithms could be used to defend people’s privacy, counteracting some of the many privacy concerns that other uses of AI have created. Chen and Gangopadhyay acknowledge that many of the AI-driven products we use for convenience would not work without access to large amounts of data, which at first glance seems at odds with preserving privacy. Furthermore, as AI spreads into more industries and applications, more data will be collected and stored, making those databases tempting targets for breaches. However, Chen and Gangopadhyay believe that, used correctly, AI can help mitigate these issues.

Chen and Gangopadhyay explain in their post that the privacy risks associated with AI come from at least two different sources. The first source is the large datasets collected to train neural network models, while the second privacy threat is the models themselves. Data can potentially “leak” from these models, with the behavior of the models giving away details about the data used to train them.

Deep neural networks are composed of multiple layers of neurons, with each layer connected to the layers around it. The individual neurons, as well as the links between them, encode different bits of the training data. The model may simply be too good at remembering patterns in the training data, even if it isn’t overfitting. Traces of the training data persist within the network, and malicious actors may be able to ascertain aspects of it, as researchers at Cornell University found in one of their studies. The Cornell researchers found that facial recognition algorithms could be exploited by attackers to reveal which images, and therefore which people, were used to train the model. They discovered that even an attacker without access to the original model may still be able to probe the network and determine whether a specific person was included in the training data, simply by using models that were trained on highly similar data.
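
The probing attack described here is commonly called membership inference. A deliberately simplified sketch of the core intuition (the interface and threshold are illustrative assumptions, not the Cornell attack itself): models tend to be more confident on examples they were trained on.

```python
import numpy as np

def looks_like_training_member(predict_proba, example, threshold=0.95):
    # predict_proba: any callable returning class probabilities for a
    # single example (e.g., from a model trained on similar data).
    # Suspiciously high peak confidence suggests the example -- or a
    # near-duplicate, such as another photo of the same face -- was
    # present in the training data.
    return float(np.max(predict_proba(example))) >= threshold
```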

Some AI models are currently used to protect against data breaches and help ensure people’s privacy. AI models are frequently used to detect hacking attempts by recognizing the patterns of behavior hackers use to penetrate security measures. However, hackers often change their behavior to try to fool pattern-detecting AI.

New methods of AI training and development aim to make AI models and applications less vulnerable to hacking and security-evasion tactics. Adversarial learning trains AI models on simulated malicious or harmful inputs, making the models more robust to exploitation, hence the “adversarial” in the name. According to Chen and Gangopadhyay, their research has uncovered methods of combating malware designed to steal people’s private information. The two researchers explained that one of the most effective methods they found was introducing uncertainty into the model, with the goal of making it harder for bad actors to anticipate how the model will react to any given input.
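
The article does not detail the authors’ exact method, but generic adversarial training, sketched here in PyTorch using the fast gradient sign method (FGSM), conveys the idea: perturb each batch in the direction that most increases the loss, then train on the perturbed inputs so the model learns to resist them.

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, eps=0.03):
    # Fast Gradient Sign Method: a one-step worst-case perturbation.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_step(model, loss_fn, optimizer, x, y):
    x_adv = fgsm_perturb(model, loss_fn, x, y)   # simulate an attack
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)              # train on the attack
    loss.backward()
    optimizer.step()
    return loss.item()
```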

Other ways of using AI to protect privacy include minimizing data exposure when the model is created and trained, and probing networks to discover their vulnerabilities. When it comes to data privacy, federated learning can help protect sensitive data, as it allows a model to be trained without the training data ever leaving the local devices that hold it, insulating both the data and many of the model’s parameters from spying.
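
A toy sketch of the federated-averaging step at the heart of federated learning (simplified; production systems add secure aggregation, client sampling, and more): each device trains a local copy of the model on its own data, and only the weights are sent back and averaged.

```python
import torch

def federated_average(local_models):
    # Average parameters from models trained on separate devices;
    # the raw training data never leaves those devices.
    states = [m.state_dict() for m in local_models]
    return {key: torch.stack([s[key].float() for s in states]).mean(dim=0)
            for key in states[0]}

# After each round, the server would apply:
# global_model.load_state_dict(federated_average(local_models))
```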

Ultimately, Chen and Gangopadhyay argue that while the proliferation of AI has created new threats to people’s privacy, AI can also help protect privacy when designed with care and consideration.


Microsoft to Replace Dozens of Journalists With AI


Microsoft has cut dozens of journalists in the latest example of human jobs being replaced by automation. Fifty individuals in the United States and 27 in the United Kingdom will be laid off by June 30, and their jobs will be taken over by artificial intelligence (AI) software.

The layoffs were not related to the ongoing COVID-19 pandemic. Instead, they are a direct result of the current shift taking place in the economy, one that is replacing human labor with robots and AI technologies. 

According to a Microsoft spokesperson, “Like all companies, we evaluate our business on a regular basis. This can result in increased investment in some places and, from time to time, re-deployment in others. These decisions are not the result of the current pandemic.”

The Replaced Team

The individuals being replaced are responsible for the news homepages of Microsoft’s MSN website and Edge browser. The decision comes as Microsoft undergoes a larger shift toward using AI technology to select news.

The 27 individuals in the UK are employed by PA Media, formerly known as the Press Association.

One staff member who is part of the team spoke about the transition. 

“I spend all my time reading about how automation and AI is going to take all our jobs, and here I am – AI has taken my job.”

AI News Selection

The staffer has concerns about how the technology will handle news selection, since humans followed “very strict editorial guidelines.” These guidelines were in place to prevent violent or inappropriate content from making it to users. 

The team was tasked with selecting stories that were produced by other news organizations, and they would edit them to fit a certain format. Microsoft’s website then hosted the articles and shared advertising revenue with the original publishers. The team was also responsible for making sure the headlines were clear and formatted in the correct manner. 

According to a spokesperson for PA Media, “We are in the process of winding down the Microsoft team working at PA, and we are doing everything we can to support the individuals concerned. We are proud of the work we have done with Microsoft and know we delivered a high-quality service.”

Larger Trend

The replacement of Microsoft’s team is not an isolated incident; the automation of news and journalism is expected to spread.

Just last month, China’s state media announced the newest version of its AI news anchor, which mimics the behaviors and mannerisms of a human anchor and can be broadcast to the public.

All of this is taking place as many media organizations face financial problems and turn to alternative ways of producing news stories.

Microsoft in particular has already been integrating AI into its news curation. Over the past few months, the company has encouraged the use of AI tools to scan, process, and filter content.
