Risks And Rewards For AI Fighting Climate Change

As artificial intelligence is being used to solve problems in healthcare, agriculture, weather prediction and more, scientists and engineers are investigating how AI could be used to fight climate change. AI algorithms could indeed be used to build better climate models and determine more efficient methods of reducing CO2 emissions, but AI itself often requires substantial computing power and therefore consumes a lot of energy. Is it possible to reduce the amount of energy consumed by AI and improve its effectiveness when it comes to fighting climate change?

Virginia Dignum, a professor of ethical artificial intelligence at Umeå University in Sweden, was recently interviewed by Horizon Magazine. Dignum explained that AI can have a large environmental footprint that often goes unexamined. As an example, she points to the algorithms Netflix uses to recommend movies to its users. Running these algorithms for hundreds of thousands of users requires large data centers, which store and process the data used to train them.

Dignum belongs to a group of experts advising the European Commission on how to make human-centric, ethical AI. She explained to Horizon Magazine that the environmental impact of AI often goes unappreciated, and that under the right circumstances data centers can be responsible for the release of large amounts of CO2.

‘It’s a use of energy that we don’t really think about,’ explained Prof. Dignum to Horizon Magazine. ‘We have data farms, especially in the northern countries of Europe and in Canada, which are huge. Some of those things use as much energy as a small city.’

Dignum noted that one study, done by the University of Massachusetts, found that creating a sophisticated AI to interpret human language led to emissions of around 300,000 kilograms of CO2 equivalent, approximately five times the impact of the average car in the US. These emissions could grow further: projections by Swedish researcher Anders Andrae suggest that by the year 2025 data centers could account for approximately 10% of all electricity usage. The growth of big data and the computational power needed to handle it has brought the environmental impact of AI to the attention of many scientists and environmentalists.
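To put figures like these in context, the underlying arithmetic is simple: multiply a training run's hardware power draw by its duration and the data center's overhead factor to get energy, then multiply by the grid's carbon intensity. The sketch below is a minimal illustration of that calculation; the GPU count, power figures, PUE, and grid intensity are assumed placeholder values, not numbers from the University of Massachusetts study.

```python
# Rough CO2-equivalent estimate for a model training run.
# All numbers below are assumed placeholders for illustration only.

def training_co2_kg(num_gpus: int,
                    gpu_power_watts: float,
                    hours: float,
                    pue: float = 1.5,                     # data-center overhead factor (assumed)
                    grid_kg_co2_per_kwh: float = 0.4):    # grid carbon intensity (assumed)
    """Return an approximate CO2-equivalent figure in kilograms."""
    energy_kwh = num_gpus * gpu_power_watts * hours / 1000.0  # device energy in kWh
    energy_kwh *= pue                                         # add cooling/infrastructure overhead
    return energy_kwh * grid_kg_co2_per_kwh

# Example: 64 GPUs at 300 W each, running for two weeks.
print(f"{training_co2_kg(64, 300, 24 * 14):,.0f} kg CO2e")
```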

Despite these concerns, AI can play a role in helping us combat climate change and limit emissions, and scientists and engineers around the world are advocating for its use in designing solutions. For example, Professor Felix Creutzig, affiliated with the Mercator Research Institute on Global Commons and Climate Change in Berlin, hopes to use AI to improve the use of space in urban environments. More efficient space usage could help tackle issues like urban heat islands. Machine learning algorithms could also be used to determine the optimal placement of green spaces, which act as carbon sinks, or to model airflow patterns when designing ventilation architecture to fight extreme heat.

Currently, Creutzig is working with stacked architecture, a method that uses both mechanical modeling and machine learning, aiming to determine how buildings will respond to temperature and energy demands. Creutzig hopes that his work can lead to new building designs that use less energy while maintaining quality of life.

Beyond this, AI could help fight climate change in several ways. For one, it could be leveraged to build electricity systems that better integrate renewable resources. AI has already been used to monitor deforestation, and its continued use for this task can help preserve forests that act as carbon sinks. Machine learning algorithms could also be used to calculate an individual's carbon footprint and suggest ways to reduce it.

Tactics for reducing the amount of energy AI consumes include deleting data that is no longer in use, which reduces the need for massive data storage operations. Designing more efficient algorithms and training methods is also important, as is pursuing alternatives to machine learning, which tends to be data-hungry.
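One concrete example of a more efficient training method is early stopping: ending a run as soon as the model stops improving rather than training for a fixed number of epochs. The following is a minimal sketch of that idea, not any specific lab's method; the train_one_epoch and validate callables are hypothetical stand-ins for a real training loop.

```python
# Minimal early-stopping sketch: stop training once validation loss stops
# improving, saving the energy of any remaining epochs.
# train_one_epoch and validate are caller-supplied callables (hypothetical here).

def train_with_early_stopping(model, train_one_epoch, validate,
                              max_epochs=100, patience=5):
    best_loss = float("inf")
    epochs_since_improvement = 0

    for epoch in range(max_epochs):
        train_one_epoch(model)       # one pass over the training data
        val_loss = validate(model)   # loss on held-out data

        if val_loss < best_loss:
            best_loss = val_loss
            epochs_since_improvement = 0
        else:
            epochs_since_improvement += 1
            if epochs_since_improvement >= patience:
                print(f"Stopping at epoch {epoch}: "
                      f"no improvement in {patience} epochs")
                break
    return model
```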

AI Engineers Develop Method That Can Detect Intent Of Those Spreading Misinformation

Dealing with misinformation in the digital age is a complex problem. Not only does misinformation have to be identified, tagged, and corrected, but the intent of those responsible for making the claim should also be distinguished. A person may unknowingly spread misinformation, or just be giving their opinion on an issue even though it is later reported as fact. Recently, a team of AI researchers and engineers at Dartmouth created a framework that can be used to derive opinion from “fake news” reports.

As ScienceDaily reports, the Dartmouth team’s study was recently published in the Journal of Experimental & Theoretical Artificial Intelligence. While previous studies have attempted to identify fake news and fight deception, this might be the first study that aimed to identify the intent of the speaker in a news piece. While a true story can be twisted into various deceptive forms, it’s important to distinguish whether or not deception was intended. The research team argues that intent matters when considering misinformation, as deception is only possible if there was intent to mislead. If an individual didn’t realize they were spreading misinformation or if they were just giving their opinion, there can’t be deception.

Eugene Santos Jr., an engineering professor at Dartmouth’s Thayer School of Engineering, explained to ScienceDaily why their model attempts to distinguish deceptive intent:

“Deceptive intent to mislead listeners on purpose poses a much larger threat than unintentional mistakes. To the best of our knowledge, our algorithm is the only method that detects deception and at the same time discriminates malicious acts from benign acts.”

In order to construct their model, the research team analyzed the features of deceptive reasoning. The resulting algorithm could distinguish intent to deceive from other forms of communication by focusing on discrepancies between a person's past arguments and their current statements. The model requires large amounts of data to measure how far a person deviates from their past arguments. The training data came from a survey of opinions on controversial topics, in which over 100 people gave their views, as well as from reviews of 20 different hotels, consisting of 400 fictitious reviews and 800 real reviews.
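The Dartmouth paper's actual model is not reproduced here, but the core intuition — flag a statement that deviates sharply from a speaker's own past arguments — can be illustrated with a simple text-similarity check. The sketch below compares a new statement against a person's previous statements using TF-IDF cosine similarity; the scikit-learn pipeline and the toy statements are illustrative assumptions, not the researchers' method.

```python
# Illustrative only: flag statements that diverge sharply from a speaker's
# past arguments using TF-IDF cosine similarity. This is a toy proxy for the
# intuition, NOT the Dartmouth model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def deviation_score(past_statements, new_statement):
    """Return 1 - max similarity to any past statement (higher = bigger deviation)."""
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform(past_statements + [new_statement])
    similarities = cosine_similarity(vectors[-1], vectors[:-1])
    return 1.0 - similarities.max()

past = [
    "I think the new policy will reduce traffic congestion downtown.",
    "Reducing downtown traffic should be the city's first priority.",
]
print(deviation_score(past, "Traffic congestion downtown has never been a problem."))
```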

According to Santos, the framework developed by the researchers could be refined and applied by news organizations and readers to analyze the content of "fake news" articles. Readers could examine articles for the presence of opinions and determine for themselves whether a logical argument has been used. Santos also said that the team wants to examine the impact of misinformation and the ripple effects it has.

Popular culture often depicts non-verbal behaviors like facial expressions as indicators that someone is lying, but the authors of the study note that these behavioral hints aren’t always reliable indicators of lying. Deqing Li, co-author on the paper, explained that their research found that models based on reasoning intent are better indicators of lying than behavioral and verbal differences. Li explained that reasoning intent models “are better at distinguishing intentional lies from other types of information distortion”.

The work of the Dartmouth researchers isn't the only recent advancement when it comes to fighting misinformation with AI. News articles with clickbait titles often mask misinformation, for example by implying that one thing happened when a different event actually occurred.

As reported by AINews, a team of researchers from both Arizona State University and Penn State University collaborated in order to create an AI that could detect clickbait. The researchers asked people to write their own clickbait headlines and also wrote a program to generate clickbait headlines. Both forms of headlines were then used to train a model that could effectively detect clickbait headlines, regardless of whether they were written by machines or people.
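The exact architecture the ASU/Penn State researchers used isn't described here, but the general recipe — pool human-written and machine-generated clickbait with ordinary headlines, then train a text classifier on the mix — can be sketched in a few lines. The pipeline, the tiny example headlines, and the model choice below are illustrative assumptions, not the researchers' actual setup.

```python
# Toy clickbait classifier: a TF-IDF + logistic-regression pipeline trained on
# a small mix of (hypothetical) human- and machine-written headlines.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "You won't believe what this dog did next",      # clickbait (human-written)
    "10 secrets doctors don't want you to know",     # clickbait (machine-generated)
    "City council approves new transit budget",      # not clickbait
    "Researchers publish study on coffee genetics",  # not clickbait
]
labels = [1, 1, 0, 0]  # 1 = clickbait, 0 = not clickbait

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

print(model.predict(["This one weird trick will change your life"]))
```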

According to the researchers, their algorithm was around 14.5% more accurate at detecting clickbait titles than previous AI approaches. The lead researcher on the project, Dongwon Lee, an associate professor at Penn State's College of Information Sciences and Technology, explained how the experiment demonstrates the utility of generating data with an AI and feeding it back into the training pipeline.

“This result is quite interesting as we successfully demonstrated that machine-generated clickbait training data can be fed back into the training pipeline to train a wide variety of machine learning models to have improved performance,” explained Lee.

New Data Analysis Tool Uses Natural Language Based Interface

Recently, a team of engineers and researchers from the Visualization and Data Analytics (VIDA) lab at NYU's Tandon School of Engineering created a data visualization tool called VisFlow, as well as an extension for the tool dubbed FlowSense. As ScienceDaily reported, FlowSense enables users to create data exploration pipelines simply by giving the tool natural language commands.

Data is everywhere, constantly being collected and generated by machines, researchers, systems, and even regular people. Being able to manipulate and analyze data is key to solving problems like predicting weather, controlling street traffic, monitoring the spread of disease, and improving crop growth. Machine learning tactics and tools are a critical part of building complex data models from massive datasets. However, many researchers, scientists, and engineers lack the required computer science skills.

Claudio Silva is a professor of computer science and engineering affiliated with the VIDA lab at NYU. Silva and his team wanted to create a tool for people who need to visualize and analyze data in meaningful ways yet lack the required computer science skills, and for this reason they created the VisFlow framework. FlowSense, in turn, is an extension that lets users interact with VisFlow to create data analysis pipelines through a natural language-based interface.

The VIDA lab already has an established history of leading and collaborating on data modeling projects. These include Open Space, a data modeling project used in museums and planetariums around the world to create possible models of the universe and solar system, and Motion Browser – Visualizing and Understanding Complex Upper Limb Movement under Obstetrical Brachial Plexus Injuries – a collaboration between rehabilitation physicians, orthopedic surgeons, and computer scientists aiming to find new treatments for brachial nerve injuries. VIDA has also produced Effect of Color Scales on Climate Scientists' Objective and Subjective Performance in Spatial Data Analysis Tasks, a study that analyzes the effectiveness of color scales in the context of geographic maps.

Adding to this list of projects, VIDA created VisFlow in 2017, with funding from DARPA's Data Driven Discovery of Models program, among other sources. VisFlow lets users create models and visualize data based on many pre-defined analytical concepts like geographical location, networks, and time series. The program has a simple drag-and-drop interface, and the FlowSense add-on gives users even more ways to visualize data. Bowen Yu, a researcher who worked on the project in Silva's lab, explained to ScienceDaily that a user could simply type or say a command to the program to activate a data analysis function.

“This capability would make non-experts more comfortable users, while providing experienced users with shortcuts. We believe that with natural language support we can mitigate the learning curve for a system like this and make data flow more accessible,” explained Yu to ScienceDaily.
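FlowSense's own parser is built on a semantic grammar far richer than anything shown here, but the basic idea of mapping a typed or spoken command onto a dataflow operation can be illustrated with a simple keyword-based dispatcher. The command patterns and operation names below are invented for illustration and are not FlowSense's actual grammar or API.

```python
# Illustrative keyword-based mapping from natural-language commands to
# dataflow operations; NOT FlowSense's actual grammar or API.
import re

def parse_command(command: str):
    command = command.lower()
    if m := re.search(r"filter (\w+) (?:above|over) (\d+)", command):
        return {"op": "filter", "column": m.group(1), "min": int(m.group(2))}
    if m := re.search(r"plot (\w+) against (\w+)", command):
        return {"op": "scatter", "x": m.group(2), "y": m.group(1)}
    if "histogram" in command:
        return {"op": "histogram"}
    return {"op": "unknown"}

print(parse_command("Filter price above 100"))
print(parse_command("Plot sales against month"))
```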

Both FlowSense and VisFlow will be made available to researchers as an open-source framework. The researchers hope that FlowSense will stimulate more collaboration and more innovation when it comes to making dataflow platforms easier to use.

Though VisFlow/FlowSense is probably the first program that enables users to visualize data using natural language, other new data visualization tools also aim to make data analysis easier. The global advisory firm Deloitte recently created a powerful public data analysis and visualization tool dubbed the Open Source Compass (OSC). According to Deloitte, the OSC is intended to let developers better understand technology trends. It evaluates code commits and helps its users explore and understand relevant languages and development platforms.

AI Could Help Keep Coffee Affordable and Accessible In The Face Of Climate Change

If you’re a lover of coffee, it will come as unpleasant news that the price of coffee could potentially spike in the near future. Climate change and deforestation are threatening some of the biggest coffee species in the world, but AI could potentially help keep coffee relatively affordable.

The combined forces of deforestation and climate change are threatening the production of many species of coffee, including the common Arabica species, which can be found in many of the most popular blends and brews. Coffee farmers around the globe are having to deal with rising temperatures and the problems associated with them, such as periods of drought. Recent research published in the journals Global Change Biology and Science Advances found substantial risks to wild coffee, with around 60% of 124 wild coffee species vulnerable to extinction.

As reported by Inside Climate News, Aaron P. Davis, one of the authors on the study and a senior research leader at England’s Royal Botanic Gardens, explained that domesticated coffee is adapted from these wild species.

“We should be concerned about the loss of any species for lots of reasons,” explained Davis, “but for coffee specifically, I think we should remember that the cup in front of us originally came from a wild source.”

Domesticated coffee consists primarily of two bean varieties: arabica and robusta. Wild plants are bred with these species to improve their quality, serving as a genetic library that allows scientists to create hardier plants. Biologists look to wild coffee varieties for species resistant to threats like drought and disease, but as the climate continues to warm, this becomes more difficult.

Much as the wild strains of coffee are under pressure, cultivated coffee crops are also experiencing strain. Severe droughts and longer, more intense outbreaks of pests and disease are threatening cultivated crops. A fungal disease is taking advantage of warmer conditions and higher humidity to proliferate among coffee crops, and the coffee borer beetle is possibly spreading faster thanks to climate change. Climate change also makes weather patterns more extreme, with more severe droughts and rainstorms, and either too much or too little rain can degrade coffee production. Further, it is estimated that around half of all wild coffee plants will disappear over the next 70 years.

Despite the recent problems climate change has brought to coffee farmers, demand for coffee is only likely to increase. The overall demand for food across the globe is expected to increase by around 60% by 2050, and small farmers produce most of the globe’s food supply, around 70%.

Amidst the growing threat of climate change, AI could help coffee farmers compensate for things like drought and pests. Researchers associated with the Financial and Agricultural Recommendation Models project, or FARM, intend to assist coffee farmers by providing them with techniques that can boost yields. Project FARM will initially be tested with coffee farmers throughout Kenya, where it will apply data science techniques to large datasets gathered from coffee farms, bringing automated farming systems and data science-backed methods to small farms across the country. The project is driven by the decreasing price of sensors and the accompanying availability of large datasets gathered by those sensors.

AI-based farming methods can provide farmers with valuable insights that help them optimize production. Machine learning algorithms can be used to predict weather patterns so farmers can take precautions against inclement weather, while computer vision systems can recognize crop damage and possible signs of spreading fungus or parasites. This information helps farmers guard against these damaging forces, and farmers can even be alerted via SMS if a storm is expected the next day.
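As a concrete illustration of the SMS-alert idea, the snippet below checks a model's predicted probability of a storm and composes a text message when the risk crosses a threshold. The predict_storm_probability function, the send_sms helper, and the threshold are hypothetical placeholders; a real deployment would plug in an actual weather model and an SMS gateway.

```python
# Hypothetical storm-alert sketch: notify a farmer by SMS when the predicted
# probability of a storm tomorrow exceeds a threshold.
# predict_storm_probability() and send_sms() are placeholder stubs.

STORM_THRESHOLD = 0.7  # assumed alert threshold

def predict_storm_probability(farm_location: str) -> float:
    """Placeholder for a trained weather model's output."""
    return 0.82  # stubbed value for illustration

def send_sms(phone_number: str, message: str) -> None:
    """Placeholder for an SMS gateway call."""
    print(f"SMS to {phone_number}: {message}")

def alert_farmer(farm_location: str, phone_number: str) -> None:
    probability = predict_storm_probability(farm_location)
    if probability >= STORM_THRESHOLD:
        send_sms(phone_number,
                 f"Storm likely tomorrow near {farm_location} "
                 f"(risk {probability:.0%}). Consider protecting seedlings.")

alert_farmer("Nyeri, Kenya", "+254700000000")
```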

One company, Agrics, is able to use data gathered by sensors to predict risks that may impact individual farmers. Violanda de Man, Innovation Manager at Agrics East Africa, explained that the data can be used to give farmers location-specific services and products that reduce farm risks and improve both the income and security of rural populations.

As crops and farmers around the globe contend with the challenges of climate change, AI seems poised to play a major role in the struggle to contend with these challenges.
