Big Data

AI Could Help Keep Coffee Affordable and Accessible In The Face Of Climate Change

If you’re a lover of coffee, it will come as unpleasant news that the price of coffee could spike in the near future. Climate change and deforestation are threatening some of the world’s most important coffee species, but AI could help keep coffee relatively affordable.

The combined forces of deforestation and climate change are threatening the production of many coffee species, including the common Arabica species, which can be found in many of the most popular blends and brews. Coffee farmers around the globe are dealing with rising temperatures and the problems that accompany them, such as periods of drought. Recent research published in the journals Global Change Biology and Science Advances found substantial risks to many wild coffee species, with around 60% of the 124 wild coffee species assessed being vulnerable to extinction.

As reported by Inside Climate News, Aaron P. Davis, one of the study’s authors and a senior research leader at England’s Royal Botanic Gardens, explained that domesticated coffee is adapted from these wild species.

“We should be concerned about the loss of any species for lots of reasons,” explained Davis, “but for coffee specifically, I think we should remember that the cup in front of us originally came from a wild source.”

Domesticated coffee comes primarily from two bean varieties: arabica and robusta. Wild plants are bred with these cultivated species to improve their quality, serving as a genetic library that allows scientists to create hardier plants. Biologists look to wild coffee varieties for species with resistance to threats like drought, pests, and disease, but as the climate continues to warm, finding them becomes more difficult.

Much as the wild strains of coffee are under pressure, cultivated coffee crops are also experiencing strain. Severe droughts and longer, more intense outbreaks of pests and disease are threatening cultivated crops. Fungal disease is taking advantage of warmer conditions and higher humidity to proliferate among coffee crops, and the coffee borer beetle may be spreading faster thanks to climate change. Climate change also makes weather patterns more extreme, bringing more severe droughts and rainstorms, and either too much or too little rain can degrade coffee production. Further, it is estimated that around half of all wild coffee plants will disappear over the next 70 years.

Despite the problems climate change has brought to coffee farmers, demand for coffee is only likely to increase. Overall global demand for food is expected to rise by around 60% by 2050, and smallholder farmers produce most of the world’s food supply, around 70%.

Amidst the growing threat of climate change, AI could help coffee farmers compensate for stresses like drought and pests. Researchers associated with the Financial and Agricultural Recommendation Models project, or FARM, intend to assist coffee farmers by providing them with techniques that can boost yields. Project FARM will initially be tested with coffee farmers throughout Kenya, applying data science techniques to large datasets gathered from coffee farms in order to bring automated farming systems and data-backed methods to small farms across the country. The project is driven by the falling price of sensors and the accompanying availability of the large datasets those sensors generate.

AI-based farming methods can provide farmers with valuable insights that help them optimize production. Machine learning algorithms can predict weather patterns so that farmers can take precautions against inclement weather, while computer vision systems can recognize crop damage and early signs of spreading fungus or parasites. Farmers can even be alerted via SMS if a storm is expected the next day.
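
To make that pipeline concrete, here is a minimal sketch in Python: a classifier trained on synthetic sensor readings estimates tomorrow’s storm risk and drafts an SMS alert. The data, feature set, threshold, and alert text are all illustrative assumptions; Project FARM’s actual models and alerting stack are not public.

```python
# Minimal sketch (not Project FARM's actual system): train a storm
# classifier on synthetic sensor data and draft an SMS alert.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical daily readings: [humidity %, pressure hPa, temperature C]
X = np.column_stack([
    rng.uniform(30, 100, 1000),    # relative humidity
    rng.uniform(990, 1030, 1000),  # barometric pressure
    rng.uniform(15, 35, 1000),     # air temperature
])
# Toy labeling rule: storms tend to follow humid, low-pressure days.
y = ((X[:, 0] > 75) & (X[:, 1] < 1005)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

today = np.array([[88.0, 998.0, 27.0]])  # this morning's readings
storm_probability = model.predict_proba(today)[0, 1]

if storm_probability > 0.5:
    # A real deployment would hand this string to an SMS gateway;
    # here it is simply printed.
    print(f"ALERT: {storm_probability:.0%} chance of a storm tomorrow. "
          "Consider protecting drying beds and securing young plants.")
```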

One company, Agrics, uses data gathered by sensors to predict risks that may impact individual farmers. Violanda de Man, Innovation Manager at Agrics East Africa, explained that the data can be used to give farmers location-specific services and products that reduce farm risks and improve both the income and security of rural populations.

As crops and farmers around the globe contend with the effects of climate change, AI seems poised to play a major role in meeting those challenges.

Big Data

Risks And Rewards For AI Fighting Climate Change

As artificial intelligence is used to solve problems in healthcare, agriculture, weather prediction, and more, scientists and engineers are investigating how AI could be used to fight climate change. AI algorithms could indeed be used to build better climate models and determine more efficient methods of reducing CO2 emissions, but AI itself often requires substantial computing power and therefore consumes a lot of energy. Is it possible to reduce the amount of energy AI consumes while improving its effectiveness at fighting climate change?

Virginia Dignum, a professor of ethical artificial intelligence at Umeå University in Sweden, was recently interviewed by Horizon Magazine. Dignum explained that AI can have a large environmental footprint that often goes unexamined, pointing to Netflix and the algorithms used to recommend movies to its users. In order for these algorithms to run and suggest movies to hundreds of thousands of users, Netflix needs to run large data centers, which store and process the data used to train the algorithms.

Dignum belongs to a group of experts advising the European Commission on how to make human-centric, ethical AI. She explained to Horizon Magazine that the environmental impact of AI often goes unappreciated, but under the right circumstances data centers can be responsible for the release of large amounts of CO2.

‘It’s a use of energy that we don’t really think about,’ explained Prof. Dignum to Horizon Magazine. ‘We have data farms, especially in the northern countries of Europe and in Canada, which are huge. Some of those things use as much energy as a small city.’

Dignum noted that one study, done by the University of Massachusetts, found that creating a sophisticated AI to interpret human language led to the emission of around 300,000 kilograms of CO2 equivalent, approximately five times the lifetime emissions of the average car in the US. These emissions could grow: estimates by the Swedish researcher Anders Andrae project that by the year 2025 data centers could account for approximately 10% of all electricity usage. The growth of big data and the computational power needed to handle it has brought the environmental impact of AI to the attention of many scientists and environmentalists.

Despite these concerns, AI can play a role in helping us combat climate change and limit emissions, and scientists and engineers around the world are advocating for its use in designing solutions. For example, Professor Felix Creutzig, affiliated with the Mercator Research Institute on Global Commons and Climate Change in Berlin, hopes to use AI to improve the use of space in urban environments. More efficient space usage could help tackle issues like urban heat islands, and urban green spaces can act as carbon sinks. Machine learning algorithms could be used to determine the optimal placement of green spaces, or to model airflow patterns when designing ventilation architecture to fight extreme heat.

Currently, Creutzig is working with stacked architecture, a method that uses both mechanical modeling and machine learning, aiming to determine how buildings will respond to temperature and energy demands. Creutzig hopes that his work can lead to new building designs that use less energy while maintaining quality of life.

Beyond this, AI could help fight climate change in several other ways. It could be leveraged to design electricity systems that better integrate renewable resources. It has already been used to monitor deforestation, and its continued use for this task can help preserve forests that act as carbon sinks. Machine learning algorithms could also be used to calculate an individual’s carbon footprint and suggest ways to reduce it.

Tactics to reduce the amount of energy consumed by AI include deleting data that is no longer in use, which reduces the need for massive data storage operations. Designing more efficient algorithms and training methods is also important, including pursuing alternatives to machine learning approaches that tend to be data-hungry.
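
One concrete instance of a more efficient training method is early stopping: a run halts as soon as the validation score stops improving, so no further compute (and energy) is spent on epochs that add nothing. Below is a minimal sketch on a synthetic dataset; the model and data are stand-ins, not drawn from any system discussed above.

```python
# Minimal sketch: early stopping as an energy-saving training tactic.
# Training halts once the validation score stops improving, instead of
# burning through a fixed (and often excessive) number of epochs.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in dataset.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

model = MLPClassifier(
    hidden_layer_sizes=(64, 64),
    max_iter=500,             # upper bound we hope never to reach
    early_stopping=True,      # hold out data and watch validation score
    validation_fraction=0.1,
    n_iter_no_change=10,      # stop after 10 epochs with no improvement
    random_state=0,
)
model.fit(X, y)

# n_iter_ reports how many epochs actually ran before stopping.
print(f"stopped after {model.n_iter_} of {model.max_iter} allowed epochs")
```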

Big Data

Researchers Are Starting To Train Artificial Intelligence To Combat Hate Speech Online

Fake news and hate speech are becoming not a daily but a minute-by-minute problem online. The IkigaiLab reports that Facebook and Twitter recently had to close more than 1.5 billion and 70 million accounts respectively, just to try to curb the spread of fake news and hate speech around the world.

Still, at the moment, such a task requires enormous human effort and almost constant working hours just to chip away at the tip of the hate speech iceberg. To address the problem, researchers in numerous labs are starting to train artificial intelligence (AI) to help with this humongous task.

Ikigai cites the Rosetta system that Facebook uses to assess the authenticity of news, images, and other content uploaded to the social network. As is explained, Rosetta scans “the word, picture, language, font, date of the post amongst other variables and tries to see if the information being presented is genuine or not.” Because AI is still not fully “adept at understanding innuendoes, references, slights and the contexts in which the content was posted,” human moderators then take over and guide the system as it learns to recognize hate speech and fake news.

To further develop the ability of AI systems to cover all the possible nuances that characterize hate speech, a team of researchers at UC Santa Barbara and Intel, as TheNextWeb (TNW) reports, “took thousands of conversations from the scummiest communities on Reddit and Gab and used them to develop and train AI to combat hate speech.”

According to their report, the joint group of researchers created a specific dataset featuring “thousands of conversations specially curated to ensure they’d be chock full of hate speech.” They also used a list, compiled by Justin Caffier of Vox, of the Reddit groups most characterized by the use of hate speech.

The researchers ended up collecting “more than 22,000 comments from Reddit and over 33,000 from Gab.” They discovered that the two sites share similar popular hate keywords, but that the distributions of those keywords are very different.

They noted that these differences make it very hard for social media platforms in general to intervene in real time, since the flow of hate speech is so heavy that following it would require countless human moderators.

To tackle the problem, the research team started to train an AI to intervene. Their initial database was sent to Amazon Mechanical Turk workers to be labeled. After identifying the individual instances of hate speech, the workers came up with phrases that the AI could use “to deter users from posting similar hate speech in the future.”

Based on that, the team “ran this dataset and its database of interventions through various machine learning and natural language processing systems and created a sort of prototype for an online hate speech intervention AI.”
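
To illustrate the detect-then-intervene idea at a toy scale, here is a minimal sketch: a simple text classifier flags a comment, and a curated intervention phrase is returned with it. The example comments, labels, and intervention text are placeholders; the researchers’ actual models and the Reddit/Gab corpora are not reproduced here.

```python
# Minimal sketch of the detect-then-intervene idea: a text classifier
# flags a comment, and a curated intervention phrase is attached.
# Placeholder examples stand in for the Reddit/Gab corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in corpus: 1 = hateful, 0 = benign (the real study used
# tens of thousands of labeled comments).
comments = [
    "those people are vermin and should disappear",
    "get out of our country, you animals",
    "great write-up, thanks for sharing",
    "does anyone have a source for this claim?",
]
labels = [1, 1, 0, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(comments, labels)

# In the study, crowdworkers authored the intervention phrases after
# labeling the conversations; this one is purely illustrative.
INTERVENTION = ("Please reconsider this post: dehumanizing language "
                "violates this community's rules.")

def moderate(comment: str) -> str:
    """Return an intervention message for flagged comments, else ''."""
    if classifier.predict([comment])[0] == 1:
        return INTERVENTION
    return ""

print(moderate("those animals should disappear from our country"))
```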

The results produced were excellent, but since development is still at an early stage, the system is not yet ready for active use. As the researchers explain, “the system, in theory, should detect hate speech and immediately send a message to the poster letting them know why they shouldn’t post things that obviously represent hate speech. This relies on more than just keyword detection – in order for the AI to work it has to get the context right.”

Artificial Neural Networks

Scientists Use Artificial Intelligence to Estimate Dark Matter in the Universe

Scientists from the Department of Physics and the Department of Computer Science at ETH Zurich are using artificial intelligence to learn more about our universe, contributing to the methods used to estimate the amount of dark matter it contains. The group developed machine learning algorithms similar to those used by Facebook and other social media companies for facial recognition, and applied them to the analysis of cosmological data. The new research and results were published in the scientific journal Physical Review D.

Tomasz Kacprzak, a researcher from the Institute of Particle Physics and Astrophysics, explained the link between facial recognition and estimating dark matter in the universe. 

“Facebook uses its algorithms to find eyes, mouths or ears in images; we use ours to look for the tell-tale signs of dark matter and dark energy,” he explained. 

Dark matter cannot be seen directly in telescope images, but it bends the path of light rays coming to Earth from other galaxies. This effect, called weak gravitational lensing, distorts the images of those galaxies.

Scientists put this distortion to use. They build mass maps of the sky that show where dark matter is located, then compare theoretical predictions of where dark matter should be against those maps, looking for the predictions that best match the data.

Traditionally, these maps are analyzed using human-designed statistics that describe how parts of the maps relate to one another. The problem with this approach is that it is not well suited to detecting the complex patterns present in such maps.

“In our recent work, we have used a completely new methodology…Instead of inventing the appropriate statistical analysis ourselves, we let computers do the job,” Alexandre Refregier said. 

Aurelien Lucchi and his team from the Data Analytics Lab at the Department of Computer Science, along with Janis Fluri, a PhD student from Refregier’s group and the lead author of the study, used machine learning algorithms to build deep artificial neural networks that learn to extract as much information from the dark matter maps as possible.

The group of scientists first fed the neural network computer-generated data that simulated the universe. The network eventually taught itself which features to look for and how to extract large amounts of information.
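
The general shape of that approach can be sketched as follows: a small convolutional network is trained on simulated mass maps to regress cosmological parameters, and only then applied to real survey maps. Everything here (the architecture, map size, and random stand-in data) is an illustrative assumption, not the ETH team’s actual network.

```python
# Minimal sketch of the general approach: a small convolutional network
# regresses cosmological parameters from simulated mass maps. Random
# arrays stand in for the simulations.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)

# Stand-in training set: 64x64 convergence (mass) maps, each paired with
# two target parameters (e.g. matter density and clustering amplitude).
maps = rng.normal(size=(512, 64, 64, 1)).astype("float32")
params = rng.uniform(size=(512, 2)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),  # predicted cosmological parameters
])
model.compile(optimizer="adam", loss="mse")

# Train on simulations first; real survey maps (e.g. KiDS-450) would
# only be fed to the network after this calibration step.
model.fit(maps, params, epochs=2, batch_size=32, verbose=0)
print(model.predict(maps[:1], verbose=0))
```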

These neural networks outperformed the human-made analysis, proving 30% more accurate than traditional methods based on human-designed statistics. To achieve the same accuracy without these algorithms, cosmologists would have to dedicate at least twice the observation time.

After these methods were established, the scientists then used them to create dark matter maps based on the KiDS-450 dataset. 

“This is the first time such machine learning tools have been used in this context, and we found that the deep artificial neural network enables us to extract more information from the data than previous approaches. We believe that this usage of machine learning in cosmology will have many future applications,” Fluri said. 

The scientists now want to apply the method to bigger image sets such as the Dark Energy Survey, where the neural networks could reveal new information about dark matter.
