

Risks And Rewards For AI Fighting Climate Change


As artificial intelligence is being used to solve problems in healthcare, agriculture, weather prediction and more, scientists and engineers are investigating how AI could be used to fight climate change. AI algorithms could indeed be used to build better climate models and determine more efficient methods of reducing CO2 emissions, but AI itself often requires substantial computing power and therefore consumes a lot of energy. Is it possible to reduce the amount of energy consumed by AI and improve its effectiveness when it comes to fighting climate change?

Virginia Dignum, a professor of ethical artificial intelligence at Umeå University in Sweden, was recently interviewed by Horizon Magazine. Dignum explained that AI can have a large environmental footprint that often goes unexamined, pointing to the algorithms Netflix uses to recommend movies to its users. In order for these algorithms to run and suggest movies to hundreds of thousands of users, Netflix needs to operate large data centers, which store and process the data used to train the algorithms.

Dignum belongs to a group of experts advising the European Commission on how to make AI human-centric and ethical. She explained to Horizon Magazine that the environmental impact of AI often goes unappreciated, and that data centers can be responsible for the release of large amounts of CO2.

‘It’s a use of energy that we don’t really think about,’ explained Prof. Dignum to Horizon Magazine. ‘We have data farms, especially in the northern countries of Europe and in Canada, which are huge. Some of those things use as much energy as a small city.’

Dignum noted that one study, done at the University of Massachusetts, found that training a sophisticated AI model to interpret human language led to the emission of around 300,000 kilograms of CO2 equivalent, approximately five times the impact of the average car in the US. These emissions could grow further: projections by Swedish researcher Anders Andrae suggest that by 2025 data centers could account for approximately 10% of all electricity usage. The growth of big data, and of the computational power needed to handle it, has brought the environmental impact of AI to the attention of many scientists and environmentalists.
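
The arithmetic behind such estimates is straightforward: the energy a training run consumes is multiplied by the carbon intensity of the electricity that powers it. The sketch below illustrates the calculation with assumed, illustrative numbers rather than the study's actual inputs.

```python
# Back-of-envelope estimate of training emissions (illustrative only).
# Both numbers below are assumptions, not figures from the study:
# energy_kwh is a hypothetical total energy draw for a large training run,
# and grid_intensity is a rough average carbon intensity of grid power.
energy_kwh = 650_000      # assumed total energy for the training run, in kWh
grid_intensity = 0.45     # assumed kg of CO2 equivalent emitted per kWh

emissions_kg = energy_kwh * grid_intensity
print(f"Estimated emissions: {emissions_kg:,.0f} kg CO2e")  # ~292,500 kg CO2e
```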

Despite these concerns, AI can play a role in helping us combat climate change and limit emissions, and scientists and engineers around the world are advocating for its use in designing climate solutions. For example, Professor Felix Creutzig of the Mercator Research Institute on Global Commons and Climate Change in Berlin hopes to use AI to improve the use of space in urban environments. More efficient space usage could help tackle issues like urban heat islands. Machine learning algorithms could also be used to determine the optimal placement of green spaces, which act as carbon sinks, or to model airflow patterns when designing ventilation architecture to fight extreme heat.

Currently, Creutzig is working with stacked architecture, a method that uses both mechanical modeling and machine learning, aiming to determine how buildings will respond to temperature and energy demands. Creutzig hopes that his work can lead to new building designs that use less energy while maintaining quality of life.
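
A rough sketch of the stacked idea, under stated assumptions: a crude physical formula estimates a building's cooling demand, and a machine learning model learns only the residual error that the formula misses. The toy physics, features, and parameters below are illustrative and do not reflect Creutzig's actual models.

```python
# Minimal "stacked" model: a simple mechanical estimate of cooling demand
# plus a machine-learning model fit to the residual it cannot explain.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical building features: outdoor temp (C), window area (m2), floor area (m2)
X = np.column_stack([
    rng.uniform(20, 40, 500),    # outdoor temperature
    rng.uniform(5, 50, 500),     # window area
    rng.uniform(50, 500, 500),   # floor area
])

def physical_estimate(X):
    """Crude mechanical model: demand grows with temperature and glazing."""
    temp, window, floor = X[:, 0], X[:, 1], X[:, 2]
    return 0.8 * (temp - 18) * floor / 100 + 0.5 * window

# Synthetic "measured" demand = physics + effects the simple formula misses
y = physical_estimate(X) + 0.02 * X[:, 1] * (X[:, 0] - 25) + rng.normal(0, 1, 500)

# Stack: the ML model learns only the residual on top of the physical model
residual_model = GradientBoostingRegressor().fit(X, y - physical_estimate(X))

def predict_demand(X):
    return physical_estimate(X) + residual_model.predict(X)

print(predict_demand(X[:3]))
```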

Beyond this, AI could help fight climate change in several ways. For one, AI could be leveraged to design electricity systems that better integrate renewable resources. AI has already been used to monitor deforestation, and its continued use for this task can help preserve forests that act as carbon sinks. Machine learning algorithms could also be used to calculate an individual’s carbon footprint and suggest ways to reduce it.

Tactics to reduce the amount of energy consumed by AI include deleting data that is no longer in use, which reduces the need for massive data storage operations. Designing more efficient algorithms and training methods is also important, as is pursuing AI techniques that are alternatives to data-hungry machine learning.
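
One concrete example of a more efficient training method is early stopping, which halts training once validation performance stops improving rather than spending energy on a fixed number of epochs. The sketch below shows the pattern with a placeholder dataset and model.

```python
# Early stopping: stop training when validation accuracy plateaus, saving
# the compute (and energy) of unnecessary epochs. Data and model are toys.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = SGDClassifier(loss="log_loss", random_state=0)
best_score, patience, bad_epochs = -np.inf, 3, 0

for epoch in range(100):
    model.partial_fit(X_train, y_train, classes=np.unique(y))
    score = model.score(X_val, y_val)
    if score > best_score:
        best_score, bad_epochs = score, 0
    else:
        bad_epochs += 1
    if bad_epochs >= patience:  # no improvement for a while: stop early
        print(f"Stopped after {epoch + 1} epochs (val accuracy {best_score:.3f})")
        break
```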



Computer Scientists Tackle Bias in AI


Computer scientists from Princeton and Stanford University are addressing problems of bias in artificial intelligence (AI), working on methods that produce fairer data sets containing images of people. The researchers work closely with ImageNet, a database of more than 13 million images that has helped advance computer vision over the past decade. Using their methods, they have recommended improvements to the database.

ImageNet includes images of objects, landscapes, and people. Researchers who build machine learning algorithms that classify images use ImageNet as a source of data. Because of the database’s massive size, automated image collection and crowdsourced image annotation were necessary. Now, the ImageNet team is working to correct biases and other issues, many involving images of people, that are unintended consequences of how the database was constructed.

Olga Russakovsky, a co-author of the work, is an assistant professor of computer science at Princeton.

“Computer vision now works really well, which means it’s being deployed all over the place in all kinds of contexts,” she said. “This means that now is the time for talking about what kind of impact it’s having on the world and thinking about these kinds of fairness issues.”

In the new paper, the ImageNet team systematically identified non-visual concepts and offensive categories. These categories included racial and sexual characterizations, and the team proposed removing them from the database. The team has also developed a tool that allows users to specify and retrieve image sets of people, and it can do so by age, gender expression, and skin color. The goal is to create algorithms that more fairly classify people’s faces and activities in images. 

The work done by the researchers was presented on Jan. 30 at the Association for Computing Machinery’s Conference on Fairness, Accountability, and Transparency in Barcelona, Spain. 

“There is very much a need for researchers and labs with core technical expertise in this to engage in these kinds of conversations,” said Russakovsky. “Given the reality that we need to collect the data at scale, given the reality that it’s going to be done with crowdsourcing because that’s the most efficient and well-established pipeline, how do we do that in a way that’s fairer — that doesn’t fall into these kinds of prior pitfalls? The core message of this paper is around constructive solutions.”

ImageNet was launched in 2009 by a group of computer scientists at Princeton and Stanford. It was meant to serve as a resource for academic researchers and educators. The creation of the system was led by Princeton alumna and faculty member Fei-Fei Li.

ImageNet was able to become such a large database of labeled images through the use of crowdsourcing. One of the main platforms used was Amazon Mechanical Turk (MTurk), where workers were paid to verify candidate images. This approach caused some problems, introducing many biases and inappropriate categorizations.

Lead author Kaiyu Yang is a graduate student in computer science. 

“When you ask people to verify images by selecting the correct ones from a large set of candidates, people feel pressured to select some images and those images tend to be the ones with distinctive or stereotypical features,” he said. 

The first part of the study involved filtering out potentially offensive or sensitive person categories from ImageNet. Offensive categories were defined as those that contained profanity or racial or gender slurs. One such sensitive category was the classification of people based on sexual orientation or religion. Twelve graduate students from diverse backgrounds were brought in to annotate the categories, and they were instructed to label a category sensitive if they were unsure of it. About 54% of the categories were eliminated, or 1,593 out of the 2,932 person categories in ImageNet. 

MTurk workers then rated the “imageability” of the remaining categories on a scale of 1 to 5. A total of 158 categories were classified as both safe and imageable, with a rating of 4 or higher. This filtered set of categories includes more than 133,000 images, which can be highly useful for training computer vision algorithms.
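
Conceptually, the filtering step amounts to keeping only the categories that annotators marked as safe and that crowd workers rated as sufficiently imageable. The sketch below illustrates that logic with made-up records; it is not the paper's actual code or data format.

```python
# Illustrative filtering step: keep categories that annotators marked safe
# and that crowd workers rated as imageable (mean rating >= 4). The records
# below are invented; only the filtering logic mirrors the procedure described.
from statistics import mean

categories = [
    {"synset": "basketball_player", "safe": True,  "ratings": [5, 4, 4, 5]},
    {"synset": "philanthropist",    "safe": True,  "ratings": [2, 1, 2, 3]},
    {"synset": "offensive_term",    "safe": False, "ratings": [3, 3, 2, 4]},
]

kept = [
    c["synset"]
    for c in categories
    if c["safe"] and mean(c["ratings"]) >= 4
]
print(kept)  # -> ['basketball_player']
```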

The researchers studied the demographic representation of people in the images in order to assess the level of bias in ImageNet. Content sourced from search engines often overrepresents males, light-skinned people, and adults between the ages of 18 and 40.

“People have found that the distributions of demographics in image search results are highly biased, and this is why the distribution in ImageNet is also biased,” said Yang. “In this paper we tried to understand how biased it is, and also to propose a method to balance the distribution.”

The researchers considered three attributes that are also protected under U.S. anti-discrimination laws: skin color, gender expression, and age. The MTurk workers then annotated each attribute of each person in an image. 

The results showed that ImageNet’s content has a considerable bias. The most underrepresented groups were dark-skinned people, females, and adults over the age of 40.

The researchers also designed a web-interface tool that allows users to obtain a set of images that is demographically balanced in whatever way the user chooses.

“We do not want to say what is the correct way to balance the demographics, because it’s not a very straightforward issue,” said Yang. “The distribution could be different in different parts of the world — the distribution of skin colors in the U.S. is different than in countries in Asia, for example. So we leave that question to our user, and we just provide a tool to retrieve a balanced subset of the images.”
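
In spirit, such a tool samples an equal number of images from each value of a chosen attribute using the crowdsourced annotations. The sketch below shows one minimal way this could work; the records, attribute names, and interface are hypothetical and may differ from the released tool.

```python
# Minimal sketch of retrieving a demographically balanced image subset from
# annotated metadata. The records and attribute values are hypothetical.
import random
from collections import defaultdict

annotations = [
    {"image": "img_001.jpg", "skin_color": "dark",  "gender_expression": "female", "age": "18-40"},
    {"image": "img_002.jpg", "skin_color": "light", "gender_expression": "male",   "age": "over_40"},
    # ... many more annotated records ...
]

def balanced_subset(records, attribute, per_group, seed=0):
    """Sample an equal number of images from each value of one attribute."""
    groups = defaultdict(list)
    for r in records:
        groups[r[attribute]].append(r)
    rng = random.Random(seed)
    subset = []
    for value, items in groups.items():
        subset.extend(rng.sample(items, min(per_group, len(items))))
    return subset

balanced = balanced_subset(annotations, attribute="skin_color", per_group=1)
```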

The ImageNet team is now working on technical updates to its hardware and database. They are also trying to implement the filtering of the person categories and the rebalancing tool developed in this research. ImageNet is set to be re-released with the updates, along with a call for feedback from the computer vision research community. 

The paper was also co-authored by Princeton Ph.D. student Klint Qinami and Assistant Professor of Computer Science Jia Deng. The research was supported by the National Science Foundation.  



Data Science Companies Use AI To Protect Environment And Fight Climate Change


As the nations of Earth attempt to invent and implement solutions to the growing threat of climate change, just about every option is on the table. Investing in renewable sources of energy and dropping emissions around the globe are the dominant strategies, but utilizing artificial intelligence can help reduce the damage done by climate change. As reported by Live Mint, artificial intelligence algorithms can help conservationists limit deforestation, protect vulnerable species of animals from climate change, fight poaching, and monitor air pollution.

The data science company Gramener has employed machine learning to help estimate the size of penguin colonies across Antarctica by analyzing images taken by camera traps. The size of penguin colonies in Antarctica has decreased dramatically over the course of the past decade, impacted by climate change. In order to help conservation groups and scientists analyze image data of Antarctic penguins, Gramener employed convolutional neural networks to clean up the data, and once the data was clean it was deployed through Microsoft’s data science virtual machine. The model developed by Gramener makes use of penguin density in the captured images in order to produce estimates of penguin populations faster and more reliably. Gramener has also used similar techniques to estimate salmon populations in various rivers.
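
Density-based counting of this kind generally works by having a convolutional network output a density map over each image, with the sum of the map giving the estimated count. The sketch below shows the general technique; the architecture and names are assumptions, not Gramener's actual model.

```python
# Minimal sketch of counting by density estimation: a small convolutional
# network outputs a density map, and summing the map gives an estimated count.
import torch
import torch.nn as nn

class DensityCounter(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),            # single-channel density map
            nn.ReLU(),                      # densities are non-negative
        )

    def forward(self, x):
        density = self.features(x)          # (batch, 1, H, W)
        count = density.sum(dim=(1, 2, 3))  # estimated animals per image
        return density, count

model = DensityCounter()
fake_batch = torch.rand(2, 3, 256, 256)     # stand-in for camera-trap images
density_map, counts = model(fake_batch)
print(counts)                                # untrained, so counts are arbitrary
```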

As LiveMint reported, other animal conservation projects make use of AI as well, such as the Elephant Listening Project designed by Conservation Metrics. Populations of elephants throughout Africa have been suffering because of illegal poaching. The project utilizes machine learning algorithms to identify the vocalizations of elephants, distinguishing them from sounds made by other animals. By training machine learning models to recognize these unique sound patterns and then feeding them data from sensors distributed throughout elephant habitat, the researchers can develop a system that alerts them to potential poaching or deforestation: the system listens for sounds such as vehicles or guns, and if these sounds are detected, alerts can be sent out to authorities.
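
Acoustic monitoring of this sort typically converts each audio clip into a spectrogram, summarizes it as a feature vector, and trains a classifier to separate elephant rumbles from other sounds. The sketch below uses synthetic signals as stand-ins for real sensor data and is not the project's actual pipeline.

```python
# Rough sketch of the acoustic-classification idea: spectrogram features plus
# a classifier separating low-frequency "rumbles" from broadband noise.
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier

SAMPLE_RATE = 4000

def features(clip):
    """Average the spectrogram over time to get one spectral profile per clip."""
    _, _, spec = spectrogram(clip, fs=SAMPLE_RATE)
    return spec.mean(axis=1)

rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1 / SAMPLE_RATE)

# Toy data: low-frequency "rumble" clips vs broadband "noise" clips
rumbles = [np.sin(2 * np.pi * 20 * t) + 0.1 * rng.normal(size=t.size) for _ in range(30)]
noise = [rng.normal(size=t.size) for _ in range(30)]

X = np.array([features(c) for c in rumbles + noise])
y = np.array([1] * 30 + [0] * 30)   # 1 = elephant-like, 0 = other sound

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict(X[:3]))            # in practice, applied to live sensor audio
```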

Machine learning algorithms can also be used to predict the damage that can be done by severe weather events like thunderstorms and tropical cyclones. For instance, IBM has produced a new high-resolution atmospheric forecasting model intended to track potentially damaging weather events.

Jaspreet Bindra, the author of The Tech Whisperer and an expert on digital transformation, told LiveMint that machine learning is necessary to keep up with the changes caused by climate change. Bindra explained:

“Global warming has changed the way climate modeling is done. Using AI/ML is very important as it will make things happen faster. All this will require lots of computing power and, going forward, quantum computers might play an important role.”

Blue Sky Analytics, based in Gurugram, India, is another example of a company using machine learning algorithms to protect the environment. An application developed by Blue Sky Analytics monitors industrial emissions and air quality in general, gathering and analyzing data from satellites and ground-level sensors.

Analyzing and understanding the environmental effects of issues like climate change, poaching, and pollution requires a substantial amount of computing power. UC Berkeley is trying to speed up research by crowdsourcing the computation of environmental data using smartphones and PCs through a project called BOINC (Berkeley Open Infrastructure for Network Computing). Those who want to assist in the crowdsourced data analysis just have to install the BOINC software on a chosen device, and when that device isn’t being used, its available CPU and GPU resources will be leveraged to carry out computations.



Garth Rose, CEO of GenRocket, Inc – Interview Series


Garth is the Co-Founder & CEO of GenRocket. He is an expert at launching and building technology startups. He has held numerous senior leadership roles in startups over the past 25 years including President & CEO of Concentric Visions (VC backed + acquired), VP Sales & VP Business Development at Indus River Networks (VC backed + acquired), VP Sales & Marketing at Digital Products (acquired) and National Sales Manager at Leading Edge Products.

In 2012 you co-founded GenRocket, a company that specializes in enterprise test data automation. What was the initial vision that inspired this?

I met GenRocket Co-Founder Hycel Taylor in 2011 and he educated me about the need for accurate, conditioned test data for effective software testing. Hycel had done a lot of research and found a huge gap when it came to test data solutions. Hycel decided to architect his own platform that was low cost, really fast and flexible.

What are some of the benefits of using test data versus production data?

Proper software testing means not just testing “positive” conditions of an application but also testing “negative” conditions as well as permutations and edge cases. Production data is useful for data analytics but has limitations for many test cases. One of our financial services customers shared that their production data can only fully satisfy 33% of their testing requirements.

The speed of data generation is important, what are the speeds that GenRocket can deliver?

For a typical automated test case we deliver test data in about 100 milliseconds. For volume data GenRocket generates at a rate of about 10,000 rows of data per second. For big data applications we can use multiple GenRocket instances in parallel to generate millions to billions of rows of data in minutes.
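
As a rough illustration of what such throughput figures mean in practice, the sketch below generates synthetic rows in plain Python and measures rows per second. It is not GenRocket's engine or API, just a generic demonstration of generating test data on the fly instead of copying production data.

```python
# Generic illustration of synthetic test data generation with a simple
# throughput measurement. All field names and value ranges are invented.
import random
import string
import time

def generate_row(row_id):
    """One synthetic customer record with a deterministic id and random fields."""
    return {
        "id": row_id,
        "name": "".join(random.choices(string.ascii_lowercase, k=8)),
        "balance": round(random.uniform(0, 10_000), 2),
        "active": random.random() < 0.9,
    }

start = time.perf_counter()
rows = [generate_row(i) for i in range(100_000)]
elapsed = time.perf_counter() - start
print(f"Generated {len(rows):,} rows in {elapsed:.2f}s "
      f"({len(rows) / elapsed:,.0f} rows/sec)")
```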

There’s always a learning curve when it comes to generating both test and production data. Do you offer any type of user training?

GenRocket University was created in 2017 to educate our customers and channel partners on GenRocket. We offer multiple online training courses at no cost, including our “GenRocket Certified Engineer” training course.

You currently serve enterprise customers in over 10 verticals. What are these different types of enterprise customers?

Major banks, numerous global financial services companies, major U.S. healthcare providers, major manufacturers, global supply chain firms, and data information services firms are some of our customers around the world.

Our most active industry verticals are banking, financial services, insurance, healthcare and manufacturing.

How does GenRocket differ from other Test Data Management tools?

Traditional Test Data Management (TDM) solutions copy, mask and refresh production data. These solutions tend to be expensive and complex, and production data also has limitations for software testing. GenRocket flips the TDM paradigm by quickly and accurately generating most of the required data and querying the small amount of production data that is needed for some of the tests. The GenRocket Test Data Automation (TDA) approach is faster, lower cost and easier to implement and use than TDM.

Could you tell us a little bit about the ability for test data framework compatibility?

Every organization has its own testing framework or testing tools, so GenRocket has the flexibility to integrate into every customer’s environment. GenRocket can integrate with just about any testing framework in any language and any testing tool like Jenkins or Selenium. GenRocket can also insert data into any database, and can send data over web services. GenRocket also offers integration with Salesforce and can support complex data feeds like NACHA in banking and EDI and HL7 for the health care industry.

Is there anything else that you would like to tell us about GenRocket?

We rely on an extensive network of trained channel partners to introduce and deliver GenRocket test data solutions to our global customers. Partners like Cognizant, HCL, Wipro, Hexaware, Mindtree and UST Global are actively working with GenRocket.

To learn more, visit GenRocket.
