Risks And Rewards For AI Fighting Climate Change

As artificial intelligence is being used to solve problems in healthcare, agriculture, weather prediction and more, scientists and engineers are investigating how AI could be used to fight climate change. AI algorithms could indeed be used to build better climate models and determine more efficient methods of reducing CO2 emissions, but AI itself often requires substantial computing power and therefore consumes a lot of energy. Is it possible to reduce the amount of energy consumed by AI and improve its effectiveness when it comes to fighting climate change?

Virginia Dignum, a professor of ethical artificial intelligence at Umeå University in Sweden, was recently interviewed by Horizon Magazine. Dignum explained that AI can have a large environmental footprint that often goes unexamined, pointing to Netflix and the algorithms used to recommend movies to its users. In order for these algorithms to run and suggest movies to hundreds of thousands of users, Netflix needs to run large data centers. These data centers store and process the data used to train the algorithms.

Dignum belongs to a group of experts advising the European Commission on how to make human-centric, ethical AI. She explained to Horizon Magazine that the environmental impact of AI often goes unappreciated, but that data centers can be responsible for releasing large amounts of CO2.

‘It’s a use of energy that we don’t really think about,’ explained Prof. Dignum to Horizon Magazine. ‘We have data farms, especially in the northern countries of Europe and in Canada, which are huge. Some of those things use as much energy as a small city.’

Dignum noted that one study, conducted at the University of Massachusetts, found that training a sophisticated AI to interpret human language led to emissions of around 300,000 kilograms of CO2 equivalent, approximately five times the impact of the average car in the US. These emissions could grow further: estimates by Swedish researcher Anders Andrae project that by the year 2025 data centers could account for approximately 10% of all electricity usage. The growth of big data and the computational power needed to handle it has brought the environmental impact of AI to the attention of many scientists and environmentalists.
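
Estimates like these come from a simple relationship: the energy a training run consumes multiplied by the carbon intensity of the electricity that powers the hardware. The sketch below shows that back-of-the-envelope calculation in Python; the power draw, runtime, and grid-intensity figures are hypothetical placeholders, not numbers from the University of Massachusetts study.

```python
# Rough sketch of how training emissions are estimated: energy consumed (kWh)
# multiplied by the carbon intensity of the local grid (kg CO2e per kWh).
# All figures below are illustrative placeholders.

def training_co2e_kg(power_draw_kw: float, hours: float,
                     grid_intensity_kg_per_kwh: float) -> float:
    """Estimate CO2-equivalent emissions for a training run."""
    energy_kwh = power_draw_kw * hours
    return energy_kwh * grid_intensity_kg_per_kwh

# Example: a hypothetical cluster drawing 30 kW for 2,000 hours on a grid
# that emits 0.45 kg CO2e per kWh.
print(training_co2e_kg(power_draw_kw=30.0, hours=2000.0,
                       grid_intensity_kg_per_kwh=0.45))  # 27000.0 kg CO2e
```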

Despite these concerns, AI can play a role in helping us combat climate change and limit emissions. Scientists and engineers around the world are advocating for the use of AI in designing solutions to climate change. For example, Professor Felix Creutzig of the Mercator Research Institute on Global Commons and Climate Change in Berlin hopes to use AI to improve the use of space in urban environments. More efficient space usage could help tackle issues like urban heat islands. Machine learning algorithms could also be used to determine the optimal position for green spaces, which can act as carbon sinks, or to determine airflow patterns when designing ventilation architecture to fight extreme heat.

Currently, Creutzig is working with stacked architecture, a method that uses both mechanical modeling and machine learning, aiming to determine how buildings will respond to temperature and energy demands. Creutzig hopes that his work can lead to new building designs that use less energy while maintaining quality of life.

Beyond this, AI could help fight climate change in several ways. For one, AI could be leveraged to construct better electricity systems that could better integrate renewable resources. AI has already been used to monitor deforestation, and its continued use for this task can help preserve forests that act as carbon sinks. Machine learning algorithms could also be used to calculate an individual’s carbon footprint and suggest ways to reduce it.

Tactics to reduce the amount of energy consumed by AI include deleting data that is no longer in use, which reduces the need for massive data storage operations. Designing more efficient algorithms and training methods is also important, including pursuing alternatives to machine learning, which tends to be data-hungry.

Warner Bros. To Start Using AI Analysis Tool To Assist In Greenlighting Movies

Hollywood has been embracing digital technology and computational algorithms in its moviemaking for a while now, using CGI to de-age actors and enhance shots in other ways. Just recently, one Hollywood company announced its intention to use AI to analyze movie data and assist in decisions about greenlighting projects. As reported by The Hollywood Reporter, the AI firm Cinelytic will be providing Warner Bros. with a program intended to simplify aspects of distribution and give projections regarding pricing and possible profit.

The system developed for Warner Bros. will utilize big data to guide decision-making during the greenlight phase of a project. The system can reportedly return analyses regarding star power for a given region and even predict how much money a film is likely to make in theaters and through other distribution methods. Cinelytic has reportedly been engineering and beta-testing its predictive platform for over three years, and in addition to Warner Bros., several other companies, such as Ingenious Media and Productivity Media, have partnered with the company.

The AI platform is predicted to be especially useful when it comes to film festivals, where companies must make bids on films after only a few hours of deliberation.

Tobias Queisser, the founder of Cinelytic, stated that the value of the platform is that it can quickly make the types of calculations that would take human analysts much longer to complete. Queisser also acknowledged that while the idea of giving AI influence over what projects get produced can be unnerving, the AI itself won't be making any decisions.

“The system can calculate in seconds what used to take days to assess by a human when it comes to general film package evaluation or a star’s worth,” says Queisser. “Artificial intelligence sounds scary. But right now, an AI cannot make any creative decisions,” says Queisser. “What it is good at is crunching numbers and breaking down huge data sets and showing patterns that would not be visible to humans. But for creative decision-making, you still need experience and gut instinct.”

Despite Queisser’s assurances that humans will still be in charge of any important decisions, some people are concerned about how the AI will be used. For instance, Popular Mechanics noted that the entire Marvel film franchise was based on the willingness of executives to take a chance on Iron Man and Robert Downey Jr., who was considered “box office poison” at one time. The fear is that using AI algorithms to minimize risk could lead to situations where original and/or high-quality films are passed over. To be sure, AI tools can potentially extend our own biases if there aren’t systems in place to control them.

Of course, one could make the argument that the technology behind Cinelytic’s analysis tool could be used to give more deserving projects a chance, instead of projects that are likely to fail. As QZ notes, Cinelytic was tested last year when it predicted that the Hellboy film would end up being a box office bomb, and it was proven correct. The film had a $50 million budget and made only about $21.9 million at the box office, after Cinelytic’s tool predicted it would make around $23.2 million. A correct prediction like this could mean that executives redirect that money toward projects with more potential, making those resources available to other films. It could potentially even make choosing investments in new IPs less scary and uncertain for those greenlighting projects.

Looking beyond Cinelytic, if AI algorithms are ever used to recommend films, the algorithms could also be used to control for human biases in decision making. Depending on what features the AI selects for, it could be instructed to recommend stories about underrepresented minorities more often, reducing some of the disparity in representation often seen in Hollywood films.

Ultimately, the analysis tool developed by Cinelytic is just that, a tool, and much like any tool it can be used properly or misused. Regardless, it seems likely that automating repetitive and time-consuming calculations is something the movie industry is only going to continue to invest in.

What is Big Data?

“Big Data” is one of the commonly used buzz words of our current era, but what does it really mean?

Here’s a quick, simple definition of big data. Big data is data that is too large and complex to be handled by traditional data processing and storage methods. While that’s a quick definition you can use as a heuristic, it would be helpful to have a deeper, more complete understanding of big data. Let’s take a look at some of the concepts that underlie big data, like storage, structure, and processing.

How Big Is Big Data?

It isn’t as simple as saying “any data over size ‘X’ is big data”; the environment the data is being handled in is an extremely important factor in determining what qualifies as big data. The size data needs to be in order to be considered big data depends upon the context, or the task the data is being used for. Two datasets of vastly different sizes can be considered “big data” in different contexts.

To be more concrete, if you try to send a 200-megabyte file as an email attachment, you would not be able to do so. In this context, the 200-megabyte file could be considered big data. In contrast, copying a 200-megabyte file to another device within the same LAN may not take any time at all, and in that context, it wouldn’t be regarded as big data.

However, let’s assume that 15 terabytes worth of video need to be pre-processed for use in training computer vision applications. In this case, the video files take up so much space that even a powerful computer would take a long time to process them all, and so the processing would normally be distributed across multiple computers linked together in order to decrease processing time. These 15 terabytes of video data would definitely qualify as big data.
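
To illustrate why the work is distributed, the toy sketch below splits a preprocessing job across a pool of worker processes; the same idea scales from processes on one machine to nodes in a cluster. The preprocess function and file names are stand-ins invented for the example, not a real video pipeline.

```python
# Toy illustration of distributing a preprocessing workload across workers.
# A real big data job would use a cluster framework (e.g. Spark or Dask);
# a local process pool is enough to show the idea.
from multiprocessing import Pool

def preprocess(video_path: str) -> str:
    # Stand-in for real work such as decoding, resizing, and labeling frames.
    return f"processed:{video_path}"

if __name__ == "__main__":
    videos = [f"clip_{i}.mp4" for i in range(100)]  # hypothetical file list
    with Pool(processes=8) as pool:                 # eight workers in parallel
        results = pool.map(preprocess, videos)
    print(len(results))  # 100
```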

Types Of Big Data Structures

Big data comes in three different categories of structure: unstructured, semi-structured, and structured data.

Unstructured data is data that possesses no definable structure, meaning the data is essentially just in one large pool. An example of unstructured data would be a database full of unlabeled images.

Semi-structured data is data that doesn’t have a formal structure, but does exist within a loose structure. For example, email data might count as semi-structured data, because you can refer to the data contained in individual emails, but formal data patterns have not been established.

Structured data is data that has a formal structure, with data points categorized by different features. One example of structured data is an Excel spreadsheet containing contact information like names, emails, phone numbers, and websites.
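
To make the three categories concrete, here is a small Python sketch of what each might look like; the field names and values are invented purely for illustration.

```python
# Unstructured: raw content with no internal schema, e.g. image bytes or free text.
unstructured = b"\x89PNG...raw image bytes..."

# Semi-structured: loosely organized key/value data, such as an email.
# Individual fields can be referenced, but no formal schema is enforced.
semi_structured = {
    "from": "alice@example.com",
    "subject": "Quarterly numbers",
    "body": "Hi Bob, the figures we discussed are attached.",
}

# Structured: every record follows the same fixed set of columns,
# as in a spreadsheet or relational table.
structured = [
    {"name": "Alice", "email": "alice@example.com", "phone": "555-0100"},
    {"name": "Bob",   "email": "bob@example.com",   "phone": "555-0101"},
]
```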

Metrics For Assessing Big Data

Big data can be analyzed in terms of three different metrics: volume, velocity, and variety.

Volume refers to the size of the data. The average size of datasets is steadily increasing. For example, the largest hard drive in 2006 was a 750 GB hard drive. In contrast, Facebook is thought to generate over 500 terabytes of data in a day, and the largest consumer hard drive available today is a 16 terabyte hard drive. What qualifies as big data in one era may not be big data in another. More data is generated today because more and more of the objects surrounding us are equipped with sensors, cameras, microphones, and other data collection devices.

Velocity refers to how fast data is moving, or to put that another way, how much data is generated within a given period of time. Social media streams generate hundreds of thousands of posts and comments every minute, while your own email inbox will probably have much less activity. Big data streams are streams that often handle hundreds of thousands or millions of events in more or less real-time. Examples of these data streams are online gaming platforms and high-frequency stock trading algorithms.

Variety refers to the different types of data contained within the dataset. Data can be made up of many different formats, like audio, video, text, photos, or serial numbers. In general, traditional databases are formatted to handle one, or just a couple, types of data. To put that another way, traditional databases are structured to hold data that is fairly homogenous and of a consistent, predictable structure. As applications become more diverse, full of different features, and used by more people, databases have had to evolve to store more types of data. Unstructured databases are ideal for holding big data, as they can hold multiple data types that aren’t related to each other.

Methods Of Handling Big Data

There are a number of platforms and tools designed to facilitate the analysis of big data. Big data pools need to be analyzed to extract meaningful patterns, a task that can prove quite challenging with traditional data analysis tools. In response, a variety of companies have created dedicated big data analysis tools, including systems like Zoho Analytics, Cloudera, and Microsoft BI.

Jerry Xu, Co-Founder & CEO of Datatron – Interview Series

Jerry has extensive experience in machine learning, storage systems, online services, distributed systems, virtualization, and OS kernels. He has worked on high-performance and large-scale systems at companies such as Lyft, Box, Twitter, Zynga, and Microsoft. He has also authored the open-source project Lib Crunch and is a three-time Microsoft Gold Star Award winner. Jerry completed his master’s degree in computer science at Shanghai University. His most recent startup is Datatron.
 

Datatron began in 2016 after you left Lyft. How did you initially conceive of the Datatron business concept?

When we worked at Lyft, we noticed that data scientists usually come from diverse backgrounds like math, physics, bio-engineering, etc. It is often very hard for them to get the engineering part of how their models work, although they have good intuition about the model and the math. That motivated us to start Datatron. We are not trying to help data scientists find the best algorithm. We only come into the picture after the algorithm is decided, to make model deployment, monitoring, and management more efficient.

 

Datatron was selected by 500 Startups to be included in the 18th cohort of accelerator companies. How did this residency personally influence you, and how you manage Datatron?

We did learn a lot from the StartX and 500 Startups experiences, including how to pitch to investors, how to find product/market fit, and how to run sales and marketing, which we did not have personal experience with before.

 

Datatron is a management platform for ML, AI, and Data Science models. Could you elaborate on some of the functionalities that are offered by your platform?

Our product has four modules now: Model Deployment, Model Monitoring, Model Challenger, and Model Governance.

Model Deployment:

Create and scale model deployments in just a few clicks. Deploy models developed in any framework or language.

Model Monitoring:

Make better business decisions to save your team time and money. Monitor model performance and detect model decay as it happens.

Model Governance:

Spend less time on model validation, bias detection, and internal audit processes. Go from model development to internal auditing to production faster than ever.

 

One of the use cases of Datatron is Demand Forecasting which is important for enterprises which need to plan and allocate resources. How does machine learning play into this?

Demand usually changes with both seasonality and trend, which is a typical machine learning problem. Machine learning models like ARIMA and recurrent neural networks (RNNs) can learn from historic data to find the trend and seasonality automatically and make predictions based on that.
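
As a rough illustration of that idea (not Datatron's implementation), the sketch below fits a seasonal ARIMA model to synthetic monthly demand data using statsmodels; the data and model orders are invented for the example.

```python
# Minimal demand-forecasting sketch with a seasonal ARIMA model.
# Synthetic data and hand-picked orders, purely for illustration.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
months = np.arange(48)
# Hypothetical monthly demand: upward trend + yearly seasonality + noise.
demand = 100 + 2 * months + 15 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, 48)

model = SARIMAX(demand, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
fitted = model.fit(disp=False)

print(fitted.forecast(steps=6))  # predicted demand for the next six months
```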

 

Which framework models (for example, TensorFlow) do you currently support?

We support most of the popular machine learning frameworks like sklearn, TensorFlow, H2O, R, SAS etc.

 

Which languages do models need to be built in to be supported by Datatron?

We support models in their native languages – Python, R, Java etc.

 

What are some of the types of industries which are best served by using the Datatron platform?

Fundamentally, our platform is a horizontal solution, which means it can be used by lots of different industries. As of now, we are focusing on financial services and telecommunications.

 

What are some of the most challenging aspects of data science that companies face, and why does Datatron solve this for them?

Lots of companies already have multiple data science teams, and those teams are using different tools to build their models and different practices to manage them. More and more enterprises have realized that models are becoming an asset and will impact their top line directly. Having a platform that can standardize machine learning practice across the company has become critical. Our platform helps solve those issues.

 

Is there anything else that you would like to share about Datatron?

We have gotten lots of inbound interest from big enterprises. At the same time, we are building up our sales and marketing team to reach out to potential customers actively.

To learn more visit Datatron.
