
Big Data

Warner Bros. To Start Using AI Analysis Tool To Assist In Greenlighting Movies


Hollywood has been embracing digital technology and computational algorithms in moviemaking for a while now, using CGI to de-age actors and enhance shots in other ways. Just recently, one Hollywood company announced its intention to use AI to analyze movie data and assist in deciding which projects to greenlight. As reported by The Hollywood Reporter, the AI firm Cinelytic will be providing Warner Bros. with a program intended to simplify aspects of distribution and give projections regarding pricing and possible profit.

The system developed for Warner Bros. will utilize big data to guide decision-making during the greenlight phase of a project. The system can reportedly return analyses regarding star power for a given region and even predict how much money a film is likely to make in theaters and through other distribution methods. Cinelytic has reportedly been engineering and beta-testing its predictive platform for over three years, and in addition to Warner Bros., several other companies, such as Ingenious Media and Productivity Media, have partnered with the company.

The AI platform is expected to be especially useful at film festivals, where companies must make bids on films after only a few hours of deliberation.

Tobias Queisser, the founder of Cinelytic, stated that the value of the platform is that it can quickly make the types of calculations that would take human analysts much longer to complete. Queisser also acknowledged that while the idea of giving AI influence over what projects get produced can be unnerving, the AI itself won’t be making any decisions.

“The system can calculate in seconds what used to take days to assess by a human when it comes to general film package evaluation or a star’s worth,” says Queisser. “Artificial intelligence sounds scary. But right now, an AI cannot make any creative decisions. What it is good at is crunching numbers and breaking down huge data sets and showing patterns that would not be visible to humans. But for creative decision-making, you still need experience and gut instinct.”

Despite Queisser’s assurances that humans will still be in charge of any important decisions, some people are concerned about how the AI will be used. For instance, Popular Mechanics noted that the entire Marvel film franchise was based on the willingness of executives to take a chance on Iron Man and Robert Downey Jr., who was considered “box office poison” at one time. The fear is that using AI algorithms to minimize risk could lead to situations where original and/or high-quality films are passed over. To be sure, AI tools can potentially amplify our own biases if there aren’t systems in place to control for them.

Of course, one could argue that the technology behind Cinelytic’s analysis tool could be used to give more deserving projects a chance instead of projects that are likely to fail. As QZ notes, Cinelytic’s platform was put to the test last year when it predicted that the Hellboy reboot would be a box office bomb, and it was proven correct. The film had a $50 million budget and made only about $21.9 million at the box office, after Cinelytic’s tool predicted it would make around $23.2 million. Correct predictions like this could mean that executives redirect that money toward projects with more potential, making those resources available to other films. It could even make investing in new IPs less scary and uncertain for those greenlighting projects.

Looking beyond Cinelytic, if AI algorithms are ever used to recommend films, the algorithms could also be used to control for human biases in decision making. Depending on what features the AI selects for, it could be instructed to recommend stories about underrepresented minorities more often, reducing some of the disparity in representation often seen in Hollywood films.

Ultimately, the analysis tool developed by Cinelytic is just that, a tool, and like any tool it can be used properly or misused. Regardless, it seems likely that automating repetitive and time-consuming calculations is something the movie industry is only going to continue to invest in.


Blogger and programmer with specialties in Machine Learning and Deep Learning topics. Daniel hopes to help others use the power of AI for social good.

Big Data

Computer Scientists Tackle Bias in AI


Computer scientists from Princeton and Stanford University are now addressing problems of bias in artificial intelligence (AI). They are working on methods that result in fairer data sets containing images of people. The researchers worked closely with ImageNet, a database of more than 13 million images that has helped advance computer vision over the past decade. Using their methods, the researchers recommended improvements to the database.

ImageNet includes images of objects, landscapes, and people. Researchers who create machine learning algorithms that classify images use ImageNet as a source of data. Because of the database’s massive size, automated image collection and crowdsourced image annotation were necessary. Many of the problems with its images of people are unintended consequences of how ImageNet was constructed, and the ImageNet team is now working to correct these biases and other issues.

Olga Russakovsky, a co-author of the paper and an assistant professor of computer science at Princeton, commented on the work.

“Computer vision now works really well, which means it’s being deployed all over the place in all kinds of contexts,” she said. “This means that now is the time for talking about what kind of impact it’s having on the world and thinking about these kinds of fairness issues.”

In the new paper, the ImageNet team systematically identified non-visual concepts and offensive categories. These categories included racial and sexual characterizations, and the team proposed removing them from the database. The team has also developed a tool that allows users to specify and retrieve image sets of people, and it can do so by age, gender expression, and skin color. The goal is to create algorithms that more fairly classify people’s faces and activities in images. 

The work done by the researchers was presented on Jan. 30 at the Association for Computing Machinery’s Conference on Fairness, Accountability, and Transparency in Barcelona, Spain. 

“There is very much a need for researchers and labs with core technical expertise in this to engage in these kinds of conversations,” said Russakovsky. “Given the reality that we need to collect the data at scale, given the reality that it’s going to be done with crowdsourcing because that’s the most efficient and well-established pipeline, how do we do that in a way that’s fairer — that doesn’t fall into these kinds of prior pitfalls? The core message of this paper is around constructive solutions.”

ImageNet was launched in 2009 by a group of computer scientists at Princeton and Stanford. It was meant to serve as a resource for academic researchers and educators. The creation of the database was led by Princeton alumna and faculty member Fei-Fei Li.

ImageNet was able to become such a large database of labeled images through the use of crowdsourcing. One of the main platforms used was Amazon Mechanical Turk (MTurk), where workers were paid to verify candidate images. This process introduced some problems, including many biases and inappropriate categorizations.

Lead author Kaiyu Yang is a graduate student in computer science. 

“When you ask people to verify images by selecting the correct ones from a large set of candidates, people feel pressured to select some images and those images tend to be the ones with distinctive or stereotypical features,” he said. 

The first part of the study involved filtering out potentially offensive or sensitive person categories from ImageNet. Offensive categories were defined as those that contained profanity or racial or gender slurs. One such sensitive category was the classification of people based on sexual orientation or religion. Twelve graduate students from diverse backgrounds were brought in to annotate the categories, and they were instructed to label a category sensitive if they were unsure of it. About 54% of the categories were eliminated, or 1,593 out of the 2,932 person categories in ImageNet. 

MTurk workers then rated the “imageability” of the remaining categories on a scale of 1 to 5. A total of 158 categories were classified as both safe and imageable, receiving a rating of 4 or higher. This filtered set of categories includes more than 133,000 images, which can be highly useful for training computer vision algorithms.
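
To make the two-stage screening concrete, here is a minimal sketch of how such a filter might be applied, assuming each person category carries an annotator safety flag and a mean imageability rating; the data structure, field names, and threshold are illustrative and are not taken from the paper’s actual code.

```python
# Illustrative sketch of the two-stage category filtering described above.
# The PersonCategory fields and the 4.0 threshold are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class PersonCategory:
    name: str
    flagged_unsafe: bool   # marked offensive or sensitive by an annotator
    imageability: float    # mean MTurk rating on a 1-5 scale

def filter_categories(categories, min_imageability=4.0):
    """Keep only categories that are both safe and sufficiently imageable."""
    return [
        c for c in categories
        if not c.flagged_unsafe and c.imageability >= min_imageability
    ]

categories = [
    PersonCategory("basketball player", flagged_unsafe=False, imageability=4.6),
    PersonCategory("philanthropist", flagged_unsafe=False, imageability=2.1),
    PersonCategory("offensive term", flagged_unsafe=True, imageability=3.0),
]

print([c.name for c in filter_categories(categories)])  # ['basketball player']
```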

The researchers studied the demographic representation of people in the images and assessed the level of bias in ImageNet. Content sourced from search engines often overrepresents males, light-skinned people, and adults between the ages of 18 and 40.

“People have found that the distributions of demographics in image search results are highly biased, and this is why the distribution in ImageNet is also biased,” said Yang. “In this paper we tried to understand how biased it is, and also to propose a method to balance the distribution.”

The researchers considered three attributes that are also protected under U.S. anti-discrimination laws: skin color, gender expression, and age. The MTurk workers then annotated each attribute of each person in an image. 

The results showed that ImageNet’s content has a considerable bias. The most underrepresented groups were dark-skinned people, females, and adults over the age of 40.

The researchers also designed a web-interface tool that allows users to obtain a set of images that is demographically balanced in whatever way the user chooses.

“We do not want to say what is the correct way to balance the demographics, because it’s not a very straightforward issue,” said Yang. “The distribution could be different in different parts of the world — the distribution of skin colors in the U.S. is different than in countries in Asia, for example. So we leave that question to our user, and we just provide a tool to retrieve a balanced subset of the images.”
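
As a rough illustration of what such a balanced-retrieval query might do under the hood, the sketch below samples an equal number of images per attribute value; the attribute names and the sampling strategy are assumptions made for illustration, not the ImageNet team’s actual implementation.

```python
# Illustrative sketch: sample an equal number of images for each value of a
# chosen attribute (e.g. gender expression). Field names are assumptions.
import random
from collections import defaultdict

def balanced_subset(images, attribute, per_group, seed=0):
    """Return a subset containing `per_group` images for each value of `attribute`."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for img in images:
        groups[img[attribute]].append(img)
    subset = []
    for value, members in groups.items():
        if len(members) < per_group:
            raise ValueError(f"Not enough images for {attribute}={value}")
        subset.extend(rng.sample(members, per_group))
    return subset

images = [
    {"path": "img_001.jpg", "gender_expression": "male"},
    {"path": "img_002.jpg", "gender_expression": "female"},
    {"path": "img_003.jpg", "gender_expression": "female"},
    {"path": "img_004.jpg", "gender_expression": "male"},
]
print(len(balanced_subset(images, "gender_expression", per_group=2)))  # 4
```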

The ImageNet team is now working on technical updates to its hardware and database. They are also trying to implement the filtering of the person categories and the rebalancing tool developed in this research. ImageNet is set to be re-released with the updates, along with a call for feedback from the computer vision research community. 

The paper was also co-authored by Princeton Ph.D. student Klint Qinami and Assistant Professor of Computer Science Jia Deng. The research was supported by the National Science Foundation.  

 


Big Data

Data Science Companies Use AI To Protect Environment And Fight Climate Change


As the nations of Earth attempt to invent and implement solutions to the growing threat of climate change, just about every option is on the table. Investing in renewable sources of energy and cutting emissions around the globe are the dominant strategies, but artificial intelligence can also help reduce the damage done by climate change. As reported by LiveMint, artificial intelligence algorithms can help conservationists limit deforestation, protect vulnerable species of animals from climate change, fight poaching, and monitor air pollution.

The data science company Gramener has employed machine learning to help estimate the size of penguin colonies across Antarctica by analyzing images taken by camera traps. The size of penguin colonies in Antarctica has decreased dramatically over the past decade, impacted by climate change. To help conservation groups and scientists analyze image data of Antarctic penguins, Gramener employed convolutional neural networks to clean up the data, and once the data was clean it was deployed through Microsoft’s data science virtual machine. The model developed by Gramener uses penguin density in the captured images to produce estimates of penguin populations faster and more reliably. Gramener has also used similar techniques to estimate salmon populations in various rivers.
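
A minimal sketch of the density-based counting idea is shown below, assuming a model that outputs a per-image density map whose values sum to the predicted count; the placeholder model and the numbers are illustrative and do not reflect Gramener’s actual pipeline.

```python
# Illustrative sketch of density-based population estimation from camera-trap images.
# `predict_density_map` stands in for a trained CNN; it is a placeholder here.
import numpy as np

def predict_density_map(image):
    """Placeholder for a trained CNN that outputs a per-pixel penguin density map."""
    # A real model would run inference here; a constant map is used for illustration.
    return np.full(image.shape[:2], 1e-4)

def estimate_population(images):
    """Sum predicted densities over all camera-trap frames to estimate the count."""
    total = 0.0
    for image in images:
        total += predict_density_map(image).sum()  # a density map integrates to a count
    return int(round(total))

frames = [np.zeros((480, 640, 3)) for _ in range(10)]
print(estimate_population(frames))  # rough count across the sampled frames
```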

As LiveMint reported, there are other animal conservation projects that make use of AI as well, such as the Elephant Listening Project designed by Conservation Metrics. Populations of elephants throughout Africa have been suffering because of illegal poaching. The project utilizes machine learning algorithms to identify the vocalizations of elephants, distinguishing them from sounds made by other animals. By training machine learning models to recognize unique sound patterns and then using data from sensors distributed throughout the elephants’ habitat, the researchers can build a system that alerts them to potential poaching or deforestation. The system listens for sounds such as vehicles or gunfire, and if these sounds are detected, alerts can be sent to the authorities.
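
The sketch below outlines what such a listen-and-alert loop could look like, assuming a pretrained classifier that labels short audio clips; the classifier, the label set, and the alerting function are placeholders rather than the project’s actual components.

```python
# Schematic sketch of an acoustic monitoring loop. The classifier and the alert
# transport are placeholders; the real system's components are not described here.
ALERT_SOUNDS = {"vehicle", "gunshot", "chainsaw"}

def classify_clip(clip):
    """Placeholder for a trained audio classifier (e.g. a CNN over spectrograms)."""
    return "elephant_rumble"  # dummy label for illustration

def send_alert(sensor_id, label):
    """Placeholder for notifying authorities (SMS, email, satellite uplink, ...)."""
    print(f"ALERT from sensor {sensor_id}: detected {label}")

def monitor(sensor_id, audio_clips):
    """Classify each incoming clip and raise an alert on threat-related sounds."""
    for clip in audio_clips:
        label = classify_clip(clip)
        if label in ALERT_SOUNDS:
            send_alert(sensor_id, label)

monitor("forest-07", audio_clips=[b"...", b"..."])
```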

Machine learning algorithms can also be used to predict the damage that can be done by severe weather events like thunderstorms and tropical cyclones. For instance, IBM has produced a new high-resolution atmospheric forecasting model intended to track potentially damaging weather events.

Jaspreet Bindra, the author of The Tech Whisperer and an expert on digital transformation, told LiveMint that machine learning is necessary to keep up with the changes caused by climate change. Bindra explained:

“Global warming has changed the way climate modeling is done. Using AI/ML is very important as it will make things happen faster. All this will require lots of computing power and, going forward, quantum computers might play an important role.”

Blue Sky Analytics, based in Gurugram, India, is another example of a company using machine learning algorithms to protect the environment. An application developed by Blue Sky Analytics is used to monitor industrial emissions and air quality in general. Data is gathered from satellites and ground-level sensors and then analyzed.

Analyzing and understanding the environmental effects of issues like climate change, poaching, and pollution requires a substantial amount of computing power. UC Berkeley is trying to speed up research by crowdsourcing the computation of environmental data using smartphones and PCs. The crowdsourcing project is called BOINC (Berkeley Open Infrastructure for Network Computing). Those who want to assist in the crowdsourced data analysis just have to install the BOINC software on a chosen device, and when that device isn’t being used, its available CPU and GPU resources will be leveraged to carry out computations.


Big Data

Garth Rose, CEO of GenRocket, Inc – Interview Series


Garth is the Co-Founder & CEO of GenRocket. He is an expert at launching and building technology startups. He has held numerous senior leadership roles in startups over the past 25 years including President & CEO of Concentric Visions (VC backed + acquired), VP Sales & VP Business Development at Indus River Networks (VC backed + acquired), VP Sales & Marketing at Digital Products (acquired) and National Sales Manager at Leading Edge Products.

In 2012, you co-founded GenRocket, a company that specializes in enterprise test data automation. What was the initial vision that inspired this?

I met GenRocket Co-Founder Hycel Taylor in 2011 and he educated me about the need for accurate, conditioned test data for effective software testing. Hycel had done a lot of research and found a huge gap when it came to test data solutions. Hycel decided to architect his own platform that was low cost, really fast and flexible.

What are some of the benefits of using test data versus production data?

Proper software testing means not just testing “positive” conditions of an application but also testing “negative” conditions as well as permutations and edge cases. Production data is useful for data analytics but has limitations for many test cases. One of our financial services customers shared that their production data can only fully satisfy 33% of their testing requirements.

The speed of data generation is important. What speeds can GenRocket deliver?

For a typical automated test case we deliver test data in about 100 milliseconds. For volume data GenRocket generates at a rate of about 10,000 rows of data per second. For big data applications we can use multiple GenRocket instances in parallel to generate millions to billions of rows of data in minutes.
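
Taking those quoted rates at face value, a quick back-of-the-envelope calculation shows how parallel instances bring billion-row generation down to minutes; the instance count below is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope timing using the single-instance rate quoted above.
# Linear scaling and the 50-instance figure are assumptions for illustration.
ROWS_PER_SECOND_PER_INSTANCE = 10_000

def generation_minutes(total_rows, instances):
    """Estimated wall-clock minutes, assuming throughput scales linearly with instances."""
    return total_rows / (ROWS_PER_SECOND_PER_INSTANCE * instances) / 60

print(f"{generation_minutes(1_000_000_000, 1):,.0f} minutes on 1 instance")    # ~1,667
print(f"{generation_minutes(1_000_000_000, 50):,.0f} minutes on 50 instances")  # ~33
```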

There’s always a learning curve when it comes to generating both test and production data. Do you offer any type of user training?

GenRocket University was created in 2017 to educate our customers and channel partners on GenRocket. We offer multiple online training courses at no cost, including our “GenRocket Certified Engineer” training course.

You currently serve enterprise customers in over 10 verticals. What are these different types of enterprise customers?

Major banks, numerous global financial services companies, major U.S. healthcare providers, major manufacturers, global supply chain firms, and data information services firms are some of our customers around the world.

Our most active industry verticals are banking, financial services, insurance, healthcare and manufacturing.

How does GenRocket differ from other Test Data Management tools?

Traditional Test Data Management (TDM) solutions copy, mask and refresh production data. These solutions tend to be expensive and complex and production data also has limitations for software testing. GenRocket flips the TDM paradigm by quickly and accurately generating most of the required data and querying the small amount of production data that is needed for some of the tests. The GenRocket Test Data Automation (TDA) approach is faster, lower cost and easier to implement and use than TDM.

Could you tell us a little bit about GenRocket’s test data framework compatibility?

Every organization has its own testing framework or testing tools, so GenRocket has the flexibility to integrate into every customer’s environment. GenRocket can integrate with just about any testing framework in any language and with any testing tool, such as Jenkins or Selenium. GenRocket can also insert data into any database and can send data over web services. GenRocket also offers integration with Salesforce and can support complex data feeds like NACHA in banking and EDI and HL7 for the healthcare industry.

Is there anything else that you would like to tell us about GenRocket?

We rely on an extensive network of trained channel partners to introduce and deliver GenRocket test data solutions to our global customers. Partners like Cognizant, HCL, Wipro, Hexaware, Mindtree and UST Global are actively working with GenRocket.

To learn more visit GenRocket.
