Deepfakes

Deep Learning Is Re-Shaping The Broadcasting Industry

Deep learning has become a buzzword across many industries, and broadcasting organizations are among those starting to explore what it has to offer, from news reporting to feature films and programs, both in cinemas and on TV.

As TechRadar reported, deep learning already presents numerous opportunities in video production, editing, and cataloging. But as the report notes, the technology is not limited to repetitive broadcasting tasks, since it can also “enhance the creative process, improve video delivery and help preserve the massive video archives that many studios keep.”

As far as video generation and editing are concerned, TechRadar mentions that Warner Bros. recently had to spend $25M on reshoots for ‘Justice League’, and part of that money went to digitally removing a mustache that star Henry Cavill had grown and could not shave due to an overlapping commitment. Deep learning could certainly be put to good use in such time-consuming and financially taxing post-production work.

Even widely available solutions like Flo make it possible to use deep learning to create a video automatically just by describing your idea. The software then searches a library for relevant clips and edits them together on its own.

Flo is also able to sort and classify videos, making it easier to find a particular part of the footage. Such technologies also make it possible to easily remove undesirable footage or make a personal recommendation list based on a video somebody has expressed an interest in.
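
Flo's internals are not public, but the general workflow described here, matching a text description against a clip library and stitching the best matches together, can be sketched with off-the-shelf tools. The snippet below is a minimal illustration, not Flo's actual pipeline; the clips.json file mapping clip paths to short descriptions is a hypothetical input.

```python
# Minimal sketch of description-driven video assembly (not Flo's actual pipeline).
# Assumes a hypothetical clips.json mapping clip file paths to short text descriptions.
import json
from sentence_transformers import SentenceTransformer, util
from moviepy.editor import VideoFileClip, concatenate_videoclips  # moviepy 1.x import path

model = SentenceTransformer("all-MiniLM-L6-v2")

with open("clips.json") as f:
    library = json.load(f)            # e.g. {"goal.mp4": "player scores a goal", ...}

idea = "a short montage of sunsets over the city skyline"
paths = list(library.keys())
clip_emb = model.encode(list(library.values()), convert_to_tensor=True)
idea_emb = model.encode(idea, convert_to_tensor=True)

# Rank clips by semantic similarity to the description and keep the top three.
scores = util.cos_sim(idea_emb, clip_emb)[0]
best = [paths[int(i)] for i in scores.argsort(descending=True)[:3]]

# Edit the selected clips together automatically.
final = concatenate_videoclips([VideoFileClip(p) for p in best])
final.write_videofile("auto_edit.mp4")
```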

Google has come up with a neural network “that can automatically separate the foreground and background of a video. What used to require a green screen can now be done with no special equipment.”
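
Google has not detailed that model here, but the same idea, a neural network producing a per-pixel foreground mask instead of a green screen, can be approximated with an off-the-shelf segmentation network. A rough sketch using torchvision's pretrained DeepLabV3 (an assumed stand-in, not Google's network):

```python
# Rough background-removal sketch with a pretrained segmentation model
# (an off-the-shelf stand-in, not Google's actual network).
import torch
import numpy as np
from PIL import Image
from torchvision import models, transforms

model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.open("frame.jpg").convert("RGB")
with torch.no_grad():
    out = model(preprocess(frame).unsqueeze(0))["out"][0]

# Class 15 is "person" in the VOC label set this model predicts.
mask = (out.argmax(0) == 15).numpy().astype(np.uint8)

# Keep the foreground, zero out the background -- no green screen needed.
foreground = np.array(frame) * mask[:, :, None]
Image.fromarray(foreground).save("foreground.png")
```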

Deepfakes have already made a name for themselves, for better and worse, and their potential use in special effects has reached quite a high level.

One area where deep learning will certainly make a difference is the restoration of classic films. According to the UCLA Film & Television Archive, nearly half of all films produced before 1950 have disappeared, and 90% of classic film prints are currently in very poor condition.

Colorizing black-and-white footage is still controversial among filmmakers, but those who decide to go that route can now use Nvidia's tools, which significantly shorten a lengthy process: the artist colors only one frame of a scene, and deep learning does the rest. Google, for its part, has developed technology that can recreate part of a recorded scene based only on its start and end frames.
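
Nvidia's colorization tooling is not spelled out here, but the core idea of coloring one frame and letting software carry that color through the scene can be illustrated with a simple optical-flow warp. The toy sketch below is not Nvidia's method; it merely propagates the chroma of a hand-colored keyframe onto the next grayscale frame of the same resolution.

```python
# Toy color-propagation sketch (illustrative only, not Nvidia's actual tool):
# warp the chroma of a hand-colored keyframe onto the next black-and-white frame.
import cv2
import numpy as np

key_color = cv2.imread("keyframe_colored.png")         # artist-colored frame
key_gray = cv2.cvtColor(key_color, cv2.COLOR_BGR2GRAY)
next_gray = cv2.imread("next_frame_bw.png", cv2.IMREAD_GRAYSCALE)

# Estimate dense motion from the new frame back to the keyframe.
flow = cv2.calcOpticalFlowFarneback(next_gray, key_gray, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Build a sampling map that pulls each pixel's color from the keyframe.
h, w = next_gray.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x + flow[..., 0]).astype(np.float32)
map_y = (grid_y + flow[..., 1]).astype(np.float32)

# Keep the new frame's luminance, borrow the warped chroma from the keyframe.
key_lab = cv2.cvtColor(key_color, cv2.COLOR_BGR2LAB)
warped = cv2.remap(key_lab, map_x, map_y, cv2.INTER_LINEAR)
result = warped.copy()
result[..., 0] = next_gray          # approximate: reuse grayscale values as lightness
cv2.imwrite("next_frame_colored.png", cv2.cvtColor(result, cv2.COLOR_LAB2BGR))
```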

Face and object recognition is already in active use, from classifying a video collection or archive and searching for clips featuring a given actor or newsperson, to tallying an actor's exact screen time in a video or film. TechRadar mentions that Sky News recently used facial recognition to identify famous faces at the royal wedding.
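
Tallying screen time is a straightforward application of off-the-shelf face recognition. Below is a minimal sketch using the open-source face_recognition library; the reference photo actor.jpg and the one-frame-per-second sampling rate are assumptions for illustration.

```python
# Minimal screen-time counter: how many sampled seconds contain a known face.
# Assumes a reference photo (actor.jpg) and sampling of roughly one frame per second.
import cv2
import face_recognition

reference = face_recognition.load_image_file("actor.jpg")
known_encoding = face_recognition.face_encodings(reference)[0]

video = cv2.VideoCapture("film.mp4")
step = max(int(video.get(cv2.CAP_PROP_FPS)), 1)
seconds_on_screen = 0
frame_index = 0

while True:
    ok, frame = video.read()
    if not ok:
        break
    if frame_index % step == 0:                      # sample ~1 frame per second
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        encodings = face_recognition.face_encodings(rgb)
        if any(face_recognition.compare_faces([known_encoding], e)[0] for e in encodings):
            seconds_on_screen += 1
    frame_index += 1

print(f"Approximate screen time: {seconds_on_screen} seconds")
```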

This technology is now becoming widely used in sports broadcasting to, say, “track the movements of the ball, or to identify other key elements to the game, such as the goal.” In soccer (football), related technology underpins the video assistant referee (VAR) system, which is used in many official tournaments and national leagues as a referee's aid during the game.
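
The report does not describe how broadcast ball-tracking systems work internally, but a crude version of the idea can be shown with classical computer vision: threshold the ball's color and follow the largest matching blob from frame to frame. This is a toy sketch, not how VAR or goal-line systems are actually built, and the color range is an assumption.

```python
# Toy ball tracker: follow the largest blob matching an assumed ball color range.
# Illustrative only -- real broadcast/VAR systems use far more robust methods.
import cv2
import numpy as np

# Assumed HSV range for a bright orange ball (would need tuning for real footage).
LOWER, UPPER = np.array([5, 120, 120]), np.array([20, 255, 255])

video = cv2.VideoCapture("match.mp4")
trajectory = []

while True:
    ok, frame = video.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        (x, y), radius = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
        if radius > 3:                       # ignore tiny specks of matching color
            trajectory.append((int(x), int(y)))

print(f"Tracked the ball across {len(trajectory)} frames")
```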

Streaming is yet another aspect of broadcasting that can benefit from deep learning. Neural networks can reconstruct high-definition frames from low-definition input, giving the viewer a better picture even when the original signal is not fully up to standard.
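
The upscaling step itself can be sketched with OpenCV's contrib super-resolution module, which wraps pretrained models such as ESPCN. This is an illustration of the technique, not any particular streaming service's pipeline, and it assumes the ESPCN_x4.pb model file has been downloaded separately.

```python
# Minimal super-resolution sketch: upscale a low-definition frame 4x with ESPCN.
# Requires opencv-contrib-python and a downloaded ESPCN_x4.pb model file.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("ESPCN_x4.pb")
sr.setModel("espcn", 4)                  # model name and upscale factor

low_res = cv2.imread("frame_480p.png")
high_res = sr.upsample(low_res)          # reconstructed high-definition frame
cv2.imwrite("frame_upscaled.png", high_res)
```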

 


Deep Learning

NBA Using Artificial Intelligence to Create Highlights

The National Basketball Association (NBA) will be using artificial intelligence and machine learning to create highlights for their NBA All-Star weekend. 

The league has been testing this technology for years, starting in 2014. It comes from WSC Sports, an Israeli company, and it is used to analyze the key moments of each game in order to create highlights. One of the reasons behind the shift is that social media is becoming increasingly important as a way to reach fans, and customized highlights can reach more people. 

During the All-Star weekend, each individual player will have his own highlight reel created by the software. 

Bob Carney is the senior vice president of social and digital strategy for the NBA. 

“This is something we wouldn’t do before when we had to do it manually and push it out across 200 social and digital platforms across the US,” he said.

“We developed this technology that identifies each and every play of the game,” said Shake Arnon, general manager of WSC North America. 

The software uses machine learning to identify key moments in games through visual, audio, and data cues, and then assembles highlights that can be shared on social media and elsewhere. According to WSC Sports, it produced more than 13 million clips and highlights in 2019. 
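
WSC Sports has not published its pipeline, but the general idea of flagging key moments from one of those cues, for instance spikes in crowd noise, can be sketched very simply. The toy example below is not WSC's system; the game_audio.wav input is an assumed pre-extracted audio track.

```python
# Toy highlight detector: flag moments where crowd noise spikes.
# Illustrative only -- WSC Sports' real system combines visual, audio and data cues.
import librosa
import numpy as np

# Assumes the game's audio has been extracted to game_audio.wav.
y, sr = librosa.load("game_audio.wav", sr=22050)

# Loudness (RMS energy) over roughly one-second windows.
hop = sr
rms = librosa.feature.rms(y=y, frame_length=2 * hop, hop_length=hop)[0]

# Mark seconds where loudness is well above the typical level.
threshold = rms.mean() + 2 * rms.std()
highlight_seconds = np.where(rms > threshold)[0]

for t in highlight_seconds:
    print(f"Possible highlight around {t // 60:02d}:{t % 60:02d}")
```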

“We provide them the streams of our games and they are able to identify moments in the games, which allow us to automate the creation and distribution of highlight content,” Carney said.

Carney, who has worked for the NBA for almost 20 years, says he wasn't sure about the technology when he first met with WSC Sports. 

“We’ve heard the pitch about automated content many times…rarely can content providers do it,” he said. 

He eventually changed his mind after a pilot test with the NBA’s development league, which showcased the potential of the technology if used on a larger scale. Now, WSC technology is used on all of the NBA’s platforms, including the WNBA, G-League, and esports. 

The use of artificial intelligence has greatly reduced the time it takes to create highlights. 

“Previously, it could take an hour to cut a post-game highlights package,” Carney said. “Now it takes a few minutes to create over 1,000 highlight packages.” 

WSC’s long-term goal is personalized content, and they believe it is the future of sports highlights. They would like every individual fan to be able to receive personalized content delivered directly to them. 

“I want to be in control as a fan…We provide the tools to see what you want and when,” said Arnon.

The NBA says that the use of the new technology will not result in job loss, a problem often associated with the implementation of artificial intelligence and automation. 

“What it’s really done for us is allow us to take our best storytellers and let them focus on all the amazing stories…while the machines are focused on the automation,” he said. 

WSC, or World’s Scouting Center, works with almost every major sports league, including the PGA Tour and the NCAA; its technology is used across 16 different sports in total. 

According to Arnon, “The NBA was always the holy grail. We are now in our sixth season and every year we’re doing more things to help the NBA lead the charge and get NBA content to more fans around the globe.” 

The company raised $23 million in Series C funding back in August, bringing its total capital to $39 million. Investors include Dan Gilbert, owner of the Cleveland Cavaliers, and the Wilf family, owners of the Minnesota Vikings. Former NBA Commissioner David Stern is an advisor to the company. 

WSC has over 120 employees and offices in Tel Aviv, New York, and Sydney, Australia, and plans to expand to Europe within the next two years.

 


Deep Learning

Researchers Create AI Tool That Can Make New Video Game Levels

As machine learning and artificial intelligence have grown more sophisticated, video games have proven to be a natural and useful proving ground for AI algorithms and models. Because video games have observable and quantifiable mechanics, objects, and metrics, they are a convenient way for AI developers to test the versatility and reliability of their models. And while video games have helped AI engineers develop their models, AI can in turn help video game designers create games. Recently, a group of researchers at the University of Alberta designed a set of algorithms that can automate the creation of simple platforming video games.

Matthew Guzdial is an assistant professor and AI researcher at the University of Alberta, and according to Time magazine, Guzdial and his team have been working on an AI algorithm that can automatically create levels in side-scrolling platforming video games. This automated level design could save game designers time and energy, allowing them to focus on more demanding tasks.

Guzdial and his team trained an AI to generate platforming game levels by having it study many hours of gameplay footage from games including the original Super Mario Bros., Kirby’s Adventure, and Mega Man. After the initial training, the AI is tasked with making predictions about the rules and mechanics of the game, comparing its assumptions against test footage. Once the AI has worked out the rules a game operates on, the researchers use a similar training method to construct entirely new levels in which the model’s rules are tested.

Guzdial and his team created a “game graph,” a merger of the model’s beliefs about a game’s rules and its assumptions about how levels built on those rules are designed. The game graph combines all of a game’s crucial features into a single representation that contains everything needed to reproduce the game from scratch. That information is then used to engineer new levels and games, with the model’s observations recombined in new, unique ways. For example, the AI combined aspects of Super Mario Bros. and Mega Man to create a new level that drew on the platforming mechanics of both games. Repeated over and over, this process could yield an entirely new game that feels very similar to classic platformers but is nonetheless unique.
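
The paper's exact game-graph representation is not reproduced in this article, but the underlying idea, one structure holding both the learned rules and the level layouts built from them so that games can be regenerated or remixed, can be sketched as a small data structure. The example below is a simplified, hypothetical illustration, not the researchers' actual model.

```python
# Simplified, hypothetical sketch of a "game graph": learned rules plus level
# layouts in one structure, so games can be regenerated or remixed.
from dataclasses import dataclass, field
import random

@dataclass
class GameGraph:
    name: str
    rules: dict = field(default_factory=dict)     # e.g. {"gravity": 0.3, "jump_height": 4}
    levels: list = field(default_factory=list)    # each level is a list of tile rows

def remix(a: GameGraph, b: GameGraph) -> GameGraph:
    """Combine rules from one game with level structure from another."""
    new_rules = {**a.rules, **{k: v for k, v in b.rules.items() if random.random() < 0.5}}
    new_levels = [random.choice(a.levels + b.levels) for _ in range(3)]
    return GameGraph(name=f"{a.name} x {b.name}", rules=new_rules, levels=new_levels)

mario = GameGraph("Super Mario Bros.", {"gravity": 0.3, "jump_height": 4},
                  [["ground", "gap", "ground"], ["ground", "pipe", "ground"]])
megaman = GameGraph("Mega Man", {"gravity": 0.4, "shoot": True},
                    [["ladder", "platform", "spikes"]])

print(remix(mario, megaman))
```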

According to Guzdial, as quoted by Time, the idea behind the project is to create a tool that game developers can use to start designing their own levels and games without needing to learn how to code. Guzdial pointed to the fact that Super Mario Maker is already taking this concept and running with it.

Guzdial and the other members of the research team hope to take the concept even further, potentially creating a tool that would let people create new levels or games just by specifying a certain “feel” or “look” they want. Once the model receives these specifications, it can go about creating a new game with unique levels and rules. The model would apparently need only two frames of a game to do this, as it would extrapolate from the differences between the two frames. The user would be able to give the model feedback as it generated levels, and the model would create new levels based on that feedback.

“We’re putting some finishing touches on the interface and then we’re going to run a human subject study to find out if we’re on the right track,” Guzdial said to Time.

Although a consumer-ready version of the application is still some way off, Guzdial expressed concern that the games industry might be slow to adopt the technology, fearing it could reduce the need for human game designers. Even so, he thought the first people likely to use the tool would be independent developers, who might use it to create interesting, experimental games.

“I can totally imagine that what we get are some passionate indie [developers] messing around with these technologies and making weird, cool, interesting little experiences,” said Guzdial. “But I don’t think they’re going to impact triple-A game development anytime soon.”


Deep Learning

Computer Algorithm Can Identify Unique Dancing Characteristics

Researchers at the Centre for Interdisciplinary Music Research at the University of Jyväskylä in Finland have spent the last few years using motion capture technology to study how people dance, as a way to better understand the connection between music and individuals. Through dance, they have been able to learn things such as how extroverted or neurotic a person is, what mood they are in, and how much they empathize with other people.

While continuing this work, they made a surprising new discovery. 

According to Dr. Emily Carlson, the first author of the study, “We actually weren’t looking for this result, as we set out to study something completely different.”

“Our original idea was to see if we could use machine learning to identify which genre of music our participants were dancing to, based on their movements.”

The study involved 73 participants, who were motion captured as they danced to eight different genres: Blues, Country, Dance/Electronica, Jazz, Metal, Pop, Reggae, and Rap. They were told to listen to the music and move their bodies in any way that felt natural.

“We think it’s important to study phenomena as they occur in the real world, which is why we employ a naturalistic research paradigm,” according to Professor Petri Toiviainen, the senior author of the study. 

The researchers analyzed the participants’ movements using machine learning, attempting to distinguish between the different musical genres. The process didn’t go as planned: the computer algorithm identified the correct genre less than 30% of the time. 

Even though the genre classification didn’t go as planned, the researchers discovered that the computer could correctly identify which of the 73 individuals was dancing based on their movements alone. The accuracy rate was 94%, compared to around 2% had the computer simply guessed without any information.
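
The study's exact machine-learning setup is not described in this article, but the basic experiment, training a classifier on motion-capture features and checking whether it can tell 73 dancers apart, can be sketched with scikit-learn. This is a generic illustration, not the researchers' method; the feature matrix here is a random placeholder standing in for real per-trial movement features.

```python
# Generic sketch of the identification experiment (not the study's exact method):
# can a classifier tell dancers apart from motion-capture movement features?
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assumed inputs: one row of movement features per dance trial (73 dancers x 8 genres),
# e.g. acceleration statistics per joint, plus the dancer ID for each trial.
rng = np.random.default_rng(0)
X = rng.normal(size=(73 * 8, 60))          # placeholder features
y = np.repeat(np.arange(73), 8)            # dancer ID labels

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=4)

chance = 1 / 73                            # roughly the ~2% "guessing" baseline
print(f"Identification accuracy: {scores.mean():.1%} (chance is about {chance:.1%})")
```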

“It seems as though a person’s dance movements are a kind of fingerprint,” says Dr. Pasi Saari, co-author of the study and data analyst. “Each person has a unique movement signature that stays the same no matter what kind of music is playing.”

The genre of music did affect how identifiable individual dance movements were. When participants danced to Metal music, the computer was less accurate at identifying who was dancing.

“There is a strong cultural association between Metal and certain types of movement, like headbanging,” Emily Carlson says. “It’s probable that Metal caused more dancers to move in similar ways, making it harder to tell them apart.”

These new developments could lead to something such as dance-recognition software.

“We’re less interested in applications like surveillance than in what these results tell us about human musicality,” Carlson explains. “We have a lot of new questions to ask, like whether our movement signatures stay the same across our lifespan, whether we can detect differences between cultures based on these movement signatures, and how well humans are able to recognize individuals from their dance movements compared to computers. Most research raises more questions than answers and this study is no exception.”

 
