Deep Learning

AI Based on Slow Brain Dynamics


Scientists at Bar-Ilan University in Israel have used advanced experiments on neural cultures and large-scale simulations to create a new type of ultrafast artificial intelligence. The new AI is based on the slow dynamics of the human brain, which achieve better learning rates than the best learning algorithms we have today. 

Machine learning is strongly related to, and loosely based on, the dynamics of our brains. With the speed of modern computers and their large data sets, deep learning algorithms have been created that rival human experts in a variety of fields. However, these learning algorithms have different characteristics than human brains. 

The team published its work in the journal Scientific Reports, working to bridge neuroscience and advanced artificial intelligence algorithms, a connection that has been largely abandoned for decades. 

Professor Ido Kanter of Bar-Ilan University’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, the lead author of the study, commented on the two fields. 

“The current scientific and technological viewpoint is that neurobiology and machine learning are two distinct disciplines that advance independently,” he said. “The absence of expectedly reciprocal influence is puzzling.” 

“The number of neurons in a brain is less than the number of bits in a typical disc size of modern personal computers, and the computational speed of the brain is like the second hand on a clock, even slower than the first computer invented over 70 years ago,” he said. 

“In addition, the brain’s learning rules are very complicated and remote from the principles of learning steps in current artificial intelligence algorithms.” 

Professor Kanter works with a research team including Herut Uzan, Shira Sardi, Amir Goldental, and Roni Vardi. 

Brain dynamics deal with asynchronous inputs, since physical reality changes and develops continuously; as a result, the nerve cells are not synchronized. Artificial intelligence algorithms, by contrast, are based on synchronous inputs: the relative timing of different inputs within the same frame is typically ignored. 
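The distinction can be made concrete with a toy sketch (purely illustrative, not code from the study): a synchronous learner consumes a frame as an unordered batch and discards relative timing, while an asynchronous learner processes timestamped events whose spacing changes the result.

```python
# Illustrative sketch (not from the study): synchronous vs. asynchronous inputs.

def synchronous_step(frame):
    """A frame arrives as an unordered batch; relative timing is ignored."""
    return sum(frame) / len(frame)  # e.g., average activation of the frame

def asynchronous_step(events):
    """Events arrive as (timestamp, value) pairs; timing carries information."""
    events = sorted(events)  # process in temporal order
    state = 0.0
    last_t = None
    for t, value in events:
        if last_t is not None:
            state *= 0.5 ** (t - last_t)  # earlier inputs decay before the next one
        state += value
        last_t = t
    return state

frame = [1.0, 2.0, 3.0]
events = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]
print(synchronous_step(frame))    # timing-free summary: 2.0
print(asynchronous_step(events))  # timing-aware state: 4.25
```

Reordering the same values across the timestamps changes the asynchronous result but not the synchronous one, which is the property the quote above is pointing at.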

Professor Kanter went on to explain this dynamic. 

“When looking ahead one immediately observes a frame with multiple objects. For instance, while driving one observes cars, pedestrian crossings, and road signs, and can easily identify their temporal ordering and relative positions,” he said. “Biological hardware (learning rules) is designed to deal with asynchronous inputs and refine their relative information.” 

One key point of the study is that the ultrafast learning rates remain about the same whether the network is small or large. According to the researchers, “the disadvantage of the complicated brain’s learning scheme is actually an advantage.” 

The study also shows that learning can take place without discrete learning steps; it can be achieved through self-adaptation based on asynchronous inputs. In the human brain, this type of learning happens in the dendrites, the short extensions of nerve cells, so that different terminals of the same neuron learn differently. This has been observed before, but the network dynamics that arise under dendritic learning, which are governed by weak weights, were previously believed to be unimportant. 

These findings carry several implications. Efficient deep learning algorithms modeled on the brain’s very slow dynamics could, when run on fast computers, give rise to a new class of advanced artificial intelligence. 

The study also pushes for cooperation between the fields of neurobiology and artificial intelligence, which can help both fields advance further. According to the research group, “Insights of fundamental principles of our brain have to be once again at the center of future artificial intelligence.” 

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

Deep Learning

NBA Using Artificial Intelligence to Create Highlights


The National Basketball Association (NBA) will be using artificial intelligence and machine learning to create highlights for their NBA All-Star weekend. 

The league has been testing this technology since 2014. It comes from WSC Sports, an Israeli company, and is used to analyze key moments of each game in order to create highlights. One reason behind this shift is that social media is becoming increasingly important as a way to reach fans, and customized highlights can reach more people. 

During the All-Star weekend, each individual player will have his own highlight reel created by the software. 

Bob Carney is the senior vice president of social and digital strategy for the NBA. 

“This is something we wouldn’t do before when we had to do it manually and push it out across 200 social and digital platforms across the US,” he said.

“We developed this technology that identifies each and every play of the game,” said Shake Arnon, general manager of WSC North America. 

The software uses machine learning to identify key moments in games through visual, audio, and data cues, and then creates highlights to be shared on social media and elsewhere. According to WSC Sports, the company produced more than 13 million clips and highlights in 2019. 
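A hypothetical sketch of this kind of multi-cue scoring (the cue names, weights, and numbers below are illustrative, not WSC’s actual system): each candidate moment gets a weighted score across cue types, and the strongest moments are kept as highlights.

```python
# Hypothetical sketch (not WSC's system): score game moments from several
# cue types, then keep the highest-scoring ones as highlights.

moments = [
    {"time": 12.0, "visual": 0.9, "audio": 0.8, "data": 0.7},  # e.g. dunk + crowd roar
    {"time": 47.5, "visual": 0.2, "audio": 0.1, "data": 0.3},  # quiet stretch of play
    {"time": 88.3, "visual": 0.7, "audio": 0.9, "data": 0.8},  # e.g. buzzer-beater
]

def score(moment, weights={"visual": 0.4, "audio": 0.3, "data": 0.3}):
    """Weighted combination of cue strengths; the weights are illustrative."""
    return sum(weights[cue] * moment[cue] for cue in weights)

# Keep the two strongest moments as the highlight reel.
highlights = sorted(moments, key=score, reverse=True)[:2]
print([m["time"] for m in highlights])  # prints [12.0, 88.3]
```

A real system would derive the cue strengths from models over video, audio, and play-by-play data rather than hand-entered numbers, but the selection step would look broadly like this.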

“We provide them the streams of our games and they are able to identify moments in the games, which allow us to automate the creation and distribution of highlight content,” Carney said.

Carney, who has worked for the NBA for almost 20 years, says he wasn’t sure about the technology when he first met with WSC Sports. 

“We’ve heard the pitch about automated content many times…rarely can content providers do it,” he said. 

He eventually changed his mind after a pilot test with the NBA’s development league, which showcased the potential of the technology if used on a larger scale. Now, WSC technology is used on all of the NBA’s platforms, including the WNBA, G-League, and esports. 

The use of artificial intelligence has greatly reduced the time it takes to create highlights. 

“Previously, it could take an hour to cut a post-game highlights package,” Carney said. “Now it takes a few minutes to create over 1,000 highlight packages.” 

WSC’s long-term goal is personalized content, and they believe it is the future of sports highlights. They would like every individual fan to be able to receive personalized content delivered directly to them. 

“I want to be in control as a fan…We provide the tools to see what you want and when,” said Arnon.

The NBA says that the use of the new technology will not result in job loss, a problem often associated with the implementation of artificial intelligence and automation. 

“What it’s really done for us is allow us to take our best storytellers and let them focus on all the amazing stories…while the machines are focused on the automation,” he said. 

WSC, short for World’s Scouting Center, is used by nearly every major sports league, including the PGA Tour and NCAA; in total, 16 sports use the company’s technology. 

According to Arnon, “The NBA was always the holy grail. We are now in our sixth season and every year we’re doing more things to help the NBA lead the charge and get NBA content to more fans around the globe.” 

The company raised $23 million of Series C funding back in August, and their total capital is up to $39 million. Some of the investors include Dan Gilbert, owner of the Cleveland Cavaliers, and the Wilf family, owners of the Minnesota Vikings. Previous NBA Commissioner David Stern is an advisor to the company. 

WSC has over 120 employees and offices in Tel Aviv, New York, and Sydney, Australia, and plans to expand to Europe within the next two years.

 


Deep Learning

Researchers Create AI Tool That Can Make New Video Game Levels


As machine learning and artificial intelligence have become more sophisticated, video games have proven to be a natural and useful proving ground for AI algorithms and models. Because video games have observable and quantifiable mechanics, objects, and metrics, they offer a convenient way for AI developers to test the versatility and reliability of their models. And while video games have helped AI engineers develop their models, AI can in turn help video game designers create games. Recently, a group of researchers at the University of Alberta designed a set of algorithms that could automate the creation of simple platforming video games.

Matthew Guzdial is an assistant professor and AI researcher at the University of Alberta, and according to Time magazine, Guzdial and his team have been working on an AI algorithm that can automatically create levels in side-scrolling platforming video games. This automated level design could save game designers time and energy, allowing them to focus on more demanding tasks.

Guzdial and his team trained an AI to generate platforming levels by having it learn from many hours of gameplay footage from games like the original Super Mario Bros., Kirby’s Adventure, and Mega Man. After this initial training, the AI was tasked with predicting the rules and mechanics of the game, comparing its assumptions with test footage. Once the AI had interpreted the rules a game operates on, the researchers used a similar training method to construct entirely new levels in which the model’s rules could be tested.

Guzdial and his team created a “game graph”, a merger of the model’s beliefs about a game’s rules and its assumptions about how levels using those rules are designed. The game graph combines all the crucial features of a game into one representation, which therefore contains all the information needed to reproduce the game from scratch. The information in the game graph was then used to engineer new levels and games, combining the model’s observations in new, unique ways. For example, the AI combined aspects of both Super Mario Bros. and Mega Man to create a new level that drew on the platforming mechanics of both games. Repeated over and over, this process could yield an entirely new game that feels similar to classic platformers but is nonetheless unique.
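A minimal sketch of what such a combined representation might look like (the structure, field names, and rules below are hypothetical illustrations, not the paper’s actual format): one object holds both inferred mechanics and level-layout facts, and two graphs can be merged to blend games.

```python
# Hypothetical sketch of a "game graph": a single structure holding both
# learned rules (mechanics) and level-layout facts, so that a game could be
# regenerated from it. All names and values here are illustrative.

game_graph = {
    "rules": [
        # condition -> effect pairs a model might infer from gameplay footage
        {"if": "player_presses_jump", "then": "player_velocity_y = -10"},
        {"if": "player_touches_enemy", "then": "player_loses_life"},
    ],
    "level_facts": [
        # spatial regularities observed across training levels
        {"object": "pit", "often_preceded_by": "platform"},
        {"object": "enemy", "avg_spacing_tiles": 8},
    ],
}

def merge_graphs(a, b):
    """Combine two game graphs, mimicking how mechanics from different
    games (e.g. Super Mario Bros. and Mega Man) could be blended."""
    return {
        "rules": a["rules"] + b["rules"],
        "level_facts": a["level_facts"] + b["level_facts"],
    }
```

Generating a new game would then amount to sampling levels consistent with the merged `level_facts` and simulating them under the merged `rules`.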

According to Guzdial, as quoted by Time, the idea behind the project is to create a tool that game developers can use to start designing their own levels and games without needing to learn how to code. Guzdial pointed to the fact that Super Mario Maker is already taking this concept and running with it.

Guzdial and the other members of the research team hope to take the concept even further, potentially creating a tool that lets people create new levels or games just by specifying a certain “feel” or “look” they want. Once the model receives these specifications, it can go about creating a new game with unique levels and rules. The model would apparently need only two frames of a game in order to do this, as it would extrapolate from the differences between the two frames. The user would be able to give the model feedback as it generated levels, and the model would create new levels based on that feedback.

“We’re putting some finishing touches on the interface and then we’re going to run a human subject study to find out if we’re on the right track,” Guzdial said to Time.

Although a consumer-ready version of such an application is still some way off, Guzdial expressed concern that the games industry might be slow to adopt the technology, fearing it could reduce the need for human game designers. Even so, he thinks the first people to use the tool would likely be independent game developers, who might use it to create interesting, experimental games.

“I can totally imagine that what we get are some passionate indie [developers] messing around with these technologies and making weird, cool, interesting little experiences,” said Guzdial. “But I don’t think they’re going to impact triple-A game development anytime soon.”


Deep Learning

Computer Algorithm Can Identify Unique Dancing Characteristics


Researchers at the Centre for Interdisciplinary Music Research at the University of Jyväskylä in Finland have been using motion capture technology to study people and dancing over the last few years. It is being used as a way to better understand the connection between music and individuals. They have been able to learn things through dance such as how extroverted or neurotic an individual is, their mood, and how much that individual empathizes with other people.

In continuing this work, they have made a surprising new discovery. 

According to Dr. Emily Carlson, the first author of the study, “We actually weren’t looking for this result, as we set out to study something completely different.”

“Our original idea was to see if we could use machine learning to identify which genre of music our participants were dancing to, based on their movements.”

There were 73 participants in the study. Their movements were motion-captured as they danced to eight different genres: Blues, Country, Dance/Electronica, Jazz, Metal, Pop, Reggae, and Rap. They were told to listen to the music and move their bodies in any way that felt natural.

“We think it’s important to study phenomena as they occur in the real world, which is why we employ a naturalistic research paradigm,” according to Professor Petri Toiviainen, the senior author of the study. 

The researchers analyzed the participants’ movements using machine learning, attempting to distinguish which genre was being danced to. The process didn’t go as planned: the computer algorithm identified the correct genre less than 30% of the time. 

Even though the process didn’t go as planned, the researchers discovered that the computer could correctly identify which of the 73 individuals was dancing, based on their movements alone. The accuracy rate was 94%, compared with an expected accuracy of under 2% (one in 73) if the computer guessed without any information.
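A toy sketch of the idea (illustrative only, not the study’s pipeline): if each dancer has a stable “movement signature”, a simple nearest-signature classifier on noisy observations of it can identify the dancer far above the one-in-73 chance level. The features here are random synthetic vectors standing in for real motion-capture features.

```python
import random

# Illustrative sketch (not the study's method): identify a dancer from
# movement features via a nearest-signature classifier, and compare the
# accuracy with chance level for 73 dancers.

random.seed(0)
n_dancers = 73

# Each dancer gets a fixed "movement signature" (a 5-dim feature vector);
# test clips are noisy samples around that signature.
signatures = {d: [random.gauss(0, 1) for _ in range(5)] for d in range(n_dancers)}

def classify(clip):
    """Return the dancer whose signature is nearest to the clip's features."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(signatures, key=lambda d: sq_dist(signatures[d], clip))

correct = 0
trials = 200
for _ in range(trials):
    d = random.randrange(n_dancers)
    clip = [x + random.gauss(0, 0.1) for x in signatures[d]]  # noisy observation
    correct += classify(clip) == d

print(f"accuracy: {correct / trials:.0%}")   # far above chance
print(f"chance level: {1 / n_dancers:.1%}")  # about 1.4% for 73 dancers
```

The gap between classifier accuracy and the roughly 1.4% chance level mirrors, in miniature, the 94% vs. chance result reported in the study.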

“It seems as though a person’s dance movements are a kind of fingerprint,” says Dr. Pasi Saari, co-author of the study and data analyst. “Each person has a unique movement signature that stays the same no matter what kind of music is playing.”

The genre of music had an effect on how identifiable individual dance movements were. When individuals danced to Metal music, the computer was less accurate at identifying who was dancing.

“There is a strong cultural association between Metal and certain types of movement, like headbanging,” Emily Carlson says. “It’s probable that Metal caused more dancers to move in similar ways, making it harder to tell them apart.”

These findings could eventually lead to applications such as dance-recognition software.

“We’re less interested in applications like surveillance than in what these results tell us about human musicality,” Carlson explains. “We have a lot of new questions to ask, like whether our movement signatures stay the same across our lifespan, whether we can detect differences between cultures based on these movement signatures, and how well humans are able to recognize individuals from their dance movements compared to computers. Most research raises more questions than answers and this study is no exception.”

 
