
Deep Learning

NBA Using Artificial Intelligence to Create Highlights

The National Basketball Association (NBA) will be using artificial intelligence and machine learning to create highlights during its All-Star weekend.

The league has been testing this technology for several years, starting in 2014. It comes from WSC Sports, an Israeli company, and it is used to analyze the key moments of each game in order to create highlights. One of the reasons behind this shift is that social media is becoming increasingly important as a way to reach fans, and customized highlights can reach more people.

During the All-Star weekend, each individual player will have his own highlight reel created by the software. 

Bob Carney is the senior vice president of social and digital strategy for the NBA. 

“This is something we wouldn’t do before when we had to do it manually and push it out across 200 social and digital platforms across the US,” he said.

“We developed this technology that identifies each and every play of the game,” said Shake Arnon, general manager of WSC North America. 

The software uses machine learning to identify key moments in games through visual, audio, and data cues, and then assembles them into highlights that can be shared on social media and elsewhere. According to WSC Sports, the company produced more than 13 million clips and highlights in 2019.
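
To make the idea concrete, here is a purely illustrative sketch, not WSC Sports' actual system, of how visual, audio, and data cues might be blended into a single highlight score; every field name, weight, and threshold below is hypothetical.

```python
# Illustrative only: a toy scorer that blends visual, audio, and data cues.
# None of these names, weights, or thresholds come from WSC Sports.
from dataclasses import dataclass

@dataclass
class Moment:
    timestamp: float         # seconds into the broadcast
    crowd_noise: float       # audio cue, normalized to 0-1
    motion_intensity: float  # visual cue, normalized to 0-1
    score_swing: int         # data cue: points scored around this moment

def highlight_score(m: Moment) -> float:
    # Weighted blend of the three cue types; a real system would learn
    # these weights from labeled highlights rather than hand-pick them.
    return (0.4 * m.crowd_noise
            + 0.3 * m.motion_intensity
            + 0.3 * min(m.score_swing / 4.0, 1.0))

def select_highlights(moments, threshold=0.7):
    return [m for m in moments if highlight_score(m) >= threshold]

if __name__ == "__main__":
    game = [
        Moment(312.0, 0.9, 0.8, 3),  # e.g., a buzzer-beating three
        Moment(845.0, 0.2, 0.3, 0),  # e.g., a routine inbound pass
    ]
    for m in select_highlights(game):
        print(f"clip candidate at {m.timestamp:.0f}s")
```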

“We provide them the streams of our games and they are able to identify moments in the games, which allow us to automate the creation and distribution of highlight content,” Carney said.

Carney, who has worked for the NBA for almost 20 years, says he was skeptical of the technology when he first met with WSC Sports.

“We’ve heard the pitch about automated content many times…rarely can content providers do it,” he said. 

He eventually changed his mind after a pilot test with the NBA's development league, which showcased the potential of the technology if used on a larger scale. Now WSC's technology is used across all of the NBA's properties, including the WNBA, the G League, and esports.

The use of artificial intelligence has greatly reduced the time it takes to create highlights. 

“Previously, it could take an hour to cut a post-game highlights package,” Carney said. “Now it takes a few minutes to create over 1,000 highlight packages.” 

WSC’s long-term goal is personalized content, and they believe it is the future of sports highlights. They would like every individual fan to be able to receive personalized content delivered directly to them. 

“I want to be in control as a fan…We provide the tools to see what you want and when,” said Arnon.

The NBA says that the use of the new technology will not result in job loss, a problem often associated with the implementation of artificial intelligence and automation. 

“What it’s really done for us is allow us to take our best storytellers and let them focus on all the amazing stories…while the machines are focused on the automation,” he said. 

WSC, short for World's Scouting Center, works with leagues and organizations across 16 sports, including the PGA Tour and the NCAA.

According to Arnon, “The NBA was always the holy grail. We are now in our sixth season and every year we’re doing more things to help the NBA lead the charge and get NBA content to more fans around the globe.” 

The company raised $23 million in Series C funding back in August, bringing its total funding to $39 million. Investors include Dan Gilbert, owner of the Cleveland Cavaliers, and the Wilf family, owners of the Minnesota Vikings. Former NBA Commissioner David Stern is an advisor to the company.

WSC has over 120 employees and offices in Tel Aviv, New York, and Sydney, Australia, and plans to expand to Europe within the next two years.

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

Deep Learning

Uber’s Fiber Is A New Distributed AI Model Training Framework


According to VentureBeat, AI researchers at Uber have recently posted a paper to arXiv outlining a new platform intended to assist in the creation of distributed AI models. The platform is called Fiber, and it can be used to drive both reinforcement learning tasks and population-based learning. Fiber is designed to make large-scale parallel computation more accessible to non-experts, letting them take advantage of the power of distributed AI algorithms and models.

Fiber has recently been made open-source on GitHub. It is compatible with Python 3.6 and above and relies on Kubernetes running on a Linux system in a cloud environment. According to the team of researchers, the platform is capable of easily scaling up to hundreds or thousands of individual machines.

The team of researchers from Uber explains that many of the most recent and relevant advances in artificial intelligence have been driven by larger models and algorithms trained with distributed training techniques. However, creating population-based models and reinforcement learning models remains a difficult task for distributed training schemes, which frequently have issues with efficiency and flexibility. Fiber makes the distributed system more reliable and flexible by combining cluster-management software with dynamic scaling, letting users move their jobs from one machine to a large number of machines seamlessly.

Fiber is made up of three different components: an API, a backend, and a cluster layer. The API layer enables users to create things like queues, managers, and processes. The backend layer lets the user create and terminate jobs that are being managed by different clusters, and the cluster layer manages the individual clusters themselves along with their resources, which greatly reduces the number of items that Fiber has to keep track of.
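
As a rough illustration of that API layer, the sketch below assumes Fiber's stated drop-in compatibility with Python's multiprocessing interface (processes, pools, queues); exact names and behavior depend on the installed version, and distributing work across machines requires a configured Kubernetes cluster.

```python
# A sketch of Fiber's API layer, assuming the drop-in compatibility with
# Python's multiprocessing interface described by the researchers. Exact
# names and behavior may differ by version, and distributing work across
# machines requires a configured Kubernetes cluster backend.
from fiber import Pool, Process

def square(x):
    # With a cluster backend, each task may run as a job-backed process
    # on a remote machine rather than locally.
    return x * x

def greet(name):
    print("hello from a Fiber process,", name)

if __name__ == "__main__":
    # Launch a single process, mirroring multiprocessing.Process.
    p = Process(target=greet, args=("worker-0",))
    p.start()
    p.join()

    # Fan work out across a pool of workers, mirroring multiprocessing.Pool.
    pool = Pool(processes=4)
    print(pool.map(square, range(10)))
```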

Fiber enables jobs to be queued and run remotely on one local machine or many different machines, using the concept of job-backed processes. Fiber also makes use of containers to ensure things like input data and dependent packages are self-contained. The framework even includes built-in error handling, so that if a worker crashes it can be quickly revived. Fiber is able to do all of this while interacting with cluster managers, letting Fiber applications run as if they were normal applications running on a given computer cluster.

Experimental results showed that on average Fiber's response time was a few milliseconds, and that it also scaled up better than baseline AI techniques when run with 2,048 processor cores/workers. The time required to complete jobs decreased gradually as the number of workers increased. IPyParallel completed 50 iterations of training in approximately 1,400 seconds, while Fiber completed the same 50 iterations in approximately 50 seconds with 512 workers available.

The coauthors of the Fiber paper explain that Fiber is able to achieve multiple goals, such as dynamically scaling algorithms and leveraging large volumes of computing power:

“[Our work shows] that Fiber achieves many goals, including efficiently leveraging a large amount of heterogeneous computing hardware, dynamically scaling algorithms to improve resource usage efficiency, reducing the engineering burden required to make [reinforcement learning] and population-based algorithms work on computer clusters, and quickly adapting to different computing environments to improve research efficiency. We expect it will further enable progress in solving hard [reinforcement learning] problems with [reinforcement learning] algorithms and population-based methods by making it easier to develop these methods and train them at the scales necessary to truly see them shine.”


Deep Learning

Researchers Develop Computer Algorithm Inspired by Mammalian Olfactory System


Researchers from Cornell University have created a computer algorithm inspired by the mammalian olfactory system. Scientists have long sought to explain how mammals learn and identify smells. The new algorithm provides insight into the workings of the brain, and running it on a computer chip allows the algorithm to learn patterns more quickly and reliably than current machine learning models.

Thomas Cleland is a professor of psychology and senior author of the study titled “Rapid Learning and Robust Recall in a Neuromorphic Olfactory Circuit,” published in Nature Machine Intelligence on March 16.

“This is a result of over a decade of studying olfactory bulb circuitry in rodents and trying to figure out essentially how it works, with an eye towards things we know animals can do that our machines can’t,” Cleland said. 

“We now know enough to make this work. We’ve built this computational model based on this circuitry, guided heavily by things we know about the biological systems’ connectivity and dynamics,” he continued. “Then we say, if this were so, this would work. And the interesting part is that it does work.”

Intel Computer Chip

Cleland was joined by co-author Nabil Imam, a researcher at Intel, and together they applied the algorithm to an Intel computer chip. The chip is called Loihi, and it is neuromorphic, which means it is inspired by the functions of the brain. The chip has digital circuits that mimic the way in which neurons learn and communicate. 

The Loihi chip relies on parallel cores that communicate via discrete spikes, and each one of these spikes has an effect that can change depending on local activity. This requires different strategies for algorithm design than what is used in existing computer chips. 
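
For readers unfamiliar with spike-based computation, the toy model below sketches a generic leaky integrate-and-fire neuron in plain Python. It illustrates the event-driven style described here, not Loihi's actual programming interface.

```python
# A generic leaky integrate-and-fire neuron, for illustration only.
# This is a plain-Python toy model, not Loihi's programming interface.
import numpy as np

def simulate_lif(input_current, threshold=1.0, leak=0.95, dt=1.0):
    """Return the time steps at which the neuron emits a spike."""
    v = 0.0
    spikes = []
    for t, drive in enumerate(input_current):
        v = leak * v + drive * dt  # membrane potential leaks, then integrates input
        if v >= threshold:         # emit a discrete spike and reset
            spikes.append(t)
            v = 0.0
    return spikes

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    current = rng.uniform(0.0, 0.3, size=100)  # noisy input drive
    print("spike times:", simulate_lif(current))
```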

Through the use of neuromorphic computer chips, machines could work a thousand times faster than a computer’s central or graphics processing units at identifying patterns and carrying out certain tasks. 

The Loihi research chip can also run certain algorithms while using around a thousand times less power than traditional methods. This makes it well suited to the new algorithm, which can accept input patterns from many different sensors, learn patterns quickly and sequentially, and identify each of the meaningful patterns even under strong sensory interference. The algorithm is capable of successfully identifying odors even when the input pattern is an astounding 80% different from the pattern the computer originally learned.

“The pattern of the signal has been substantially destroyed,” Cleland said, “and yet the system is able to recover it.”
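
As a loose analogy for this kind of noise-tolerant recall, the sketch below uses a classic Hopfield-style associative memory to recover a stored pattern from a corrupted cue. It is a generic illustration, not the neuromorphic olfactory circuit from the paper, and the corruption level here is milder than the 80% figure cited above.

```python
# Hopfield-style associative recall: store a few patterns, corrupt one,
# and let the network dynamics recover it. Generic illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Store a few random bipolar (+1/-1) patterns with the Hebbian rule.
n_units, n_patterns = 200, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))
W = sum(np.outer(p, p) for p in patterns) / n_units
np.fill_diagonal(W, 0)

# Corrupt one stored pattern by flipping 30% of its units.
target = patterns[0].copy()
noisy = target.copy()
flip = rng.choice(n_units, size=int(0.3 * n_units), replace=False)
noisy[flip] *= -1

# Iterate the recall dynamics until the state stops changing.
state = noisy
for _ in range(20):
    new_state = np.sign(W @ state)
    new_state[new_state == 0] = 1
    if np.array_equal(new_state, state):
        break
    state = new_state

print("overlap with stored pattern:", (state == target).mean())
```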

The Mammalian Brain

The brain of a mammal is able to identify and remember smells extremely well, with thousands of olfactory receptors and complex neural networks working to analyze the patterns associated with odors. One thing mammals do better than artificial intelligence systems is retain what they have learned, even after acquiring new knowledge. In deep learning approaches, the network must be presented with everything at once, since new information can affect or even destroy what the system previously learned.

“When you learn something, it permanently differentiates neurons,” Cleland said. “When you learn one odor, the interneurons are trained to respond to particular configurations, so you get that segregation at the level of interneurons. So on the machine side, we just enhance that and draw a firm line.”

Cleland spoke about how the team came up with new experimental approaches. 

“When you start studying a biological process that becomes more intricate and complex than you can just simply intuit, you have to discipline your mind with a computer model,” he said. “You can’t fuzz your way through it. And that led us to a number of new experimental approaches and ideas that we wouldn’t have come up with just by eyeballing it.”

 


Big Data

Human Genome Sequencing and Deep Learning Could Lead to a Coronavirus Vaccine – Opinion


The AI community must collaborate with geneticists to find a treatment for those deemed most at risk from the coronavirus. A potential treatment could involve removing a person's cells, editing the DNA, and then injecting the cells back in, now hopefully armed with a successful immune response. This approach is currently being explored for other vaccines.

The first step would be sequencing the entire human genome from a sizeable segment of the human population.

Sequencing Human Genomes

Sequencing the first human genome cost $2.7 billion and took nearly 15 years to complete. The cost of sequencing an entire human genome has since dropped dramatically: as recently as 2015 it was $4,000, and it is now less than $1,000 per person. The cost could drop further when economies of scale are taken into consideration.

We need to sequence the genome of two different types of patients:

  1. Infected with coronavirus, but healthy
  2. Infected with coronavirus, but with a poor immune response

It is impossible to predict which data point will be most valuable, but each sequenced genome would provide a dataset. The more data there is, the more options there are to locate DNA variations that increase the body's resistance to the disease vector.

Nations are currently losing trillions of dollars to this outbreak; at $1,000 per human genome, the cost of sequencing is minor in comparison. A minimum of 1,000 volunteers from each segment of the population would arm researchers with a significant volume of data. Should the trial increase in size by an order of magnitude, the AI would have even more training data, which would further increase the odds of success. The more data the better, which is why a target of 10,000 volunteers should be the aim.

Machine Learning

While multiple machine learning techniques could play a role, deep learning would be used to find patterns in the data. For instance, it might reveal that certain DNA variations correspond to high immunity, while others correspond to high mortality. At a minimum, we would learn which segments of the human population are more susceptible and should be quarantined.
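
As a toy-scale sketch of this kind of supervised pattern-finding, the code below trains a small off-the-shelf neural network to separate volunteers with a strong immune response from those with a poor one, using entirely synthetic variant data. The feature encoding, sample sizes, and model are all illustrative assumptions, not a genomics pipeline.

```python
# Toy sketch: classify "healthy despite infection" vs "poor immune response"
# from synthetic genetic variant features. Illustrative assumptions only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# Pretend each volunteer is described by 500 binary variant indicators
# (e.g., presence/absence of a SNP); 2,000 volunteers in total.
n_samples, n_variants = 2000, 500
X = rng.integers(0, 2, size=(n_samples, n_variants)).astype(float)

# Synthetic ground truth: a handful of variants actually influence outcome.
causal = rng.choice(n_variants, size=10, replace=False)
logits = X[:, causal].sum(axis=1) - 5 + rng.normal(0, 1, n_samples)
y = (logits > 0).astype(int)  # 1 = healthy despite infection, 0 = poor response

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```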

To decipher this data, an Artificial Neural Network (ANN) would be hosted in the cloud, and sequenced human genomes from around the world would be uploaded to it. With time being of the essence, parallel computing would reduce the time required for the ANN to work its magic.

We could take this one step further and feed the output data sorted by the ANN into a separate system, a Recurrent Neural Network (RNN). The RNN would use reinforcement learning to identify which of the genes selected by the initial ANN is most successful in a simulated environment. The reinforcement learning agent would gamify the entire process by creating a simulated setting in which to test which DNA changes are most effective.
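
The sketch below illustrates that reinforcement-learning idea in miniature using a simple epsilon-greedy bandit: an agent repeatedly tries candidate gene edits in a made-up simulated environment and learns which edit yields the best simulated response. The edits, environment, and rewards are entirely hypothetical.

```python
# Toy epsilon-greedy bandit over hypothetical gene edits in a fake simulator.
# Nothing here models real biology; it only illustrates the learning loop.
import numpy as np

rng = np.random.default_rng(7)

n_edits = 20                              # candidate edits proposed upstream
true_effect = rng.normal(0, 1, n_edits)   # hidden "quality" of each edit

def simulate_trial(edit):
    # Noisy reward from the simulated environment for trying one edit.
    return true_effect[edit] + rng.normal(0, 0.5)

# Estimate each edit's value from repeated simulated trials.
values = np.zeros(n_edits)
counts = np.zeros(n_edits)
for step in range(5000):
    if rng.random() < 0.1:
        edit = int(rng.integers(n_edits))  # explore a random edit
    else:
        edit = int(np.argmax(values))      # exploit the current best estimate
    reward = simulate_trial(edit)
    counts[edit] += 1
    values[edit] += (reward - values[edit]) / counts[edit]

print("best edit (estimated):", int(np.argmax(values)))
print("best edit (ground truth):", int(np.argmax(true_effect)))
```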

A simulated environment is like a virtual game environment, something many AI companies are well positioned to take advantage of based on their previous success in designing AI algorithms to win at esports. This includes companies such as DeepMind and OpenAI.

These companies can use their underlying architectures, optimized for mastering video games, to create a simulated environment, test gene edits, and learn which edits lead to specific desired changes.

Once a gene is identified, another technology is used to make the edits.

CRISPR

Recently, the first-ever study using CRISPR to edit DNA inside the human body was approved. It aims to treat a rare genetic disorder that affects one of every 100,000 newborns. The condition can be caused by mutations in as many as 14 genes that play a role in the growth and operation of the retina. In this case, CRISPR carefully targets DNA and causes slight, temporary damage to the DNA strand, prompting the cell to repair itself. It is this restorative healing process which has the potential to restore eyesight.

While we are still waiting for results on whether this treatment will work, the precedent of having CRISPR approved for trials in the human body is transformational. Potential applications include improving the body's immune response to specific disease vectors.

Potentially, we can manipulate the body's natural genetic resistance to a specific disease. The diseases that could be targeted are diverse, but the community should focus on treating the new global epidemic, the coronavirus, a threat that, if left unchecked, could prove deadly to a large percentage of our population.

Final Thoughts

While there are many potential paths to success, all of them will require geneticists, epidemiologists, and machine learning specialists to unify. A potential treatment may be as described above, or it may turn out to be unimaginably different; either way, the opportunity lies in sequencing the genomes of a large segment of the population.

Deep learning is the best analysis tool that humans have ever created; we need to at a minimum attempt to use it to create a vaccine.

When we take into consideration what is currently at risk with this current epidemic, these three scientific communities need to come together to work on a cure.
