Amazon’s annual re:Invent conference in Las Vegas began this week with three major AI announcements. The company presented the public with Transcribe Medical, SageMaker Operators for Kubernetes, and DeepComposer.
Billed as the biggest announcement of the three, Transcribe Medical is the newest addition to the company’s Transcribe speech recognition service. It transcribes medical speech for primary care and can handle both medical terminology and standard conversational diction.
According to the company, Transcribe Medical can be used across thousands of healthcare facilities, helping medical professionals capture notes and other important information. It offers an API and works with most smart devices containing a microphone. As the program processes speech, it returns text in real time.
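Amazon did not publish code alongside the announcement, but the batch side of the API can be sketched with boto3. A minimal, hypothetical example (the job name, bucket names, and file URI below are placeholders, and actually starting the job requires AWS credentials and the `boto3` package):

```python
# Hypothetical sketch of a batch request to Amazon Transcribe Medical.
# All resource names below are placeholders, not real AWS resources.

def build_medical_job_params(job_name, media_uri, output_bucket):
    """Assemble the request body for StartMedicalTranscriptionJob."""
    return {
        "MedicalTranscriptionJobName": job_name,
        "LanguageCode": "en-US",             # US English medical speech
        "Media": {"MediaFileUri": media_uri},
        "OutputBucketName": output_bucket,
        "Specialty": "PRIMARYCARE",          # the specialty at launch
        "Type": "CONVERSATION",              # or "DICTATION" for dictated notes
    }

def start_job(params):
    import boto3  # needs live AWS credentials to actually run
    client = boto3.client("transcribe")
    return client.start_medical_transcription_job(**params)

params = build_medical_job_params(
    "visit-notes-001",
    "s3://example-bucket/visit-001.wav",
    "example-output-bucket",
)
# start_job(params)  # commented out: requires AWS access
```

The transcript lands as JSON in the specified output bucket once the job completes.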
Transcribe Medical is currently being used by SoundLines and Amgen.
Vadim Khazan is the president of technology at SoundLines.
“For the 3,500 health care partners relying on our care team optimisation strategies for the past 15 years, we’ve significantly decreased the time and effort required to get to insightful data,” he said in a statement.
DeepComposer is an AI-enabled piano keyboard that allows AWS customers to use AI and a MIDI controller to compose music. Amazon is calling the new technology the “world’s first” machine learning-enabled musical keyboard. The keyboard has 32 keys spanning two octaves.
Composers who use the program can choose whether to record a short musical tune or use a prerecorded one. They will then select a model for their desired genre and the model’s architecture parameters. They can also set the loss function, a feature used to measure the difference between the algorithm’s output and expected value. The composer can also choose hyperparameters and a validation sample. DeepComposer then creates a composition which can either be played in the AWS console or exported or shared on SoundCloud.
DeepComposer uses a generative adversarial network (GAN) to fill in compositional gaps in songs. A generator component takes random data and uses it to create samples, which are forwarded to a discriminator component. The discriminator then tries to separate the real samples from the fake ones, and the two components improve together: the generator progressively gets better at creating samples as close to the genuine ones as possible.
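The generator/discriminator loop described above can be illustrated with a toy numpy GAN on one-dimensional data. This is purely an illustrative sketch, not Amazon’s DeepComposer model: the target distribution, learning rate, and step count are all invented.

```python
import numpy as np

# Toy 1-D GAN: generator g(z) = a*z + b tries to match samples from
# N(4, 1.5); discriminator D(x) = sigmoid(w*x + c) tries to tell them apart.

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.01

for _ in range(5000):
    real = rng.normal(4.0, 1.5, size=64)   # "genuine" samples
    z = rng.normal(size=64)
    fake = a * z + b                       # generated samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    s_real, s_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * np.mean(-(1 - s_real) * real + s_fake * fake)
    c -= lr * np.mean(-(1 - s_real) + s_fake)

    # Generator step: adjust (a, b) so the discriminator is fooled.
    s_fake = sigmoid(w * fake + c)
    dg = -(1 - s_fake) * w                 # gradient of -log D(fake) w.r.t. fake
    a -= lr * np.mean(dg * z)
    b -= lr * np.mean(dg)

print(f"generator mean ~ {b:.2f}")  # drifts toward the real data's mean
```

The same adversarial dynamic, scaled up to sequence models over notes, is what lets DeepComposer extend a short melody into a fuller arrangement.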
SageMaker Operators for Kubernetes
AWS also launched Amazon SageMaker Operators for Kubernetes, which allows data scientists to train, tune, and deploy AI models in Amazon’s SageMaker machine learning development platform. AWS customers can install SageMaker Operators on Kubernetes clusters and then create Amazon SageMaker jobs natively using the Kubernetes API and command-line Kubernetes tools.
Aditya Bindal is the AWS Deep Learning senior product manager.
“Now with Amazon SageMaker Operators for Kubernetes, customers can continue to enjoy the portability and standardization benefits of Kubernetes … along with integrating the many additional benefits that come out-of-the-box with Amazon SageMaker, no custom code required,” he wrote in a press release.
Kubernetes is an open-source, general-purpose container orchestration system used to deploy and manage containerized applications, often via a managed service like Amazon Elastic Kubernetes Service (EKS). With the operators, scientists and developers gain greater control over their training and inference workloads.
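In practice, once the operator is installed on a cluster, a training job becomes a Kubernetes custom resource submitted with kubectl. A hypothetical manifest is sketched below; the account ID, image, role ARN, and bucket are placeholders, and exact field names may vary by operator version:

```yaml
apiVersion: sagemaker.aws.amazon.com/v1
kind: TrainingJob
metadata:
  name: xgboost-example
spec:
  trainingJobName: xgboost-example
  region: us-east-1
  roleArn: arn:aws:iam::123456789012:role/sagemaker-execution-role
  algorithmSpecification:
    trainingImage: 123456789012.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest
    trainingInputMode: File
  outputDataConfig:
    s3OutputPath: s3://example-bucket/output
  resourceConfig:
    instanceType: ml.m5.large
    instanceCount: 1
    volumeSizeInGB: 10
  stoppingCondition:
    maxRuntimeInSeconds: 3600
```

Applying this with `kubectl apply -f trainingjob.yaml` would launch a SageMaker training job that can then be inspected with ordinary `kubectl get` and `kubectl describe` commands.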
AI Struggles To Master Minecraft Through Imitation Learning
Over the past few months, Microsoft and other companies researching machine learning challenged teams of AI developers to create an AI system that could play Minecraft and find a diamond within the game. As reported by the BBC, while AI systems have managed to dominate chess and Go, they have struggled to master this task in Minecraft.
Microsoft’s Minecraft-based AI challenge was called MineRL, and the competition results were formally announced at the recent NeurIPS conference. The competition’s intention was to train an AI through an “imitation learning” approach, in which an AI system learns actions by watching humans carry out those actions. Compared with reinforcement learning, imitation learning is a much less computationally expensive and substantially more efficient way of training an AI.
Reinforcement learning often requires many powerful computers networked together and hundreds or thousands of hours of training to become effective at a task. In contrast, an AI trained with an imitation learning method can be trained much more quickly, as the AI already has a baseline of knowledge to work with, courtesy of the human operators who have preceded it.
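The difference is easiest to see in code: imitation learning reduces to supervised learning on recorded human demonstrations, with no environment interaction at all. A toy behavioral-cloning sketch follows; the three-feature states and the chop/craft/mine action mapping are invented for illustration, since MineRL’s real observations are pixel frames.

```python
import numpy as np

# Toy behavioral cloning: learn a policy purely from recorded human
# (state, action) pairs, never by trial and error in the environment.

demos = [  # (state features, expert action index)
    ([1.0, 0.0, 0.0], 0),  # sees a tree  -> chop wood   (action 0)
    ([0.9, 0.1, 0.0], 0),
    ([0.0, 1.0, 0.0], 1),  # has wood     -> craft tool  (action 1)
    ([0.1, 0.9, 0.0], 1),
    ([0.0, 0.0, 1.0], 2),  # inside cave  -> mine        (action 2)
    ([0.0, 0.1, 0.9], 2),
]

X = np.array([s for s, _ in demos])
y = np.array([a for _, a in demos])

def policy(state):
    """1-nearest-neighbour policy: imitate the closest demonstration."""
    dists = np.linalg.norm(X - np.asarray(state), axis=1)
    return int(y[np.argmin(dists)])

print(policy([0.95, 0.05, 0.0]))  # -> 0, imitating the "chop" demonstrations
```

A reinforcement learner would instead have to discover the chop-craft-mine sequence by exploring and being rewarded, which is where the enormous compute cost comes from.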
Imitation learning has practical applications in training an AI where the AI cannot safely explore until it figures out the correct actions. Such scenarios would include the training of an autonomous vehicle as the car couldn’t be allowed to just roam around a street until it has learned desired behaviors. Using a human demonstrator’s data to train the vehicle could potentially make the process faster and safer.
The act of finding a diamond in Minecraft requires carrying out many steps in sequence, such as cutting down trees to make tools, exploring the caves that contain the diamonds, and actually finding a diamond within the cave. Despite the complexity of the task, a human player familiar with the game should be able to get a diamond in around 20 minutes.
Over 660 AI agents were submitted to the competition, but not a single one was able to find a diamond. The data provided to train the AIs was a dataset containing over 60 million frames of gameplay collected from many human players. The locations of diamonds are randomized when an instance of the game is started, which means the AIs cannot simply look where the human players found the diamonds. In other words, the AIs need to form an understanding of how concepts like making tools, using tools, exploring, and finding resources are linked together.
Although none of the AI agents successfully found a diamond, the organizing team was still pleased with the results of the competition, and much was still learned from the experiment. The research that the AI teams conducted can help advance the AI field by identifying alternatives to reinforcement learning strategies.
Reinforcement learning often gives superior performance over imitation learning, with one notable success of reinforcement learning being DeepMind’s AlphaGo. However, as previously noted, reinforcement learning requires massive computing resources, limiting its use by organizations that cannot afford computer processors at large scale.
William Guss, a PhD student at Carnegie Mellon University and head organizer of the competition, explained to the BBC that the MineRL competition was intended to investigate alternatives to computationally heavy AI. Said Guss:
“…Throwing massive compute at problems isn’t necessarily the right way for us to push the state of the art as a field… It works directly against democratising access to these reinforcement learning systems, and leaves the ability to train agents in complex environments to corporations with swathes of compute.”
AIs To Compete In Minecraft Machine Learning Competition
As reported by Nature, a new AI competition, the MineRL competition, will be taking place soon, encouraging AI engineers and coders to create programs capable of learning through observation and example. The test case for these AI systems will be the highly popular crafting and survival video game Minecraft.
Artificial intelligence systems have seen some impressive recent accomplishments when it comes to video games. Just recently, an AI beat the best human players in the world at the strategy game StarCraft II. However, StarCraft II has definable goals that are easier to break down into coherent steps an AI can use to train. A much more difficult task is for an AI to learn how to navigate a large, open-world sandbox game like Minecraft. Researchers are aiming to help AI programs learn through observation and example, and if they are successful, they could substantially reduce the amount of processing power needed to train an artificial intelligence program.
The participants in the competition will have four days to create an AI that will be tested in Minecraft, taking up to eight million steps to train their AI. The goal of the AI is to find a diamond within the game by digging. Eight million steps is far less training than today’s powerful AI models typically require, so the participants will need to engineer methods that drastically improve on current training approaches.
The approaches being used by the participants are based on a type of learning called imitation learning. Imitation learning stands in contrast with reinforcement learning, which is a popular method for training sophisticated systems like robotic arms in factories or the AIs capable of beating human players at StarCraft II. The primary drawback to reinforcement learning algorithms is the fact that they require immense computer processing power to train, relying on hundreds or even thousands of computers linked together to learn. By contrast, imitation learning is a much more efficient and less computationally expensive method of training. Imitation learning algorithms endeavor to mimic how humans learn by observation.
William Guss, a PhD candidate in deep-learning theory at Carnegie Mellon University explained to Nature that getting an AI to explore and learn patterns in an environment is a tremendously difficult task, but imitation learning provides the AI with a baseline of knowledge, or good prior assumptions, about the environment. This can make training an AI much quicker in comparison to reinforcement learning.
Minecraft serves as a particularly useful training environment for multiple reasons. One reason is that Minecraft allows players to use simple building blocks to create complex structures and items, and the many steps needed to create these structures serve as tangible markers of progress that researchers can use as metrics. Minecraft is also extremely popular, and because of this, it is comparatively easy to gather training data. The organizers of the MineRL competition recruited many Minecraft players to demonstrate a variety of tasks like creating tools and breaking apart blocks. By crowdsourcing the generation of data, researchers were able to capture 60 million examples of actions that could be taken in the game. The researchers gave approximately 1,000 hours of video to the competition teams.
Rohin Shah, a PhD candidate in computer science at the University of California, Berkeley, explained to Nature that this competition is likely the first to focus on using the knowledge that humans have already generated to expedite the training of AI.
Guss and the other researchers are hopeful that the contest could have results with implications beyond Minecraft, giving rise to better imitation learning algorithms and inspiring more people to consider imitation learning as a viable form of training an AI. The research could potentially help create AIs that are better capable of interacting with people in complex, changing environments.
A New AI System Could Create More Hope For People With Epilepsy
As Engadget reports, two AI researchers may have created a system that offers new hope for people suffering from epilepsy – a system “that can predict epileptic seizures with 99.6-percent accuracy,” and do it up to an hour before seizures occur.
This would not be the first such advancement: researchers at the Technical University (TU) of Eindhoven, Netherlands previously developed a smart arm bracelet that can predict epileptic seizures during the night. But the accuracy and lead time of the new AI system, as IEEE Spectrum notes, give more hope to the roughly 50 million people around the world who suffer from epilepsy (based on data from the World Health Organization). Of these patients, 70 percent can control their seizures with medication if it is taken on time.
The new AI system was created by Hisham Daoud and Magdy Bayoumi of the University of Louisiana at Lafayette, and is lauded as “a major leap forward from existing prediction methods.” As Hisham Daoud, one of the two researchers that developed the system explains, “Due to unexpected seizure times, epilepsy has a strong psychological and social effect on patients.”
As the report explains, “each person exhibits unique brain patterns, which makes it hard to accurately predict seizures.” So far, previously existing models predicted seizures “in a two-stage process, where the brain patterns must be extracted manually and then a classification system is applied,” which, as Daoud explains, added to the time needed to make a seizure prediction.
In their approach explained in a study published on 24 July in IEEE Transactions on Biomedical Circuits and Systems, “the features extraction and classification processes are combined into a single automated system, which enables earlier and more accurate seizure prediction.”
To further boost the accuracy of their system, Daoud and Bayoumi “incorporated another classification approach whereby a deep learning algorithm extracts and analyzes the spatial-temporal features of the patient’s brain activity from different electrode locations, boosting the accuracy of their model.” Since “EEG readings can involve multiple ‘channels’ of electrical activity,” to speed up the prediction process even more, the two researchers “applied an additional algorithm to identify the most appropriate predictive channels of electrical activity.”
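The channel-selection idea can be illustrated with a toy stand-in: score each EEG channel by how differently it behaves in preictal (pre-seizure) versus interictal (between-seizure) recordings, then keep only the top scorers. The variance-gap score and synthetic signals below are simplifications invented for this sketch, not the algorithm from the paper.

```python
import numpy as np

# Synthetic EEG: 8 channels x 512 samples. We pretend channels 2 and 5
# carry the pre-seizure warning signs by amplifying them in the preictal
# segment; everything else is unchanged background activity.
rng = np.random.default_rng(1)
n_channels, n_samples = 8, 512
interictal = rng.normal(0.0, 1.0, (n_channels, n_samples))
preictal = interictal.copy()
preictal[[2, 5]] *= 3.0

def select_channels(pre, inter, k):
    """Rank channels by |preictal variance - interictal variance|, keep top k."""
    score = np.abs(pre.var(axis=1) - inter.var(axis=1))
    return sorted(np.argsort(score)[-k:].tolist())

print(select_channels(preictal, interictal, 2))  # -> [2, 5]
```

Restricting the downstream classifier to the selected channels shrinks the input it must process, which is how channel selection speeds up prediction.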
The complete system was then tested on 22 patients at the Boston Children’s Hospital. While the sample size was small, the system proved to be very accurate (99.6%), and had “a low tendency for false positives, at 0.004 false alarms per hour.”
As Daoud explained, the next step would be the development of a customized computer chip to process the algorithms. “We are currently working on the design of efficient hardware [device] that deploys this algorithm, considering many issues like system size, power consumption, and latency to be suitable for practical application in a comfortable way to the patient.”