A team of scientists at the University of Kentucky that, as The Guardian reports, previously managed to read a charred scroll found in the holy ark of a synagogue at En-Gedi in Israel, containing text from the biblical book of Leviticus, is now involved in an even harder and more complex task – reading the carbonised scrolls left behind when Mount Vesuvius erupted in AD 79 and buried the Roman town of Herculaneum.
While the team, led by Prof Brent Seales, was able to read the parchment found in the synagogue at En-Gedi, Israel, with ‘just’ high-energy x-rays, this time around, because of the way the Herculaneum scrolls were made and written, it will have to use machine learning to try to solve the mysteries hidden in these scrolls.
They will test their technique on two unopened scrolls that belong to the Institut de France in Paris, part of a collection of about 1,800 scrolls first discovered in 1752 during excavations of Herculaneum. As The Guardian points out, they make up the only known intact library from antiquity, with the majority of the collection now preserved in a museum in Naples.
Professor Seales explained the problem his team faces – “although you can see on every flake of papyrus that there is writing, to open it up would require that papyrus to be really limber and flexible – and it is not anymore.” A further complication is that “while the En-Gedi scroll contained a metal-based ink which shows up in x-ray data, the inks used on the Herculaneum scrolls are thought to be carbon-based, made using charcoal or soot, meaning there is no obvious contrast between the writing and the papyrus in x-ray scans.”
To resolve the problem, the team has decided to combine high-energy x-rays with artificial intelligence. The method involves taking photographs of scroll fragments with writing visible to the naked eye. These are then used to “teach machine learning algorithms where ink is expected to be in x-ray scans of the same fragments, collected using a number of techniques.”
The team is guided by the idea that “the system will pick out and learn subtle differences between inked and blank areas in the x-ray scans, such as differences in the structure of papyrus fibers.” After the system is trained on these fragments, it will be applied to the data from the intact scrolls, hopefully revealing the text they contain.
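The supervised setup described above can be sketched in a few lines. This is a toy illustration, not the team's actual pipeline: the "features" below are random stand-ins for per-pixel texture statistics an x-ray scan might yield, and the labels play the role of ink locations read off the photographed fragments.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for x-ray features of fragment pixels.
# Each pixel gets 5 texture features; inked papyrus differs
# from blank papyrus only by a subtle shift, as in the scans.
n = 2000
blank = rng.normal(0.0, 1.0, size=(n, 5))
inked = rng.normal(0.4, 1.0, size=(n, 5))   # subtle contrast
X = np.vstack([blank, inked])
y = np.array([0] * n + [1] * n)             # labels from photographs

# Train on the labelled fragments...
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# ...then apply the model to pixels from an unopened scroll's scan.
scan = rng.normal(0.4, 1.0, size=(10, 5))   # unseen pixels
ink_probability = clf.predict_proba(scan)[:, 1]
```

The output is a per-pixel probability of ink, which could then be rendered as an image to make the hidden writing legible.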
Seales added that the team has finished collecting the x-ray data and is now training the designated algorithms, which will be applied to the scrolls in the coming months. “The first thing we are hoping to do is perfect the technology so that we can simply repeat it on all 900 scrolls that remain [unwrapped].”
Talking about the importance of possible discoveries, Dr. Dirk Obbink, a papyrologist and classicist at the University of Oxford who is also involved in the project, said that the text might turn out to be in Latin. He added that “a new historical work by Seneca the Elder was discovered among the unidentified Herculaneum papyri only last year, thus showing what uncontemplated rarities remain to be discovered there.”
Computer Algorithm Can Identify Unique Dancing Characteristics
Researchers at the Centre for Interdisciplinary Music Research at the University of Jyväskylä in Finland have spent the last few years using motion capture technology to study how people dance, as a way to better understand the connection between music and the individual. Through dance they have been able to learn things such as how extroverted or neurotic a person is, their mood, and how much they empathize with other people.
By continuing this work, they have run into a surprising new discovery.
According to Dr. Emily Carlson, the first author of the study, “We actually weren’t looking for this result, as we set out to study something completely different.”
“Our original idea was to see if we could use machine learning to identify which genre of music our participants were dancing to, based on their movements.”
There were 73 participants in the study. Their movements were motion-captured as they danced to eight different genres: Blues, Country, Dance/Electronica, Jazz, Metal, Pop, Reggae and Rap. They were told to listen to the music and move their bodies in any way that felt natural.
“We think it’s important to study phenomena as they occur in the real world, which is why we employ a naturalistic research paradigm,” according to Professor Petri Toiviainen, the senior author of the study.
The researchers analyzed the participants’ movements using machine learning, attempting to distinguish between the different musical genres. The process didn’t go as planned: the algorithm identified the correct genre less than 30% of the time.
Even though the process didn’t go as planned, the researchers discovered that the computer could correctly identify which of the 73 participants was dancing, based on their movements alone. The accuracy rate was 94%, compared with the roughly 2% that would be expected if the computer simply guessed.
“It seems as though a person’s dance movements are a kind of fingerprint,” says Dr. Pasi Saari, co-author of the study and data analyst. “Each person has a unique movement signature that stays the same no matter what kind of music is playing.”
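That identification task can be sketched with a hedged, synthetic example: each "dancer" below gets a fixed signature vector, each recording adds genre-dependent variation on top of it, and a classifier is cross-validated on recognizing the dancer. The feature construction is invented for illustration and is much simpler than real motion-capture data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_dancers, n_genres, n_feats = 20, 8, 12

# Each dancer has a stable "movement signature"; each recording
# adds genre-dependent variation on top of it.
signatures = rng.normal(0, 1, size=(n_dancers, n_feats))
X, y = [], []
for d in range(n_dancers):
    for g in range(n_genres):
        X.append(signatures[d] + rng.normal(0, 0.3, n_feats))
        y.append(d)
X, y = np.array(X), np.array(y)

# Chance level here is 1/20 = 5%; a classifier picking up the
# signatures should do far better, echoing the study's 94% vs 2%.
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=4)
```

Because the signature dominates the genre-level noise, cross-validated accuracy lands near 100% on this synthetic data, mirroring the "fingerprint" effect the researchers describe.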
The genre of the music also affected how recognizable individual dancers were. When individuals danced to Metal music, the computer was less accurate at identifying them.
“There is a strong cultural association between Metal and certain types of movement, like headbanging,” Emily Carlson says. “It’s probable that Metal caused more dancers to move in similar ways, making it harder to tell them apart.”
These findings could eventually lead to applications such as dance-recognition software.
“We’re less interested in applications like surveillance than in what these results tell us about human musicality,” Carlson explains. “We have a lot of new questions to ask, like whether our movement signatures stay the same across our lifespan, whether we can detect differences between cultures based on these movement signatures, and how well humans are able to recognize individuals from their dance movements compared to computers. Most research raises more questions than answers and this study is no exception.”
AI Struggles To Master Minecraft Through Imitation Learning
Over the past few months, Microsoft and other companies researching machine learning challenged teams of AI developers to create an AI system that could play Minecraft and find a diamond within the game. As reported by the BBC, while AI systems have managed to dominate chess and Go, they have struggled to master this task in Minecraft.
Microsoft’s Minecraft-based AI challenge was called MineRL, and the competition results were formally announced at the recent NeurIPS conference. The competition’s intention was to train an AI through an “imitation learning” approach, in which an AI system learns actions by observing humans carrying them out. Compared with reinforcement learning, imitation learning is a much less computationally expensive and substantially more efficient way of training an AI.
Reinforcement learning often requires many powerful computers networked together and hundreds or thousands of hours of training to become effective at a task. In contrast, an AI trained with an imitation learning method can be trained much more quickly, as it already has a baseline of knowledge to work with, courtesy of the human operators who preceded it.
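In its simplest form, behavioral cloning, imitation learning is just supervised learning on recorded (state, action) pairs. The toy task below, a one-dimensional agent imitating a demonstrator who always steps toward a goal, is an invented illustration, vastly simpler than MineRL's 60-million-frame dataset.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

# Toy task: an agent at position p must reach the goal at 0.
# The "human demonstrator" steps left if p > 0, else right.
def expert_action(p):
    return 0 if p > 0 else 1     # 0 = move left, 1 = move right

# Record demonstrations as (state, action) pairs.
states = rng.uniform(-10, 10, size=(500, 1))
actions = np.array([expert_action(p) for p in states[:, 0]])

# Behavioral cloning = plain supervised learning on those pairs.
policy = DecisionTreeClassifier().fit(states, actions)

# The cloned policy imitates the demonstrator on unseen states.
print(policy.predict([[3.0], [-7.5]]))   # expected: [0 1]
```

The appeal is that no trial-and-error interaction with the environment is needed; the drawback, as the MineRL results showed, is that cloning alone struggles when the task requires long chains of linked decisions.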
Imitation learning has practical applications for training an AI that cannot safely explore until it figures out the correct actions. One such scenario is training an autonomous vehicle, since the car cannot simply be allowed to roam the streets until it has learned the desired behaviors. Using a human demonstrator’s data to train the vehicle could potentially make the process faster and safer.
The act of finding a diamond in Minecraft requires carrying out many steps in sequence, such as cutting down trees to make tools, exploring the caves that contain the diamonds, and actually finding a diamond within the cave. Despite the complexity of the task, a human player familiar with the game should be able to get a diamond in around 20 minutes.
Over 660 different AI agents were submitted to the competition, but not a single one was able to find a diamond. The training data provided was a dataset of over 60 million frames of gameplay collected from many human players. The locations of diamonds are randomized each time a game is started, which means the AIs cannot simply look where the human players found them. In other words, the AIs need to form an understanding of how concepts like making tools, using tools, exploring, and finding resources are linked together.
Although none of the AI agents was able to find a diamond, the organizing team was still pleased with the results of the competition, and much was learned from the experiment. The research the AI teams conducted can help advance the field by uncovering alternatives to reinforcement learning strategies.
Reinforcement learning often gives superior performance over imitation learning, one notable success being DeepMind’s AlphaGo. However, as previously noted, reinforcement learning requires massive computing resources, limiting its use among organizations that cannot afford computer processors at large scale.
William Guss, a PhD student at Carnegie Mellon University and head organizer of the competition, explained to the BBC that the MineRL competition was intended to investigate alternatives to computationally heavy AI. Said Guss:
“…Throwing massive compute at problems isn’t necessarily the right way for us to push the state of the art as a field… It works directly against democratising access to these reinforcement learning systems, and leaves the ability to train agents in complex environments to corporations with swathes of compute.”
Amazon Announces DeepComposer and Other AI Technology
Amazon’s annual re:Invent conference in Las Vegas began this week with three major AI announcements. The company presented the public with Transcribe Medical, SageMaker Operators for Kubernetes, and DeepComposer.
Billed as the biggest announcement of the three, Transcribe Medical is the newest addition to the company’s Transcribe speech recognition service. It transcribes medical speech for primary care, handling medical terminology as well as standard conversational diction.
According to the company, Transcribe Medical can be used across thousands of healthcare facilities and will help medical professionals capture notes and other important information. It offers an API and works with most microphone-equipped smart devices. The program processes speech and returns text in real time.
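As a hedged sketch of how a batch job can be submitted through that API: the parameter names below follow AWS's `start_medical_transcription_job` call, but the job name, bucket, and audio file are invented placeholders, and the actual AWS call (which needs credentials) is left commented out.

```python
# Sketch of a Transcribe Medical batch request; the bucket, job
# name, and audio URI are placeholders, not real resources.
def build_medical_transcription_request(job_name, audio_uri, output_bucket):
    return {
        "MedicalTranscriptionJobName": job_name,
        "LanguageCode": "en-US",            # US English at launch
        "MediaFormat": "wav",
        "Media": {"MediaFileUri": audio_uri},
        "OutputBucketName": output_bucket,
        "Specialty": "PRIMARYCARE",         # primary-care vocabulary
        "Type": "CONVERSATION",             # or "DICTATION"
    }

params = build_medical_transcription_request(
    "visit-2019-12-03",
    "s3://example-bucket/visit.wav",
    "example-output-bucket",
)
# With AWS credentials configured, the job would be submitted via:
# import boto3
# boto3.client("transcribe").start_medical_transcription_job(**params)
```

The `Type` field distinguishes doctor-patient conversations from single-speaker dictation, matching the service's two described modes.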
Transcribe Medical is currently being used by SoundLines and Amgen.
Vadim Khazan is the president of technology at SoundLines.
“For the 3,500 health care partners relying on our care team optimisation strategies for the past 15 years, we’ve significantly decreased the time and effort required to get to insightful data,” he said in a statement.
DeepComposer is an AI-enabled piano keyboard that lets AWS customers compose music using AI and a MIDI controller. Amazon is calling the new technology the “world’s first” machine learning-enabled musical keyboard. It is a two-octave keyboard with 32 keys.
Composers who use the program can choose to record a short musical tune or use a prerecorded one. They then select a model for their desired genre and the model’s architecture parameters. They can also set the loss function, which measures the difference between the algorithm’s output and the expected value, as well as hyperparameters and a validation sample. DeepComposer then creates a composition, which can be played in the AWS console, exported, or shared on SoundCloud.
DeepComposer uses a generative adversarial network (GAN) to fill in compositional gaps in songs. A generator component takes random data and uses it to create samples, which are passed to a discriminator component that tries to separate the real samples from the generated ones. The two improve together: as the discriminator sharpens, the generator progressively gets better at creating samples as close to the genuine ones as possible.
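Amazon has not published DeepComposer's model internals, so purely as an illustration of the generator-discriminator loop just described, here is a deliberately tiny GAN in plain numpy that learns to match a one-dimensional "real" distribution (a stand-in for musical data; real music GANs operate on piano-roll or audio representations).

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

real_mean = 4.0   # "real" samples: a 1-D stand-in for genuine data

mu = 0.0          # generator parameter: g(z) = mu + z
a, b = 0.1, 0.0   # discriminator parameters: D(x) = sigmoid(a*x + b)
lr, batch = 0.05, 64

for step in range(3000):
    x_real = rng.normal(real_mean, 1.0, batch)
    x_fake = mu + rng.normal(0.0, 1.0, batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(a * x_real + b), sigmoid(a * x_fake + b)
    grad_a = -np.mean((1 - d_real) * x_real) + np.mean(d_fake * x_fake)
    grad_b = -np.mean(1 - d_real) + np.mean(d_fake)
    a -= lr * grad_a
    b -= lr * grad_b

    # Generator step: move generated samples toward where D says "real".
    d_fake = sigmoid(a * x_fake + b)
    grad_mu = -np.mean((1 - d_fake) * a)
    mu -= lr * grad_mu

# mu should land near the real mean of 4.0
```

The adversarial dynamic is visible even at this scale: the discriminator's gradient tells the generator which direction makes its output look more "real", and training settles once the two distributions are hard to tell apart.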
SageMaker Operators for Kubernetes
AWS also launched Amazon SageMaker Operators for Kubernetes, which lets data scientists train, tune, and deploy AI models in Amazon’s SageMaker machine learning development platform. AWS customers can install SageMaker Operators on their Kubernetes clusters and then create Amazon SageMaker jobs natively using the Kubernetes API and command-line Kubernetes tools.
Aditya Bindal is the AWS Deep Learning senior product manager.
“Now with Amazon SageMaker Operators for Kubernetes, customers can continue to enjoy the portability and standardization benefits of Kubernetes … along with integrating the many additional benefits that come out-of-the-box with Amazon SageMaker, no custom code required,” Bindal wrote in a press release.
Kubernetes is an open-source, general-purpose container orchestration system used to deploy and manage containerized applications, often via a managed service such as Amazon Elastic Kubernetes Service (EKS). With the new operators, scientists and developers gain greater control over their training and inference workloads.
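As an illustration of what "creating SageMaker jobs natively with the Kubernetes API" looks like in practice, here is a sketch of a TrainingJob manifest. The field names follow the operator's custom resource as documented at launch, but treat the structure as approximate and every specific value (account, image, role, bucket) as an invented placeholder.

```yaml
# Hypothetical manifest; all ARNs, images, and buckets are placeholders.
apiVersion: sagemaker.aws.amazon.com/v1
kind: TrainingJob
metadata:
  name: example-training-job
spec:
  roleArn: arn:aws:iam::123456789012:role/example-sagemaker-role
  region: us-east-1
  algorithmSpecification:
    trainingImage: 123456789012.dkr.ecr.us-east-1.amazonaws.com/example-image:latest
    trainingInputMode: File
  outputDataConfig:
    s3OutputPath: s3://example-bucket/output
  resourceConfig:
    instanceCount: 1
    instanceType: ml.m4.xlarge
    volumeSizeInGB: 5
  stoppingCondition:
    maxRuntimeInSeconds: 86400
```

Applied with `kubectl apply -f training-job.yaml`, the operator translates the resource into a SageMaker training job and surfaces its status back through standard tooling such as `kubectl get trainingjobs`.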