A team of scientists at the University of Kentucky that, as The Guardian reports, was able to read a scroll found in the holy ark of a synagogue in En-Gedi, Israel, containing text from the biblical book of Leviticus, is now involved in an even harder and more complex task: reading the carbonised scrolls left behind in the Italian town of Herculaneum after the eruption of Mount Vesuvius in AD 79.
While the team, led by Prof Brent Seales, was able to read the parchment found in the synagogue in En-Gedi with ‘just’ high-energy x-rays, this time around, owing to the manner in which the Herculaneum scrolls were made and written, it will have to use machine learning to solve the mysteries hidden inside them.
They will test their process on two unopened scrolls that belong to the Institut de France in Paris and are part of a collection of about 1,800 scrolls first discovered in 1752 during excavations of Herculaneum. As The Guardian points out, they make up the only known intact library from antiquity, with the majority of the collection now preserved in a museum in Naples.
Professor Seales explained the problem his team faces: “although you can see on every flake of papyrus that there is writing, to open it up would require that papyrus to be really limber and flexible – and it is not anymore.” The problem also lies in the fact that “while the En-Gedi scroll contained a metal-based ink which shows up in x-ray data, the inks used on the Herculaneum scrolls are thought to be carbon-based, made using charcoal or soot, meaning there is no obvious contrast between the writing and the papyrus in x-ray scans.”
To resolve the problem, the team has decided to use both high-energy x-rays and artificial intelligence. The method they are using starts from photographs of scroll fragments on which writing is visible to the naked eye. These are then used to “teach machine learning algorithms where ink is expected to be in x-ray scans of the same fragments, collected using a number of techniques.”
The team is guided by the concept that “the system will pick out and learn subtle differences between inked and blank areas in the x-ray scans, such as differences in the structure of papyrus fibers.” After the system is trained on these fragments, the idea is to apply it to the data from the intact scrolls and hopefully, that will reveal the text that is contained in the scrolls.
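The training setup described above can be sketched as a per-pixel classification problem. The following is a minimal illustration, not the team's actual pipeline: synthetic features stand in for x-ray measurements at each pixel (intensity plus a fiber-texture value), labels come from where the photographs show ink, and a simple logistic-regression model learns to separate the two.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for aligned training data: each pixel of an x-ray scan
# gets two features (local intensity, local fiber-texture contrast), and the
# label says whether the matching photograph shows ink at that spot.
# The feature distributions are invented for illustration.
n = 2000
ink = rng.normal(loc=[0.45, 0.9], scale=0.1, size=(n, 2))    # inked pixels
blank = rng.normal(loc=[0.55, 0.3], scale=0.1, size=(n, 2))  # blank papyrus
X = np.vstack([ink, blank])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Minimal logistic-regression training loop (plain gradient descent).
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted ink probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Once trained, the same model would be applied to scans of unopened scrolls.
pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(pred == y)
```

The real system reportedly exploits far subtler cues (such as papyrus fiber structure) and uses much more capable models, but the train-on-fragments, apply-to-intact-scrolls logic is the same.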
Seales added that the team has finished collecting the x-ray data and is now in the process of training the designated algorithms, which will then be applied to the scrolls in the coming months. “The first thing we are hoping to do is perfect the technology so that we can simply repeat it on all 900 scrolls that remain [unwrapped].”
Talking about the importance of possible discoveries, Dr Dirk Obbink, a papyrologist and classicist at the University of Oxford who is also involved in the project, said that there is a possibility that the text might be in Latin. He added that “a new historical work by Seneca the Elder was discovered among the unidentified Herculaneum papyri only last year, thus showing what uncontemplated rarities remain to be discovered there.”
Foodvisor App Uses Deep Learning to Monitor & Maintain Your Diet
Foodvisor, a startup that launched its AI-based app in France in 2018, is about to change the manner in which you track and keep to your diet plans. As TechCrunch explains, the Foodvisor app “helps you log everything you eat in order to lose weight, follow a diet or get healthier.” Users are also given the ability to input additional data by capturing a photo of the food they are about to eat.
The app works by using deep learning “to enable image recognition to detect what you’re about to eat. In addition to identifying the type of food, the app tries to estimate the weight of each item.” Using autofocus data, it also makes an evaluation of the distance between the plate of food and the phone it is on.
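The distance-based size estimate can be illustrated with a simple pinhole-camera calculation. This is a rough sketch of the general idea, not Foodvisor's actual method; the density and shape constants below are invented for illustration.

```python
# Pinhole-camera sketch: if autofocus reports the subject distance, an item's
# real-world size follows from its size in pixels.
def real_width_cm(pixel_width, distance_cm, focal_length_px):
    """Convert a detected item's width in pixels to centimetres.
    focal_length_px is the camera's focal length expressed in pixels."""
    return pixel_width * distance_cm / focal_length_px

def estimated_weight_g(width_cm, density_g_per_cm3=0.6, shape_factor=0.5):
    """Very rough weight estimate: treat the item as a squashed sphere.
    density_g_per_cm3 and shape_factor are illustrative constants only."""
    radius = width_cm / 2
    volume = shape_factor * (4 / 3) * 3.14159 * radius ** 3
    return volume * density_g_per_cm3

# Example: an item 300 px wide, photographed from 30 cm with a 1200 px focal
# length, works out to 7.5 cm across.
width = real_width_cm(pixel_width=300, distance_cm=30, focal_length_px=1200)
weight = estimated_weight_g(width)
```

A production system would combine many such cues (plate size, food class priors, learned volume models), but the geometry above is the core of why the autofocus distance matters.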
Foodvisor also allows its users to manually correct any data before the meal is logged. For many people, nutrition trackers turn out to be too demanding, and the idea behind Foodvisor is to make “the data entry process as seamless as possible.”
Finally, it produces a list of nutrition facts about what has just been consumed – calories, proteins, carbs, fats, fibers, and other essential information. The users can then set their own goals, log their nutritional activities and monitor their progress.
The app itself is free to use, but it also offers premium subscriptions that vary between $5 and $10. These subscriptions offer more analysis and diet plans, with the main feature being “that you can chat with a registered dietitian/nutritionist directly in the app.”
So far, Foodvisor has gathered 1.8 million downloads, is available on iOS and Android in French, English, German and Spanish, and has raised $1.5 million (€1.4 million). Co-founder and CMO Aurore Tran says the company has “enriched [its] database to better target the American market.”
The trend of using AI in food apps started back in 2015, when Google began developing Im2Calories, a system that counted calories based on Instagram photos. It was followed by a similar effort; as The Daily Meal reported, “researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and the Qatar Computing Research Institute created Pic2Recipe, an app that uses artificial intelligence to predict ingredients and suggests similar recipes based on looking at a picture of food.”
The same team is still trying to “improve the system to understand images of food in more detail, including identifying cooking and preparation methods. They are also interested in recommending recipes based on dietary preferences and available ingredients.”
But as AI capabilities develop, it seems that Foodvisor has taken the idea one step further.
Deep Learning Is Re-Shaping The Broadcasting Industry
Deep learning has become a buzzword in many fields, and broadcasting organizations are among those that have started to explore the potential it has to offer, from news reporting to feature films and programs, both in cinemas and on TV.
As TechRadar reported, the number of opportunities deep learning presents in video production, editing and cataloging is already quite high. But as the article notes, the technology is not limited to repetitive tasks in broadcasting, since it can also “enhance the creative process, improve video delivery and help preserve the massive video archives that many studios keep.”
As far as video generation and editing are concerned, TechRadar mentions that Warner Bros. recently had to spend $25M on reshoots for ‘Justice League’, and part of that money went to digitally removing a mustache that star Henry Cavill had grown and could not shave due to an overlapping commitment. Deep learning could certainly be put to good use in such time-consuming and financially taxing post-production processes.
Even widely available solutions like Flo make it possible to use deep learning to create a video automatically, just by describing your idea. The software then searches a given library for relevant videos and edits them together automatically.
Flo is also able to sort and classify videos, making it easier to find a particular part of the footage. Such technologies also make it possible to easily remove undesirable footage or make a personal recommendation list based on a video somebody has expressed an interest in.
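The retrieval step behind such a tool can be sketched very simply. The following is a hypothetical baseline, not Flo's actual matching logic: clips are ranked by how many words their tags share with the user's description.

```python
# Hypothetical sketch of description-to-clip matching: rank library clips by
# keyword overlap with the user's description. The clip names and tags below
# are invented for illustration.
def rank_clips(description, library):
    """library: {clip_name: set of tag words}. Returns names, best match first."""
    words = set(description.lower().split())
    return sorted(library, key=lambda name: len(words & library[name]), reverse=True)

library = {
    "beach_sunset.mp4": {"beach", "sunset", "waves"},
    "city_night.mp4": {"city", "night", "traffic"},
    "surf_day.mp4": {"beach", "surf", "waves", "day"},
}
order = rank_clips("waves crashing on a beach at sunset", library)
```

A real system would use learned embeddings of both text and video rather than literal word overlap, but the search-then-assemble structure is the same.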
Google has come up with a neural network “that can automatically separate the foreground and background of a video. What used to require a green screen can now be done with no special equipment.”
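Once a network has predicted a per-pixel foreground mask, swapping the background reduces to plain alpha compositing, which is what the green screen used to provide. A minimal sketch, assuming the mask comes from some upstream segmentation model (here faked with a hand-made array):

```python
import numpy as np

# Compositing a frame over a new background using a predicted foreground mask.
def composite(frame, mask, background):
    """frame, background: HxWx3 float arrays; mask: HxW with values in [0, 1]."""
    alpha = mask[..., None]  # add a channel axis so it broadcasts over RGB
    return alpha * frame + (1 - alpha) * background

frame = np.ones((4, 4, 3)) * 0.8   # stand-in foreground frame
background = np.zeros((4, 4, 3))   # new background (black)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0               # pretend the network found a subject here
out = composite(frame, mask, background)
```

The hard part, of course, is producing an accurate soft mask from an arbitrary video frame; that is what the neural network replaces the green screen for.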
Deepfakes have already made a name for themselves, both good and bad, and their potential use in special effects has already reached quite a high level.
One area where deep learning will certainly make a difference is the restoration of classic films: according to the UCLA Film & Television Archive, nearly half of all films produced prior to 1950 have disappeared, and 90% of classic film prints are currently in very poor condition.
Colorizing black-and-white footage is still a controversial subject among filmmakers, but those who decide to go that route can now use Nvidia tools, which significantly shorten this lengthy process: the artist colors only one frame of a scene, and deep learning does the rest. Google, meanwhile, has come up with a technology able to recreate part of a video-recorded scene based on its start and end frames.
Face and object recognition is already in active use, whether for classifying a video collection or archive, searching for clips featuring a given actor or newsperson, or tallying an actor's total screen time in a video or film. TechRadar mentions that Sky News recently used facial recognition to identify famous faces at the royal wedding.
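The screen-time use case is the simplest of these to make concrete: given per-frame recognition results, total screen time is just a frame count divided by the frame rate. A minimal sketch, with the detector output faked as a list of name sets:

```python
# Total screen time for one person, from per-frame face-recognition output.
def screen_time_seconds(detections, target, fps):
    """detections: one set of recognised names per frame, in frame order."""
    return sum(target in frame for frame in detections) / fps

# Fake detector output: a 4-frame pattern repeated to make 100 frames at 25 fps.
detections = [{"actor_a"}, {"actor_a", "actor_b"}, set(), {"actor_b"}] * 25
time_a = screen_time_seconds(detections, "actor_a", fps=25)
```

Here "actor_a" appears in 2 of every 4 frames, so over 100 frames at 25 fps the total is 2.0 seconds.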
This technology is now becoming widely used in sports broadcasting, for example to “track the movements of the ball, or to identify other key elements to the game, such as the goal.” In soccer (football), a related system known as VAR (video assistant referee) is already used as a referee's tool in many official tournaments and national leagues.
Streaming is yet another aspect of broadcasting that can benefit from deep learning. Neural networks can recreate high-definition frames from low-definition input, giving the viewer a better picture even if the original signal is not fully up to standard.
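To see what a super-resolution network competes with, here is the naive baseline: nearest-neighbour upscaling, which just duplicates pixels into blocks. A trained network instead predicts plausible high-frequency detail that interpolation cannot recover; this sketch only shows the baseline, not a neural method.

```python
import numpy as np

# Classical baseline: nearest-neighbour 2x upscaling via block duplication.
def upscale_nearest(frame, factor=2):
    """frame: HxW array -> (H*factor)x(W*factor) array of duplicated pixels."""
    return np.kron(frame, np.ones((factor, factor)))

low = np.array([[0.0, 1.0],
                [1.0, 0.0]])
high = upscale_nearest(low)  # 4x4: each pixel becomes a 2x2 block
```

Any learned super-resolution model is judged by how much sharper and more natural its output looks than this kind of duplication or smooth interpolation.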
Humans and AI on Par when Interpreting Medical Images
According to an expert study published in the British journal The Lancet Digital Health, artificial intelligence has now reached a stage where it is on a par with human experts in making medical diagnoses based on images. As the British daily The Guardian puts it, the “potential for artificial intelligence in healthcare has caused excitement, with advocates saying it will ease the strain on resources, free up time for doctor-patient interactions and even aid the development of tailored treatment.” The daily adds that in August 2019 the British government announced £250m of funding for a new NHS artificial intelligence laboratory.
In its report, the team of experts led by Dr Xiaoxuan Liu and Prof Alastair Denniston of the University Hospitals Birmingham NHS Foundation Trust, together with co-authors, focused on research papers published since 2012. They considered that the pivotal year for deep learning, the technique that underpins the use of AI in interpreting medical images, in which “a series of labeled images are fed into algorithms that pick out features within them and learn how to classify similar images. This approach has shown promise in the diagnosis of diseases from cancers to eye conditions.”
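The labeled-images-in, classifier-out pipeline described above can be illustrated in miniature. This sketch uses tiny synthetic "scans" and a nearest-centroid classifier rather than a deep network; the class names and pixel statistics are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Labelled training "images": 8x8 arrays standing in for medical scans, with
# the two classes given deliberately different pixel statistics.
healthy = rng.normal(0.3, 0.05, size=(50, 8, 8))
diseased = rng.normal(0.7, 0.05, size=(50, 8, 8))

# "Training": summarise each class by the mean of its flattened images.
centroids = {
    "healthy": healthy.reshape(50, -1).mean(axis=0),
    "diseased": diseased.reshape(50, -1).mean(axis=0),
}

def classify(image):
    """Assign an unseen image to the class with the nearest centroid."""
    flat = image.reshape(-1)
    return min(centroids, key=lambda c: np.linalg.norm(flat - centroids[c]))

label = classify(rng.normal(0.7, 0.05, size=(8, 8)))  # an unseen "diseased" scan
```

A deep network learns its own features instead of comparing raw pixels, but the logic the report describes (learn from labeled examples, then classify similar images) is the same.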
Initially, the researchers found more than 20,000 relevant studies, but only 14 of those based on human disease yielded quality data they could use: studies that “tested the deep learning system with images from a separate dataset to the one used to train it, and showed the same images to human experts.”
Based on the results culled from these 14 studies, the expert team concluded that “deep learning systems correctly detected a disease state 87% of the time – compared with 86% for healthcare professionals – and correctly gave the all-clear 93% of the time, compared with 91% for human experts.”
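The two figures quoted for the deep learning systems are sensitivity (correctly detecting disease) and specificity (correctly giving the all-clear), computed from a confusion matrix. The counts below are illustrative round numbers chosen to reproduce the percentages, not the study's actual case counts.

```python
# Sensitivity: of all truly diseased cases, how many were flagged?
def sensitivity(true_pos, false_neg):
    return true_pos / (true_pos + false_neg)

# Specificity: of all truly healthy cases, how many were cleared?
def specificity(true_neg, false_pos):
    return true_neg / (true_neg + false_pos)

sens = sensitivity(true_pos=87, false_neg=13)  # 87% of diseased cases flagged
spec = specificity(true_neg=93, false_pos=7)   # 93% of healthy cases cleared
```

Reporting both numbers matters: a model can trivially reach 100% sensitivity by flagging everyone, so it is the pair together that supports the "on a par with experts" claim.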
Talking about the study, Prof Denniston said that “the results were encouraging but the study was a reality check for some of the hype about AI.” Still, he remained optimistic about the use of AI in healthcare, saying that “such deep learning systems could act as a diagnostic tool and help tackle the backlog of scans and images.” Dr Liu added that “they could prove useful in places which lack experts to interpret images.”
On the other side of the ocean, and also related to the use of AI in medicine, it was announced that Minnesota’s Mayo Clinic, which according to Wired originated “the beginning of modern medical record-keeping in the US,” will partner with Google to securely store “the hospital’s patient data in a private corner of the company’s cloud. It’s a switch from Microsoft Azure, where Mayo has stored patient data since May of last year when it completed a years-long project to get all of its care sites onto a single electronic health record system.” At the time it was called Project Plummer, after Henry Plummer, the inventor of Mayo Clinic’s medical record-keeping system.
As Wired points out, Google is already involved in other efforts to use AI in health care, with experiments like reading medical images, analyzing genomes, predicting kidney disease, and screening for eye problems caused by diabetes. Based on the 10-year partnership, “Google plans to unleash its deep AI expertise on Mayo’s colossal collection of clinical records. The tech giant also plans to establish an office in Rochester, Minnesota, to support the partnership, but declined to say how many employees will staff it or when it will open.”
To overcome possible regulatory and legal problems that Google has previously had, like the one with “an app called Streams that its DeepMind subsidiary is developing into an AI-powered assistant for doctors and nurses,” Mayo Clinic has announced that “Google will be contractually prohibited from combining Mayo clinical data with any other datasets, according to a hospital spokesperson. That means that whatever data Google has about a person through its consumer-facing services, such as Gmail, Google Maps, and YouTube, can’t be combined with caches of scrubbed Mayo medical records.”