
Healthcare

Humans and AI on Par when Interpreting Medical Images


According to an expert study published in the British journal The Lancet Digital Health, artificial intelligence has now reached a stage where it is on a par with human experts in making medical diagnoses based on images. As the British daily The Guardian puts it, the “potential for artificial intelligence in healthcare has caused excitement, with advocates saying it will ease the strain on resources, free up time for doctor-patient interactions and even aid the development of tailored treatment.” The daily adds that in August 2019 the British government announced £250m of funding for a new NHS artificial intelligence laboratory.

In its report, the team of experts led by Dr Xiaoxuan Liu and Prof Alastair Denniston at the University Hospitals Birmingham NHS Foundation Trust, together with their co-authors, focused on research papers published since 2012. They consider that the pivotal year for deep learning, the technique that underpins the use of AI to interpret medical images, in which “a series of labeled images are fed into algorithms that pick out features within them and learn how to classify similar images. This approach has shown promise in the diagnosis of diseases from cancers to eye conditions.”
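To make the quoted description concrete, here is a minimal, purely illustrative sketch of that kind of supervised training loop: labeled images are fed to a small convolutional network that learns to separate two classes. Random tensors stand in for real scans, and this is not the pipeline used in any of the reviewed studies.

```python
import torch
from torch import nn

# Tiny CNN that learns to classify images into two classes,
# e.g. "disease" vs. "no disease" (illustrative only).
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(8 * 4 * 4, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder "labeled images": random tensors stand in for scans,
# random 0/1 labels stand in for expert annotations.
images = torch.randn(64, 1, 28, 28)
labels = torch.randint(0, 2, (64,))

for epoch in range(5):
    logits = model(images)            # network picks out features and scores each class
    loss = loss_fn(logits, labels)    # compare predictions with the expert labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```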

Initially, the researchers found more than 20,000 relevant studies, but only 14 of those based on human disease provided quality data they could use, “tested the deep learning system with images from a separate dataset to the one used to train it, and showed the same images to human experts.”

Based on their results culled from these 14 studies, the expert team concluded that “deep learning systems correctly detected a disease state 87% of the time – compared with 86% for healthcare professionals – and correctly gave the all-clear 93% of the time, compared with 91% for human experts.”
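The two figures quoted are, in effect, sensitivity (how often a disease state is correctly detected) and specificity (how often the all-clear is correctly given). A short illustrative calculation, using made-up confusion-matrix counts rather than any numbers from the review, shows how such percentages are derived:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = share of diseased cases flagged; specificity = share of healthy cases cleared."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts chosen only to illustrate the arithmetic.
sens, spec = sensitivity_specificity(tp=87, fn=13, tn=93, fp=7)
print(f"sensitivity: {sens:.0%}, specificity: {spec:.0%}")  # sensitivity: 87%, specificity: 93%
```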

Speaking about the study, Prof Denniston said that “the results were encouraging but the study was a reality check for some of the hype about AI.” Still, he remained optimistic about the use of AI in healthcare, saying that “such deep learning systems could act as a diagnostic tool and help tackle the backlog of scans and images.” Dr Liu added that “they could prove useful in places which lack experts to interpret images.”

On the other side of the ocean, and also related to the use of AI in medicine, it was announced that Minnesota’s Mayo Clinic, which according to Wired marked “the beginning of modern medical record-keeping in the US,” will partner with Google to securely store “the hospital’s patient data in a private corner of the company’s cloud. It’s a switch from Microsoft Azure, where Mayo has stored patient data since May of last year, when it completed a years-long project to get all of its care sites onto a single electronic health record system.” That effort was called Project Plummer, after Henry Plummer, the inventor of Mayo Clinic’s medical record-keeping system.

As Wired points out, Google is already involved in other efforts to use AI in health care, with experiments like reading medical images, analyzing genomes, predicting kidney disease, and screening for eye problems caused by diabetes. Under the 10-year partnership, “Google plans to unleash its deep AI expertise on Mayo’s colossal collection of clinical records. The tech giant also plans to establish an office in Rochester, Minnesota, to support the partnership, but declined to say how many employees will staff it or when it will open.”

To avoid the kind of regulatory and legal problems Google has previously run into, such as those surrounding “an app called Streams that its DeepMind subsidiary is developing into an AI-powered assistant for doctors and nurses,” Mayo Clinic has announced that “Google will be contractually prohibited from combining Mayo clinical data with any other datasets, according to a hospital spokesperson. That means that whatever data Google has about a person through its consumer-facing services, such as Gmail, Google Maps, and YouTube, can’t be combined with caches of scrubbed Mayo medical records.”


Deep Learning

New Research Shows How AI Can Act as Mediators


New research out of Cornell University shows how artificial intelligence (AI) can play a role in mediating conversations. This comes during a time of social distancing and remote conversations due to the pandemic.

According to the new study, humans trusted AI systems more than the actual people they were talking to when having difficult conversations. The AI systems in question were “smart” reply suggestions in text messages.

The new study is titled “AI as a Moral Crumple Zone: The Effects of Mediated AI Communication on Attribution and Trust.” It was published online in the journal Computers in Human Behavior.

Jess Hohenstein is a doctoral student in the field of information science. He is the paper’s first author.

“We find that when things go wrong, people take the responsibility that would otherwise have been designated to their human partner and designate some of that to the artificial intelligence system,” said Hohenstein. “This introduces a potential to take AI and use it as a mediator in our conversations.”

Detect When Things Go Bad 

During a conversation, the algorithm can analyze language to detect the moment when things are going bad. It can then suggest certain conflict-resolution strategies, according to Hohenstein.
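The paper does not publish an implementation, but the idea can be sketched with a toy heuristic: score each message for hostile language and, past a threshold, surface a conflict-resolution suggestion. The keyword list, threshold, and suggested reply below are all invented for illustration and are not the researchers' system.

```python
# Toy illustration of a mediator that flags escalating language and
# offers a de-escalating reply suggestion (hypothetical heuristic).
HOSTILE_WORDS = {"never", "always", "ridiculous", "whatever", "useless"}  # invented list

def escalation_score(message: str) -> float:
    """Fraction of words in the message that sound hostile."""
    words = message.lower().split()
    return sum(w.strip(".,!?") in HOSTILE_WORDS for w in words) / max(len(words), 1)

def suggest_reply(message: str, threshold: float = 0.2):
    """Return a conflict-resolution suggestion once the conversation 'goes bad'."""
    if escalation_score(message) >= threshold:
        return "I hear you. Can you help me understand what bothered you most?"
    return None

print(suggest_reply("You always do this and it's ridiculous"))
```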

The study’s main goal was to look at the subtle yet significant ways that AI systems, like smart replies, can alter how humans interact. According to the researchers, something as small as selecting a suggested reply that is not completely accurate can drastically change different aspects of a conversation. People often select that language to save time typing, yet it can have a direct effect on their relationships.

Malte Jung is co-author of the study and assistant professor of information science. He is also director of the Robots in Groups lab, which studies how robots change group dynamics. 

“Communication is so fundamental to how we form perceptions of each other, how we form and maintain relationships, or how we’re able to accomplish anything working together,” said Jung.

“This study falls within the broader agenda of understanding how these new AI systems mess with our capacity to interact,” Jung continued. “We often think about how the design of systems affects how we interact with them, but fewer studies focus on the question of how the technologies we develop affect how people interact with each other.”

Better Understanding of Human Interaction

The study can help understand the ways in which people perceive and interact with computers. It can also help improve human communication, through the use of subtle guidance and AI reminders.

Hohenstein and Jung wanted to find out whether the AI system, like the crumple zone in a car, could absorb the “crash” of a conversation.

“There’s a physical mechanism in the front of the car that’s designed to absorb the force of the impact and take responsibility for minimizing the effects of the crash,” Hohenstein said. “Here we see the AI system absorb some of the moral responsibility.”

The research was supported in part by the National Science Foundation. 

 


Deep Learning

Uber’s Fiber Is A New Distributed AI Model Training Framework


According to VentureBeat, AI researchers at Uber have recently posted a paper to arXiv outlining a new platform intended to assist in the creation of distributed AI models. The platform is called Fiber, and it can be used to drive both reinforcement learning tasks and population-based learning. Fiber is designed to make large-scale parallel computation more accessible to non-experts, letting them take advantage of the power of distributed AI algorithms and models.

Fiber has recently been made open-source on GitHub. It is compatible with Python 3.6 and above and requires Kubernetes running on a Linux system in a cloud environment. According to the team of researchers, the platform is capable of easily scaling up to hundreds or thousands of individual machines.

The team of researchers from Uber explains that many of the most recent and relevant advances in artificial intelligence have been driven by larger models and algorithms trained using distributed training techniques. However, training population-based and reinforcement learning models remains a difficult task for distributed training schemes, as they frequently have issues with efficiency and flexibility. Fiber makes the distributed system more reliable and flexible by combining cluster management software with dynamic scaling and letting users move their jobs from one machine to a large number of machines seamlessly.

Fiber is made up of three components: an API, a backend, and a cluster layer. The API layer enables users to create things like queues, managers, and processes. The backend layer lets the user create and terminate jobs that are being managed by different clusters, and the cluster layer manages the individual clusters themselves along with their resources, which greatly reduces the number of items that Fiber has to keep track of.
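In practice, the API layer is modeled on Python’s built-in multiprocessing interface, so familiar constructs such as pools map onto cluster jobs. The sketch below assumes Fiber’s documented multiprocessing-style usage; treat the import path and class names as illustrative rather than authoritative.

```python
# Minimal sketch of Fiber's multiprocessing-style API (per the project's
# documentation); on a Kubernetes cluster each worker runs as a container job.
from fiber import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    pool = Pool(processes=4)               # workers may be scheduled on remote machines
    print(pool.map(square, range(10)))     # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```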

Fiber enables jobs to be queued and run remotely on one local machine or many different machines, utilizing the concept of job-backed processes. Fiber also makes use of containers to ensure things like input data and dependent packages are self-contained. The framework even includes built-in error handling so that if a worker crashes it can be quickly revived. Fiber is able to do all of this while interacting with cluster managers, letting Fiber apps run as if they were normal apps running on a given computer cluster.

Experimental results showed that on average Fiber’s response time was a few milliseconds and that it also scaled up better than baseline AI techniques when built with 2,048 processor cores/workers. The length of time required to complete jobs decreased gradually as the set number of workers increased. IPyParallel completed 50 iterations of training in approximately 1400 seconds, while Fiber was able to complete the same 50 iterations of training in approximately 50 seconds with 512 workers available.

The coauthors of the Fiber paper explain that Fiber is able to achieve multiple goals, such as dynamically scaling algorithms and leveraging large volumes of computing power:

“[Our work shows] that Fiber achieves many goals, including efficiently leveraging a large amount of heterogeneous computing hardware, dynamically scaling algorithms to improve resource usage efficiency, reducing the engineering burden required to make [reinforcement learning] and population-based algorithms work on computer clusters, and quickly adapting to different computing environments to improve research efficiency. We expect it will further enable progress in solving hard [reinforcement learning] problems with [reinforcement learning] algorithms and population-based methods by making it easier to develop these methods and train them at the scales necessary to truly see them shine.”


Deep Learning

Researchers Develop Computer Algorithm Inspired by Mammalian Olfactory System


Researchers from Cornell University have created a computer algorithm inspired by the mammalian olfactory system. Scientists have long sought out explanations of how mammals learn and identify smells. The new algorithm provides insight into the workings of the brain, and applying it to a computer chip allows it to quickly and reliably learn patterns better than current machine learning models. 

Thomas Cleland is a professor of psychology and senior author of the study titled “Rapid Learning and Robust Recall in a Neuromorphic Olfactory Circuit,” published in Nature Machine Intelligence on March 16.

“This is a result of over a decade of studying olfactory bulb circuitry in rodents and trying to figure out essentially how it works, with an eye towards things we know animals can do that our machines can’t,” Cleland said. 

“We now know enough to make this work. We’ve built this computational model based on this circuitry, guided heavily by things we know about the biological systems’ connectivity and dynamics,” he continued. “Then we say, if this were so, this would work. And the interesting part is that it does work.”

Intel Computer Chip

Cleland was joined by co-author Nabil Imam, a researcher at Intel, and together they applied the algorithm to an Intel computer chip. The chip is called Loihi, and it is neuromorphic, which means it is inspired by the functions of the brain. The chip has digital circuits that mimic the way in which neurons learn and communicate. 

The Loihi chip relies on parallel cores that communicate via discrete spikes, and each one of these spikes has an effect that can change depending on local activity. This requires different strategies for algorithm design than what is used in existing computer chips. 

Through the use of neuromorphic computer chips, machines could work a thousand times faster than a computer’s central or graphics processing units at identifying patterns and carrying out certain tasks. 

The Loihi research chip can also run certain algorithms while using around a thousand times less power than traditional methods. That efficiency suits the new algorithm well: it can accept input patterns from various sensors, learn patterns quickly and sequentially, and identify each of the meaningful patterns even under strong sensory interference. The algorithm successfully identifies odors, and it can do so even when the pattern is an astounding 80% different from the pattern the computer originally learned.

“The pattern of the signal has been substantially destroyed,” Cleland said, “and yet the system is able to recover it.”

The Mammalian Brain

The brain of a mammal is able to identify and remember smells extremely well, and there can be thousands of olfactory receptors and complex neural networks working to analyze the patterns associated with odors. One of the things that mammals can do better than artificial intelligence systems is retain what they’ve learned, even after there is new knowledge. In deep learning approaches, the network must be presented with everything at once, since new information can affect or even destroy what the system previously learned. 

“When you learn something, it permanently differentiates neurons,” Cleland said. “When you learn one odor, the interneurons are trained to respond to particular configurations, so you get that segregation at the level of interneurons. So on the machine side, we just enhance that and draw a firm line.”

Cleland spoke about how the team came up with new experimental approaches. 

“When you start studying a biological process that becomes more intricate and complex than you can just simply intuit, you have to discipline your mind with a computer model,” he said. “You can’t fuzz your way through it. And that led us to a number of new experimental approaches and ideas that we wouldn’t have come up with just by eyeballing it.”

 
