IF A WOMAN (or non-female-identifying person with a uterus and visions of starting a family) is struggling to conceive and decides to improve their reproductive odds at an IVF clinic, they’ll likely interact with a doctor, a nurse, and a receptionist. They will probably never meet the army of trained embryologists working behind closed lab doors to collect eggs, fertilize them, and develop the embryos bound for implantation.
One of embryologists’ more time-consuming jobs is grading embryos—looking at their morphological features under a microscope and assigning a quality score. Round, even numbers of cells are good. Fractured and fragmented cells, bad. They’ll use that information to decide which embryos to implant first.
It’s more gut than science and not particularly accurate. Newer methods, like pulling off a cell to extract its DNA and test for abnormalities, called preimplantation genetic screening, provide more information. But that tacks on additional costs to an already expensive IVF cycle and requires freezing the embryos until the test results come back. Manual embryo grading may be a crude tool, but it’s noninvasive and easy for most fertility clinics to carry out. Now, scientists say, an algorithm has learned to do all that time-intensive embryo ogling even better than a human.
In new research published today in npj Digital Medicine, scientists at Cornell University trained an off-the-shelf Google deep learning algorithm to identify IVF embryos as either good, fair, or poor, based on the likelihood each would successfully implant. This type of AI—the same neural network that identifies faces, animals, and objects in pictures uploaded to Google’s online services—has proven adept in medical settings. It has learned to diagnose diabetic blindness and identify the genetic mutations fueling cancerous tumor growth. IVF clinics could be where it’s headed next.
“All evaluation of the embryo as it’s done today is subjective,” says Nikica Zaninovic, director of the embryology lab at Weill Cornell Medicine, where the research was conducted. In 2011, the lab installed a time-lapse imaging system inside its incubators, so its technicians could watch (and record) the embryos developing in real time. This gave them something many fertility clinics in the US do not have—videos of more than 10,000 fully anonymized embryos that could each be freeze-framed and fed into a neural network. About two years ago, Zaninovic began Googling to find an AI expert to collaborate with. He found one just across campus in Olivier Elemento, director of Weill Cornell’s Englander Institute for Precision Medicine.
For years, Elemento had been collecting all kinds of medical imaging data—MRIs, mammograms, stained slides of tumor tissue—from any colleague who would give it to him, to develop automated systems to help radiologists and pathologists do their jobs better. He’d never thought to try it with IVF but could immediately see the potential. There’s a lot going on in an embryo that’s invisible to the human eye but might not be to a computer. “It was an opportunity to automate a process that is time-consuming and prone to errors,” he says. “Which is something that’s not really been done before with human embryos.”
To judge how their neural net, nicknamed STORK, stacked up against its human counterparts, they recruited five embryologists from clinics on three continents to grade 394 embryos based on images taken from different labs. The five embryologists reached the same conclusion on only 89 embryos, less than a quarter of the total. So the researchers instituted a majority voting procedure—three out of five embryologists needed to agree to classify an embryo as good, fair, or poor. When STORK looked at the same images, it predicted the embryologist majority voting decision with 95.7 percent accuracy. The most consistent volunteer matched results only 70 percent of the time; the least, 25 percent.
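The three-of-five consensus scheme described above is simple to state in code. A minimal sketch in Python (the grade labels and example votes are illustrative, not the study's data):

```python
from collections import Counter

def majority_grade(votes, threshold=3):
    """Return the grade that at least `threshold` graders agree on,
    or None if no grade reaches the threshold."""
    grade, count = Counter(votes).most_common(1)[0]
    return grade if count >= threshold else None

def agreement_rate(predictions, consensus):
    """Fraction of embryos where a grader's (or model's) prediction
    matches the majority decision, ignoring embryos with no majority."""
    pairs = [(p, c) for p, c in zip(predictions, consensus) if c is not None]
    return sum(p == c for p, c in pairs) / len(pairs)

# Five hypothetical embryologists grade two embryos as good/fair/poor:
embryo_votes = [
    ["good", "good", "fair", "good", "poor"],  # 3-of-5 majority: good
    ["fair", "poor", "good", "fair", "poor"],  # no grade reaches 3 votes
]
consensus = [majority_grade(v) for v in embryo_votes]
print(consensus)  # ['good', None]
```

Comparing each individual grader's predictions against this consensus with `agreement_rate` yields per-grader consistency figures like the 70 percent and 25 percent reported above.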
For now, STORK is just a tool embryologists can upload images to and play around with on a secure website hosted by Weill Cornell. It won’t be ready for the clinic until it can pass rigorous testing that follows implanted embryos over time, to see how well the algorithm fares in real life. Elemento says the group is still finalizing the design for a trial that would do that by pitting embryologists against the AI in a small, randomized cohort. Most important is understanding if STORK actually improves outcomes—not just implantation rates but successful, full-term pregnancies. On that score, at least some embryologists are skeptical.
“All this algorithm can do is change the order of which embryos we transfer,” says Eric Forman, medical and lab director at Columbia University Fertility Center. “It needs more evidence to say it helps women get pregnant quicker and safer.” On its own, he worries that STORK might make only a small contribution to improving IVF’s success rate, while possibly inserting its own biases.
In addition to embryo grading, the Columbia clinic uses pre-implantation genetic screening to improve patients’ odds of pregnancy. While not routine, it is offered to everyone. Forman says about 70 percent of the clinic’s IVF cycles include the blastocyst biopsy procedure, which can add a few thousand dollars to a patient’s tab. That’s why he’s most intrigued about what Elemento’s team is cooking up next. They’re training a new set of neural networks to see if they can detect chromosomal abnormalities, like the one that causes Down Syndrome. With an embryo developing under a camera’s watchful gaze, Elemento’s algorithm would monitor the feed for telltale signs of trouble. “We think the patterns of cell division we can capture with these movies could potentially carry information about these defects, which are hidden in just the snapshots,” says Elemento. They’re also looking into using the technique to predict miscarriages.
There’s plenty of room to improve the performance of IVF, and these algorithmic upgrades could make a dent—in the right circumstances. “If it could provide accurate predictions in real time with minimal risk for harm and no additional cost, then I could see the potential to implement AI like this for embryo selection,” says Forman. But there would be barriers to its adoption. Most IVF clinics in the US don’t have one of these fancy time-lapse recording systems because they’re so expensive. And there are a lot of other potential ways to improve embryo viability that could be more affordable—like tailoring hormone treatments and culturing techniques to the different kinds of infertility that women experience. In the end, though, the number one problem IVF clinics contend with is that sometimes there just aren’t enough high-quality eggs, no matter how many cycles a patient goes through. And no AI, no matter how smart, can do anything about that.
Big Developments Bring Us Closer to Fully Untethered Soft Robots
Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Caltech have developed origami-inspired soft robotic systems that move and change shape in response to external stimuli, bringing fully untethered soft robots a step closer. Today’s soft robots typically rely on external power and control, which means tethering them to off-board systems through hard components.
The research was published in Science Robotics. Jennifer A. Lewis, the Hansjörg Wyss Professor of Biologically Inspired Engineering at SEAS and co-lead author of the study, spoke about the new developments.
“The ability to integrate active materials within 3D-printed objects enables the design and fabrication of entirely new classes of soft robotic matter,” she said.
The researchers used origami as a model for multifunctional soft robots: through sequential folds, a single origami structure can take on multiple shapes and functions. The team used liquid crystal elastomers, materials that change shape when exposed to heat, and 3D-printed two types of soft hinges that fold at different temperatures and can be programmed to fold in a specific order.
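One way to picture a programmed fold order is as a set of hinges with staggered activation temperatures: as the surface gets hotter, more hinges fold, lowest threshold first. A toy Python model (the hinge names and temperatures are invented for illustration, not taken from the paper):

```python
def fold_sequence(hinges, surface_temp):
    """Given hinges as (name, activation_temp) pairs, return the names
    of the hinges that fold on a surface at `surface_temp`, in the
    order they activate (lowest activation temperature first)."""
    active = [h for h in hinges if h[1] <= surface_temp]
    return [name for name, _ in sorted(active, key=lambda h: h[1])]

# Hypothetical hinges with staggered activation temperatures (in °C):
hinges = [("wheel_fold", 100), ("propel_A", 150), ("propel_B", 180)]
print(fold_sequence(hinges, 160))  # ['wheel_fold', 'propel_A']
```

On a cooler surface nothing folds; on a hot enough one, every hinge actuates in its programmed order, which is the basic idea behind sequencing the folds of a structure like Rollbot.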
Arda Kotikian is a graduate student at SEAS and the Graduate School of Arts and Sciences and co-first author of the paper.
“With our method of 3D printing active hinges, we have full programmability over temperature response, the amount of torque the hinges can exert, their bending angle, and fold orientation. Our fabrication method facilitates integrating these active components with other materials,” she said.
Connor McMahan is a graduate student at Caltech and co-first author of the paper as well.
“Using hinges makes it easier to program robotic functions and control how a robot will change shape. Instead of having the entire body of a soft robot deform in ways that can be difficult to predict, you only need to program how a few small regions of your structure will respond to changes in temperature,” he said.
The team built multiple soft devices, including an untethered soft robot nicknamed “Rollbot.” It starts as a flat sheet 8 centimeters long and 4 centimeters wide. When placed on a hot surface of about 200°C, one set of hinges folds, shaping the robot into a pentagonal wheel.
On each of the five sides of the wheel, there are more sets of hinges that fold when in contact with a hot surface.
“Many existing soft robots require a tether to external power and control systems or are limited by the amount of force they can exert. These active hinges are useful because they allow soft robots to operate in environments where tethers are impractical and to lift objects many times heavier than the hinges,” said McMahan.
This research focused solely on temperature responses. In the future, the team plans to study liquid crystal elastomers further, since they can also respond to light, pH, humidity, and other external stimuli.
“This work demonstrates how the combination of responsive polymers in an architected composite can lead to materials with self-actuation in response to different stimuli. In the future, such materials can be programmed to perform ever more complex tasks, blurring the boundaries between materials and robots,” said Chiara Daraio, Professor of Mechanical Engineering and Applied Physics at Caltech and co-lead author of the study.
The research included co-authors Emily C. Davidson, Jalilah M. Muhammad, and Robert D. Weeks. The work was supported by the Army Research Office, Harvard Materials Research Science and Engineering Center through the National Science Foundation, and the NASA Space Technology Research Fellowship.
Modeling Artificial Neural Networks (ANNs) On Animal Brains
Cold Spring Harbor Laboratory (CSHL) neuroscientist Anthony Zador argues that evolution and animal brains can serve as inspiration for machine learning, giving neuroscientists and AI researchers a new way to attack some of AI’s most pressing problems.
Anthony Zador, M.D., Ph.D., has dedicated much of his career to mapping the complex neural networks of the living brain, down to the level of individual neurons. Early in his career, though, his focus was different: he studied artificial neural networks (ANNs), the computing systems, loosely modeled on the networks in animal and human brains, that underpin much of the progress in the AI sector. Until now, that is where the analogy stopped.
In a recent perspective piece published in Nature Communications, Zador details how new and improved learning algorithms have let AI systems greatly outperform humans at a variety of tasks and games, such as chess and poker. Yet even computers that excel at these complex problems are often stumped by things we humans consider simple.
If researchers could solve this problem, robots might learn to do natural, organic things such as stalking prey or building a nest—or even washing the dishes, a task that has proven extremely difficult for robots.
“The things that we find hard, like abstract thought or chess-playing, are actually not the hard thing for machines. The things that we find easy, like interacting with the physical world, that’s what’s hard,” Zador explained. “The reason that we think it’s easy is that we had half a billion years of evolution that has wired up our circuits so that we do it effortlessly.”
Zador thinks that if we want robots that learn quickly, a breakthrough that would change everything in the sector, we should not look only for a perfected general learning algorithm. Instead, scientists should look to the biological neural networks that nature and evolution have handed us, and use them as a base for quick and easy learning of specific types of tasks—tasks that are important for survival.
Zador points to the squirrels living in our own backyards as an example of what genetics and innate neural wiring can accomplish.
“You have squirrels that can jump from tree to tree within a few weeks after birth, but we don’t have mice learning the same thing. Why not?” Zador said. “It’s because one is genetically predetermined to become a tree-dwelling creature.”
Zador believes that genetic predisposition expresses itself as innate circuitry within an animal, circuitry that guides its early learning. One obstacle to carrying this idea over to AI is that the networks pursued by machine learning experts are far more generalized than the ones found in nature.
If ANNs can one day be modeled on what we see in nature, robots could begin to tackle tasks that were once extremely difficult.
California Start-Up Cerebras Has Developed World’s Biggest Chip For AI
California start-up Cerebras has developed the world’s biggest computer chip to be used to train AI systems. It is set to be revealed after being in development for four years.
Contrary to the normal progression of chips getting smaller, the new chip developed by Cerebras has a surface area bigger than an iPad. It is more than 80 times larger than any competitor’s, and it consumes a large amount of electricity.
The new development reflects the astounding amount of computing power now being thrown at AI, exemplified by the $1bn investment from Microsoft into OpenAI announced last month. OpenAI is trying to develop an artificial general intelligence (AGI), a giant leap forward from today’s systems.
Cerebras is unique in this field because of its chip’s enormous size; other companies work relentlessly to make chips ever smaller, and most advanced systems today are assembled from many such chips. According to US chip analyst Patrick Moorhead, Cerebras has essentially put an entire computing cluster on a single chip.
Cerebras joins the likes of Intel, Habana Labs, and the UK start-up Graphcore in building a new generation of specialized AI chips. The field is reaching its biggest stage yet, with these companies set to deliver their first chips to customers by the end of the year. Among them, Cerebras aims to be the go-to supplier for the massive computing tasks run by the largest internet companies.
Many more companies and start-ups are active in this space, including Graphcore, Wave Computing, and the China-based start-up Cambricon. They are developing specialized AI chips for inference: taking a trained AI system and using it in real-world scenarios.
It normally takes a long time for development to finish and for actual products to ship to people and companies; according to the Linley Group, a US chip research firm, many time-consuming technical issues stand in the way. Even so, interest in these companies is strong. Cerebras has raised over $200m in venture capital and, as of late last year, was valued at about $1.6bn. Global revenue for deep learning chipsets is projected to grow substantially.
These companies are focusing on this type of processor because of the huge amounts of data needed to train the neural networks behind deep-learning systems, which power applications such as image recognition.
The Cerebras chip is a single device built from a 300mm-diameter circular wafer, the largest silicon disc that current chip factories can produce. The norm is to split such wafers into many individual chips rather than one giant one, and previous attempts at wafer-scale chips ran into problems laying circuitry across something so big. Cerebras got past this by interconnecting the different sectors of the wafer so they can communicate with one another and act as one big processor.
Looking ahead, Cerebras plans to link cores in a matrix pattern so they can communicate with each other, connecting 400,000 cores while keeping all of the processing on one single chip.
It will be exciting to see these developments move forward with Cerebras and other companies continuing to advance our AI systems.