
AI News

Dragonflies and Missile Defense Systems



Dragonflies have extremely fast reflexes despite having little depth perception. Their reaction time to prey moving through the air is about 50 milliseconds, roughly the time it takes information to cross just three neurons. Sandia National Laboratories is conducting research to figure out how dragonfly brains work and how they are able to calculate complex interception trajectories.

The research is led by computational neuroscientist Frances Chance, who developed the algorithms and will present the work at the International Conference on Neuromorphic Systems in Knoxville, Tennessee. The research has already been presented at the Annual Meeting of the Organization for Computational Neurosciences in Barcelona, Spain.

Frances Chance specializes in replicating biological neural networks such as brains, particularly neurons and the way they pass information through the nervous system. Brains are, in many respects, more capable than conventional computers: they are more energy efficient while learning and adapting faster.

“I try to predict how neurons are wired in the brain and understand what kinds of computations those neurons are doing, based on what we know about the behavior of the animal or what we know about the neural responses,” Chance said. 

The research conducted by Sandia National Laboratories involved creating a simple simulated environment populated with computer-generated dragonflies. Computer algorithms drove the simulated dragonflies to catch prey just as their real-life counterparts do, processing visual information while hunting much like dragonflies in a real environment. This showed that programming of this kind is possible, and it could be applied in many different sectors.
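To make the idea concrete, here is a minimal sketch of prey interception in a simulated 2D environment. It is not Sandia's model; the lead-pursuit rule, names, and parameters are illustrative assumptions only.

```python
# Minimal sketch (not Sandia's algorithm): a simulated "dragonfly" that steers
# toward where a moving prey will be, rather than where it currently is.
import numpy as np

def intercept_step(pred_pos, pred_speed, prey_pos, prey_vel, dt=0.01):
    """Advance the predator one time step toward a crude lead point."""
    to_prey = prey_pos - pred_pos
    dist = np.linalg.norm(to_prey)
    lead_time = dist / pred_speed          # assume closing at the predator's own speed
    aim_point = prey_pos + prey_vel * lead_time
    heading = aim_point - pred_pos
    heading /= np.linalg.norm(heading)
    return pred_pos + heading * pred_speed * dt

# Toy run: prey flies in a straight line, a faster predator starts elsewhere.
prey_pos, prey_vel = np.array([1.0, 0.0]), np.array([0.0, 0.5])
pred_pos, pred_speed = np.array([0.0, 0.0]), 1.5
for _ in range(2000):
    prey_pos = prey_pos + prey_vel * 0.01
    pred_pos = intercept_step(pred_pos, pred_speed, prey_pos, prey_vel)
    if np.linalg.norm(prey_pos - pred_pos) < 0.02:
        print("intercepted")
        break
```

A dragonfly-inspired controller would of course be implemented as a small neural network reacting to visual input, but the toy loop captures the core idea of aiming at a predicted future position rather than the prey's current one.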

The research is already being applied to the missile defense sector, since a system like the one behind the computer-simulated dragonflies could improve missile defense systems. Missile defense works in much the same way as a dragonfly targeting and catching prey: an interceptor meets an object in flight just as a dragonfly intercepts prey in its environment. Dragonflies are among the most successful predators in the world, catching about 95% of the prey they target.

With these new developments, researchers are trying to make the on-board computers in missile defense systems smaller while keeping them fast and accurate. Current missile defense systems rely on established intercept techniques that carry a heavy computational load, and this is one of the areas where a model based on dragonflies and their prey can help.

The new technology and research could improve missile defense systems in several ways, including reducing the size, weight, and power needs of onboard computers. Interceptors could then become smaller and lighter, making them far more maneuverable. The new systems could also learn new ways to intercept moving targets such as hypersonic weapons, which, unlike ballistic missiles, do not follow a predictable trajectory or pattern. Finally, such a system might be able to use simpler sensors than the complex ones needed today to intercept a target.

One of the challenges with this idea is that missiles and dragonflies travel at very different speeds, which could introduce discrepancies when translating the model from one domain to the other.

Outside of missile defense, a computational model of the dragonfly brain could also help develop better machine learning and artificial intelligence. As this kind of technology grows, it is finding its way into more and more sectors, and the defense sector is using it to become more efficient and grow rapidly. The research shows how complex systems can be developed from ones that already exist in nature, such as dragonflies and their brains, and how new technology allows us to model them and build on them.

 


Alex McFarland is a historian and journalist from the United States who covers developments in AI around the world.

AI News

Big Developments Bring Us Closer to Fully Untethered Soft Robots



Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Caltech have developed new soft robotic systems inspired by origami. These systems can move and change shape in response to external stimuli, bringing us closer to fully untethered soft robots. Today's soft robots rely on external power and control, so they must be tethered to off-board systems built from hard components.

The research was published in Science Robotics. Jennifer A. Lewis, the Hansjörg Wyss Professor of Biologically Inspired Engineering at SEAS and co-lead author of the study, spoke about the new developments.

“The ability to integrate active materials within 3D-printed objects enables the design and fabrication of entirely new classes of soft robotic matter,” she said. 

The researchers used origami as a model for creating multifunctional soft robots. Through sequential folds, origami can take on multiple shapes and functions while remaining a single structure. The team used liquid crystal elastomers, which change shape when exposed to heat, and 3D-printed two types of soft hinges. The hinges fold at different temperatures and can be programmed to fold in a specific order.
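The sequencing can be pictured as hinges with different activation temperatures. The sketch below is illustrative only; the hinge names, temperatures, and angles are assumptions, not values from the paper.

```python
# Illustrative sketch: hinges that fold once their activation temperature is
# reached, producing a programmed folding sequence as the structure heats up.
from dataclasses import dataclass

@dataclass
class Hinge:
    name: str
    activation_temp_c: float   # temperature at which this hinge folds
    fold_angle_deg: float      # bending angle once activated
    folded: bool = False

def heat_to(hinges, surface_temp_c):
    """Fold every hinge whose activation temperature has been reached, in order."""
    events = []
    for hinge in sorted(hinges, key=lambda h: h.activation_temp_c):
        if not hinge.folded and surface_temp_c >= hinge.activation_temp_c:
            hinge.folded = True
            events.append(f"{hinge.name} folds to {hinge.fold_angle_deg} deg")
    return events

# A flat sheet whose hinges fold in sequence as the surface warms up.
sheet = [
    Hinge("wheel-forming hinge", activation_temp_c=180, fold_angle_deg=108),
    Hinge("propulsion hinge", activation_temp_c=200, fold_angle_deg=90),
]
print(heat_to(sheet, 200))   # both hinges have activated by 200 °C
```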

Arda Kotikian is a graduate student at SEAS and the Graduate School of Arts and Sciences, and co-first author of the paper.

“With our method of 3D printing active hinges, we have full programmability over temperature response, the amount of torque the hinges can exert, their bending angle, and fold orientation. Our fabrication method facilitates integrating these active components with other materials,” she said. 

Connor McMahan is a graduate student at Caltech and co-first author of the paper as well. 

“Using hinges makes it easier to program robotic functions and control how a robot will change shape. Instead of having the entire body of a soft robot deform in ways that can be difficult to predict, you only need to program how a few small regions of your structure will respond to changes in temperature,” he said.

The team of researchers built several soft devices. One of them is an untethered soft robot called "Rollbot," which starts as a flat sheet 8 centimeters long and 4 centimeters wide. When it comes into contact with a hot surface of around 200°C, one set of hinges folds, shaping the robot into a pentagonal wheel.

On each of the five sides of the wheel, there are more sets of hinges that fold when in contact with a hot surface. 

“Many existing soft robots require a tether to external power and control systems or are limited by the amount of force they can exert. These active hinges are useful because they allow soft robots to operate in environments where tethers are impractical and to lift objects many times heavier than the hinges,” said McMahan.

The research conducted so far focused solely on temperature responses. In the future, the liquid crystal elastomers will be studied further, as they can also respond to light, pH, humidity, and other external stimuli.

“This work demonstrates how the combination of responsive polymers in an architected composite can lead to materials with self-actuation in response to different stimuli. In the future, such materials can be programmed to perform ever more complex tasks, blurring the boundaries between materials and robots,” said Chiara Daraio, Professor of Mechanical Engineering and Applied Physics at Caltech and co-lead author of the study.

The research included co-authors Emily C. Davidson, Jalilah M. Muhammad, and Robert D. Weeks. The work was supported by the Army Research Office, Harvard Materials Research Science and Engineering Center through the National Science Foundation, and the NASA Space Technology Research Fellowship. 

 


AI News

Modeling Artificial Neural Networks (ANNs) On Animal Brains



Cold Spring Harbor Laboratory (CSHL) neuroscientist Anthony Zador has shown that evolution and animal brains can serve as inspiration for machine learning, an approach that could help AI solve many different problems.

According to Zador, Artificial Intelligence (AI) can be greatly improved by looking to animal brains. With this approach, neuroscientists and those working in the AI field have a new way of tackling some of AI's most pressing problems.

Anthony Zador, M.D., Ph.D., has dedicated much of his career to explaining the complex neural networks within the living brain, all the way down to the individual neuron. Earlier in his career he focused on something different: artificial neural networks (ANNs). ANNs are computing systems that have been the basis of many developments in the AI sector, and they are modeled after the networks in both animal and human brains. Until now, that is where the concept stopped.

A recent perspective piece authored by Zador was published in Nature Communications. In it, Zador details how new and improved learning algorithms are helping AI systems develop to a point where they greatly outperform humans on a variety of tasks, problems, and games such as chess and poker. Yet even though these computers perform so well on complex problems, they are often confused by things we humans would consider simple.

If researchers in this field were able to solve this problem, robots could reach a point where they learn to do extremely natural and organic things such as stalking prey or building a nest. They could even do something like washing the dishes, which has proven extremely difficult for robots.

“The things that we find hard, like abstract thought or chess-playing, are actually not the hard thing for machines. The things that we find easy, like interacting with the physical world, that’s what’s hard,” Zador explained. “The reason that we think it’s easy is that we had half a billion years of evolution that has wired up our circuits so that we do it effortlessly.”

Zador thinks that if we want robots to achieve quick learning, something that would change everything in the sector, we should not look only for a perfected general learning algorithm. Instead, scientists should look toward the biological neural networks that nature and evolution have produced. These could serve as a base for quick and easy learning of specific types of tasks, the kinds of tasks that are important for survival.
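One way to picture this idea in code is to treat "innate circuitry" as a fixed, pre-wired feature extractor and to learn only a small task-specific readout on top of it. This is a hedged illustration of the concept, not Zador's implementation; the layer sizes, the random "innate" wiring, and the toy task are all assumptions.

```python
# Sketch: a frozen "innate" network plus a quickly trained readout layer.
import numpy as np

rng = np.random.default_rng(0)

# "Innate" wiring: a fixed projection standing in for circuitry shaped by
# evolution rather than by the individual's own experience. Never updated.
W_innate = rng.normal(size=(64, 16))

def innate_features(x):
    return np.tanh(x @ W_innate)

# Learned part: a small linear readout fitted quickly on top of the innate
# features (an ordinary least-squares fit stands in for fast learning).
def train_readout(X, y):
    F = innate_features(X)
    W_readout, *_ = np.linalg.lstsq(F, y, rcond=None)
    return W_readout

def predict(X, W_readout):
    return innate_features(X) @ W_readout

# Toy task: fit a small dataset from only a handful of examples.
X = rng.normal(size=(50, 64))
y = rng.normal(size=(50, 1))
W = train_readout(X, y)
print("training error:", float(np.mean((predict(X, W) - y) ** 2)))
```

The point of the sketch is the division of labor: most of the wiring is fixed in advance, and only a small part has to be learned for the task at hand.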

Zador points to the squirrels living in our own backyards as an example of what genetics, neural wiring, and genetic predisposition can accomplish.

“You have squirrels that can jump from tree to tree within a few weeks after birth, but we don’t have mice learning the same thing. Why not?” Zador said. “It’s because one is genetically predetermined to become a tree-dwelling creature.”

Zador believes that one thing genetic predisposition provides is the innate circuitry within an animal, which helps the animal and guides its early learning. One of the problems with carrying this over to the AI world is that the networks pursued by machine learning experts are much more generalized than the ones found in nature.

If ANNs eventually reach a point in their development where they can be modeled after the circuits we see in nature, robots could begin to perform tasks that were once extremely difficult for them.

 


AI News

California Start-Up Cerebras Has Developed World’s Biggest Chip For AI



California start-up Cerebras has developed the world’s biggest computer chip to be used to train AI systems. It is set to be revealed after being in development for four years. 

Contrary to the normal progression of chips getting smaller, the new one developed by Cerebras has a surface area bigger than an iPad. It is more than 80 times larger than any competing chip, and it uses a large amount of electricity.

The new development reflects the astounding amount of computing power now being devoted to AI, including the $1bn investment from Microsoft into OpenAI that was announced last month. OpenAI is trying to develop an Artificial General Intelligence (AGI), which would be a giant leap forward and would change much of what we know.

Cerebras is unique in this field because of the enormous size of its chip. Other companies work endlessly to create extremely small chips, which are then assembled together into larger systems; most of today's advanced hardware is built this way. According to Patrick Moorhead, a US chip analyst, Cerebras has essentially put an entire computing cluster on a single chip.

Cerebras is looking to join the likes of Intel, Habana Labs, and the UK start-up Graphcore, all of which are building a new generation of specialized AI chips. This push is reaching its biggest stage yet, as the companies will start delivering their first chips to customers by the end of the year. Among them, Cerebras aims to be the go-to option for the massive computing tasks run by the largest internet companies.

Many more companies and start-ups are involved in this space, including Graphcore, Wave Computing, and the China-based start-up Cambricon. They are all looking to develop specialized AI chips for inference, that is, taking a trained AI system and applying it in real-world scenarios.

Normally, it takes a long time for development to finish and for actual products to ship to people and companies. According to the Linley Group, a US chip research firm, there are many time-consuming technical issues. Even so, there is strong interest in these companies: Cerebras has raised over $200m in venture capital and, as of late last year, was valued at about $1.6bn. Global revenue for deep learning chipsets is projected to grow substantially.

These companies are focusing on this type of processor because of the huge amounts of data needed to train neural networks. Those neural networks are then used in deep-learning systems responsible for tasks such as image recognition.

The Cerebras chip is made from a single circular silicon wafer 300mm in diameter, the largest silicon disc produced in today's chip factories. Normally these wafers are split into many individual chips rather than used as one giant chip, and earlier attempts at wafer-scale processors ran into problems laying circuitry across something so large. Cerebras got past this by connecting the different sections of the wafer so that they can communicate with one another and function as one big processor.

Looking ahead, Cerebras will try to link cores in a matrix pattern so that they can communicate with each other, with the goal of connecting 400,000 cores while keeping all of the processing on a single chip.
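As a rough picture of that matrix layout, the toy sketch below models a small 2D grid of cores where each core exchanges data with its immediate neighbors on every step. The grid size and the "work" performed are arbitrary assumptions, not details of the Cerebras design.

```python
# Toy sketch: cores in a 2D matrix, each averaging its value with its
# north/south/east/west neighbors (edges wrap around for simplicity).
import numpy as np

GRID = 8                                   # 8 x 8 toy grid, far from 400,000 cores
state = np.random.default_rng(1).normal(size=(GRID, GRID))

def step(state):
    """One communication round: every core mixes in its four neighbors' values."""
    total = np.copy(state)
    count = np.ones_like(state)
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        total += np.roll(state, shift, axis=axis)
        count += 1
    return total / count

for _ in range(10):
    state = step(state)
print("spread of values after 10 rounds:", round(float(state.std()), 4))
```

After a few rounds the values converge, which is simply a stand-in for how neighboring cores can pass partial results across the whole wafer without the data ever leaving the chip.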

It will be exciting to see these developments move forward with Cerebras and other companies continuing to advance our AI systems. 

 
