
Robotics

Skin-like Sensors Help Advance AISkin

A group of researchers from the University of Toronto has developed super-stretchy, transparent, and self-powering sensors that will help advance artificial ionic skin. The sensors are able to record the complex sensations of human skin, which was one of the big barriers to developing artificial skin that behaves like the real thing.

The new technology is being called AISkin, and the researchers believe that the new technology will be important in wearable electronics, personal health care, and robotics. 

Professor Xinyu Liu’s lab is working on the breakthrough areas of ionic skin and soft robotics.

“Since it’s hydrogel, it’s inexpensive and biocompatible — you can put it on the skin without any toxic effects. It’s also very adhesive, and it doesn’t fall off, so there are so many avenues for this material,” according to Professor Liu.

The AISkin is adhesive, and it consists of two oppositely charged sheets of stretchable substances known as hydrogels. By overlaying negative and positive ions, the researchers create a “sensing junction” on the surface of the gel.

The sensing junction works whenever the AISkin is subjected to strain, humidity, or changes in temperature, which cause controlled ion movements across it. Those can then be measured as electrical signals such as voltage or current. 

“If you look at human skin, how we sense heat or pressure, our neural cells transmit information through ions — it’s really not so different from our artificial skin,” says Liu.

The AISkin is both tough and stretchable.

Binbin Ying is a visiting PhD candidate from McGill University, and he is leading the project in Liu’s lab. 

According to Ying, “Our human skin can stretch about 50 percent, but our AISkin can stretch up to 400 percent of its length without breaking.” 

The researchers published their findings in Materials Horizons.

The new AISkin can lead to the development of certain technologies such as skin-like Fitbits that are capable of measuring multiple body parameters. Other technologies include an adhesive touchpad that is able to stick onto the surface of your hand. 

“It could work for athletes looking to measure the rigour of their training, or it could be a wearable touchpad to play games,” according to Liu.

The technology could also measure the progress that is made in muscle rehabilitation. 

“If you were to put this material on a glove of a patient rehabilitating their hand for example, the health care workers would be able to monitor their finger-bending movements,” says Liu.

The technology could also play a role within the field of soft robotics, or flexible bots made out of polymers. One of the uses could be with soft robotic grippers that handle delicate objects within factories.

The researchers hope that AISkin will be integrated onto soft robots in order to measure data, such as the temperature of food or the pressure required to handle certain objects.

The lab will now work on advancing AISkin and decreasing the size of the sensors. Bio-sensing capabilities will be added to the material, which will allow it to measure biomolecules in body fluids. 

“If we further advance this research, this could be something we put on like a ‘smart bandage,'” says Liu. “Wound healing requires breathability, moisture balance — ionic skin feels like the natural next step.”

 


Robotics

Scientists Repurpose Living Frog Cells to Develop World’s First Living Robot


In what is a remarkable cross between biological life and robotics, a team of scientists has repurposed living frog cells and used them to develop “xenobots.” The cells came from frog embryos, and the xenobots are just a millimeter wide. They are capable of moving towards a target, potentially picking up a payload such as medicine to carry inside a human body, and healing themselves after being cut or damaged.

“These are novel living machines,” according to Joshua Bongard, a computer scientist and robotics expert at the University of Vermont who co-led the new research. “They’re neither a traditional robot nor a known species of animal. It’s a new class of artifact: a living, programmable organism.”

The scientists designed the bots on a supercomputer at the University of Vermont, and a group of biologists at Tufts University assembled and tested them. 

“We can imagine many useful applications of these living robots that other machines can’t do,” says co-leader Michael Levin who directs the Center for Regenerative and Developmental Biology at Tufts, “like searching out nasty compounds or radioactive contamination, gathering microplastic in the oceans, traveling in arteries to scrape out plaque.”

The research was published in the Proceedings of the National Academy of Sciences on January 13.

According to the team, this is the first research ever that “designs completely biological machines from the ground up.”

The design work took months of processing time on the Deep Green supercomputer cluster at UVM’s Vermont Advanced Computing Core. The team, which included lead author and doctoral student Sam Kriegman, relied on an evolutionary algorithm to develop thousands of candidate designs for the new life-forms.

When the scientists gave the computer a task, such as locomotion in one direction, it would continuously reassemble a few hundred simulated cells into different forms and body shapes. As the programs ran, the most successful simulated organisms were kept and refined. The algorithm ran independently a hundred times, and the best designs were picked for testing.
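
The study’s actual simulation code is not reproduced here, but the loop described above follows the familiar shape of an evolutionary algorithm. The sketch below is a minimal, hypothetical illustration in Python: the cell grid, the mutation operator, and the locomotion-proxy fitness function are all stand-ins, not the paper’s physics-based simulator.

```python
import random

GRID = 8          # hypothetical grid of simulated cells (stand-in for the study's models)
POP_SIZE = 100    # candidate designs per generation (illustrative setting)
GENERATIONS = 50

def random_design():
    # Each cell is either passive skin (0) or contractile heart muscle (1).
    return [[random.choice([0, 1]) for _ in range(GRID)] for _ in range(GRID)]

def fitness(design):
    # Stand-in for the physics simulation: crudely reward designs whose muscle
    # cells sit toward one side, as a proxy for locomotion in one direction.
    return sum(row[col] * col for row in design for col in range(GRID))

def mutate(design):
    # Flip one cell between skin and muscle to produce a variant design.
    child = [row[:] for row in design]
    r, c = random.randrange(GRID), random.randrange(GRID)
    child[r][c] = 1 - child[r][c]
    return child

def evolve():
    population = [random_design() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Keep the most successful simulated organisms and refine them.
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(POP_SIZE - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)

# The study ran its algorithm independently many times and kept the best designs;
# here the toy loop is simply repeated a few times.
best_designs = [evolve() for _ in range(5)]
```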

The team at Tufts, led by Levin and with the help of microsurgeon Douglas Blackiston, then took up the project. They transferred the designs into the next stage, which was life. The team gathered stem cells that were harvested from the embryos of African frogs, the species Xenopus laevis. Single cells were then separated out and left to incubate. The team used tiny forceps and an electrode to cut the cells and join them under a microscope into the designs created by the computer.

The cells were assembled into all-new body forms, and they began to work together. The skin cells formed a more passive structure, while the heart muscle cells created ordered forward motion guided by the computer’s design. These spontaneous self-organizing patterns allowed the robots to move on their own.

The organisms were capable of moving in a coherent way, and they lasted days or weeks exploring their watery environment. They relied on embryonic energy stores, but they failed once flipped over on their backs. 

“It’s a step toward using computer-designed organisms for intelligent drug delivery,” says Bongard, a professor in UVM’s Department of Computer Science and Complex Systems Center.

Since the xenobots are living technologies, they have certain advantages. 

“The downside of living tissue is that it’s weak and it degrades,” says Bongard. “That’s why we use steel. But organisms have 4.5 billion years of practice at regenerating themselves and going on for decades. These xenobots are fully biodegradable,” he continues. “When they’re done with their job after seven days, they’re just dead skin cells.”

These developments will have big implications for the future. 

“If humanity is going to survive into the future, we need to better understand how complex properties, somehow, emerge from simple rules,” says Levin. “Much of science is focused on controlling the low-level rules. We also need to understand the high-level rules. If you wanted an anthill with two chimneys instead of one, how do you modify the ants? We’d have no idea.”

“I think it’s an absolute necessity for society going forward to get a better handle on systems where the outcome is very complex. A first step towards doing that is to explore: how do living systems decide what an overall behavior should be and how do we manipulate the pieces to get the behaviors we want?”

“This study is a direct contribution to getting a handle on what people are afraid of, which is unintended consequences, whether in the rapid arrival of self-driving cars, changing gene drives to wipe out whole lineages of viruses, or the many other complex and autonomous systems that will increasingly shape the human experience.”

“There’s all of this innate creativity in life,” says UVM’s Josh Bongard. “We want to understand that more deeply — and how we can direct and push it toward new forms.”

 


Interviews

Deniz Kalaslioglu, Co-Founder & CTO of Soar Robotics – Interview Series


Deniz Kalaslioglu is the Co-Founder & CTO of Soar Robotics, a cloud-connected Robotic Intelligence platform for drones.

You have over 7 years of experience operating AI-backed autonomous drones. Could you share with us some of the highlights throughout your career?

Back in 2012, drones were mostly perceived as military tools by the majority. On the other hand, the improvements in mobile processors, sensors and battery technology had already started creating opportunities for consumer drones to become mainstream. A handful of companies were trying to make this happen, and it became obvious to me that if correct research and development steps were taken, these toys could soon become irreplaceable tools that help many industries thrive.

I participated exclusively in R&D teams throughout my career, in automotive and RF design. I founded a drone service provider startup in 2013, where I had the chance to observe many of the shortcomings of human-operated drones, as well as their potential benefits for industries. I’ve led two research efforts in a timespan of 1.5 years, where we addressed the problem of autonomous outdoor and indoor flight.

Precision landing and autonomous charging were other issues that I tackled later on. Solving these issues meant fully autonomous operation with minimal human intervention throughout the operation cycle. At the time, solving the problem of fully autonomous operation was huge, and it enabled us to create intelligent systems that don’t need any human operator to execute flights, which resulted in safer, more cost-effective and more efficient flights. The “AI” part came into play later, in 2015, when deep learning algorithms could be effectively used to solve problems that were previously solved through classical computer vision and/or learning methods. We leveraged robotics to enable fully autonomous flights and deep learning to transform raw data into actionable intelligence.

 

What inspired you to launch Soar Robotics?

Drones lack sufficient autonomy and intelligence features to become the next revolutionary tools for humans. They become inefficient and primitive tools in the hands of a human operator, both in terms of flight and post-operation data handling. Besides, these robots have very little access to real-time and long-term robotic intelligence that they can consume to become smarter.

As a result of my experience in this field, I have come to the understanding that the current commercial robotics paradigm is inefficient, which is limiting the growth of many industries. I co-founded Soar Robotics to tackle some very difficult engineering challenges to make intelligent aerial operations a reality, which in turn will provide high-quality and cost-efficient solutions for many industries.

 

Soar Robotics provides a fully autonomous, cloud-connected robotics intelligence platform for drones. What are the types of applications that are best served by these drones?

Our cloud-connected robotics intelligence platform is designed as a modular system that can serve almost any application by utilizing the specific functionalities implemented within the cloud. Some industries such as security, solar energy, construction, and agriculture are currently in immediate need of this technology.

  • Surveillance of a perimeter for security,
  • Inspection and analysis of thermal and visible faults in solar energy,
  • Progress tracking and management in construction and agriculture

These are the main applications with the highest beneficial impact that we focus on.

 

For a farmer who wishes to use this technology, what are some of the use cases that will benefit them versus traditional human-operated drones?

As with all our applications, we also provide end-to-end service for precision agriculture. Currently, the drone workflow in almost any industry is as follows:

  • the operator carries the drone and its accessories to the field,
  • the operator creates a flight plan,
  • the operator turns on the drone, uploads the flight plan for the specific task in hand,
  • the drone arms, executes the planned mission, returns to its takeoff coordinates, and lands,
  • the operator turns off the drone,
  • the operator shares the data with the client (or the related department if hired in-house),
  • the data is processed accurately to become actionable insights for the specific industry.

It is crucial to point out that this workflow has proven to be very inefficient, especially in sectors such as solar energy, agriculture and construction, where collecting periodic and objective aerial data across vast areas is essential. A farmer who uses our technology is able to get measurable, actionable and accurate insights on:

  • plant health and vigor,
  • nitrogen intake of the soil,
  • optimization and effectiveness of irrigation methods,
  • early detection of disease and pests.

All of this, without having to go through the hassle mentioned above or even clicking a button every time. I firmly believe that enabling drones with autonomous features and cloud intelligence will provide considerable savings in terms of time, labor and money.

 

How will the drones be used for solar farm operators?

We handle almost everything that needs counting and measuring in all stages of a solar project. In the pre-construction and planning period, we generate topographic models, hydrologic analyses and obstacle analyses with high geographical precision and accuracy. During the construction period, we generate daily maps and videos of the site. After processing the collected media, we measure the installation progress of the piling structures, mounting racks and photovoltaic panels; take position, area and volume measurements of trenches and inverter foundations; and count the construction machinery, vehicles and personnel on the site.

When the construction is over and the solar site is fully operational, Soar’s autonomous system continues its daily flights, but this time generates thermal maps and videos along with visible-spectrum maps and videos. From thermal data, Soar’s algorithms detect cell, multi-cell, diode, string, combiner and inverter level defects. From visible-spectrum data, Soar’s algorithms detect shattering, soiling, shadowing, vegetation and missing panels. As a result, Soar’s software generates a detailed report of the detected faults and marks them on the as-built and RGB map of the site down to the cell level, as well as showing all detected errors in a table indicating string, row and module numbers with geolocations. It also estimates the client’s total loss due to the inefficiencies caused by these faults and prioritizes each fault depending on its importance and urgency.
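
Soar’s detection algorithms themselves are not described in detail, but as a rough illustration of how cell-level hotspots can be flagged in a thermal map, the Python sketch below thresholds temperatures against the field median and groups anomalous pixels into candidate defects. The threshold value, array sizes and synthetic data are assumptions made purely for the example, not Soar’s actual pipeline.

```python
import numpy as np
from scipy import ndimage

def find_hotspots(thermal, delta=10.0):
    """Flag pixels significantly hotter than the field median and group them
    into candidate defects. `thermal` is a 2-D array of temperatures in °C;
    `delta` is an assumed anomaly threshold, not a calibrated value."""
    baseline = np.median(thermal)
    mask = thermal > baseline + delta          # candidate hot pixels
    labels, count = ndimage.label(mask)        # group adjacent pixels into blobs
    defects = []
    for i in range(1, count + 1):
        ys, xs = np.where(labels == i)
        defects.append({
            "centroid_px": (float(ys.mean()), float(xs.mean())),
            "area_px": int(len(ys)),
            "max_temp_c": float(thermal[ys, xs].max()),
        })
    return defects

# Synthetic frame standing in for one drone-captured thermal image.
frame = 25.0 + np.random.randn(480, 640)
frame[100:104, 200:204] += 20.0                # injected hotspot for the demo
print(find_hotspots(frame))
```

In a real deployment, each flagged blob would then be mapped to a geolocation on the orthomosaic and matched to string, row and module identifiers, as in the reports described above.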

 

In July 2019, Soar Robotics joined NVIDIA’s Inception Program, which is an exclusive program for AI startups. How has this experience influenced you personally and the way Soar Robotics is managed?

Over the months, this has proven to be an extremely beneficial program for us. We had already been using NVIDIA products both for onboard computation and on the cloud side. This program has a lot of perks that have streamlined our research, development and test processes.

 

Soar Robotics will be generating recurring revenue with a Robotics-as-a-Service (RaaS) model. What is this model exactly, and how does it differ from SaaS?

It shares many similarities with SaaS in terms of its application and its effects on our business model. The RaaS model is especially critical since hardware is involved; most of our clients don’t want to own the hardware and are only interested in the results. Cloud software and the new generations of robotics hardware blend together more and more each day.

This results in some fundamental changes in industrial robotics, which used to be about stationary robots performing repetitive tasks that didn’t need much intelligence. Operating under this mindset, we provide our clients with robot connectivity and cloud robotics services to augment what their hardware would normally be capable of achieving.

Therefore, Robotics-as-a-Service encapsulates all the hardware and software tools that we utilize to create domain-specific robots for our clients’ purposes, in the form of drones, communications hardware and cloud intelligence.

 

What are your predictions for drone technology in the coming decade?

Drones have clearly proven their value for enterprises, and their usage will only continue to increase. We have witnessed many businesses trying to integrate drones into their workflows, with only a few of them achieving great ROI and most of them failing due to the inefficient nature of current commercial drone applications. Since the drone industry hype began to fade, we have seen a rapid consolidation in the market, especially in the last couple of years. I believe that this was a necessary step for the industry, which opened the path to real productivity and better opportunities for products and services that are actually beneficial for enterprises. The addressable market that commercial drones will create by 2025 is expected to exceed $100B, which in my opinion is a fairly modest estimate.

 

  • We will see an exponential rise in “Beyond Visual Line of Sight” flights, which will be the enabling factor for many use cases of commercial UAVs.
  • The advancements in battery technology such as hydrogen fuel cells will extend the flight times by at least an order of magnitude, which will also be a driving factor for many novel use cases.
  • Drone-in-a-box systems are still perceived as somewhat experimental, but we will definitely see this technology become ubiquitous in the next decade.
  • There are ongoing tests being conducted by companies of various sizes in the urban air mobility market, which can be broken down into roughly three segments: last-mile delivery, aerial public transport and aerial personal transport. The commercialization of these segments will definitely happen in the coming decade.

 

Is there anything else that you would like to share about Soar Robotics?

We believe that the feasibility and commercialization of autonomous aerial operations mainly depend on solving the problem of aerial vehicle connectivity. For drones to be able to operate Beyond Visual Line of Sight (BVLOS), they need seamless coverage, real-time high-throughput data transmission, command and control, identification, and regulation. Although there have been some successful attempts to leverage current mobile networks as a communications method, these networks have many shortcomings and are far from becoming the go-to solution for aerial vehicles.

We have been developing a connectivity hardware and software stack that has the capability of forming ad hoc drone networks. We expect that these networking capabilities will enable seamless, safe and intelligent operations for any type of autonomous aerial vehicle. We are rolling out the alpha and beta releases of the hardware in the coming months to test our products with larger user bases under various usage conditions and to start forming these ad hoc networks to serve many industries.

To learn more visit Soar Robotics or to invest in this company visit the Crowdfunding Page on Republic.


Robotics

Advancements in Human-Robot-Computer Research


The automated experimental facility, called the Intelligent Towing Tank (ITT), conducted around 100,000 total experiments in its first year of operation. What would normally take a PhD student five years of experiments to complete, the ITT was able to do within weeks. The development of the ITT in the MIT Sea Grant Hydrodynamics Laboratory takes us further into the area of human-robot-computer research.

The ITT automatically and adaptively performs, analyzes, and designs experiments. The experiments are focused on exploring vortex-induced vibrations (VIVs). VIVs are important for engineering offshore ocean structures such as marine drilling risers, which are responsible for connecting underwater oil wells to the surface. With VIVs, there are a high number of parameters involved.

Guided by active learning, the ITT conducts a series of experiments in which the parameters of each new experiment are selected by a computer. The system uses an “explore-and-exploit” methodology, which greatly reduces the number of experiments required to map and explore the complex behavior of VIVs.

PhD candidate Dixia Fan began the project while searching for a way to reduce the thousand or so experiments that needed to be conducted by hand. That led to the development of the ITT system. 

A paper was published last month in the journal Science Robotics. 

Fan is now a postdoc, and the project was worked on by a team of researchers from the MIT Sea Grant College Program and MIT’s Department of Mechanical Engineering, École Normale Supérieure de Rennes, and Brown University. The new project showcases the type of cooperation that can take place between humans, computers, and robots in order to make scientific discoveries at a faster pace.

The ITT is a 33-foot tank, and it works without interruption or suspension. The researchers would like to see the system used in a variety of different disciplines, which could lead to the creation of new models in nonlinear systems. 

The ITT allowed Fan and his collaborators to explore a wider parametric space. “If we performed traditional techniques on the problem we study, it would take 950 years to finish the experiment,” he explained. 

To shorten this timeline, Fan and the team integrated a Gaussian process regression learning algorithm into the ITT. By doing this, the researchers were able to reduce the number of experiments needed down to a few thousand.
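
The published ITT code is not reproduced here, but the following Python sketch shows the general shape of Gaussian process regression combined with an explore-and-exploit acquisition rule (an upper confidence bound) for choosing the next experiment. The one-dimensional parameter grid and the `run_experiment` placeholder are assumptions for illustration; the real system searches a much larger parameter space and runs physical towing-tank experiments.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def run_experiment(x):
    # Placeholder for a physical towing-tank run: returns a noisy response
    # (e.g., a vibration amplitude) for parameter x. Purely illustrative.
    return np.sin(3.0 * x) + 0.1 * np.random.randn()

candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)   # toy 1-D parameter grid

# An initial handful of experiments chosen without the model.
X = candidates[np.random.choice(len(candidates), 5, replace=False)]
y = np.array([run_experiment(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)

for _ in range(30):                        # far fewer runs than a brute-force sweep
    gp.fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    # Explore-and-exploit: favor parameters that look promising (high mean)
    # or are poorly understood (high predictive uncertainty).
    ucb = mean + 2.0 * std
    x_next = candidates[np.argmax(ucb)]
    X = np.vstack([X, x_next])
    y = np.append(y, run_experiment(x_next[0]))
```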

The robotic system is capable of automatically conducting an initial sequence of experiments. It then takes partial control over the parameters of the next experiment. 

Fan was awarded an MIT Mechanical Engineering de Florez Award for “Outstanding Ingenuity and Creative Judgement” in the development of the ITT. 

According to Michael Triantafyllou, Henry L. and Grace Doherty Professor in Ocean Science and Engineering, and also Fan’s doctoral advisor, “Dixia’s design of the Intelligent Towing Tank is an outstanding example of using novel methods to reinvigorate mature fields.”

Triantafyllou, a co-author on the paper, is the director of the MIT Sea Grant College Program.

“MIT Sea Grant has committed resources and funded projects using deep-learning methods in ocean-related problems for several years that are already paying off,” he said.

MIT Sea Grant is funded by the National Oceanic and Atmospheric Administration and administered by the National Sea Grant Program. It is a federal-institute partnership that combines research and engineering at MIT to help tackle ocean-related issues.

Other contributors to the paper include George Karniadakis from Brown University, affiliated with MIT Sea Grant; Gurvan Jodin from ENS Rennes; MIT PhD candidate in mechanical engineering Yu Ma; and Thomas Consi, Luca Bonfiglio, and Lily Keyes from MIT Sea Grant.

 
