Researchers at Texas A&M University have developed a new surgical technology for steadying robotic arms during surgery. The new study was published in the journal Scientific Reports.
Robotic Fingers as an Extension of the Surgeon
The team of researchers demonstrated that users can accurately perceive distance-to-contact through weak electrical currents delivered to the fingertips. The currents cause small but perceptible buzzes, enabling users to control robotic fingers more precisely and accurately when operating on fragile surfaces.
According to the researchers, the method could be used by surgeons to reduce inadvertent injuries, some of which occur during robot-assisted operative procedures.
Hangye Park is an assistant professor in the Department of Electrical and Computer Engineering at Texas A&M.
“One of the challenges with robotic fingers is ensuring that they can be controlled precisely enough to softly land on biological tissue,” said Park. “With our design, surgeons will be able to get an intuitive sense of how far their robotic fingers are from contact, information they can then use to touch fragile structures with just the right amount of force.”
Surgeons use robot-assisted, or telerobotic, surgical systems as physical extensions of themselves, controlling robotic fingers with movements of their own. This allows complicated procedures to be performed remotely and lets surgeons take on more patients. Because the robotic fingers are small, incisions can also be much smaller: surgeons no longer need the large openings often required to accommodate their hands inside the patient’s body.
A key element of moving robotic fingers precisely is live visual feedback from cameras mounted on the telerobotic arms. Surgeons watch monitors to match their own finger movements to those of the robotic fingers, which helps them track where the robotic fingers are and how close they are to each other.
According to Park, visual information alone is not enough to guide fine finger movements, which is extremely important when the fingers are operating very close to the brain and other delicate tissue.
“Surgeons can only know how far apart their actual fingers are from each other indirectly, that is, by looking at where their robotic fingers are relative to each other on a monitor,” Park said. “This roundabout view diminishes their sense of how far apart their actual fingers are from each other, which then affects how they control their robotic fingers.”
Glove Fitted with Stimulation Probes
To overcome this challenge, the researchers developed an alternative way to deliver distance information, one that is independent of visual feedback. They use gloves fitted with stimulation probes that pass electrical currents of varying frequency onto the fingertips. Users could then be trained to associate the frequency of the current pulses with distance, such that the frequency increases as a test object gets closer.
The stimulation was calibrated to each user’s sensitivity to electrical current frequencies: for a user sensitive to a wider range of frequencies, distance information was delivered with smaller increments in current frequency.
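The calibration described above can be sketched as a simple linear mapping from distance-to-contact onto a pulse frequency inside each user's perceptible band. This is an illustrative sketch, not the study's actual mapping; the function and parameter names are assumptions.

```python
def pulse_frequency(distance_mm, max_distance_mm, f_min_hz, f_max_hz):
    """Map distance-to-contact onto a stimulation pulse frequency.

    Frequency rises as the fingertip approaches the surface, within
    the user's calibrated sensitivity band [f_min_hz, f_max_hz].
    (Toy linear mapping; the paper's exact scheme may differ.)
    """
    # Clamp the distance to the sensing range.
    d = min(max(distance_mm, 0.0), max_distance_mm)
    # Closer distance -> higher frequency (linear interpolation).
    proximity = 1.0 - d / max_distance_mm
    return f_min_hz + proximity * (f_max_hz - f_min_hz)

# A user sensitive to a wide band gets finer frequency steps per
# millimeter than one calibrated to a narrow band:
wide = pulse_frequency(5.0, 20.0, 10.0, 200.0)
narrow = pulse_frequency(5.0, 20.0, 10.0, 50.0)
```

The same distance thus produces a different stimulus depending on the user's calibrated band, which is the per-user tailoring the researchers describe.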
According to the researchers, users receiving the electrical pulses were able to reduce their contact force by about 70% and showed a heightened awareness of their proximity to underlying surfaces. The researchers also found that delivering proximity information through mild electrical pulses was about three times more effective than visual information alone.
According to Park, the new developments could drastically increase maneuverability during surgery, while at the same time minimizing unintended tissue damage.
“Our goal was to come up with a solution that would improve the accuracy in proximity estimation without increasing the burden of active thinking needed for this task,” he said. “When our technique is ready for use in surgical settings, physicians will be able to intuitively know how far their robotic fingers are from underlying structures, which means that they can keep their active focus on optimizing the surgical outcome of their patients.”
New Software Developed to Improve Robotic Prosthetics
New software has been developed by researchers at North Carolina State University to improve robotic prosthetics and exoskeletons. The software can be integrated with existing hardware, resulting in safer, more natural walking on different terrains.
The paper is titled “Environmental Context Prediction for Lower Limb Prostheses With Uncertainty Quantification.” It was published in IEEE Transactions on Automation Science and Engineering.
Adapting to Different Terrains
Edgar Lobaton is a co-author of the paper. He is an associate professor of electrical and computer engineering at the university.
“Lower-limb robotic prosthetics need to execute different behaviors based on the terrain users are walking on,” says Lobaton. “The framework we’ve created allows the AI in robotic prostheses to predict the type of terrain users will be stepping on, quantify the uncertainties associated with that prediction, and then incorporate that uncertainty into its decision-making.”
The researchers focused on six different terrains, each requiring adjustments in a robotic prosthetic’s behavior: tile, concrete, brick, grass, “upstairs,” and “downstairs.”
Boxuan Zhong is the lead author of the paper and a Ph.D. graduate from NC State.
“If the degree of uncertainty is too high, the AI isn’t forced to make a questionable decision — it could instead notify the user that it doesn’t have enough confidence in its prediction to act, or it could default to a ‘safe’ mode,” says Zhong.
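The decision logic Zhong describes can be sketched as a simple confidence gate on the classifier's output. The threshold value, function name, and "safe_mode" label below are illustrative assumptions, not details from the paper.

```python
# The six terrain classes studied by the NC State team.
TERRAINS = ["tile", "concrete", "brick", "grass", "upstairs", "downstairs"]

def choose_action(class_probs, confidence_threshold=0.8):
    """Act on a terrain prediction only when the model is confident.

    class_probs: predicted probability for each terrain in TERRAINS.
    Returns the terrain to adapt to, or "safe_mode" when the
    prediction is too uncertain to justify changing behavior.
    (Illustrative gate; the paper's uncertainty measure may differ.)
    """
    best = max(range(len(class_probs)), key=lambda i: class_probs[i])
    if class_probs[best] >= confidence_threshold:
        return TERRAINS[best]
    # Too uncertain: fall back rather than force a questionable decision.
    return "safe_mode"

print(choose_action([0.02, 0.9, 0.02, 0.02, 0.02, 0.02]))  # prints "concrete"
print(choose_action([0.2, 0.25, 0.15, 0.2, 0.1, 0.1]))     # prints "safe_mode"
```

The key design choice is that low confidence produces an explicit fallback rather than a forced guess, mirroring the "safe mode" behavior described above.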
Incorporation of Hardware and Software Elements
The new framework incorporates both hardware and software elements and can be used with any lower-limb robotic exoskeleton or prosthetic device.
One novel aspect of the framework is the inclusion of cameras. In the study, cameras were both worn on eyeglasses and mounted on the lower-limb prosthesis itself, and the researchers examined how the AI used computer vision data from the two camera types, first separately and then together.
Helen Huang is a co-author of the paper. She is the Jackson Family Distinguished Professor of Biomedical Engineering in the Joint Department of Biomedical Engineering at NC State and the University of North Carolina at Chapel Hill.
“Incorporating computer vision into control software for wearable robotics is an exciting new area of research,” says Huang. “We found that using both cameras worked well, but required a great deal of computing power and may be cost prohibitive. However, we also found that using only the camera mounted on the lower limb worked pretty well — particularly for near-term predictions, such as what the terrain would be like for the next step or two.”
According to Lobaton, the work is applicable to any type of deep-learning system.
“We came up with a better way to teach deep-learning systems how to evaluate and quantify uncertainty in a way that allows the system to incorporate uncertainty into its decision making,” Lobaton says. “This is certainly relevant for robotic prosthetics, but our work here could be applied to any type of deep-learning system.”
Training the AI System
In order to train the AI system, the cameras were placed on able-bodied participants, who then moved through different indoor and outdoor environments. The next step was to have an individual with lower-limb amputation navigate the same environments while wearing the cameras.
“We found that the model can be appropriately transferred so the system can operate with subjects from different populations,” Lobaton says. “That means that the AI worked well even though it was trained by one group of people and used by somebody different.”
The next step is to test the framework in a robotic device.
“We are excited to incorporate the framework into the control system for working robotic prosthetics — that’s the next step,” Huang says.
The team will also work on making the system more efficient by requiring less visual data input and less data processing.
Anthony Tayoun, Co-founder & COO of Dexai Robotics – Interview Series
Anthony is the co-founder and COO of Dexai Robotics, a startup that automates activities in commercial kitchens using flexible robot arms. Prior to Dexai, Anthony worked as a consultant with the Boston Consulting Group, focusing on growth strategies. Anthony holds an MBA from Harvard Business School, and a B.E. in Mechanical Engineering and a B.S. in Mathematics from the American University of Beirut. Outside of work, Anthony enjoys chasing soccer balls and exploring sunken sea treasures.
What is it that attracted you to robotics initially?
I’m amazed by our ability, as humans, to develop “complex tools” out of simple components to improve our standard of living. At the same time, we’re living in a period during which many enabling technologies are being improved by an order of magnitude. Just look back at the past two decades: collaborative robots were created and became affordable for commercial applications, control theory advanced substantially, computer vision is arguably at the superhuman level, machine learning is enabling very rapid decision making, and the internet infrastructure improved enough to connect all of this together. Right now is really the most exciting time for robotics; for the first time in history, robot performance is soon going to exceed our expectations.
You have a very diverse background including being an Associate for the Boston Consulting Group (BCG). One of your projects was designing a prediction tool to detect illicit activity using advanced statistical methods and big data analysis. Could you talk about this project?
At a high level, that project involved analyzing a very large dataset, comprising demographic and behavioral data for commercial establishments, to unearth predictive behavior. We used advanced statistical modeling techniques, such as binomial regression, to compute the probability of illicit activity based on past, seemingly unrelated data. The results were staggering: from data such as types of licenses owned or historical financial performance, we were able to make predictions an order of magnitude more accurate than the baseline.
Can you discuss how you transitioned away from being an Associate of BCG, to launching Dexai Robotics?
My BCG experience enriched my business knowledge tremendously, as I helped companies navigate various strategic and managerial topics. During this experience, I realized that the projects I enjoy the most are those related to market entry or helping clients set up businesses from the ground up, which pushed me in the entrepreneurial direction. I decided to pursue a Master of Business Administration, and joined Harvard Business School. At HBS, I focused on entrepreneurship and related classes, and had the fortune to experiment with a few ideas at the school’s innovation lab. Midway through the MBA, I met Dave Johnson (now Dexai’s co-founder), and together we started developing business plans to commercialize technology that he and others at Harvard and MIT were developing. A few business competitions and tens of customer calls later, Dexai was born!
Dexai Robotics features Alfred, a robot that automates activities in commercial kitchens and the food industry. What are the tasks that Alfred is capable of?
Alfred is currently capable of end-to-end meal assembly for a variety of recipes. Alfred can use regular utensils such as tongs, dishers (scoops), spoons, and ladles to pick and/or scoop almost any ingredient. It takes Alfred ~1 day to “learn” a new ingredient, as long as it can be manipulated using those utensils. Alfred can also “see” and identify different ingredients in the workspace, pass bowls around, and perform simple tasks such as opening a rice cooker or an oven door. In the future, Alfred will learn additional tasks such as operating kitchen equipment (e.g., fryer, grill), and perform ingredient preparation tasks (e.g., cutting, slicing).
Is there a learning curve for a restaurant operator who wishes to install Alfred in their commercial kitchen?
There is a slight learning curve, in line with most other kitchen appliances. The initial setup consists of entering supported recipes into Dexai’s software, specifying ingredient portions, and connecting Alfred to the point-of-sale system. After that, Alfred runs pretty much on its own, with restaurant operators only needing to periodically refill food bins with fresh ingredients. Alfred is designed to simplify the lives of restaurant workers: we made a conscious choice to solve the “difficult” problem ourselves, so that our customers don’t have to worry about that. Alfred’s camera, combined with Dexai’s proprietary AI software, allows for seamless adaptation to the majority of layouts and processes. Further, Alfred can adapt to changes in the environment, such as moving a bowl around, or swapping ingredients, to maximize the operator’s flexibility.
What’s the initial reaction from restaurateurs who test the Alfred robot?
That’s a very interesting question because the reaction progresses very quickly. The universal initial reaction is to take out your phone and start snapping pictures and videos. There’s something really magical about a robotic arm smoothly moving around in a purposeful manner. Maybe it’s because popular culture has us expecting clunky, abrupt motions, similar to when someone makes a “robot impression”. Contrast that with the robot moving very smoothly, picking up utensils, and scooping food the same way a person would do, and your reaction dramatically changes.
Are there any brand names or large restaurants that are currently using or trialing Alfred?
We deployed a couple of successful trials to test the system, and had to pause due to concerns for our employee safety related to COVID-19. Our customer names are all still confidential and our initial focus is on salads and bowls. Later this year, we will have our first customer-facing deployment, so stay tuned!
One of your earliest robotic projects was the Mule Robot which assisted users with transporting everyday merchandise. How did this early experience influence your thinking on robotics?
My biggest lesson from the Mule Robot project was that solving the technical problem is a necessary but not sufficient condition for success. Without customer focus and a robust business model, even the most elegant technical solution won’t leave the research lab. For the Mule Robot, we developed a solution for residential applications, but struggled to take the project forward. Alternatively, thinking about the same problem with a more commercial lens: transporting merchandise inside a building is perfect for “room service” applications in hospitality. Today, a Chicago hotel uses two robots automating room service, made by a startup that successfully commercialized a similar project.
What do you believe a commercial kitchen of the future will look like? How will robots cooperate with, or in some cases replace, kitchen staff?
I believe that kitchen staff will always be needed; hospitality is incomplete without a human touch. Regarding the kitchen of the future, the answer really depends on how far in the future we’re looking. In the short and medium term, we’ll see dramatic efficiency increases in different areas of the kitchen, either through automated single-use equipment such as sushi rollers and vegetable slicers, or through end-to-end flexible automation such as ingredient assembly through Dexai’s Alfred. Longer term, in 10 years or so, the commercial kitchen will capitalize on efficiencies by combining all these solutions, and will feature novel cooking techniques instead of only efficiency gains. To illustrate this point, imagine a circular, vertically stacked serving counter operated by a robot at the center which can reach inside the oven and make changes to the meal while it cooks. Eventually, the target is to get from raw ingredients to prepared meals through the smallest and most efficient operation.
Is there anything else that you would like to share about Dexai Robotics or Alfred?
We’re really excited to have Alfred’s first public appearance this year. Especially given the health crisis that our world is suffering from, securing access to prepared food is a necessity. We look forward to a future where everyone has access to affordable, healthy foods!
Thank you for the fantastic interview. I look forward to the day when we see different versions of Alfred in commercial kitchens everywhere. Anyone who wishes to learn more should visit Dexai Robotics.
Computer Graphics Technology Adapted for Soft Robotics
Scientists from the University of California, Los Angeles (UCLA) and Carnegie Mellon University have adapted sophisticated computer graphics technology for soft robotics. They used the same technology that motion-picture animators and video game developers rely on to create very detailed images, such as hair and fabric in animated films. It is now being used by the scientists to simulate soft, limbed robots and their movements.
The work was published in Nature Communications on May 6. The paper is titled “Dynamic Simulation of Articulated Soft Robots.”
Khalid Jawed is the study author and an assistant professor of mechanical and aerospace engineering at UCLA Samueli School of Engineering.
“We have achieved faster than real-time simulation of soft robots, and this is a major step toward such robots that are autonomous and can plan out their actions on their own,” said Jawed. “Soft robots are made of flexible material which makes them intrinsically resilient against damage and potentially much safer in interaction with humans. Prior to this study, predicting the motion of these robots has been challenging because they change shape during operation.”
DER and FEM Technologies
An algorithm called discrete elastic rods (DER) is often used in movie-making in order to animate free-flowing objects. In just a fraction of a second, DER is capable of predicting hundreds of movements.
The researchers set out to use DER to develop a physics engine capable of simulating the movements of bio-inspired robots. They also wanted to use it for robots that exist in difficult environments, like those developed for Mars or underwater.
The finite element method (FEM) is also an algorithm-based technology, and it can simulate the movements of solid, rigid robots. However, FEM is not ideal for soft, natural movements at the required level of detail. Moreover, FEM demands substantial computational power and long computation times.
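DER-style simulation treats a rod as a chain of nodes whose stretching and bending energies drive the motion. As a rough, self-contained illustration of that energy-based idea (a toy model, not the authors' implementation; all parameters and names here are made up), a 2D chain of nodes can be stepped forward in time like this:

```python
import math

def simulate_rod(nodes, steps=2000, dt=1e-3, k_stretch=100.0,
                 k_bend=5.0, mass=0.01, damping=0.999):
    """Toy 2D analogue of a discrete-elastic-rod simulation: a chain
    of nodes with stretching springs, a bending penalty, and gravity."""
    n = len(nodes)
    x = [list(p) for p in nodes]
    v = [[0.0, 0.0] for _ in range(n)]
    rest = [math.dist(nodes[i], nodes[i + 1]) for i in range(n - 1)]
    for _ in range(steps):
        f = [[0.0, -9.81 * mass] for _ in range(n)]  # gravity per node
        # Stretching: springs between consecutive nodes.
        for i in range(n - 1):
            dx = x[i + 1][0] - x[i][0]
            dy = x[i + 1][1] - x[i][1]
            length = math.hypot(dx, dy)
            s = k_stretch * (length - rest[i]) / length
            f[i][0] += s * dx; f[i][1] += s * dy
            f[i + 1][0] -= s * dx; f[i + 1][1] -= s * dy
        # Bending: pull each node toward its neighbors' midpoint.
        for i in range(1, n - 1):
            for c in range(2):
                f[i][c] += k_bend * (x[i - 1][c] - 2.0 * x[i][c] + x[i + 1][c])
        # Semi-implicit Euler update; node 0 stays clamped (cantilever).
        for i in range(1, n):
            for c in range(2):
                v[i][c] = damping * (v[i][c] + dt * f[i][c] / mass)
                x[i][c] += dt * v[i][c]
    return x

# A horizontal rod clamped at its left end sags under gravity:
rod = [(0.1 * i, 0.0) for i in range(6)]
final = simulate_rod(rod)
```

Even this crude version shows why the approach is fast: each time step costs only a linear pass over the nodes, which is what lets DER-based engines run faster than real time on far richer energy models.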
In order to develop and simulate soft robots, roboticists have relied on trial-and-error methods.
Carmel Majidi is an associate professor of mechanical engineering in Carnegie Mellon’s College of Engineering.
“Robots made out of hard and inflexible materials are relatively easy to model using existing computer simulation tools,” said Majidi. “Until now, there haven’t been good software tools to simulate robots that are soft and squishy. Our work is one of the first to demonstrate how soft robots can be successfully simulated using the same computer graphics software that has been used to model hair and fabrics in blockbuster films and animated movies.”
The researchers began to collaborate in Majidi’s Soft Machines Lab over three years ago. Their most recent project involved Jawed running simulations in his research lab at UCLA and Majidi performing physical experiments to confirm the simulation results.
The simulation tool drastically reduces the time it takes to get a soft robot to the point of application.
Support from the Army Research Office
The research was partly funded by the Army Research Office, which is a part of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory.
Dr. Samuel Stanton is a program manager with the Army Research Office.
“Experimental advances in soft-robotics have been outpacing theory for several years,” said Stanton. “This effort is a significant step in our ability to predict and design for dynamics and control in highly deformable robots operating in confined spaces with complex contacts and constantly changing environments.”
The technology is now being applied to other kinds of soft robots, including robots modeled on the movements of bacteria and starfish, which could be used in oceanography tasks such as monitoring seawater conditions or inspecting marine life.