A new 3D-printed prosthetic hand paired with an AI-driven control system has been developed by the Biological Systems Engineering Lab at Hiroshima University in Japan. The technology could dramatically change the way prosthetics work, and it marks another step toward integrating the human body with artificial intelligence.
The 3D-printed prosthetic hand is paired with a computer interface, making it the lightest and cheapest model yet, and the most responsive to motion intent seen so far. Previous models were typically made from metal, which made them both heavier and more expensive. The new system works via a neural network trained to recognize certain combined signals, which the engineers on the project have named “muscle synergies.”
The prosthetic hand has five independent fingers capable of complex movements. Compared to previous models, the fingers have a greater range of motion and can all move at the same time, which makes the hand usable for everyday tasks such as holding bottles and pens. To move the hand or fingers in a certain way, the user only has to imagine the motion. Professor Toshio Tsuji of the Graduate School of Engineering at Hiroshima University explained how a user controls the 3D-printed hand.
“The patient just thinks about the motion of the hand and then the robot automatically moves. The robot is like a part of his body. You can control the robot as you want. We will combine the human body and machine like one living body.”
The 3D-printed hand works via electrodes in the prosthetic that measure electrical signals coming from the nerves through the skin, much as an ECG measures heart activity. The measured signals are sent to a computer within five milliseconds, the computer recognizes the desired movement, and the resulting command is sent back to the hand.
A neural network, named the Cybernetic Interface, helps the computer learn the different complex movements. It can differentiate between the five fingers, allowing individual movements. Professor Tsuji also spoke about this aspect of the new technology.
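The idea of mapping combined muscle signals to individual finger motions can be sketched in miniature. Everything below is illustrative: the signal patterns, the nearest-centroid rule, and the five-channel layout are all assumptions for the sketch, not the actual Cybernetic Interface, which uses a trained neural network on real EMG data.

```python
import random

FINGERS = ["thumb", "index", "middle", "ring", "little"]

def make_sample(finger_idx, noise=0.1, rng=random):
    # Hypothetical 5-channel "muscle synergy" pattern: strongest
    # activation on the finger's own channel, plus random noise.
    return [
        (1.0 if ch == finger_idx else 0.2) + rng.uniform(-noise, noise)
        for ch in range(5)
    ]

def train_centroids(samples_per_finger=50):
    # Average many noisy samples per finger into one centroid each.
    rng = random.Random(0)
    centroids = []
    for idx in range(5):
        samples = [make_sample(idx, rng=rng) for _ in range(samples_per_finger)]
        centroids.append([sum(col) / len(col) for col in zip(*samples)])
    return centroids

def classify(signal, centroids):
    # Pick the finger whose centroid is closest to the incoming signal.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(5), key=lambda i: dist(signal, centroids[i]))
    return FINGERS[best]

centroids = train_centroids()
print(classify(make_sample(2, noise=0.0), centroids))  # -> middle
```

A real system would replace the centroid rule with a trained network and feed it features extracted from live electrode readings, but the input-to-motion mapping is the same shape.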
“This is one of the distinctive features of this project. The machine can learn simple basic motions and then combine and then produce complicated motions.”
The technology was tested with seven people, one of whom was an amputee who had worn a prosthesis for 17 years. The participants performed daily tasks, achieving a 95% accuracy rate for single simple motions and a 93% rate for complex movements. The prosthetics used in this test were trained on only five movements for each finger, so many more complex movements could be added in the future. Even with just these five trained movements, the amputee participant was able to pick up and put down items like bottles and notebooks.
There are numerous possibilities for this technology. It could reduce cost while providing highly functional prosthetic hands to amputees. Challenges remain, however, such as muscle fatigue and the software's ability to recognize a larger set of complex movements.
This work was completed by the Hiroshima University Biological Systems Engineering Lab along with patients from the Robot Rehabilitation Center at the Hyogo Institute of Assistive Technology, Kobe. The company Kinki Gishi created the socket used on the arm of the amputee patient.
What is Big Data?
“Big Data” is one of the commonly used buzz words of our current era, but what does it really mean?
Here’s a quick, simple definition of big data. Big data is data that is too large and complex to be handled by traditional data processing and storage methods. While that’s a quick definition you can use as a heuristic, it would be helpful to have a deeper, more complete understanding of big data. Let’s take a look at some of the concepts that underlie big data, like storage, structure, and processing.
How Big Is Big Data?
It isn’t as simple as saying “any data over size X is big data.” The environment the data is being handled in is an extremely important factor in determining what qualifies as big data. The size data needs to be in order to count as big data is dependent on the context, or the task the data is being used for. Two datasets of vastly different sizes can be considered “big data” in different contexts.
To be more concrete, if you try to send a 200-megabyte file as an email attachment, you won’t be able to, since most email providers cap attachments at a small fraction of that size. In this context, the 200-megabyte file could be considered big data. In contrast, copying a 200-megabyte file to another device on the same LAN may take hardly any time at all, and in that context it wouldn’t be regarded as big data.
However, let’s assume that 15 terabytes worth of video need to be pre-processed for use in training computer vision applications. In this case, the video files take up so much space that even a powerful computer would take a long time to process them all, and so the processing would normally be distributed across multiple computers linked together in order to decrease processing time. These 15 terabytes of video data would definitely qualify as big data.
Types Of Big Data Structures
Big data comes in three categories of structure: unstructured, semi-structured, and structured data.
Unstructured data is data with no definable structure, meaning the data essentially sits in one large pool. An example of unstructured data would be a database full of unlabeled images.
Semi-structured data is data that doesn’t have a formal structure but does exist within a loose one. For example, email data might count as semi-structured, because you can refer to the data contained in individual emails, but no formal data patterns have been established.
Structured data is data that has a formal structure, with data points categorized by different features. One example of structured data is an Excel spreadsheet containing contact information like names, emails, phone numbers, and websites.
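The three categories are easy to see side by side in code. The example data below is invented for illustration; only the shape of each representation matters.

```python
import json

# Unstructured: raw bytes with no schema at all (e.g., image data).
unstructured = b"\x89PNG\r\n\x1a\n..."  # opaque blob, truncated for brevity

# Semi-structured: record-like, but fields vary from item to item,
# much like the email example above (one message lacks a subject).
semi_structured = [
    {"from": "alice@example.com", "subject": "Hi", "attachments": 2},
    {"from": "bob@example.com", "body": "See you at 3."},
]
as_json = json.dumps(semi_structured)  # serializes despite varying fields

# Structured: every row follows the same fixed schema, as in a
# spreadsheet of (name, email, phone, website) columns.
structured = [
    ("Alice", "alice@example.com", "555-0100", "example.com"),
    ("Bob", "bob@example.com", "555-0101", "example.org"),
]
```

Notice that only the structured rows could be loaded directly into a traditional relational table; the other two need extra processing first.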
Metrics For Assessing Big Data
Big data can be analyzed in terms of three different metrics: volume, velocity, and variety.
Volume refers to the size of the data. Average dataset sizes keep increasing. For example, the largest hard drive available in 2006 held 750 GB. In contrast, Facebook is thought to generate over 500 terabytes of data per day, and the largest consumer hard drive available today holds 16 terabytes. What qualifies as big data in one era may not in another. More data is generated today because more and more of the objects around us are equipped with sensors, cameras, microphones, and other data collection devices.
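The gap between those two figures can be made concrete with a little arithmetic, using only the numbers quoted above (and decimal units, 1 TB = 1000 GB):

```python
# How many of 2006's largest hard drives would one day of
# Facebook's reported data generation fill?
facebook_tb_per_day = 500        # reported daily data generation
largest_2006_drive_gb = 750      # largest hard drive in 2006

drives_per_day = facebook_tb_per_day * 1000 / largest_2006_drive_gb
print(round(drives_per_day))  # -> 667
```

Roughly 667 of 2006's biggest drives per day, which is a useful intuition for why "big data" is a moving target.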
Velocity refers to how fast data is moving, or to put that another way, how much data is generated within a given period of time. Social media streams generate hundreds of thousands of posts and comments every minute, while your own email inbox will probably have much less activity. Big data streams are streams that often handle hundreds of thousands or millions of events in more or less real-time. Examples of these data streams are online gaming platforms and high-frequency stock trading algorithms.
Variety refers to the different types of data contained within the dataset. Data can come in many formats, like audio, video, text, photos, or serial numbers. Traditional databases are generally formatted to handle one, or just a couple, of these types; in other words, they are structured to hold data that is fairly homogeneous and of a consistent, predictable structure. As applications become more diverse, feature-rich, and widely used, databases have had to evolve to store more types of data. Databases designed for unstructured data are well suited to big data, as they can hold multiple data types that aren't related to each other.
Methods Of Handling Big Data
There are a number of platforms and tools designed to facilitate the analysis of big data. Big data pools need to be analyzed to extract meaningful patterns, a task that can prove quite challenging with traditional data analysis tools. In response, a variety of companies have created big data analysis tools, including systems like Zoho Analytics, Cloudera, and Microsoft Power BI.
AI Used To Recreate Human Brain Waves In Real Time
Recently, a team of researchers created a neural network that can recreate human brain waves in real time. As reported by Futurism, the research team, made up of researchers from the Moscow Institute of Physics and Technology (MIPT) and the Neurobotics corporation, was able to visualize a person's brain waves by translating them with a computer vision neural network and rendering them as images.
The results of the study were published in bioRxiv, and a video was posted alongside the research paper, which showed how the network reconstructed images. The MIPT research team hopes that the study will help them create post-stroke rehabilitation systems that are controlled by brain waves. In order to create rehabilitative devices for stroke victims, neurobiologists have to study the processes the brain uses to encode information. A critical part of understanding these processes is studying how people perceive video information. According to ZME Science, the current methods of extracting images from brain waves typically analyze the signals originating from the neurons, through the use of implants, or extract images using functional MRI.
The research team from Neurobotics and MIPT utilized electroencephalography, or EEG, which logs brain waves collected from electrodes placed on the scalp. In such setups, people wear devices that track their neural signals while they watch a video or look at pictures. Analyzing this brain activity yielded input features for a machine learning system, which was able to reconstruct the images a person saw and render them on a screen in real time.
The experiment was divided into multiple parts. In the first phase, the researchers had the subjects watch 10-second clips of YouTube videos for around 20 minutes. The videos were divided into five categories: motorsports, human faces, abstract shapes, waterfalls, and moving mechanisms. Each category could contain a variety of objects; the motorsports category, for example, contained clips of snowmobiles and motorcycles.
The research team analyzed the EEG data that was collected while the participants watched the videos. The EEGs displayed specific patterns for each of the different video clips, and this meant that the team could potentially interpret what content the participants were seeing on videos in more or less real-time.
In the second phase of the experiment, three categories were selected at random, and two neural networks were created to work with them. The first network generated random images belonging to one of the three categories, refining them out of random noise. Meanwhile, the second network generated noise based on the EEG scans. The outputs of the two networks were compared, and the randomly generated images were updated based on the EEG-derived noise until they became similar to the images the test subjects were actually seeing.
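The iterative refinement loop at the heart of that setup can be caricatured in a few lines. Everything here is an illustrative stand-in: `decode_eeg`, the vectors, and the update rule are invented, whereas the real system uses two trained neural networks operating on actual EEG recordings and images.

```python
def decode_eeg(eeg):
    # Stand-in for the network that maps EEG signals to image features.
    return [0.5 * x for x in eeg]

def refine(image, target, step=0.1, iters=100):
    # Repeatedly nudge the generated "image" (a feature vector here)
    # toward the features decoded from the EEG until they are similar,
    # mirroring the compare-and-update loop described above.
    for _ in range(iters):
        image = [img + step * (t - img) for img, t in zip(image, target)]
    return image

eeg = [0.2, 1.6, 0.8]               # fake EEG reading
target = decode_eeg(eeg)            # features the EEG implies
generated = refine([0.0, 0.0, 0.0], target)
# After refinement, generated is close to target
```

The actual networks refine a full image rather than a three-number vector, but the loop structure (generate, compare against the EEG-derived signal, update) is the same.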
After the system had been designed, the researchers tested the program’s ability to visualize brain waves by showing the test subjects videos they hadn’t yet seen from the same categories. The EEGs generated during the second round of viewings were given to the networks, and the networks were able to generate images that could be easily placed into the right category 90% of the time.
The researchers noted that the results of their experiment were surprising because for a long time it was assumed that there wasn’t sufficient information in an EEG to reconstruct the images observed by people. However, the results of the research team proved that it can be done.
Vladimir Konyshev, the head of the Neurorobotics Lab at MIPT, explained that although the research team is currently focused on creating assistive technologies for people with disabilities, the technology they are working on could eventually be used to create neural control devices for the general population. Konyshev explained to TechXplore:
“We’re working on the Assistive Technologies project of Neuronet of the National Technology Initiative, which focuses on the brain-computer interface that enables post-stroke patients to control an exoskeleton arm for neurorehabilitation purposes, or paralyzed patients to drive an electric wheelchair, for example. The ultimate goal is to increase the accuracy of neural control for healthy individuals, too.”
AI Engineers Develop Method That Can Detect Intent Of Those Spreading Misinformation
Dealing with misinformation in the digital age is a complex problem. Not only does misinformation have to be identified, tagged, and corrected, but the intent of those responsible for making the claim should also be distinguished. A person may unknowingly spread misinformation, or just be giving their opinion on an issue even though it is later reported as fact. Recently, a team of AI researchers and engineers at Dartmouth created a framework that can be used to derive opinion from “fake news” reports.
As ScienceDaily reports, the Dartmouth team’s study was recently published in the Journal of Experimental & Theoretical Artificial Intelligence. While previous studies have attempted to identify fake news and fight deception, this might be the first study that aimed to identify the intent of the speaker in a news piece. While a true story can be twisted into various deceptive forms, it’s important to distinguish whether or not deception was intended. The research team argues that intent matters when considering misinformation, as deception is only possible if there was intent to mislead. If an individual didn’t realize they were spreading misinformation or if they were just giving their opinion, there can’t be deception.
Eugene Santos Jr., an engineering professor at Dartmouth’s Thayer School of Engineering, explained to ScienceDaily why their model attempts to distinguish deceptive intent:
“Deceptive intent to mislead listeners on purpose poses a much larger threat than unintentional mistakes. To the best of our knowledge, our algorithm is the only method that detects deception and at the same time discriminates malicious acts from benign acts.”
In order to construct their model, the research team analyzed the features of deceptive reasoning. The resulting algorithm can distinguish intent to deceive from other forms of communication by focusing on discrepancies between a person's past arguments and their current statements. The model needs large amounts of data to measure how a person deviates from past arguments. The training data consisted of a survey of opinions on controversial topics, in which over 100 people gave their opinions, along with reviews of 20 different hotels, comprising 400 fictitious reviews and 800 real ones.
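The core notion of scoring how far a current statement deviates from someone's past arguments can be illustrated with a deliberately crude proxy. This toy uses word overlap (Jaccard similarity); the statements are invented, and the Dartmouth framework models deceptive reasoning far more richly than any vocabulary comparison.

```python
def jaccard(a, b):
    # Word-set overlap between two statements, from 0 to 1.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# A person's (invented) past arguments on a topic.
past_arguments = [
    "renewable energy reduces long term costs",
    "solar power reduces emissions and costs",
]

def deviation(statement):
    # Deviation from the closest past argument:
    # 0 = fully consistent with something said before, 1 = entirely novel.
    return 1 - max(jaccard(statement, p) for p in past_arguments)

print(deviation("solar power reduces emissions and costs"))  # -> 0.0
```

A statement that contradicts or abandons everything the person has previously argued would score near 1, which is the kind of discrepancy signal the real model looks for, albeit with much more sophisticated machinery.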
According to Santos, the framework developed by the researchers could be refined and applied by news organizations and readers to analyze the content of “fake news” articles. Readers could examine articles for the presence of opinions and determine for themselves whether a logical argument has been used. Santos also said the team wants to examine the impact of misinformation and the ripple effects it has.
Popular culture often depicts non-verbal behaviors like facial expressions as indicators that someone is lying, but the authors of the study note that these behavioral hints aren’t always reliable indicators of lying. Deqing Li, co-author on the paper, explained that their research found that models based on reasoning intent are better indicators of lying than behavioral and verbal differences. Li explained that reasoning intent models “are better at distinguishing intentional lies from other types of information distortion”.
The work of the Dartmouth researchers isn’t the only recent advancement when it comes to fighting misinformation with AI. News articles with clickbait titles often mask misinformation. For example, they often imply one thing happened when another event actually occurred.
As reported by AINews, a team of researchers from both Arizona State University and Penn State University collaborated in order to create an AI that could detect clickbait. The researchers asked people to write their own clickbait headlines and also wrote a program to generate clickbait headlines. Both forms of headlines were then used to train a model that could effectively detect clickbait headlines, regardless of whether they were written by machines or people.
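The data-pooling idea behind that study (combining human-written and machine-generated clickbait into one training set) can be sketched with a toy word-frequency scorer. All the headlines below are invented examples, and the scorer is a stand-in for the far stronger models the researchers actually trained.

```python
from collections import Counter

# Invented examples of the two clickbait sources plus normal headlines.
human_clickbait = ["You won't believe what happened next",
                   "10 secrets doctors don't want you to know"]
machine_clickbait = ["This one trick will change your life forever",
                     "What she did next will shock you"]
normal = ["City council approves new budget",
          "Study measures rainfall trends in the region"]

# The key move from the study: pool both clickbait sources for training.
clickbait = human_clickbait + machine_clickbait

def word_counts(headlines):
    c = Counter()
    for h in headlines:
        c.update(h.lower().split())
    return c

cb_counts, ok_counts = word_counts(clickbait), word_counts(normal)

def clickbait_score(headline):
    # Positive score -> vocabulary more typical of clickbait.
    return sum(cb_counts[w] - ok_counts[w] for w in headline.lower().split())

print(clickbait_score("You won't believe this trick"))  # positive
```

Adding machine-generated headlines simply enlarges the clickbait side of the training pool, which is the augmentation effect Lee describes below.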
According to the researchers, their algorithm was around 14.5% more accurate at detecting clickbait titles than previous AIs. The lead researcher on the project, Dongwon Lee, an associate professor at the College of Information Sciences and Technology at Penn State, explained how the experiment demonstrates the utility of generating data with an AI and feeding it back into a training pipeline.
“This result is quite interesting as we successfully demonstrated that machine-generated clickbait training data can be fed back into the training pipeline to train a wide variety of machine learning models to have improved performance,” explained Lee.