
Artificial Neural Networks

Legal Tech Company Seeks To Bring AI To Lawyers


Artificial intelligence programs are being used in more applications and more industries all the time. The legal field is an area that could benefit substantially from AI, given the massive number of documents that must be reviewed for any given case. As reported by the Observer, one company is aiming to bring AI to the legal field, with its CTO seeing a wide variety of uses for the technology.

Lane Lillquist is the co-founder and CTO of InCloudCounsel, a legal tech firm. Lillquist believes that AI can be used to help lawyers be more efficient and accurate in their jobs. For instance, the massive amount of data that has to be processed by lawyers is usually better processed by a machine learning algorithm, and the insights generated by the AI could be used to make tasks like contract review more accurate. In this sense, the role for AI in the legal space is much like the various other tech tools that we use all the time, things like automatic spelling correction and document searching.

Because of the narrow role Lillquist expects AI to take, he doesn’t see much need to worry that AI will end up replacing lawyers, at least not anytime soon. For the near future, he expects AI to handle tasks like automating the high-volume, repetitive work, such as data extraction and categorization, that prevents lawyers from focusing their attention on more important matters. Human lawyers will gain more time and bandwidth to focus on more complex tasks and different forms of work. Essentially, AI could make lawyers more impactful at their jobs, not less.

Lillquist has made some predictions about the role of AI in the near future of the legal field. He sees AI accomplishing tasks like automatically filling in certain forms or searching documents for specific terms and phrases relevant to a case.

One example of an application that fills in legal documents is the company DoNotPay, which promises to help users of the platform “fight corporations and beat bureaucracy” with just a few button presses. The app operates by having a chatbot ascertain the legal problems of its users, and it then generates and submits paperwork based on the provided answers. While the app is impressive, Lillquist doesn’t think that apps like DoNotPay will end up replacing lawyers for a long time.

Lillquist makes a comparison to how ATMs affected the banking industry: because ATMs made it much easier for banks to open small branches in more remote locations, the number of tellers employed by banks actually increased.

Lillquist does think that AI will make the legal profession constantly change and evolve, requiring lawyers to possess a more varied skill set in order to make use of AI-enabled technologies and stay competitive in the job market. Adjacent kinds of jobs could also be created; for example, the number of data analysts who can analyze legal and business-related datasets and propose plans to improve law practices might increase.

Lillquist explained to the Observer:

“We’re already seeing a rise of legal technology companies providing alternative legal services backed by AI and machine learning that are enhancing how lawyers practice law. Law firms will begin building their own engineering departments and product teams, too.”

While Lillquist isn’t worried that AI will put lawyers out of jobs, he is somewhat worried about the ways AI can be misused. In particular, he worries that legal AI could be employed by people who don’t fully understand the law, putting themselves at legal risk.



AI System Discovers Blueprints for Artificial Proteins


A team of researchers from the Pritzker School of Molecular Engineering (PME) at the University of Chicago has recently created an AI system that can design entirely new, artificial proteins by analyzing stores of big data.

Proteins are macromolecules essential for the construction of tissues in living things, and critical to the life of cells in general. Cells use proteins as chemical catalysts to drive various chemical reactions and to carry out complex tasks. If scientists can figure out how to reliably engineer artificial proteins, it could open the door to new ways of capturing carbon, new methods of harvesting energy, and new disease treatments. Artificial proteins have the power to dramatically alter the world we live in. As reported by EurekAlert!, a recent breakthrough by researchers at PME has brought scientists closer to those goals. The PME researchers made use of machine learning algorithms to develop a system capable of generating novel forms of protein.

The research team trained machine learning models on data pulled from various genomic databases. As the models learned, they began to distinguish common underlying patterns, simple rules of design, that enable the creation of artificial proteins. When the researchers took those patterns and synthesized the corresponding proteins in the lab, they found that the artificial proteins catalyzed chemical reactions approximately as effectively as naturally occurring proteins.

According to Rama Ranganathan, the Joseph Regenstein Professor at PME, the research team found that genome data contains a massive amount of information regarding the basic functions and structures of proteins. By utilizing machine learning to recognize these common structures, the researchers were “able to bottle nature’s rules to create proteins ourselves.”

The researchers focused on metabolic enzymes for this study, specifically a family of proteins called chorismate mutase. This protein family is necessary for life in a wide variety of plants, fungi, and bacteria.

Ranganathan and collaborators realized that genome databases contained insights just waiting to be discovered by scientists, but traditional methods of determining the rules governing protein structure and function have had only limited success. The team set out to design machine learning models capable of revealing these design rules. The models’ findings imply that new artificial sequences can be created by conserving the amino acid positions, and the evolutionary correlations between pairs of amino acids, found in natural sequences.
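The kind of sequence statistics involved can be illustrated with a toy example. The sketch below is a simplified illustration only, not the PME team’s actual model: the tiny five-sequence alignment is invented for demonstration, and it measures per-position conservation plus the mutual information between two alignment columns as a simple proxy for evolutionary couplings between amino acid pairs.

```python
import numpy as np
from collections import Counter

# Toy multiple sequence alignment (hypothetical sequences, for illustration only).
msa = [
    "MKVLA",
    "MKVLG",
    "MRVIA",
    "MKVLA",
    "MRVIG",
]

n_seq = len(msa)
n_pos = len(msa[0])

# Per-position conservation: frequency of the most common amino acid.
conservation = []
for i in range(n_pos):
    column = [seq[i] for seq in msa]
    most_common_freq = Counter(column).most_common(1)[0][1] / n_seq
    conservation.append(most_common_freq)

# Pairwise coupling: mutual information between two alignment columns,
# a rough stand-in for the correlations the models learn at scale.
def mutual_information(col_a, col_b):
    joint = Counter(zip(col_a, col_b))
    pa, pb = Counter(col_a), Counter(col_b)
    mi = 0.0
    for (a, b), n_ab in joint.items():
        p_ab = n_ab / n_seq
        mi += p_ab * np.log2(p_ab / ((pa[a] / n_seq) * (pb[b] / n_seq)))
    return mi

col1 = [s[1] for s in msa]  # K/R column
col3 = [s[3] for s in msa]  # L/I column
print("conservation per position:", conservation)
print("MI between positions 1 and 3:", mutual_information(col1, col3))
```

In this toy alignment, positions 1 and 3 always co-vary (K with L, R with I), so their mutual information is high; a generative model preserving both conservation and such couplings can then propose new sequences consistent with them.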

The team then created synthetic genes encoding the amino acid sequences for these proteins. They cloned the synthetic genes into bacteria and found that the bacteria used the synthetic proteins in their cellular machinery, where they functioned almost exactly like natural proteins.

According to Ranganathan, the simple rules their AI identified can be used to create artificial proteins of incredible complexity and variety. As Ranganathan explained to EurekAlert!:

“The constraints are much smaller than we ever imagined they would be. There is a simplicity in nature’s design rules, and we believe similar approaches could help us search for models for design in other complex systems in biology, like ecosystems or the brain.”

Ranganathan and collaborators want to generalize their models, creating a platform scientists can use to better understand how proteins are constructed and what effects they have. They hope their AI systems will enable other scientists to discover proteins that can tackle important issues like climate change. Ranganathan and Associate Professor Andrew Ferguson have created a company dubbed Evozyne, which aims to commercialize the technology and promote its use in fields like agriculture, energy, and the environment.

Understanding the commonalities between proteins, and the relationships between structure and function, could also assist in the creation of new drugs and forms of therapy. Though protein folding has long been considered an incredibly difficult problem for computers to crack, insights from models like the ones produced by Ranganathan’s team could help speed up these calculations, facilitating the creation of new drugs based on these proteins. Drugs could be developed that block the creation of proteins within viruses, potentially aiding in the treatment of even novel viruses like the COVID-19 coronavirus.

Ranganathan and the rest of the research team still need to understand how and why their models work and how they produce reliable protein blueprints. The research team’s next goal is to better understand what attributes the models are taking into account to arrive at their conclusions.



AI Model Can Take Blurry Images And Enhance Resolution By 64 Times


Researchers from Duke University have developed an AI model capable of taking highly blurry, pixelated images and rendering them in high detail. According to TechXplore, the model can take relatively few pixels and scale the images up to create realistic-looking faces with approximately 64 times the resolution of the original image. The model hallucinates, or imagines, features that fall between the pixels of the original image.

The research is an example of super-resolution. As Cynthia Rudin from Duke University’s computer science team explained to TechXplore, the project sets a record for super-resolution, as images have never before been created with such fidelity from such a small sample of starting pixels. The researchers were careful to emphasize that the model doesn’t actually recreate the face of the person in the original, low-quality image. Instead, it generates new faces, filling in details that weren’t there before. For this reason, the model couldn’t be used for anything like security systems, as it wouldn’t be able to turn out-of-focus images into images of a real person.

Traditional super-resolution techniques operate by guessing which pixels are needed to turn a low-resolution image into a high-resolution one, based on images the model has seen beforehand. Because the added pixels are the result of guesses, not all of them will match their surroundings, and certain regions of the image may look fuzzy or warped. The Duke researchers trained their AI model differently: it starts from a low-resolution image and adds detail over time, referencing high-resolution AI-generated faces as examples. The model searches over AI-generated faces, trying to find ones that resemble the target image once the generated faces are scaled down to the target’s size.
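That “generate, downscale, compare” criterion can be sketched at toy scale. The snippet below is a simplified illustration of the idea only: the sinusoidal “generator”, the 32×32 image size, and the plain random search are assumptions for demonstration, whereas the actual PULSE system optimizes over the latent space of a trained GAN with gradient-based methods.

```python
import numpy as np

rng = np.random.default_rng(0)

def downscale(img, factor):
    """Average-pool a square image by an integer factor."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def toy_generator(z):
    """Stand-in for a GAN generator: maps a latent vector to a 32x32 'image'.
    (A real system would use a trained network trained on faces.)"""
    basis = np.sin(np.outer(z, np.linspace(0, np.pi, 32 * 32)))
    return basis.sum(axis=0).reshape(32, 32)

# Low-resolution 8x8 target we want to "super-resolve".
target_lr = downscale(toy_generator(rng.normal(size=16)), 4)

# Search over latents: keep the candidate whose downscaled output best
# matches the low-resolution target -- PULSE's core selection criterion.
best_loss, best_img = np.inf, None
for _ in range(200):
    z = rng.normal(size=16)
    candidate = toy_generator(z)
    loss = np.mean((downscale(candidate, 4) - target_lr) ** 2)
    if loss < best_loss:
        best_loss, best_img = loss, candidate

print("best downscaling loss:", best_loss)
print("output resolution:", best_img.shape)
```

The key design point is that fidelity is only enforced after downscaling: any high-resolution detail the generator invents is acceptable as long as the downscaled result matches the blurry input.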

The research team used a generative adversarial network (GAN) to handle the creation of new images. A GAN is actually two neural networks, both trained on the same dataset and pitted against one another. One network is responsible for generating fake images that mimic the real images in the training dataset, while the second network is responsible for distinguishing the fake images from the genuine ones. The first network is notified when its images have been identified as fake, and it improves until its fake images are, ideally, indistinguishable from the genuine ones.
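The adversarial loop described above can be shown at miniature scale. The sketch below is a minimal hand-rolled illustration, not the Duke team’s model: the “images” are just scalar samples from a normal distribution around 4, both networks are single affine/logistic units, and the learning rates and step counts are arbitrary demonstration choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + b tries to turn noise into samples resembling the
# "real" data N(4, 1). Discriminator D(x) = sigmoid(w*x + c) tries to output
# 1 for real samples and 0 for generated ones.
a, b = 1.0, 0.0       # generator parameters
w, c = 0.1, 0.0       # discriminator parameters
lr, batch = 0.05, 64

initial_gap = abs(b - 4.0)  # generated mean starts at b = 0

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: minimize -log D(real) - log(1 - D(fake)).
    s_r = sigmoid(w * real + c)
    s_f = sigmoid(w * fake + c)
    w -= lr * np.mean(-(1 - s_r) * real + s_f * fake)
    c -= lr * np.mean(-(1 - s_r) + s_f)

    # Generator step: minimize -log D(fake), i.e. try to fool D.
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = -(1 - sigmoid(w * fake + c)) * w   # dLoss/dFake
    a -= lr * np.mean(d_fake * z)
    b -= lr * np.mean(d_fake)

final_gap = abs(np.mean(a * rng.normal(0.0, 1.0, 1000) + b) - 4.0)
print("generated mean moved toward the real mean:", final_gap < initial_gap)
```

Even in this one-dimensional toy, the discriminator’s feedback pushes the generator’s outputs toward the real data distribution, which is the same pressure that drives a face-generating GAN toward photorealism.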

The researchers have dubbed their super-resolution model PULSE, and the model consistently produces high-quality images, even when given images so blurry that other super-resolution methods fail. The model is even capable of making realistic-looking faces from images in which the facial features are almost indistinguishable. For instance, given a 16×16 image of a face, it can create a 1024×1024 image. More than a million pixels are added during this process, filling in details like strands of hair, wrinkles, and even lighting. When the researchers had people rate 1,440 PULSE-generated images against images generated by other super-resolution techniques, the PULSE-generated images consistently scored the best.

While the researchers used their model on images of people’s faces, the same techniques could be applied to almost any object. Low-resolution images of various objects could be used to create high-resolution images of that set of objects, opening up possible applications in a variety of industries and fields, including microscopy, satellite imagery, education, manufacturing, and medicine.



New Research Suggests Artificial Brains Could Benefit From Sleep


New research from Los Alamos National Laboratory suggests that artificial brains, much like living brains, benefit from periods of rest.

The research will be presented at the Women in Computer Vision Workshop in Seattle on June 14. 

Yijing Watkins is a Los Alamos National Laboratory computer scientist. 

“We study spiking neural networks, which are systems that learn much as living brains do,” said Watkins. “We were fascinated by the prospect of training a neuromorphic processor in a manner analogous to how humans and other biological systems learn from their environment during childhood development.”

Solving Instability in Network Simulations

Watkins and the team found that continuous periods of unsupervised learning led to instability in the network simulations. However, once the team exposed the networks to states analogous to the waves that living brains experience during sleep, stability was restored.

“It was as though we were giving the neural networks the equivalent of a good night’s rest,” said Watkins.

The team made the discovery when they were working on developing neural networks based on how humans and other biological systems learn to see. The team faced some challenges when it came to stabilizing simulated neural networks that were undergoing unsupervised dictionary training. Unsupervised dictionary training involves classifying objects without having previous examples to use for comparison.

Garrett Kenyon is a computer scientist at Los Alamos and study coauthor.

“The issue of how to keep learning systems from becoming unstable really only arises when attempting to utilize biologically realistic, spiking neuromorphic processors or when trying to understand biology itself,” said Kenyon. “The vast majority of machine learning, deep learning, and AI researchers never encounter this issue because in the very artificial systems they study they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system.”

Sleep as a Last Resort Solution

According to the researchers, exposing the networks to an artificial analog of sleep was something of a last resort for stabilizing them. After experimenting with different types of noise, similar to the static between stations on a radio, they got the best results from waves of Gaussian noise, which span a wide range of frequencies and amplitudes.
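The paper cited below describes the signal as sinusoidally modulated noise. The sketch below shows one way such a “sleep” input could be constructed; it is an illustrative toy only, and the amplitude, frequency, and array sizes are assumptions, not the study’s actual parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

def sleep_noise(n_inputs, n_steps, freq=0.01, amplitude=1.0):
    """Gaussian noise whose standard deviation is modulated by a slow
    sine wave, loosely mimicking slow-wave activity during sleep."""
    t = np.arange(n_steps)
    # Envelope oscillates between 0 and `amplitude`.
    envelope = amplitude * 0.5 * (1.0 + np.sin(2.0 * np.pi * freq * t))
    return envelope * rng.normal(size=(n_inputs, n_steps))

# During training, "wake" phases would present real data to the network,
# while "sleep" phases would present this noise at the inputs instead.
noise = sleep_noise(n_inputs=64, n_steps=1000)
print(noise.shape)  # (64, 1000)
```

The slowly oscillating envelope means the network alternates between near-silent periods and bursts of broadband input, rather than receiving constant static.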

The researchers hypothesized that, during slow-wave sleep, this noise mimics the input received by biological neurons. The results suggest that slow-wave sleep could play a role in ensuring that cortical neurons maintain their stability and do not hallucinate.

The team will now work on implementing the algorithm on Intel’s Loihi neuromorphic chip, hoping that sleep will help it stably process information from a silicon retina camera in real time. If the research determines that artificial brains benefit from sleep, the same is likely true for androids and other intelligent machines.

Source: Using Sinusoidally-Modulated Noise as a Surrogate for Slow-Wave Sleep to Accomplish Stable Unsupervised Dictionary Learning in a Spike-Based Sparse Coding Model, CVPR Women in Computer Vision Workshop, 2020-06-14 (Seattle, Washington, United States)

 
