
Legal Tech Company Seeks To Bring AI To Lawyers


Artificial intelligence programs are being used in more applications and more industries all the time. The legal field is one area that could benefit substantially from AI, given the massive number of documents that have to be reviewed for any given case. As reported by the Observer, one company is aiming to bring AI to the legal field, with its CTO seeing a wide variety of uses for the technology.

Lane Lillquist is the co-founder and CTO of InCloudCounsel, a legal tech firm. Lillquist believes that AI can help lawyers be more efficient and accurate in their jobs. For instance, the massive amount of data lawyers must process is usually better handled by a machine learning algorithm, and the insights generated by the AI could make tasks like contract review more accurate. In this sense, AI's role in the legal space resembles the various other tech tools we use all the time, such as automatic spelling correction and document search.

Because of the narrow role he expects AI to take, Lillquist doesn't see much need to worry that AI will end up replacing lawyers, at least not anytime soon. For the near future, he expects AI to handle high-volume, repetitive tasks such as data extraction and categorization, the kind of work that keeps lawyers from focusing their attention on more important matters. Human lawyers will then have more time and bandwidth for complex tasks and different forms of work. Essentially, AI could make lawyers more impactful at their jobs, not less.

Lillquist has made some predictions regarding the role of AI in the near future of the legal field. He sees AI accomplishing tasks like automatically filling in certain forms or searching documents for specific terms and phrases relevant to a case.

One example of an application that fills in legal documents comes from the company DoNotPay, which promises to help users of the platform “fight corporations and beat bureaucracy” with just a few button presses. The app has a chatbot ascertain the legal problems of its users, then generates and submits paperwork based on the provided answers. While the app is impressive, Lillquist doesn't think that apps like DoNotPay will end up replacing lawyers for a long time.

Lillquist makes a comparison to how ATMs impacted the banking industry: because ATMs made it much easier for banks to open small branches in more remote locations, the number of tellers employed by banks ended up increasing.

Lillquist does think that AI will cause the nature of the legal profession to constantly change and evolve, requiring lawyers to possess a more varied skill set to make use of AI-enabled technologies and stay competitive in the job market. Other kinds of jobs, in positions adjacent to legal ones, could also be created. For example, the number of data analysts who can analyze legal and business-related datasets and propose plans to improve law practice might increase.

Lillquist explained to the Observer:

“We’re already seeing a rise of legal technology companies providing alternative legal services backed by AI and machine learning that are enhancing how lawyers practice law. Law firms will begin building their own engineering departments and product teams, too.”

While Lillquist isn’t worried that AI will put lawyers out of jobs, he does have concerns about the ways AI can be misused. In particular, he worries that legal AI could be employed by people who don’t fully understand the law, thereby putting themselves at legal risk.




AI System Automatically Transforms To Evade Censorship Attempts


Scientists at the University of Maryland (UMD) have created an AI-powered program that can transform itself to evade internet censorship attempts. As reported by TechXplore, authoritarian governments that censor the internet and engineers who try to counter that censorship are locked in an arms race, with each side trying to outdo the other. Learning to circumvent censorship techniques typically takes more time than developing them, but the new system from the University of Maryland team could make adapting to censorship attempts easier and quicker.

The tool invented by the research team is dubbed Geneva, which stands for Genetic Evasion. Geneva dodges censorship by exploiting bugs and finding failures in the logic of censors, flaws that are often hard for humans to spot.

Information on the internet is transported in the form of packets: small chunks of data into which a message is disassembled at the sender's computer and from which it is reassembled once it arrives at the receiver's computer. A common method of censoring the internet is to monitor the packet data created when a search is made; the censor can then block results for certain banned keywords or domain names.
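As a rough illustration of the idea (a toy format, not a real network protocol), splitting a message into numbered packets and putting it back together might look something like this:

```python
# Toy packet disassembly/reassembly, for illustration only;
# real TCP/IP framing is far more involved.

def disassemble(message: bytes, size: int = 8) -> list[tuple[int, bytes]]:
    """Split a message into numbered chunks ('packets')."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Restore the original message, even if packets arrive out of order."""
    return b"".join(chunk for _, chunk in sorted(packets))

packets = disassemble(b"search: banned-keyword.example")
# A censor inspecting individual packets may only ever see fragments
# like b"search: " or b"banned-k", never the full banned string.
assert reassemble(packets) == b"search: banned-keyword.example"
```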

Geneva works by modifying how packet data is actually broken up and transferred, so that censorship algorithms either fail to classify the searches or results as banned content or are otherwise unable to block the connection.

Geneva utilizes a genetic algorithm, a type of algorithm inspired by biological processes. In place of DNA strands, Geneva uses small chunks of code as its building blocks. These bits of code can be rearranged into specific combinations that evade attempts to break up or stall data packets. Geneva's code is rearranged over multiple generations, using a strategy that combines the instructions that best evaded censorship in the previous generation into a new set of instructions and strategies. This evolutionary process enables sophisticated evasion techniques to be created fairly quickly. Geneva is capable of operating as a user browses the web, running in the background of the browser.
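The article doesn't detail Geneva's internals, but the general shape of a genetic algorithm can be sketched in a few lines of Python. The instruction alphabet and fitness function below are hypothetical stand-ins; in Geneva's setting, fitness would measure how reliably a strategy slips traffic past a live censor.

```python
import random

# Hypothetical packet-manipulation primitives standing in for
# Geneva's actual building blocks.
INSTRUCTIONS = ["fragment", "duplicate", "tamper", "drop"]

def fitness(strategy: list[str]) -> float:
    """Stand-in score; a real evaluation would run the strategy
    against a live censor and measure evasion success."""
    return random.random()

def evolve(pop_size: int = 20, genome_len: int = 4, generations: int = 50) -> list[str]:
    population = [[random.choice(INSTRUCTIONS) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]           # keep the best evaders
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]               # crossover
            if random.random() < 0.1:               # occasional mutation
                child[random.randrange(genome_len)] = random.choice(INSTRUCTIONS)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print(evolve())  # e.g. ['fragment', 'tamper', 'fragment', 'drop']
```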

Dave Levin, an assistant professor of computer science at UMD, explained that Geneva puts anti-censors at a distinct advantage for the first time. Levin also noted that the researchers' method flips traditional censorship evasion on its head. Traditionally, defeating a censorship strategy involves understanding how it works and then reverse-engineering a method to beat it. In the case of Geneva, the program first figures out how to evade the censor, and then the researchers analyze what censorship strategies are being used.

In order to test the tool's performance, the research team ran Geneva on a computer located in China equipped with an unmodified Google Chrome browser. When the team used the strategies that Geneva identified, they were able to browse for keyword results without censorship. The tool also proved useful in India and Kazakhstan, which likewise block certain URLs.

The research team aims to release the code and data used to create the model sometime soon, hoping that it will give people in authoritarian countries better, more open access to information. The research team is also experimenting with a method of deploying the tool on the device that serves the blocked content instead of the client’s computer (the computer that makes the search). If successful, this would mean that people could access blocked content without installing the tool on their computers.

“If Geneva can be deployed on the server-side and work as well as it does on the client-side, then it could potentially open up communications for millions of people,” Levin said. “That’s an amazing possibility, and it’s a direction we’re pursuing.”


AI Teaches Itself Laws of Physics


In what is a monumental moment in both AI and physics, a neural network has “rediscovered” that Earth orbits the Sun. The new development could be critical in solving quantum-mechanics problems, and the researchers hope that it can be used to discover new laws of physics by identifying patterns within large data sets. 

The neural network, named SciNet, was fed measurements showing how the Sun and Mars appear from Earth. Scientists at the Swiss Federal Institute of Technology then tasked SciNet with predicting where the Sun and Mars would be at different times in the future. 

The research will be published in Physical Review Letters. 

Designing the Algorithm

The team, including physicist Renato Renner, set out to make the algorithm capable of distilling large data sets into basic formulae, the same way physicists distill observations into equations. To do this, the researchers modeled the neural network loosely on the human brain.

The formulas generated by SciNet placed the Sun at the center of our solar system. One of the remarkable aspects of the research is that SciNet arrived at this conclusion in much the same way that astronomer Nicolaus Copernicus discovered heliocentricity.

The team highlighted this in a paper published on the preprint repository arXiv. 

“In the 16th century, Copernicus measured the angles between a distant fixed star and several planets and celestial bodies and hypothesized that the Sun, and not the Earth, is in the centre of our solar system and that the planets move around the Sun on simple orbits,” the team wrote. “This explains the complicated orbits as seen from Earth.”

The team wanted SciNet to predict the movements of the Sun and Mars in the simplest way possible, so SciNet uses two sub-networks that send information back and forth. One network analyzes the data and learns from it, and the other makes predictions and tests their accuracy based on that knowledge. Because these networks are connected by only a few links, information passing between them is compressed and the communication stays simple.
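As a rough sketch of this kind of bottlenecked architecture (the layer sizes and dimensions below are illustrative guesses, not the paper's actual design), two sub-networks joined by a narrow link might look like this in PyTorch:

```python
import torch
import torch.nn as nn

class SciNetSketch(nn.Module):
    """Illustrative encoder-decoder with a narrow bottleneck: one
    sub-network compresses observations, the other answers a question
    (e.g., a future time) from the compressed representation."""

    def __init__(self, obs_dim: int = 30, latent_dim: int = 2, question_dim: int = 1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),   # the few links joining the sub-networks
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + question_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),            # predicted answer, e.g. a planet's angle
        )

    def forward(self, observations: torch.Tensor, question: torch.Tensor) -> torch.Tensor:
        latent = self.encoder(observations)          # compress the data
        return self.decoder(torch.cat([latent, question], dim=-1))

model = SciNetSketch()
obs = torch.randn(8, 30)    # a batch of simulated observation histories
q = torch.randn(8, 1)       # times at which to predict positions
print(model(obs, q).shape)  # torch.Size([8, 1])
```

The tiny latent layer is the point: with only a couple of neurons available, the network is forced to keep just the parameters that matter, much as a physicist keeps only the essential variables of a system.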

Conventional neural networks learn to identify and recognize objects from huge data sets, generating features along the way. These features are then encoded in mathematical ‘nodes,’ considered the artificial equivalent of neurons. But unlike the compact formulas physicists derive, the representations that neural networks learn are unpredictable and difficult to interpret.

Artificial Intelligence and Scientific Discoveries 

One of the tests involved giving the network simulated data about the movements of Mars and the Sun as seen from Earth. Viewed from Earth, Mars's orbit appears erratic and periodically reverses its course. In the 1500s, Nicolaus Copernicus realized that the planets' movements could be predicted with simpler formulas if the planets orbit the Sun.

When the neural network “discovered” similar formulas for Mars's trajectory, it rediscovered one of the most important pieces of knowledge in history.

Mario Krenn is a physicist at the University of Toronto in Canada, and he works on using artificial intelligence to make scientific discoveries. 

SciNet rediscovered “one of the most important shifts of paradigms in the history of science,” he said. 

According to Renner, humans are still needed to interpret the equations and determine how they are connected to the movement of the planets around the Sun. 

Hod Lipson is a roboticist at Columbia University in New York City. 

“This work is important because it is able to single out the crucial parameters that describe a physical system,” he says. “I think that these kinds of techniques are our only hope of understanding and keeping pace with increasingly complex phenomena, in physics and beyond.”

 


AI Used To Improve Prediction Of Lightning Strikes


Weather prediction has gotten substantially better over the past decade, with five-day forecasts now being about 90% accurate. However, one aspect of weather that has long eluded prediction is lightning. Because lightning is so unpredictable, it's very difficult to minimize the damage it can do to human lives, property, and nature. Thanks to the work of a research team from the EPFL (Ecole Polytechnique Fédérale de Lausanne) School of Engineering, lightning strikes may become much more predictable in the near future.

As reported by SciTechDaily, a team of researchers from the Electromagnetic Compatibility Laboratory at EPFL's School of Engineering recently created an AI program capable of accurately predicting a lightning strike within a 10-to-30-minute window and a 30-kilometer radius. The system applies artificial intelligence algorithms to meteorological data, and it will go on to be utilized in the European Laser Lightning Rod project.

The goal of the European Laser Lightning Rod (ELLR) project is to create new types of lightning protection systems and techniques. Specifically, ELLR aims to build a system that uses a laser-based technique to reduce the number of downward natural lightning strikes by stimulating upward lightning flashes.

According to the research team, current methods of lightning prediction rely on data gathered by radar or satellite, which tends to be very expensive. Radar is used to scan storms and determine their electrical potential. Other lightning prediction systems often require three or more receivers in a region so that lightning strikes can be triangulated. Creating predictions this way is often a slow and complex process.

Instead, the method developed by the EPFL team uses data that can be collected at any standard weather station. This makes the data much cheaper and easier to collect, and the system could potentially be applied to remote regions that satellite and radar systems don't cover and where communication networks are spotty.

The data for the predictions can also be gathered quickly and in real time, meaning a region could potentially be warned of incoming lightning strikes even before a storm has formed there. As reported by ScienceDaily, the EPFL team's predictions come from a machine learning algorithm trained on a decade of data collected from 12 Swiss weather stations, representing both mountainous and urban regions.

The reason lightning strikes can be predicted at all is that they are heavily correlated with specific weather conditions. One of the most important ingredients for the formation of lightning is intense convection, in which moist air rises as the local atmosphere becomes unstable. Collisions between water droplets, ice particles, and other particles within the clouds cause electrical charges to separate, creating cloud layers with opposing charges, which in turn produce the discharges that appear as lightning. The atmospheric features associated with these conditions can be fed into machine learning algorithms to predict lightning strikes.

Among the features in the dataset were variables like wind speed, relative humidity, air temperature, and atmospheric pressure. These features were labeled with recorded lightning strikes and the location of the system that detected each strike. From these features, the algorithm learned to recognize the patterns of conditions that lead to lightning strikes. When the model was tested, it correctly forecast a lightning strike around 80% of the time.
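As an illustration of this kind of pipeline (the data below is synthetic and the model choice is our guess; the article doesn't specify the exact algorithm the EPFL team used), training a classifier on weather-station features might look like:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: rows of (wind speed, relative humidity,
# air temperature, atmospheric pressure), labeled 1 if a lightning
# strike was recorded in the following window.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```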

The EPFL team's model is notable because it is the first system based on commonly available meteorological data that can accurately predict lightning strikes.
