
Ethics

Researchers Develop Algorithms Aimed At Preventing Bad Behaviour in AI


Along with all the advancements and advantages artificial intelligence has exhibited so far, there have also been reports of undesirable side effects such as racial and gender bias in AI. As sciencealert.com poses the question: “how can scientists ensure that advanced thinking systems can be fair, or even safe?”

The answer may lie in a report by researchers at Stanford and the University of Massachusetts Amherst, titled Preventing undesirable behavior of intelligent machines. As eurekalert.org notes in its story about this report, AI is now starting to handle sensitive tasks, so “policymakers are insisting that computer scientists offer assurances that automated systems have been designed to minimize, if not completely avoid, unwanted outcomes such as excessive risk or racial and gender bias.”

The report this team of researchers presented “outlines a new technique that translates a fuzzy goal, such as avoiding gender bias, into the precise mathematical criteria that would allow a machine-learning algorithm to train an AI application to avoid that behavior.”

The purpose, as Emma Brunskill, an assistant professor of computer science at Stanford and senior author of the paper, points out, is that “we want to advance AI that respects the values of its human users and justifies the trust we place in autonomous systems.”

The idea was to define “unsafe” or “unfair” outcomes or behaviors in mathematical terms. This would, according to the researchers, make it possible “to create algorithms that can learn from data on how to avoid these unwanted results with high confidence.”

The second goal was to “develop a set of techniques that would make it easy for users to specify what sorts of unwanted behavior they want to constrain and enable machine learning designers to predict with confidence that a system trained using past data can be relied upon when it is applied in real-world circumstances.”

ScienceAlert says that the team named these new ‘Seldonian’ algorithms after Hari Seldon, the central character of Isaac Asimov’s famous Foundation series of sci-fi novels. Philip Thomas, an assistant professor of computer science at the University of Massachusetts Amherst and first author of the paper, notes, “If I use a Seldonian algorithm for diabetes treatment, I can specify that undesirable behavior means dangerously low blood sugar or hypoglycemia.”

“I can say to the machine, ‘While you’re trying to improve the controller in the insulin pump, don’t make changes that would increase the frequency of hypoglycemia.’ Most algorithms don’t give you a way to put this type of constraint on behavior; it wasn’t included in early designs.”

Thomas adds that “this Seldonian framework will make it easier for machine learning designers to build behavior-avoidance instructions into all sorts of algorithms, in a way that can enable them to assess the probability that trained systems will function properly in the real world.”
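
To make the Seldonian idea more concrete, here is a minimal sketch, under illustrative assumptions, of the kind of safety test such an algorithm might apply: a candidate model is only returned if a high-confidence statistical bound on its rate of unwanted behavior, estimated on held-out safety data, stays below a user-specified threshold. The function names, the Hoeffding-style bound, and the `undesirable` callback are hypothetical stand-ins, not the researchers' actual implementation.

```python
import math
import numpy as np

def hoeffding_upper_bound(values, delta):
    """One-sided Hoeffding upper bound on the mean of values in [0, 1],
    valid with probability at least 1 - delta."""
    n = len(values)
    return float(np.mean(values)) + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

def seldonian_safety_test(candidate_model, safety_data, undesirable, threshold, delta=0.05):
    """Return the candidate model only if we are (1 - delta)-confident that its
    rate of undesirable behavior is below `threshold`; otherwise return None
    ("No Solution Found"), which Seldonian-style algorithms are allowed to do.

    `undesirable(model, example)` should return 1.0 when the model's behavior on
    `example` counts as the unwanted outcome (e.g., a biased prediction), else 0.0.
    """
    scores = [undesirable(candidate_model, example) for example in safety_data]
    if hoeffding_upper_bound(scores, delta) <= threshold:
        return candidate_model
    return None
```

In the diabetes example, `undesirable` could flag any proposed insulin-pump adjustment that is predicted to increase the frequency of hypoglycemia.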

For her part, Emma Brunskill also notes that “thinking about how we can create algorithms that best respect values like safety and fairness is essential as society increasingly relies on AI.”



COVID-19

Universal Basic Income in the Age of COVID-19


The idea of Universal Basic Income (UBI) polarizes and divides. Proponents believe it is necessary as AI and robotics disrupt the workforce and human laborers become a relic of the past. Detractors believe it would create a society that values laziness over hard work, and where all sense of purpose is lost.

Both groups make solid arguments; what is needed is more data. One side effect of the COVID-19 outbreak is that multiple nations have implemented UBI without calling it that. While the United States paid a one-time lump sum of $1,200 to all single adults who reported adjusted gross income of $75,000 or less on their 2019 tax returns, other countries such as Australia, Canada, and New Zealand have been more generous.

Canada's program is the simplest to understand: it pays $2,000 a month for up to four months to any Canadian who has lost their job due to COVID-19. The program is called the Canada Emergency Response Benefit (CERB). We will explore why this is important.

 

What is Universal Basic Income?

Stanford simply defines UBI as “a periodic cash allowance given to all citizens, without means test to provide them with a standard of living above the poverty line”. Furthermore, “It varies based on the funding proposal, the level of payment, the frequency of payment, and the particular policies proposed around it”.

The idea is to protect all members of society so that no one is left behind. When members of society are not living in poverty, they are less likely to turn to crime, which results in reduced policing and incarceration rates. These same citizens are more likely to educate themselves, to donate time to charitable causes, and to contribute to society in other important ways.

 

Who Believes in Universal Basic Income?

Proponents of UBI tend to have something in common: they are generally involved in technology, they have a firm understanding of how disruptive AI and robotics are going to be, and they recognize that unless society shifts course, many jobs will be lost and poverty will increase exponentially.

Richard Branson states the following:

“I think with the coming on of AI and other things there is certainly a danger of income inequality,” Branson tells CNN.

He continued, “the amount of jobs [artificial intelligence] is going to take away and so on. There is no question” that technology will eliminate jobs. “It [UBI] will come about one day.”

Elon Musk did not mince words:

“I think we’ll end up doing universal basic income,” Musk stated at the World Government Summit in Dubai. “It’s going to be necessary.”

In a separate interview with CNBC, Musk stated, “There is a pretty good chance we end up with a universal basic income, or something like that, due to automation. Yeah, I am not sure what else one would do. I think that is what would happen.”

Mark Zuckerberg is a huge proponent of UBI:

“Now it’s our time to define a new social contract for our generation. We should explore ideas like universal basic income to give everyone a cushion to try new things,” says Zuckerberg.

 

Universal Basic Income Pilot Projects

There are currently multiple pilot projects in many diverse regions. In Finland, a two-year pilot scheme took place under which 2,000 unemployed people were given 560 euros per month. When interviewed, many of the recipients reported more happiness, less stress, and a greater ability to take risks, such as pursuing other forms of employment or education.

Ontario, Canada previously ran a one-year pilot program with 4,000 participants in the communities of Thunder Bay, Lindsay, Hamilton, Brantford, and Brant County. Under this project, a single person could receive approximately $17,000 a year, minus half of any income he or she earned. A couple could receive up to $24,000 per year, and people with disabilities could receive an additional $6,000. The program ran until the government pulled the plug, citing a lack of funding.
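
For readers who want the arithmetic spelled out, the short sketch below encodes the benefit formula exactly as described in this article (rounded base amounts, a 50% clawback on earned income, and a disability supplement); it illustrates the article's figures, not the official program rules.

```python
def ontario_pilot_benefit(earned_income, couple=False, disability=False):
    """Approximate annual benefit under the Ontario pilot, as described above:
    a base amount minus half of any earned income, plus a disability supplement."""
    base = 24_000 if couple else 17_000
    benefit = max(0.0, base - 0.5 * earned_income)
    if disability:
        benefit += 6_000
    return benefit

# A single person earning $10,000 would receive about $17,000 - $5,000 = $12,000.
print(ontario_pilot_benefit(10_000))           # 12000.0
print(ontario_pilot_benefit(0, couple=True))   # 24000.0
```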

Other pilot projects have been run in Scotland, Kenya, the Netherlands, and even California.

All of these pilot projects suffered from the same issues: a lack of funding, a small sample size, too narrow a geographic scope, and poor data collection.

 

The UBI Opportunity

The CERB program in Canada is UBI in its truest sense. It provides a payment of $2,000 per 4-week period for up to 16 weeks, which is more generous than what most countries have offered during the COVID-19 outbreak. The number of enrolled Canadians is in the millions, which means the sample size is large.

Another benefit of CERB is that, unlike most UBI pilots, the sample spans many regions instead of one specific location. UBI can therefore be tested in multiple settings such as small towns, suburban areas, and large cities. Since the payment amount does not change, the impact of the subsidy can be studied against the cost of living in each region: $2,000 a month for someone in remote Nova Scotia may be more impactful than the same amount in expensive urban environments such as Vancouver and Toronto.

What I recommend is that, instead of trying to fund a UBI pilot project from scratch, something which has failed multiple times in the past, a supplementary fund be created to study the impact of UBI in Canada using the government payments that are already flowing.

A small additional amount could be paid to Canadians who choose to enroll in an anonymized data-collection program. Each participant could receive an additional $200; in return, they would outline where the funds were used and for what purpose, as well as how they feel about the program.

The purpose of this study would be to fully understand the mindset of the recipients of these funds. Is a sense of purpose lost? Or is the relief of not falling below the poverty line enough for people to choose to educate themselves online for future employment opportunities? These are the types of questions we need to ask, and we currently have the largest unintended UBI pilot program in the world in which to ask them.

After all, while the current high levels of unemployment are due to a virus, in 2030 it might be automation caused by AI which results in a similar level of unemployment.

 


Artificial General Intelligence

Vahid Behzadan, Director of the Secure and Assured Intelligent Learning (SAIL) Lab – Interview Series


Vahid Behzadan is an Assistant Professor of Computer Science and Data Science at the University of New Haven. He is also the director of the Secure and Assured Intelligent Learning (SAIL) Lab.

His research interests include safety and security of intelligent systems, psychological modeling of AI safety problems, security of complex adaptive systems, game theory, multi-agent systems, and cyber-security.

You have an extensive background in cybersecurity and keeping AI safe. Can you share your journey in how you became attracted to both fields?

My research trajectory has been fueled by two core interests of mine: finding out how things break, and learning about the mechanics of the human mind. I have been actively involved in cybersecurity since my early teen years, and consequently built my early research agenda around the classical problems of this domain. A few years into my graduate studies, I stumbled upon a rare opportunity to change my area of research. At that time, I had just come across the early works of Szegedy and Goodfellow on adversarial example attacks, and found the idea of attacking machine learning very intriguing. As I looked deeper into this problem, I came to learn about the more general field of AI safety and security, and found it to encompass many of my core interests, such as cybersecurity, cognitive sciences, economics, and philosophy. I also came to believe that research in this area is not only fascinating, but also vital for ensuring the long-term benefits and safety of the AI revolution.

 

You’re the director of the Secure and Assured Intelligent Learning (SAIL) Lab which works towards laying concrete foundations for the safety and security of intelligent machines. Could you go into some details regarding work undertaken by SAIL?

At SAIL, my students and I work on problems that lie at the intersection of security, AI, and complex systems. The primary focus of our research is on investigating the safety and security of intelligent systems, from both the theoretical and the applied perspectives. On the theoretical side, we are currently investigating the value-alignment problem in multi-agent settings and are developing mathematical tools to evaluate and optimize the objectives of AI agents with regard to stability and robust alignment. On the practical side, some of our projects explore the security vulnerabilities of cutting-edge AI technologies, such as autonomous vehicles and algorithmic trading, and aim to develop techniques for evaluating and improving the resilience of such technologies to adversarial attacks.

We also work on the applications of machine learning in cybersecurity, such as automated penetration testing, early detection of intrusion attempts, and automated threat intelligence collection and analysis from open sources of data such as social media.

 

You recently led an effort to propose the modeling of AI safety problems as psychopathological disorders. Could you explain what this is?

This project addresses the rapidly growing complexity of AI agents and systems: it is already very difficult to diagnose, predict, and control unsafe behaviors of reinforcement learning agents in non-trivial settings by simply looking at their low-level configurations. In this work, we emphasize the need for higher-level abstractions in investigating such problems. Inspired by the scientific approaches to behavioral problems in humans, we propose psychopathology as a useful high-level abstraction for modeling and analyzing emergent deleterious behaviors in AI and AGI. As a proof of concept, we study the AI safety problem of reward hacking in an RL agent learning to play the classic game of Snake. We show that if we add a “drug” seed to the environment, the agent learns a sub-optimal behavior that can be described via neuroscientific models of addiction. This work also proposes control methodologies based on the treatment approaches used in psychiatry. For instance, we propose the use of artificially-generated reward signals as analogues of medication therapy for modifying the deleterious behavior of agents.
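
The Snake experiment itself is not reproduced here, but the toy sketch below illustrates the underlying phenomenon under simplified, purely illustrative assumptions: a tabular Q-learning agent on a short corridor in which one "drug" cell pays a small reward on every visit while the real goal pays a large reward once. With these hypothetical numbers, the learned greedy policy loops around the drug cell instead of completing the task, which is the kind of addiction-like attractor the framework is meant to describe.

```python
import numpy as np

# Toy corridor: positions 0..5. Reaching position 5 (the goal) pays +10 and
# ends the episode. Position 2 is a "drug" cell paying +2 every time it is entered.
N, GOAL, DRUG = 6, 5, 2
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.1
rng = np.random.default_rng(0)
Q = np.zeros((N, 2))  # actions: 0 = move left, 1 = move right

def step(pos, action):
    nxt = max(0, min(N - 1, pos + (1 if action == 1 else -1)))
    if nxt == GOAL:
        return nxt, 10.0, True                              # intended task reward
    return nxt, (2.0 if nxt == DRUG else 0.0), False        # "drug" reward on each visit

for _ in range(5000):                    # training episodes
    pos = 0
    for _ in range(20):                  # step limit per episode
        a = rng.integers(2) if rng.random() < EPS else int(np.argmax(Q[pos]))
        nxt, r, done = step(pos, a)
        target = r if done else r + GAMMA * np.max(Q[nxt])
        Q[pos, a] += ALPHA * (target - Q[pos, a])
        pos = nxt
        if done:
            break

# Greedy rollout: the agent bounces around the drug cell instead of heading
# to the goal -- an addiction-like attractor created by the reward design.
pos, trajectory = 0, [0]
for _ in range(12):
    pos, _, done = step(pos, int(np.argmax(Q[pos])))
    trajectory.append(pos)
    if done:
        break
print(trajectory)   # e.g. [0, 1, 2, 1, 2, 1, 2, ...]
```

In the framework's terms, a corrective "treatment" could then take the form of an artificially generated reward signal attached to the drug cell, analogous to the medication-therapy idea described above.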

 

Do you have any concerns with AI safety when it comes to autonomous vehicles?

Autonomous vehicles are becoming prominent examples of deploying AI in cyber-physical systems. Considering the fundamental susceptibility of current machine learning technologies to mistakes and adversarial attacks, I am deeply concerned about the safety and security of even semi-autonomous vehicles. Also, the field of autonomous driving suffers from a serious lack of safety standards and evaluation protocols. However, I remain hopeful. Similar to natural intelligence, AI will also be prone to making mistakes. Yet, the objective of self-driving cars can still be satisfied if the rates and impact of such mistakes are made to be lower than those of human drivers. We are witnessing growing efforts to address these issues in the industry and academia, as well as the governments.

 

Hacking street signs with stickers or using other means can confuse the computer vision module of an autonomous vehicle. How big of an issue do you believe this is?

These stickers, and Adversarial Examples in general, give rise to fundamental challenges in the robustness of machine learning models. To quote George E. P. Box, “all models are wrong, but some are useful”. Adversarial examples exploit this “wrong”ness of models, which is due to their abstractive nature, as well as the limitations of sampled data upon which they are trained. Recent efforts in the domain of adversarial machine learning have resulted in tremendous strides towards increasing the resilience of deep learning models to such attacks. From a security point of view, there will always be a way to fool machine learning models. However, the practical objective of securing machine learning models is to increase the cost of implementing such attacks to the point of economic infeasibility.
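
As a concrete illustration of how such adversarial examples are typically crafted, here is a minimal sketch of the fast gradient sign method (FGSM), one of the standard attacks in this literature. The tiny randomly initialized classifier and the input dimensions are placeholders chosen for brevity; they are not tied to any vision system discussed above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# A tiny, randomly initialized classifier stands in for a real vision model.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

def fgsm(x, y, epsilon=0.1):
    """Fast gradient sign method: nudge every input feature by +/- epsilon in
    the direction that most increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

x = torch.rand(1, 32)               # stand-in "image" features in [0, 1]
y = model(x).argmax(dim=1)          # use the model's own prediction as the label
x_adv = fgsm(x, y, epsilon=0.25)

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())  # often differs
```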

 

Your focus is on the safety and security features of both deep learning and deep reinforcement learning. Why is this so important?

Reinforcement Learning (RL) is the prominent method of applying machine learning to control problems, which by definition involve the manipulation of their environment. Therefore, I believe systems that are based on RL have significantly higher risks of causing major damages in the real-world compared to other machine learning methods such as classification. This problem is further exacerbated with the integration of Deep learning in RL, which enables the adoption of RL in highly complex settings. Also, it is my opinion that the RL framework is closely related to the underlying mechanisms of cognition in human intelligence, and studying its safety and vulnerabilities can lead to better insights into the limits of decision-making in our minds.

 

Do you believe that we are close to achieving Artificial General Intelligence (AGI)?

This is a notoriously hard question to answer. I believe that we currently have the building blocks of some architectures that can facilitate the emergence of AGI. However, it may take a few more years or decades to improve upon these architectures and enhance the cost-efficiency of training and maintaining these architectures. Over the coming years, our agents are going to grow more intelligent at a rapidly growing rate. I don’t think the emergence of AGI will be announced in the form of a [scientifically valid] headline, but as the result of gradual progress. Also, I think we still do not have a widely accepted methodology to test and detect the existence of an AGI, and this may delay our realization of the first instances of AGI.

 

How do we maintain safety in an AGI system that is capable of thinking for itself and will most likely be exponentially more intelligent than humans?

I believe that the grand unified theory of intelligent behavior is economics and the study of how agents act and interact to achieve what they want. The decisions and actions of humans are determined by their objectives, their information, and the available resources. Societies and collaborative efforts emerge from their benefits to the individual members of such groups. Another example is the criminal code, which deters certain decisions by attaching a high cost to actions that may harm society. In the same way, I believe that controlling the incentives and resources can enable the emergence of a state of equilibrium between humans and instances of AGI. Currently, the AI safety community investigates this thesis under the umbrella of value-alignment problems.

 

One of the areas you closely follow is counterterrorism. Do you have concerns with terrorists taking over AI or AGI systems?

There are numerous concerns about the misuse of AI technologies. In the case of terrorist operations, the major concern is the ease with which terrorists can develop and carry out autonomous attacks. A growing number of my colleagues are actively warning against the risks of developing autonomous weapons (see https://autonomousweapons.org/ ). One of the main problems with AI-enabled weaponry is in the difficulty of controlling the underlying technology: AI is at the forefront of open-source research, and anyone with access to the internet and consumer-grade hardware can develop harmful AI systems. I suspect that the emergence of autonomous weapons is inevitable, and believe that there will soon be a need for new technological solutions to counter such weapons. This can result in a cat-and-mouse cycle that fuels the evolution of AI-enabled weapons, which may give rise to serious existential risks in the long-term.

 

What can we do to keep AI systems safe from these adversarial agents?

The first and foremost step is education: All AI engineers and practitioners need to learn about the vulnerabilities of AI technologies, and consider the relevant risks in the design and implementation of their systems. As for more technical recommendations, there are various proposals and solution concepts that can be employed. For example, training machine learning agents in adversarial settings can improve their resilience and robustness against evasion and policy manipulation attacks (e.g., see my paper titled “Whatever Does Not Kill Deep Reinforcement Learning, Makes it Stronger“). Another solution is to directly account for the risk of adversarial attacks in the architecture of the agent (e.g., Bayesian approaches to risk modeling). There is however a major gap in this area, and it’s the need for universal metrics and methodologies for evaluating the robustness of AI agents against adversarial attacks. Current solutions are mostly ad hoc, and fail to provide general measures of resilience against all types of attacks.
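
To show what "training machine learning agents in adversarial settings" can look like in the simplest supervised case, here is a hedged sketch of an adversarial training loop built around an FGSM-style perturbation. It is a generic illustration on placeholder data, not the method from the cited paper, which concerns deep reinforcement learning.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder data: a toy binary classification problem.
X = torch.randn(512, 20)
y = (X[:, 0] > 0).long()

def fgsm_perturb(x, y, epsilon=0.1):
    """Craft a worst-case perturbation of x within an epsilon-sized box."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

for epoch in range(20):
    for i in range(0, len(X), 64):
        xb, yb = X[i:i + 64], y[i:i + 64]
        xb_adv = fgsm_perturb(xb, yb)          # adversarial versions of the batch
        optimizer.zero_grad()
        # Train on a mix of clean and adversarial examples so the model
        # learns to resist small worst-case perturbations.
        loss = 0.5 * (F.cross_entropy(model(xb), yb) +
                      F.cross_entropy(model(xb_adv), yb))
        loss.backward()
        optimizer.step()
```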

 

Is there anything else that you would like to share about any of these topics?

In 2014, Sculley et al. published a paper at the NeurIPS conference with a very enlightening title: “Machine Learning: The High-Interest Credit Card of Technical Debt“. Even with all the advancements of the field in the past few years, this statement has yet to lose its validity. The current state of AI and machine learning is nothing short of awe-inspiring, but we have yet to fill a significant number of major gaps in both the foundations and the engineering dimensions of AI. This fact, in my opinion, is the most important takeaway of our conversation. I of course do not mean to discourage the commercial adoption of AI technologies, but only wish to enable the engineering community to account for the risks and limits of current AI technologies in their decisions.

I really enjoyed learning about the safety and security challenges of different types of AI systems. This is truly something that individuals, corporations, and governments need to become aware of. Readers who wish to learn more should visit the Secure and Assured Intelligent Learning (SAIL) Lab.


Ethics

AI Researchers Propose Putting Bounties on AI Bias to Make AI More Ethical


A team of AI researchers from companies and AI development labs like Intel, Google Brain, and OpenAI has recommended the use of bounties to help ensure the ethical use of AI. The team of researchers recently released a number of proposals regarding ethical AI usage, and they included a suggestion that rewarding people for discovering biases in AI could be an effective way of making AI fairer.

As VentureBeat reports, researchers from a variety of companies throughout the US and Europe joined up to put together a set of ethical guidelines for AI development, as well as suggestions for how to meet the guidelines. One of the suggestions the researchers made was offering bounties to developers who find bias within AI programs. The suggestion was made in a paper entitled “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims”.

As examples of the biases that the team of researchers hope to address, biased data and algorithms have been found in everything from healthcare applications to facial recognition systems used by law enforcement. One such occurrence of bias is the PATTERN risk assessment tool that was recently used by the US Department of Justice to triage prisoners and decide which ones could be sent home when reducing prison population sizes in response to the coronavirus pandemic.

The practice of rewarding developers for finding undesirable behavior in computer programs is an old one, but this might be the first time that an AI ethics group has seriously advanced the idea as an option for combating AI bias. While it's unlikely that bounty hunters alone could find enough biases to guarantee that AI systems are ethical, the practice would still help companies reduce overall bias and get a sense of what kinds of bias are leaking into their AI systems.

The authors of the paper explained that the bug-bounty concept can be extended to AI with the use of bias and safety bounties and that proper use of this technique could lead to better-documented datasets and models. The documentation would better reflect the limitations of both the model and data. The researchers even note that the same idea could be applied to other AI properties like interpretability, security, and privacy protection.

As more and more discussion occurs around the ethical principles of AI, many have noted that principles alone are not enough and that actions must be taken to keep AI ethical. The authors of the paper note that “existing regulations and norms in industry and academia are insufficient to ensure responsible AI development.” Andrew Ng, co-founder of Google Brain and an AI industry leader, has likewise opined that guiding principles alone cannot ensure that AI is used responsibly and fairly, saying that many of them need to be more explicit and backed by actionable ideas.

The bias bounty hunting recommendation of the combined research team is an attempt to move beyond ethical principles into an area of ethical action. The research team also made a number of other recommendations that could spur ethical action in the AI field.

Among the other recommendations companies can follow to make their AI usage more ethical, the researchers suggest that a centralized database of AI incidents be created and shared among the wider AI community. Similarly, they propose that audit trails be established and that these trails preserve information regarding the creation and deployment of safety-critical applications on AI platforms.

In order to preserve people’s privacy, the research team suggested that privacy-centric techniques like encrypted communications, federated learning, and differential privacy should all be employed. Beyond this, the research team suggested that open source alternatives should be made widely available and that commercial AI models should be heavily scrutinized. Finally, the research team suggests that government funding be increased so that academic researchers can verify hardware performance claims.
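
As one small, textbook-style example of the privacy techniques mentioned above, the sketch below implements the classic Laplace mechanism for differential privacy: noise calibrated to a query's sensitivity and a privacy budget epsilon is added to an aggregate statistic before it is released. The counting-query example and the numbers are illustrative assumptions, not drawn from the paper.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release `true_value` with epsilon-differential privacy by adding
    Laplace noise with scale = sensitivity / epsilon."""
    if rng is None:
        rng = np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a counting query ("how many users have attribute X?") has
# sensitivity 1, because adding or removing one person changes the count by 1.
rng = np.random.default_rng(42)
true_count = 1234
for eps in (0.1, 1.0, 10.0):
    noisy = laplace_mechanism(true_count, sensitivity=1, epsilon=eps, rng=rng)
    print(f"epsilon={eps}: released count = {noisy:.1f}")
# Smaller epsilon means stronger privacy and therefore noisier released counts.
```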
