Once thought of as just automated talking programs, AI chatbots can now learn and hold conversations that are almost indistinguishable from human ones. However, the dangers of AI chatbots are just as varied.
These can range from people misusing them to actual cybersecurity risks. As humans increasingly rely on AI technology, knowing the potential repercussions of using these programs is essential. But are bots dangerous?
1. Bias and Discrimination
One of the biggest dangers of AI chatbots is their tendency toward harmful biases. Because AI draws connections between data points humans often miss, it can pick up on subtle, implicit biases in its training data and teach itself to be discriminatory. As a result, chatbots can quickly learn to spew racist, sexist or otherwise discriminatory content, even if nothing that extreme appears in the training data.
A prime example is Amazon’s scrapped hiring bot. In 2018, it emerged that Amazon had abandoned an AI project meant to pre-assess applicants’ resumes because it was penalizing applications from women. Because most of the resumes the bot trained on were men’s, it taught itself that male applicants were preferable, even if the training data didn’t explicitly say that.
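The failure mode behind Amazon's bot can be sketched in a few lines. The resumes, tokens and scoring rule below are entirely hypothetical; the point is that a model trained on skewed historical outcomes learns to penalize a proxy word like "women's" without any explicit rule telling it to.

```python
from collections import defaultdict

# Hypothetical historical hiring data: resumes (as token lists) paired
# with past outcomes. The skew — the token "women's" appearing mostly
# in rejected resumes — mirrors the imbalance in Amazon's training set.
history = [
    (["chess", "club", "captain"], "hired"),
    (["software", "engineer"], "hired"),
    (["women's", "chess", "club", "captain"], "rejected"),
    (["women's", "coding", "society"], "rejected"),
    (["software", "developer"], "rejected"),
]

# Naive scorer: each token's weight is (# hired - # rejected) among
# resumes containing it. Nothing here is labeled as gender-related.
weights = defaultdict(int)
for tokens, outcome in history:
    for t in tokens:
        weights[t] += 1 if outcome == "hired" else -1

def score(tokens):
    return sum(weights[t] for t in tokens)

# Two otherwise identical resumes diverge purely on the proxy token.
print(score(["chess", "club", "captain"]))            # higher
print(score(["women's", "chess", "club", "captain"])) # lower
```

Even in this toy version, nothing in the data says "reject women"; the bias enters purely through correlation, which is exactly why it is so hard to spot in a real model.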
Chatbots using internet content to teach themselves how to communicate naturally tend to showcase even more extreme biases. In 2016, Microsoft debuted a chatbot named Tay that learned to mimic social media posts. Within a few hours, it started tweeting highly offensive content, leading Microsoft to suspend the account before long.
If companies aren't careful when building and deploying these bots, they may accidentally create similar situations. Chatbots could mistreat customers or spread the very biased content they're supposed to prevent.
2. Cybersecurity Risks
AI chatbot technology can also pose a more direct cybersecurity threat to people and businesses. One of the most prolific forms of cyberattack is the phishing or vishing scam, in which attackers impersonate trusted organizations such as banks or government bodies.
Phishing scams typically operate through email and text messages: clicking the embedded link lets malware onto the victim's system. Once inside, the malware can do anything from stealing personal information to holding the system for ransom.
The rate of phishing attacks rose steadily during and after the COVID-19 pandemic. The Cybersecurity and Infrastructure Security Agency (CISA) found that 84% of individuals replied to phishing messages with sensitive information or clicked the link.
Phishers are now using AI chatbot technology to automate finding victims, convincing them to click links and coaxing out personal information. Many financial institutions, such as banks, use chatbots to streamline the customer service experience, and phishers' chatbots can mimic those same automated prompts to trick victims. They can also automatically dial phone numbers or contact victims directly on interactive chat platforms.
3. Data Poisoning
Data poisoning is a relatively new type of cyberattack that directly targets artificial intelligence. AI learns from data sets and uses that information to complete tasks. This is true of all AI programs, no matter their purpose or function.
For chatbot AIs, this means learning how to respond to the many questions users might ask. However, this dependence on training data is also one of the dangers of AI.
These data sets are often built from open-source tools and resources available to anyone. Although AI companies usually keep their data sources a closely guarded secret, attackers can often determine which ones a model uses and tamper with them. The AI then learns from the altered data and makes the decisions or gives the responses the attackers want.
For example, one of the most commonly used sources for training data is wiki resources such as Wikipedia. The data does not come from the live Wikipedia article but from snapshots taken at specific times, and hackers can find ways to edit that snapshot data to their benefit.
In the case of chatbot AIs, hackers can corrupt the data sets used to train chatbots for medical or financial institutions. They can manipulate the programs to give customers false information that leads them to click a malware-laden link or visit a fraudulent website. Once the AI starts pulling from poisoned data, the tampering is hard to detect and can lead to a significant cybersecurity breach that goes unnoticed for a long time.
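The scenario above can be illustrated with a toy retrieval chatbot. The FAQ entries, questions and URLs are all invented for this sketch; the point is that one poisoned entry in the knowledge base silently redirects users to a lookalike domain.

```python
# Hypothetical FAQ knowledge base a bank chatbot retrieves answers
# from. All names and URLs here are made up for illustration.
faq = {
    "reset password": "Visit https://bank.example.com/reset from the app.",
    "report fraud":   "Call the number on the back of your card.",
}

def answer(question: str) -> str:
    # Minimal retrieval: return the entry whose key shares the most
    # words with the question.
    words = set(question.lower().replace("?", "").split())
    best = max(faq, key=lambda key: len(set(key.split()) & words))
    return faq[best]

print(answer("How do I reset my password?"))  # serves the legitimate link

# Poisoning: an attacker with write access to the training/knowledge
# data swaps one answer for a lookalike phishing domain.
faq["reset password"] = "Visit https://bank-example-support.com/reset now."

print(answer("How do I reset my password?"))  # now serves the bad link
```

Notice that the chatbot's code never changed; only its data did, which is why poisoning attacks can evade code-focused security reviews.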
How to Address the Dangers of AI Chatbots
These risks are concerning, but they don’t mean bots are inherently dangerous. Rather, you should approach them cautiously and consider these dangers when building and using chatbots.
The key to preventing AI bias is searching for it throughout training. Train the chatbot on diverse data sets and specifically program it to avoid factoring characteristics like race, gender or sexual orientation into its decision-making. It's also best to have a diverse team of data scientists review the chatbot's inner workings and make sure it doesn't exhibit any biases, however subtle.
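One concrete check such a review team can run is a disparity audit: compare the bot's positive-outcome rates across groups. The decision log below is invented, and the 80% threshold is a common rule of thumb (the "four-fifths rule"), not a formal standard.

```python
# Hypothetical log of chatbot prescreening decisions, tagged with a
# protected attribute for audit purposes only (not used by the model).
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rates(rows):
    rates = {}
    for g in {r["group"] for r in rows}:
        group = [r for r in rows if r["group"] == g]
        rates[g] = sum(r["approved"] for r in group) / len(group)
    return rates

rates = approval_rates(decisions)
# Four-fifths rule of thumb: flag if any group's approval rate falls
# below 80% of the highest group's rate.
flagged = min(rates.values()) < 0.8 * max(rates.values())
print(rates, "flagged:", flagged)
```

Running this kind of audit on a schedule, rather than once at launch, helps catch biases that emerge as the bot keeps learning from new data.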
The best defense against phishing is training. Train all employees to spot common signs of phishing attempts so they don’t fall for these attacks. Spreading consumer awareness around the issue will help, too.
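The common red flags that anti-phishing training covers can be written down as a simple checklist. The sketch below encodes three of them; the patterns, sample message and domains are illustrative, not a production filter.

```python
import re

# Heuristic red flags often covered in phishing-awareness training.
URGENCY = ("act now", "immediately", "account suspended", "verify within")

def phishing_signs(message: str, claimed_domain: str) -> list:
    signs = []
    text = message.lower()
    # 1. Urgent or threatening language pressuring a quick response.
    if any(phrase in text for phrase in URGENCY):
        signs.append("urgent or threatening language")
    # 2. Links whose domain doesn't match who the sender claims to be.
    for domain in re.findall(r"https?://([^/\s]+)", message):
        if claimed_domain not in domain:
            signs.append(f"link to unexpected domain: {domain}")
    # 3. Requests for sensitive credentials.
    if "password" in text or "ssn" in text:
        signs.append("asks for sensitive credentials")
    return signs

msg = ("Your account suspended! Verify within 24 hours at "
       "https://secure-bank-login.example.net/verify and confirm your password.")
print(phishing_signs(msg, claimed_domain="bank.com"))
```

Real email filters use far more sophisticated signals, but teaching employees to check these same three things manually covers a large share of everyday phishing attempts.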
You can prevent data poisoning by restricting access to chatbots' training data. Only people who need access to this data to do their jobs should have authorization — a concept called the principle of least privilege. After implementing those restrictions, use strong verification measures like multi-factor authentication or biometrics to reduce the risk of cybercriminals hijacking an authorized account.
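Combining those two controls can be sketched as a single gate: deny by default, and require both an explicit role grant and a verified second factor. The roles and actions below are hypothetical placeholders.

```python
# Minimal sketch of least privilege plus a second factor gating access
# to training data. Roles, users and actions are hypothetical.
PERMISSIONS = {
    "ml_engineer": {"read_training_data", "write_training_data"},
    "support_agent": {"read_chat_logs"},
}

def authorize(role: str, action: str, mfa_ok: bool) -> bool:
    # Deny by default: access requires both an explicit grant for this
    # role (least privilege) and a verified second factor (MFA).
    return mfa_ok and action in PERMISSIONS.get(role, set())

print(authorize("ml_engineer", "write_training_data", mfa_ok=True))   # True
print(authorize("support_agent", "write_training_data", mfa_ok=True)) # False
print(authorize("ml_engineer", "write_training_data", mfa_ok=False))  # False
```

The deny-by-default structure matters: an unknown role or a failed second factor falls through to "no access" rather than requiring an explicit block rule.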
Stay Vigilant Against the Dangers of AI Reliance
Artificial intelligence is a truly wondrous technology with nearly endless applications, but its dangers can be easy to overlook. Are bots dangerous? Not inherently, but cybercriminals can use them in various disruptive ways. Ultimately, it's up to users to decide how to apply this new technology responsibly.