How to Overcome Linguistic and Cultural Biases in GenAI Adoption

In 2025, ChatGPT and AI-powered Google searches dominate, but it’s crucial to keep different modes of communication in mind. Generative AI (genAI) is predominantly text-based and functions in English, which can limit its usefulness for non-native speakers.
Although English is spoken as a native language by less than 20% of the world’s population, it is the language of 67.3% of websites. Many genAI platforms are trained predominantly on English-language data, meaning that communication can be distorted in working environments that span multiple languages and cultures.
Communication is so much more than letters on a screen: it involves tone, body language, facial expressions, rhythm, and cultural nuance, to name just a few key factors. Organizations deploying genAI must ensure that they’re also mitigating potential language and cultural biases, particularly given that we live in a globalized world.
Why Voice Still Matters
Several theories address the importance of multimodal communication, especially in multicultural and multilingual settings.
One of the most prominent is Edward T. Hall’s theory of high- and low-context cultures, which outlines intrinsic differences in how various cultures communicate. High-context cultures, common in many Asian countries, rely on indirect and non-verbal cues. Japanese, for instance, is a high-context language in which onomatopoeia and subtle shifts in expression dramatically influence intent and inference.
Low-context cultures, in contrast, like many in the West (the U.S. and much of Europe), rely on direct, verbal communication. Because low-context cultures tend to be more explicit, digital text-based messaging blends seamlessly into their communication fabric. Viewed against this theory, genAI’s predominantly text-based character makes it no surprise that people from high-context cultures, particularly non-native English speakers, struggle to communicate as effectively with these tools.
In an internationalized business setting, where people from all walks of life converge, the absence of subtle cues like body language and tone can make communicating with AI far less reliable. Digital communication, especially via genAI tools, must include modes beyond text-based messaging.
The Problem of English Bias in GenAI
Serious concerns have also been raised about bias in AI detectors (which are, ironically, powered by AI) against non-native English writers. In science, recent research suggests that as many as 38% of non-native English speakers have had papers rejected by journals because of a perceived language barrier. The researchers behind that work posit that breaking down language barriers is key to knowledge sharing, and that the quality of a paper’s language shouldn’t dictate whether its knowledge is valuable enough to be shared.
Researchers are ringing alarm bells about the lack of language diversity across LLMs, and the risk of excluding the vast share of the world’s population who aren’t native English speakers. This deeply ingrained issue limits how people can engage with and use AI tools.
It’s also an issue that must be addressed sooner rather than later, considering that 95% of U.S. companies have adopted genAI and the technology is increasingly applied in busy work environments like manufacturing floors. Yet non-native English speakers are often left out of the equation when AI deployment strategies are discussed.
Let’s look at what barriers to successful AI adoption look like in real life. Non-native English speakers struggle with prompts, leading to skewed outputs and the risk of misinterpreted information or instructions. For example, Vietnamese manufacturing workers with a limited grasp of English may rely on genAI-produced English translations of instructions. That leaves enormous room for error, because context and subtler cues are stripped away.
Additionally, trust and confidence are eroded. This can heighten resistance to using technology in workflows, while undermining employees’ morale and motivation.
Closing the Gap
These barriers and challenges won’t resolve themselves. To level the playing field in genAI adoption, cultural and linguistic nuances must be taken into account. There are several strategies organizations can use to bridge these gaps and prepare genAI adoption for a multilingual future.
Incorporate cognitive and analytical frameworks
One particularly useful cognitive framework is the OODA Loop (observe, orient, decide, act), developed by the fighter pilot and military strategist John Boyd. The five components that make up the “orient” step—genetic heritage, cultural traditions, previous experiences, new information, and analysis/synthesis—can be applied to understand how inputs shape individual decisions.
My recommendation is to treat language as part of ‘cultural traditions’ while paying particular attention to the ‘genetic heritage’ and ‘analysis/synthesis’ of individuals. Here is a breakdown of how each component plays a role in training AI models to be more linguistically inclusive.
- Genetic heritage (embedded human traits): Train AI systems to detect universal cues, like tone and rhythm, that are shared across languages and cultures. This calls for a multimodal approach to genAI that includes voice and video cues, not just text (see the first sketch after this list).
- Cultural traditions: Create datasets that capture language-specific characteristics, like onomatopoeia and other context-heavy forms of communication. Curate models for regions rather than relying on a universal model that lacks cultural and linguistic agility.
- Previous experiences: People are more likely to trust systems that reflect their lived reality. Employees in Vietnam or Japan, for example, will use AI differently from U.S.-based teams, depending on their exposure to and confidence with these tools. Run workshops where local teams can test genAI and share feedback on how well it reflects their linguistic and cultural context, then adjust prompt libraries accordingly, keeping each audience’s preferences in mind (factory workers generally prefer visual guides).
- New information: genAI tools need to be continuously updated with real-world data. Feed multilingual data into these datasets so the integrated system learns the nuances of different languages and communication forms.
- Analysis/synthesis: This is where alignment between people and AI happens. Linguistic data and signals are often fragmented, a form genAI models can’t readily consume. That data needs to be converted into an AI-digestible format so it can be processed and analyzed to generate culturally and linguistically agile outputs (see the second sketch after this list).
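As a minimal illustration of the multimodal point above, the first sketch pulls simple tone (pitch) and rhythm (tempo) cues out of a speech recording so they can travel alongside the transcript. It assumes the open-source librosa library; the file name and feature choices are hypothetical, and a production system would use far richer paralinguistic signals.

```python
# Sketch: extract language-agnostic tone and rhythm cues from speech,
# so a genAI pipeline can weigh more than the raw transcript.
# Assumes librosa is installed; "meeting_clip.wav" is a hypothetical file.
import numpy as np
import librosa

def extract_paralinguistic_cues(audio_path: str) -> dict:
    y, sr = librosa.load(audio_path, sr=16000)

    # Tone: fundamental-frequency (pitch) contour via probabilistic YIN.
    f0, _, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    voiced = f0[~np.isnan(f0)]  # keep only voiced frames

    # Rhythm: a rough speaking-tempo estimate from onset strength.
    tempo = librosa.beat.tempo(y=y, sr=sr)

    return {
        "mean_pitch_hz": float(voiced.mean()) if voiced.size else None,
        "pitch_variability_hz": float(voiced.std()) if voiced.size else None,
        "tempo_bpm": float(tempo[0]),
    }

# Cues like these can be attached to the prompt or stored as metadata,
# preserving signals that a plain translation would strip away.
print(extract_paralinguistic_cues("meeting_clip.wav"))
```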
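To make the analysis/synthesis step concrete, the second sketch converts fragmented, mixed-language records into one consistent, AI-digestible schema. The field names and the detect_language helper are hypothetical stand-ins; any language-identification library could fill that role.

```python
# Sketch: normalize fragmented linguistic signals into a single schema
# that downstream genAI tooling can consume. Field names are hypothetical.
from dataclasses import dataclass, asdict
import json

@dataclass
class NormalizedSignal:
    text: str
    language: str       # BCP-47 tag, e.g. "ja", "vi", "en-US"
    region: str         # site or market the record came from
    channel: str        # "chat", "voice_transcript", "survey", ...
    context_level: str  # "high" or "low", per Hall's framework

def detect_language(text: str) -> str:
    # Hypothetical helper: in practice, call a language-ID library here.
    return "und"  # "undetermined" placeholder

def normalize(raw_records: list[dict]) -> list[NormalizedSignal]:
    normalized = []
    for rec in raw_records:
        text = rec.get("body") or rec.get("message") or ""
        normalized.append(NormalizedSignal(
            text=text.strip(),
            language=rec.get("lang") or detect_language(text),
            region=rec.get("site", "unknown"),
            channel=rec.get("source", "chat"),
            # Crude default: tag Japanese/Vietnamese records as high-context.
            context_level="high" if rec.get("lang") in ("ja", "vi") else "low",
        ))
    return normalized

raw = [
    {"body": "ラインの調子がちょっと…", "lang": "ja", "site": "osaka", "source": "chat"},
    {"message": "Machine 4 is down again.", "lang": "en", "site": "ohio", "source": "chat"},
]
print(json.dumps([asdict(s) for s in normalize(raw)], ensure_ascii=False, indent=2))
```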
Practical Training for Best Practices
Employees should also be trained on best practices for prompting genAI platforms, with a focus on clarity. Prompt libraries can be incredibly useful for familiarizing teams with those best practices.
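A prompt library can be as simple as a shared set of vetted templates that bake clarity in. The sketch below is one possible shape, with hypothetical template names and wording; the point is that non-native speakers fill in short slots rather than composing full English prompts from scratch.

```python
# Sketch: a minimal shared prompt library. Template keys and wording
# are hypothetical; teams would maintain these per task and per locale.
PROMPT_LIBRARY = {
    "translate_work_instruction": (
        "Translate the following work instruction into {target_language}. "
        "Keep safety warnings intact, use short sentences, and flag any "
        "step that is ambiguous instead of guessing.\n\n{instruction}"
    ),
    "summarize_shift_report": (
        "Summarize this shift report in {target_language} as 5 bullet "
        "points a new operator could follow.\n\n{report}"
    ),
}

def build_prompt(template_key: str, **slots: str) -> str:
    """Fill a vetted template so users only supply the variable parts."""
    return PROMPT_LIBRARY[template_key].format(**slots)

prompt = build_prompt(
    "translate_work_instruction",
    target_language="Vietnamese",
    instruction="Check torque on bolts A-D before restarting line 2.",
)
print(prompt)
```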
Importantly, in AI training workshops, I also recommend homing in on principles like fairness and transparency. These are fundamental to unbiased AI deployment, and teams should be well-versed in spotting signs of hallucination and bias, which exacerbate language barriers.
Additionally, avoid ‘echo chambers’ by ensuring that new information from AI comes not from a single source, but from a wide range of sources. Echo chambers are a significant problem in technology, including AI: they reinforce pre-existing biases and skew outputs, leaving employees at risk of falling into a bias trap and following misaligned guidance or information.
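One lightweight guard against echo chambers is to pose the same question to several models or knowledge sources and surface disagreement instead of hiding it. The ask stub and source names below are hypothetical placeholders for whatever providers an organization actually uses.

```python
# Sketch: cross-check an answer across several sources and flag
# disagreement for review rather than trusting a single output.
from collections import Counter

def ask(source: str, question: str) -> str:
    # Hypothetical stub: replace with real calls to each model or wiki.
    canned = {
        "model_a": "Use torque spec T-42.",
        "model_b": "Use torque spec T-42.",
        "site_wiki": "Spec updated last month: use T-43.",
    }
    return canned.get(source, "unknown")

def cross_check(question: str, sources: list[str]) -> dict:
    answers = {s: ask(s, question) for s in sources}
    counts = Counter(answers.values())
    _, top_count = counts.most_common(1)[0]
    return {
        "answers": answers,
        # Any disagreement is a signal to escalate, not an answer to ship.
        "needs_review": top_count < len(sources),
    }

print(cross_check("Which torque spec applies to line 2?",
                  ["model_a", "model_b", "site_wiki"]))
# -> needs_review: True, because site_wiki disagrees with the two models
```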
Finally, recognize that any AI tool, including genAI, should be treated as a ‘consultant,’ not a source of strict instructions. Teams should be encouraged to always loop in a human to clarify any confusion and mitigate the risk of misinformation or misguidance.
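Treating genAI as a consultant can even be enforced in code: route low-confidence or high-stakes outputs to a person instead of straight into the workflow. The threshold and field names below are illustrative assumptions, not recommended values.

```python
# Sketch: a human-in-the-loop gate. Treat model output as advice; anything
# low-confidence or high-stakes goes to a person before it ships.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float   # 0.0-1.0, however the pipeline estimates it
    high_stakes: bool   # e.g. safety instructions or legal text

def route(draft: Draft, confidence_floor: float = 0.8) -> str:
    if draft.high_stakes or draft.confidence < confidence_floor:
        return "human_review"   # a person clarifies before anything ships
    return "auto_publish"

print(route(Draft("Torque bolts to 40 Nm.", confidence=0.65, high_stakes=True)))
# -> human_review
```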
AI is transforming business processes, but it’s important not to leave anyone behind along the way. Integrating these strategies into AI deployment empowers businesses to navigate language barriers that would otherwise cause bias and snowballing problems.