Understanding How Human and Algorithmic Biases Shape Artificial Intelligence Outputs and What Users Can Do to Manage Them
I have spent over 40 years studying human and machine cognition, beginning long before AI reached its current state of remarkable capability. Today, AI is leading us into uncharted territory. As a researcher focused on the ethical aspects of technology, I believe it is vital to address an emerging concern.
Generative AI tools like Gemini, Copilot, ChatGPT, Claude, and others are changing how we interact with technology. Although these tools are still immature and have a long way to go to satisfy our needs, progress is fast and adoption is high; they already assist with everyday tasks and help solve complex problems. We cannot deny that their proliferation poses new risks, especially concerning biases, a topic central to my research, so we need a concerted effort to mitigate them.
In this story, I will address a growing concern in my research area. Beyond my technology work, as a cognitive science researcher, I am aware of a critical issue users may overlook: cognitive biases. I previously wrote about them in a human context in a story titled A Practical Guide to Handling Cognitive Biases Effectively.
These biases, deeply embedded in both human thinking and algorithmic design, can significantly influence the output of AI systems. Understanding and recognizing these biases is crucial for anyone who uses these tools, whether a casual user or a professional relying on AI for decision-making.
Cognitive biases are systematic patterns of deviation from rationality, and they occur because of how our brains process information. In AI systems, cognitive biases can emerge in two distinct ways: Biases in Data and Biases in User Interaction.
AI tools learn from vast amounts of data (Big Data), which can reflect the biases present in human culture, history, and society. If the data contains biased perspectives, the AI may replicate and even amplify these biases.
As users interact with AI tools, their own cognitive biases can influence how they interpret the responses. This interaction can create feedback loops that reinforce biased conclusions.
I’d like to give you three examples of biases in generative AI tools.
1. Confirmation Bias
Confirmation bias refers to our tendency to seek out or interpret information that confirms our pre-existing beliefs. This bias can easily occur when using generative AI tools.
For example, if a user asks one of these tools to generate content supporting a particular argument, the system might surface information that aligns with that viewpoint simply because it has been asked to do so. This can give users a false sense of validation for their beliefs.
During my research, I tested this by asking several AI tools to write persuasive journalistic reports on both sides of contentious issues. The tools tended to reflect dominant narratives and broadly available sources, which might unintentionally reinforce existing biases.
2. Availability Heuristic
The availability heuristic is a mental shortcut where we rely on immediate examples that come to mind when evaluating a topic.
For example, Copilot, which assists developers by suggesting code snippets, can inadvertently fall into this trap by providing suggestions based on the most common patterns it has encountered.
However, just because a particular piece of code is frequently used does not mean it’s the best solution for a specific problem. The reliance on what’s “most available” may limit creativity or lead to suboptimal solutions.
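To make this concrete, here is a minimal, hypothetical Python illustration (not an actual Copilot suggestion). It contrasts a pattern that is extremely common in training data, membership tests against a list, with a structure better suited to repeated lookups, a set:

```python
import timeit

# The "most available" pattern: membership tests against a list.
# It appears everywhere in example code, but each lookup scans
# the whole list (O(n)).
items_list = list(range(100_000))

# A less frequently suggested alternative for repeated lookups:
# a set, with average O(1) membership tests.
items_set = set(items_list)

list_time = timeit.timeit(lambda: 99_999 in items_list, number=1_000)
set_time = timeit.timeit(lambda: 99_999 in items_set, number=1_000)

print(f"list lookup: {list_time:.4f}s, set lookup: {set_time:.4f}s")
```

Both versions are correct, which is exactly why the frequent pattern keeps getting suggested; the difference only shows up in performance characteristics that frequency alone cannot capture.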
3. Anchoring Bias
Anchoring occurs when people rely too heavily on the first piece of information they receive (the “anchor”) when making decisions.
In AI tools like Gemini, which may present users with initial summaries or ideas, this bias can manifest in how users assess the rest of the content.
For instance, when generating a strategy document or business analysis, the first suggestions provided by Gemini may anchor the user’s thought process, limiting their ability to think critically about alternative approaches.
The Illusion of Objectivity
Many users mistakenly assume that because algorithms drive AI systems, they are inherently objective. However, these systems are only as unbiased as the data they’re trained on and the prompts they receive. Even when an AI model generates a seemingly neutral output, subtle biases may be at play.
These AI tools can generate news summaries that might appear impartial but could subtly reflect the biases of the sources they pull from.
I have observed that, depending on the region or language, the AI’s tone and framing of events can differ, suggesting that cultural biases in the training data influence the output.
How Cognitive Biases Can Impact Decision-Making
The influence of cognitive biases in AI tools can have real-world consequences. For instance, a business executive using Copilot to draft a report or strategy could unknowingly rely on biased information, leading to flawed decisions.
Similarly, a researcher using ChatGPT or Gemini to summarize academic papers might overlook critical viewpoints due to confirmation bias, reinforcing their original hypothesis rather than challenging it.
Moreover, AI can also perpetuate societal biases. Gender and racial biases are particularly pervasive in large language models, as they reflect historical and systemic inequalities present in their training data.
In my research, I have found that asking AI tools to describe certain professions or social roles can result in stereotypical responses, further entrenching harmful biases.
How to Mitigate Cognitive Biases When Using AI Tools
Recognizing cognitive biases is the first step toward mitigating their effects in generative AI tools. I want to share a few strategies to help.
Challenge the AI by using prompts that explore different perspectives. For instance, when researching a controversial topic, ask the tool to generate arguments for both sides; a minimal sketch of this approach follows below. This helps reduce confirmation bias.
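For readers who interact with these models programmatically, here is a minimal sketch of "both sides" prompting, assuming the OpenAI Python SDK; the model name and the claim are illustrative only, and the same idea works in any chat interface:

```python
# A minimal sketch of "both sides" prompting, assuming the
# OpenAI Python SDK; the model name is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

claim = "Remote work improves productivity."  # illustrative claim

answers = {}
for stance in ("for", "against"):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You are a neutral analyst. Do not take sides."},
            {"role": "user",
             "content": f"List the three strongest arguments {stance} "
                        f"this claim: {claim}"},
        ],
    )
    answers[stance] = response.choices[0].message.content

# Printing both lists side by side makes it harder to walk away
# with only the evidence that confirms the original belief.
for stance, text in answers.items():
    print(f"\n=== Arguments {stance} ===\n{text}")
```

Deliberately requesting the opposing case in the same session is a cheap way to counteract the tool's tendency to comply with a one-sided request.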
Cross-check information. Don’t rely solely on AI-generated content. Cross-check with human experts, trusted sources, and alternative datasets. This can be especially important in high-stakes situations, such as legal or medical advice.
Understand your own biases when interacting with AI. If you agree with the AI too quickly, consider whether your cognitive biases influence how you process the information.
AI transparency is important. Advocate for greater transparency in how these AI systems are trained and the data they rely on. This can help users make more informed decisions about when and how to use generative AI.
Conclusions and Key Takeaways
Generative AI tools are powerful and have great potential but are not infallible. As users, we must remain vigilant about the cognitive biases that can affect both the design of these systems and our interactions with them.
With awareness of these biases, we can use AI more effectively and responsibly, ensuring that it serves as a tool for insight rather than one that reinforces our existing blind spots.
In my work, I have seen the profound effects of bias on human thinking, and these same dynamics are now playing out in the digital world. I wrote a comprehensive post reflecting my findings on the issue of racism in technology as AI involvement grows:
Here’s How Technology Can Be Racist.
Emerging technologies, e.g., artificial intelligence, adversely affect the diversity and equality of ethnic people. (medium.com)
Understanding the nuances of racism or casteism can also help us identify such risky patterns in technology tools like generative AI.
Ultimately, it is up to all of us, researchers, developers, and everyday users alike, to recognize and address the biases present in generative AI tools so we can harness their potential while minimizing harm to human integrity.
The University of Nevada library maintains a good collection on this topic.
Thank you for reading my perspectives. I wish you a healthy and happy life. I am here to help, so reach out when you need support. More stories like this are in this collection. You may also subscribe to my Health and Wellness newsletter to benefit from my decades of health, science, and technology experience.
If you are a new writer on this platform, you can join my publications by sending a request via this link. I support 32K writers who contribute to them. You can contact me via my website. I also have another profile where I write and curate tech stories.
You are welcome to join the ILLUMINATION Community on Medium and Substack and our education tool, Substack Mastery, curated by ILLUMINATION-Curators. Here is the Importance and Value of Medium Friendship for Writers and Readers.
A Quick Update on My Recent Book Projects
To support the writing community and help its members stay competitive in the market, I recently authored a book titled Substack Mastery, which is now available in popular online bookstores. It was well received by readers and now trends as a best-seller in its categories. Here is the link to find it in different bookstores. The paperback is also available through Amazon.

Yesterday, I published a new version of Substack Mastery for busy people and explained the reasons in a new story:
How I Will Help Freelance Writers Save $600 by Condensing My Bestseller 5 Times for Them
Just like some prefer fatty cuts while others opt for lean, my goal is to cater to the unique needs of every reader. (medium.com)
I will continue beta reading for the next version, so if you enjoy reading and providing feedback, here are free links to the chapters:
Preface of “Substack Mastery” for Beta Readers, Chapter 1, Chapter 2, Chapter 3, Chapter 4, Chapter 5, Chapter 6, Chapter 7, Chapter 8, Chapter 9, Chapter 10, Chapter 11, Chapter 12, Chapter 13, Chapter 14, Chapter 15, Chapter 16, Chapter 17, Chapter 18, Chapter 19…
You can join my newsletters, where I offer experience-based content on health, content strategy, and technology topics to inform and inspire my readers.