Can AI Be Biased? Some Actions to Take

Posted by: iterateai
Friday, July 2, 2021 at 4:16 PM

Introduction

In the podcast Can AI Be Biased?, our co-founder Brian Sathianathan points out that “a lot of folks, leaders, and executives are concerned about AI, but view it as a wonderful workhorse.” The basic objective of AI is to enable computers to perform intellectual tasks such as decision making, problem solving, perception, and understanding human communication. But what if that decision-making ability were flawed by bias in the training data?

AI engines are only as good as the data sets used to train them. If the humans collecting that data are biased (sampling only a certain demographic or economic sector, for example), then the data will be skewed.
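
To make this concrete, here is a small, purely hypothetical sketch (synthetic data, illustrative names only) of how a training set that over-samples one group can produce a model that serves that group well and another group poorly:

```python
# Hypothetical sketch: two synthetic groups follow different decision
# rules, and the under-represented group ends up poorly served.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def group_a(n):
    X = rng.normal(size=(n, 2))
    return X, (X[:, 0] > 0).astype(int)   # group A's outcome tracks feature 0

def group_b(n):
    X = rng.normal(size=(n, 2))
    return X, (X[:, 1] > 0).astype(int)   # group B's outcome tracks feature 1

# The training data heavily over-samples group A (1000 vs. 30 examples).
Xa, ya = group_a(1000)
Xb, yb = group_b(30)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluating each group separately exposes the skew:
# near-perfect accuracy for A, close to chance for B.
Xa_t, ya_t = group_a(500)
Xb_t, yb_t = group_b(500)
print("accuracy on group A:", model.score(Xa_t, ya_t))
print("accuracy on group B:", model.score(Xb_t, yb_t))
```

The aggregate accuracy of such a model can look excellent while it fails almost everyone in the minority group, which is why per-group evaluation matters.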

We are aware that AI is becoming more pervasive as large organizations incorporate it throughout operations, customer service, and strategic planning. According to a Deloitte report, around 94% of enterprises face problems while implementing artificial intelligence. Whether you have noticed the workings of AI or not, it’s real, it’s present, and it’s not going anywhere anytime soon:

  • There is a trust deficit with AI because of the opaque way deep learning models arrive at their outputs and the difficulty of explaining how a specific set of inputs produces a given answer. Many people do not know how artificial intelligence works or how it is integrated into everyday items they interact with, such as smartphones, smart TVs, banking, and home automation.
  • There is limited knowledge about the details within artificial intelligence, and few people are aware of its potential. Small and medium enterprises could leverage AI to assist with work scheduling, find innovative ways to increase production, manage resources, sell and manage products online, and understand consumer behavior so they can react to the market effectively and efficiently. These same SMEs are often only cursorily aware of service providers such as Google Cloud, Amazon Web Services, and others in the tech industry. We’re trying to help close that gap with our own Interplay low-code platform.
  • Algorithms need a great deal of computing power, and that power isn’t cheap. Machine learning and deep learning are the stepping stones of artificial intelligence, and both require substantial compute to operate efficiently.

These are early manifestations of the problems that AI encounters, but bias is also a human-centered problem. Humans have emotions and assumptions, and those can creep into AI as well. When people react to the output of biased machine learning methods, they make decisions based on that skewed information, and those decisions will likely be consumed by algorithms later, reinforcing the bias in a feedback loop.

The Challenge

The overarching challenge that we want to unpack centers around two questions:

  • What methods are there to uncover bias in AI?
  • What actionable steps do we need to take in order to mitigate that bias?

Specifically, in retail, the challenge we face is finding the right procedures and methods to develop the necessary frameworks, tools, processes, and policies to remove bias. AI is still a young technology, and we are very much in the experimental stage right now. Large corporations were the first to deploy AI and thus are the first to face AI bias. But corporations aren’t pure research organizations, so there may be a gap between profit-seeking incentives and a fully equitable automated response driven by an unbiased AI.

Once AI begins to take on a bigger role in shaping industries’ business practices and operations, people will be less forgiving of its flaws. With that in mind, what is bias, and how does it relate to AI?

Types of Bias

Our Director of Innovation, Solomon Ray, sums up the definition of AI bias as “the irregularity that occurs when an algorithm produces results that are systematically prejudiced due to erroneous assumptions in the machine learning process, including the algorithm development and prejudicial training data.” AI is a tool, but the adage that ‘tools don’t have biases, only their users do’ may not hold here: AI systems are decision-making tools, and the flaws can be “built in” through the data, the algorithm, or the deployment. What we do with a system and how it’s applied can amplify those effects.

The most apparent biases are as follows:

  • Cognitive biases, where human biases seep into machine learning because designers, software developers, or product owners unknowingly introduce them into the models or the training data set.
  • A lack of complete data, where the missing information forms a bias in itself.
  • Post-training bias, where an AI is skewed by the way it is deployed (deployment bias) or when several small decisions are strung together and compound (aggregation bias); a toy illustration follows this list.
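
As a toy illustration of that last effect, in the sense the list above uses the term, consider a pipeline where each stage approves one group at a slightly lower rate. The per-stage rates below are assumed purely for illustration:

```python
# Small per-stage gaps compound when decisions are chained.
stages = 4
pass_rate_a, pass_rate_b = 0.90, 0.85   # assumed per-stage approval rates

end_a = pass_rate_a ** stages           # ~0.66 end-to-end for group A
end_b = pass_rate_b ** stages           # ~0.52 end-to-end for group B
print(f"per-stage ratio:  {pass_rate_b / pass_rate_a:.2f}")   # ~0.94
print(f"end-to-end ratio: {end_b / end_a:.2f}")               # ~0.80
```

A 5-point gap at each stage becomes a 14-point gap across the whole pipeline, which is why each decision can look acceptable in isolation while the system as a whole is not.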

According to Gartner, through 2030, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them. On one hand, AI bias will not affect most of a person’s day-to-day life; AI will probably help it. An example is when you or I scan a fingerprint to download an app from the App Store: the AI recognizes the minutiae of the fingerprint to approve the download. The action is mundane and repetitive, but worth the security and protection it provides. On the other hand, as seen in law enforcement, AI has shown that it is flawed and capable of ruining a person’s life. Consider an AI used to identify suspects at a crime scene that misidentifies an innocent person. In such a case, the failure often traces back to a lack of data in the form of images, to the relevancy of the data (when it was collected), or to where it was collected (an online source). These two drastically different scenarios are representative of how bias has found its way into AI, and if the problem is not addressed, the consequences for businesses now and in the future will be dire. Knowing what we know, what steps can large corporations, consumers, tech leaders, and governments take to lessen AI bias?

Actions to Take

To address AI bias, we need to look at frameworks, tools, processes, and policies, and find a holistic solution capable of not only detecting bias but also removing it where possible. This means taking measurable steps such as defining metrics, finding “blind spots” in the data, and testing for them as a way to de-bias AI and increase people’s trust in the system. Currently, many large tech corporations use open-source tools to detect AI bias and seek out those that solve specific AI-bias problems.
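
What does “defining a metric” look like in practice? Here is a minimal sketch computing two standard fairness measures, demographic parity difference and the disparate impact ratio, on hypothetical model predictions (all names are illustrative):

```python
# Compare favorable-outcome rates across a protected attribute.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # model decisions
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # protected attribute

rate_priv   = y_pred[group == 0].mean()   # favorable rate, group 0
rate_unpriv = y_pred[group == 1].mean()   # favorable rate, group 1

print("demographic parity difference:", rate_unpriv - rate_priv)
print("disparate impact ratio:", rate_unpriv / rate_priv)
# A ratio well below 1 (a common rule of thumb flags < 0.8) marks a
# "blind spot" worth investigating before deployment.
```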

  • Implementing frameworks helps organizations manage the risks associated with artificial intelligence, such as damage to company reputation and biased conclusions, while capitalizing on its returns. Frameworks create checks and balances and a level playing field for institutions and organizations, for greater reliability. They also set a benchmark that can be trusted, automated, and built into products to become trust technology. A few frameworks used to minimize bias are as follows:
    • Deloitte’s framework is multidimensional, helping organizations develop ethical safeguards across six key dimensions: a crucial step in managing the risks and capitalizing on the returns associated with artificial intelligence.
    • Forbes’ framework describes a combination of factors that tech companies and developers can use to build trustworthy AI, such as explainability, integrity, conscious development, reproducibility, and regulation.
    • Rolls-Royce’s Aletheia Framework is a toolkit to guide the practical application of ethical AI, designed as a clear 32-step process that any organization can follow so that its AI is accurate, well managed, and has a positive impact on the world.
  • Identifying and researching toolkits that detect and remove bias in ML models by recognizing patterns within the ML pipeline:
    • AI Fairness 360 – an open-source library released by IBM to detect and mitigate bias in machine learning models and datasets (a minimal usage sketch follows this list).
    • IBM Watson OpenScale – performs bias checking and mitigation in real time as AI makes its decisions.
    • Google’s What-If Tool – helps test performance in hypothetical situations, analyze the importance of different data features, and visualize model behavior across multiple models, subsets of input data, and different ML fairness metrics.
  • Adopting processes that define metrics and test ways to “de-bias” AI and increase people’s trust in the systems.
    • Biases will never be removed completely, because humans create the biased data and human-made algorithms are what check that data to identify and remove bias.
    • Even though biases can’t be removed completely, they can at least be minimized. Minimizing bias is critical for AI to reach its full potential and for increasing people’s trust in the system.
  • Policies, such as governance around AI, ensure that bias blind spots are not ignored and that mitigation actions are offered.
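
As a sketch of how one of these toolkits is used, the following example applies IBM’s open-source AI Fairness 360 library (`pip install aif360`) to a toy dataset. The data and column names are hypothetical, and reweighing is just one of the library’s mitigation algorithms:

```python
# Detect and mitigate bias on a toy dataset with AI Fairness 360.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical data: 'group' is the protected attribute, 'label' the outcome.
df = pd.DataFrame({
    "feature": [0.2, 0.5, 0.9, 0.4, 0.7, 0.1, 0.8, 0.3],
    "group":   [0,   0,   0,   0,   1,   1,   1,   1],
    "label":   [0,   0,   1,   0,   1,   1,   1,   0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["group"],
    favorable_label=1,
    unfavorable_label=0,
)
priv, unpriv = [{"group": 1}], [{"group": 0}]

# Detect: a disparate impact below 1 means the unprivileged group
# receives favorable outcomes at a lower rate.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unpriv, privileged_groups=priv)
print("disparate impact before:", metric.disparate_impact())

# Mitigate: reweighing adjusts instance weights so the training data
# looks balanced on this metric before a model ever sees it.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
balanced = rw.fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(
    balanced, unprivileged_groups=unpriv, privileged_groups=priv)
print("disparate impact after:", metric_after.disparate_impact())
```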

According to PwC, 76% of CEOs are most concerned with the potential for bias and lack of transparency when it comes to AI adoption. AI research keeps producing advanced algorithms to solve use cases that cannot be solved with currently deployed AI engines. Newer techniques being put into practice include synthetic data generation, transfer learning, generative networks, neural networks, and reinforcement learning. All of these methods are susceptible to bias.

Companies that understand AI bias could turn the problem into an opportunity. Rather than treating bias merely as a bug to be fixed, companies that avoid it will engender higher confidence among their customers and find that fairness can be a feature that gives users confidence in AI. To reach AI’s full potential, minimize biases, and develop trust, all of these solutions could be used together in an organization. Take the privacy feature Apple implemented in the iOS 14.5 update, for example. The feature essentially gave users the reins to do what they felt was appropriate with their data. Apple made an effort to protect the privacy and security of the public, making itself part of the solution rather than part of the problem.

The goal should be to eradicate bias in a product as much as possible, and if companies are transparent about these goals, people will respond positively.

Summary

AI bias is a product of (1) a lack of complete data and (2) human biases playing a big role in how AI is built and used. It’s important to highlight that the tools to mitigate all kinds of bias are readily available, but it starts with us as humans making a conscious effort to recognize our own biases and to apply that awareness to the policies, tools, processes, and frameworks used in the development of AI. If we want to get rid of bias as much as we realistically can, we need to understand AI bias as deeply as possible and accept that it’s not the AI’s fault: the root of the problem lies with its creators. The way to approach this is to take the current use cases we have and think about where AI bias could occur in each application, and whether removing that bias will benefit or hurt the business. It comes down to asking how, what, and why, and addressing the problems from the team (product owners, developers, leaders) down to the data itself.
