Strategies to Reduce Bias in Artificial Intelligence

AI is prevalent in nearly every industry, including associations. We spoke with experts about the best ways to use this technology and to avoid misusing it.

By Celeste Smith, CAE


Artificial intelligence (AI) is paving the way for truly transformative disruption in how we do business, live, and work. That disruption is already being felt in science, healthcare, technology, workforce development, and other industries.

What is Artificial Intelligence?

Artificial intelligence is the overarching term for a complex body of science that uses computers and machines to imitate human intelligence. Sound data practices and human-centered design enable AI to improve outcomes for society, but for AI to truly deliver that benefit, a vigilant focus on human needs must sit at its core.

According to a recent AI predictions survey from PwC, 86% of respondents said they expect AI to be mainstream technology in their businesses this year. The technology is now everywhere, from email spam filters and self-driving cars to predicting the structures of proteins made by the human body and generating deepfake media.

The Risks Associated with AI

As organizations strive to become more human-centered and inclusive, their AI practices can simultaneously put them in jeopardy of becoming less human-centered and less inclusive. One example is the applicant tracking system (ATS), which uses predictive algorithms to help organizations automate hiring. These systems are revolutionizing the hiring process, but they are also prone to hard-wiring existing systemic biases into their algorithms, excluding qualified applicants, screening out candidates with gaps in their employment history, and discriminating against certain populations. According to a Harvard Business Review study, millions of people are being excluded from consideration, and the organizations surveyed acknowledged that algorithmic bias is a problem.
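To make that failure mode concrete, here is a minimal sketch of how a single hard-coded screening rule can exclude qualified applicants. The rule, the six-month threshold, and the data are invented for illustration; they are not drawn from any real ATS product.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    years_experience: int
    employment_gap_months: int  # longest gap in work history

def naive_ats_screen(applicants, max_gap_months=6):
    """Hypothetical ATS rule: reject anyone with a long employment gap.

    A gap often reflects caregiving, illness, or layoffs rather than
    ability, so this one rule can silently exclude qualified people
    from entire populations (e.g., parents returning to the workforce).
    """
    return [a for a in applicants if a.employment_gap_months <= max_gap_months]

pool = [
    Applicant("A", years_experience=10, employment_gap_months=18),  # returning caregiver
    Applicant("B", years_experience=2, employment_gap_months=0),
]
print([a.name for a in naive_ats_screen(pool)])  # ['B'] -- the more experienced applicant is dropped
```

The bias here is not in any single line of code but in the choice of the rule itself, which is why the design stage deserves as much scrutiny as the implementation.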

Research by Ifeoma Ajunwa, Associate Professor of Law at the UNC School of Law and founding director of the AI Decision-Making Research Program, further exposes how machine learning algorithms can create or exacerbate economic inequality in “The Paradox of Automation as Anti-Bias Intervention,” published in the Cardozo Law Review.

How is AI Influencing Associations?

Kimberly Mosley, executive director of the Digital Analytics Association, explains, “Mission-driven organizations such as associations are typically looking to leverage AI to improve efficiencies and reduce manual processes.” Organizations serving their missions do so with strategic priorities that make the best use of their dollars in good faith, and these goals are paramount as they provide products and services to their members, customers, and clients.

Dr. Kalinda Ukanwa, Assistant Professor of Marketing (Quantitative) at the USC Marshall School of Business and an expert on algorithmic bias and algorithmic decision-making, provided context for these issues. She agrees that “AI has great potential to make decisions faster, more efficient, and more evidence-based from the data (as opposed to decisions driven by personal tastes, attitudes, social networks, etc.).

“AI holds promise because it can take decisions to scale at a level we haven’t seen before. But some of the perils of algorithms are that they produce unanticipated biased or unfair outcomes. This creates tensions that organizations will need to wrestle with and mitigate.”

Dr. Ukanwa discussed the social harm that ensues when profits and efficiencies are the sole focus. “Organizations should be mindful that algorithms could increase profits and efficiencies, but it could be at the expense of social harm against marginalized groups.”  

See Dr. Ukanwa’s specific insights on reducing bias and improving fairness at the end of this article.

She also outlined an even bigger issue involving privacy, consumer data, and bias. “Algorithms could allow organizations to customize their products and services for each consumer, but it could be at the expense of the consumer’s privacy and the consumer’s control over how their data is being used. Algorithms can make decisions or deliver services at scale to millions more people, but if the algorithm is biased, then it is spreading bias to millions more people who may not have experienced that bias otherwise.”

“Algorithms have been shown to be more accurate than humans in many tasks, but my research has shown that this increased accuracy can also lead to decreased accessibility of the services the algorithms deliver for marginalized consumers. Organizations also have to wrestle with the tension of removing human involvement by using the algorithm. This could be good in some cases, bad in others.” 

Human-Centered AI Design

Human involvement requires an approach to AI innovation that is human-centered from the start. A human-centered approach recognizes that AI exists to enable, augment, and expand human capacity in service of society and the greater good. That central focus can help mitigate the potential harms of decreased accessibility of services and of damage to marginalized populations and society at large.

It was Amazon’s own machine learning specialists who uncovered bias against women in the company’s experimental recruiting tool, an engine they had admittedly hoped would be a “holy grail” they could set and forget. Among numerous issues, they found the algorithm favored men over women. Problems like this are systemic within the AI ecosystem, and a human-centered approach is needed to bridge the gap between a myriad of ethical, business, and technology issues. Human-centered AI design (HCAI) is an emerging discipline built to bridge that gap: an ethical problem-solving approach that places human needs at the center of design efforts. The shift to a multidisciplinary approach will be a win for humankind. The stakes are especially high when we look at human rights and the use of AI within the justice system.
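How does a model trained only on past decisions end up favoring men? Here is a toy sketch with fabricated data, not Amazon’s actual system. Reporting on that case noted the engine penalized résumés containing terms like “women’s”; the flagged-term feature below stands in for that pattern and is purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fabricated history mimicking a decade of skewed hiring:
# feature 0 = years of experience
# feature 1 = 1 if the resume contains a flagged term (e.g., "women's"), else 0
rng = np.random.default_rng(0)
n = 1_000
experience = rng.uniform(0, 10, n)
flagged_term = rng.integers(0, 2, n)

# Past (human) decisions rewarded experience but rarely hired candidates
# whose resumes contained the flagged term -- the bias the model inherits.
hired = ((experience > 5) & (flagged_term == 0)).astype(int)

X = np.column_stack([experience, flagged_term])
model = LogisticRegression().fit(X, hired)

# The learned weight on the flagged term is strongly negative: the model
# has "discovered" the historical bias and will now apply it at scale.
print(dict(zip(["experience", "flagged_term"], model.coef_[0])))
```

Nothing in this code mentions gender explicitly; the bias arrives entirely through the historical labels, which is why auditing training data matters as much as auditing the model itself.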

An Example of Bias in AI with Predictive Policing

A case in point is the widespread use of predictive policing. For the past decade, some of the largest cities in the country have used predictive policing to help solve and prevent crime and to predict recidivism. HCAI could help advance the cause of justice by bringing a more balanced approach than the data-only methodologies in current use. Predictive policing experiments have come under increasing scrutiny for civil rights violations, along with a host of related issues such as the use of “dirty data”: inaccurate, incomplete, inconsistent, or otherwise flawed data or datasets. Because that data reflects biased or unlawful policing practices, it underscores the inextricable link between human involvement and data practices. Police departments are not the only culprits, either; many organizations have large blind spots in their data practices that can lead to systemic algorithmic bias.

What can mission-driven organizations do to harness AI to truly serve and improve societal outcomes?

  • Examine the structures and business environment impacting AI and incorporate human-centered design.
  • Employ an AI data strategy and governance practices to mitigate risks and ensure accountability.
  • Be vigilant and engage in continuous learning about the use of AI, its risks and impacts.
  • Ask questions: What should we do differently? Why are we doing this? Is there a better way? What problems do we need to fix?

Artificial intelligence is here to stay. As billions of dollars are spent, it is extremely important to stay focused on the human side of the equation and to account for the realities of the environment in which AI is created and used. It is equally important to acknowledge AI’s flaws and to develop a human-centered mindset for mitigating bias and ensuring that AI produces positive outcomes for the benefit of humanity.

How to Reduce Bias & Improve Fairness in Your Organization’s Algorithms

Dr. Ukanwa provided specific insights for organizations to reduce bias and improve fairness in the algorithms they use.

  • Learn more about algorithmic bias and fairness. Acknowledge that it exists and needs to be addressed.
  • Re-examine and reconsider the design of the algorithm.
    • Examine whether the algorithm’s design systematically excludes sociodemographic groups because of the rules or features it includes.
    • Exclude sociodemographic features, or their proxies, from the algorithm design (a research finding from Ukanwa and Rust 2021); see the proxy-screening sketch after this list.
    • Include the social impact on the organization’s goal (such as profits) in the algorithm design (a research finding from Ukanwa and Rust 2021).
    • Apply fairness, accountability, transparency, and ethics (FATE) standards to the algorithm design.
  • Re-examine and reconsider the data used to train the algorithm. 
    • Ensure high variation and high representation in the data (e.g., does data represent a wide array of sociodemographic groups?).
    • Invest in measurement error reduction methods and tools.
    • Audit your algorithms by running test cases through them and examining the outputs for bias, as in the audit sketch after this list.
    • Apply fairness, accountability, transparency, and ethics (FATE) standards to the data.
    • Use bias measurement toolkits such as IBM’s AI Fairness 360, Microsoft’s Fairlearn, or the FairML Python library to measure the degree of bias.
  • Diversify the creators/builders/owners of algorithms and data.
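Following the design considerations above, here is a minimal sketch of one way to screen for proxy features, using correlation with the protected attribute as a heuristic. The function, its name, and the 0.6 cutoff are illustrative assumptions of this article, not a method specified by Ukanwa and Rust (2021).

```python
import numpy as np

def drop_proxy_features(X, feature_names, protected, threshold=0.6):
    """Drop features that correlate strongly with a protected attribute
    and therefore act as proxies for it.

    X         : (n_samples, n_features) numeric array
    protected : (n_samples,) numeric encoding of the protected attribute
    threshold : illustrative cutoff on |Pearson r|; tune per context
    """
    keep = []
    for j, name in enumerate(feature_names):
        r = abs(np.corrcoef(X[:, j], protected)[0, 1])
        if r >= threshold:
            print(f"dropping likely proxy {name!r} (|r| = {r:.2f})")
        else:
            keep.append(j)
    return X[:, keep], [feature_names[j] for j in keep]

# Tiny fabricated example: the first feature tracks the protected
# attribute exactly, so it is flagged even though the protected
# attribute itself was never a model input.
X = np.array([[1.0, 30.0],
              [0.0, 60.0],
              [1.0, 50.0],
              [0.0, 35.0]])
protected = np.array([1, 0, 1, 0])
X_clean, kept = drop_proxy_features(
    X, ["neighborhood_score", "commute_minutes"], protected)
print(kept)  # ['commute_minutes']
```

Note that correlation only catches linear proxies; combinations of features can encode a protected attribute nonlinearly, which is one reason the audit step below still matters.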
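And here is a sketch of that audit step: run the trained algorithm on a held-out test batch and compare selection rates across groups. The disparate-impact ratio below is a standard fairness metric (values under roughly 0.8 trip the “four-fifths rule” used in U.S. employment guidance), and toolkits like AI Fairness 360 compute it, and many others, out of the box. The decisions and group labels here are fabricated.

```python
def disparate_impact(decisions, groups, privileged):
    """Ratio of selection rates: unprivileged group / privileged group.

    decisions : iterable of 0/1 algorithm outputs on a test batch
    groups    : parallel iterable of group labels (assumes two groups,
                both present in the batch)
    """
    def rate(g):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(picks) / len(picks)

    unprivileged = next(g for g in set(groups) if g != privileged)
    return rate(unprivileged) / rate(privileged)

# Audit example: collect the model's decisions on a balanced test batch,
# then check whether outcomes differ by group.
decisions = [1, 0, 1, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact(decisions, groups, privileged="A"))  # 0.33 -> red flag
```

Running such test batches routinely, rather than once at launch, helps catch bias that drifts in as the data feeding the algorithm changes.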

About the Author

Celeste Smith, CAE is an association professional and a member of Forum's Content Working Group.
