Unveiling the Impact of AI Systems in Public Service

Table of Contents

  1. Introduction
  2. About the Author
  3. AI in Work and Labor
    • Promises of AI in Recruiting
    • Biases in AI Recruiting Systems
    • Gaming AI Recruiting Systems
  4. AI in Healthcare
    • Promises of AI in Healthcare
    • Erroneous Withholding of Benefits
    • Racial Bias in Scheduling
    • Racism in Automated Risk Detection
  5. Intervention Points for AI Systems in Public Service
    • Reconsidering Motivations for AI Adoption
    • Analyzing the Purpose of AI Tools
    • Evaluation Process for AI Systems
  6. Further Reading
    • "Automating Inequality" by Virginia Eubanks
    • "Race After Technology" by Ruha Benjamin
    • "Design Justice" by Sasha Costanza-Chock
  7. Organizations to Follow
    • Black in AI
    • Algorithmic Justice League
    • Data for Black Lives
  8. Conclusion

🤖 AI for Whom: Unveiling the Implications of AI Systems for Public Service


Introduction

In this article, we examine AI systems in public service and shed light on who benefits and who may be adversely affected by these technologies. We explore two specific domains: AI in work and labor, and AI in healthcare. These two areas provide concrete examples of the promises made for AI, the outcomes observed in practice, and the implications for policymakers and public interest advocates. By understanding the limitations and challenges of AI systems, we can work towards equitable and fair solutions that truly address the needs of the citizenry.

About the Author

Before we dive into the examples and discuss potential solutions, let's take a moment to get to know the author of this article, Danya Glabau. Danya is an industry assistant professor at NYU Tandon School of Engineering, specializing in the intersection of technology, culture, and society. She holds a Ph.D. in Science and Technology Studies and directs the Science and Technology Studies major at NYU. With a strong focus on ethics and the social dynamics of science and technology, Danya brings a unique perspective to the subject of AI in public service.

AI in Work and Labor

Promises of AI in Recruiting

Artificial Intelligence has been hailed as a tool that can revolutionize the recruitment process by correcting human biases, evaluating applicants based on indicators of success, saving time and money, and protecting the safety and well-being of employees and clients. The idea of having an automated system that can ensure fairness and efficiency in hiring practices is indeed alluring. However, the reality often falls short of these promises.

Biases in AI Recruiting Systems

In certain cases, AI recruiting systems have perpetuated biases instead of correcting them. In a widely reported 2018 case, Amazon was found to have used computer models, trained on resumes submitted over a ten-year period, to vet applicants. Because most of those resumes came from men, a reflection of the male-dominated tech industry, the models learned to favor male candidates and penalized resumes that included terms like "women's." This case shows how biases in the data used to train AI systems can seep into the decision-making process.
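
The mechanism is easy to reproduce in miniature. The sketch below is purely illustrative (it is not Amazon's actual system, and the resumes and terms are invented): a simple bag-of-words scorer learns term weights from historical hiring decisions, and because the historical "hired" pool is mostly male, terms that appear mainly on women's resumes end up with negative weights.

```python
# Illustrative sketch, NOT the real system: a bag-of-words scorer
# trained on a hypothetical, male-dominated hiring history.
from collections import Counter

# Invented training data: (resume terms, hired?)
history = [
    (["software", "chess club"], True),
    (["software", "chess club"], True),
    (["software", "rowing"], True),
    (["software", "women's chess club"], False),
    (["software", "women's rowing"], False),
]

hired, rejected = Counter(), Counter()
for terms, was_hired in history:
    for t in terms:
        (hired if was_hired else rejected)[t] += 1

def weight(term):
    # Positive if the term appears more among hired resumes,
    # negative if it appears more among rejected ones.
    return hired[term] - rejected[term]

def score(terms):
    return sum(weight(t) for t in terms)

print(score(["software", "chess club"]))          # favored
print(score(["software", "women's chess club"]))  # penalized
```

No one wrote a rule penalizing "women's"; the negative weight falls out of the skewed history the model was trained on, which is exactly the failure mode described above.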

Another example comes from the language used in job ads themselves. Startups like Textio have analyzed job ads and uncovered how certain words and phrases attract different applicants, often based on gender. This finding demonstrates how AI systems can inadvertently perpetuate gender biases in hiring practices.

Gaming AI Recruiting Systems

One alarming trend is the emergence of videos on platforms like YouTube that aim to train potential candidates in the art of succeeding in AI-analyzed interviews. These videos provide strategies to navigate the signals that AI systems look for, potentially undermining the validity and integrity of the hiring process. This further highlights the discrepancy between the intended purpose and the actual impact of AI systems in recruiting.

Despite the promises, AI-driven recruiting systems are not always aligned with the needs of the people. Instead of addressing biases, they can inadvertently reinforce them, leading to an even more biased and unfair hiring process.

AI in Healthcare

Promises of AI in Healthcare

The potential benefits of AI in healthcare are immense. AI systems hold the promise of reducing benefits fraud, minimizing biases in decision-making, increasing efficiency in scheduling, and automating risk detection. However, as we dig deeper, we find that the reality is often far from ideal.

Erroneous Withholding of Benefits

In the case of automated eligibility for benefits, the removal of human caseworkers can have serious consequences for vulnerable individuals. Virginia Eubanks highlights an incident involving a young girl named Sophie, whose benefits were automatically withheld because of a simple administrative oversight. The system, calibrated to default to reducing benefits whenever a mistake occurred, failed to consider her circumstances. A system designed to automate decision-making thus ended up disadvantaging those who needed support most.

Racial Bias in Scheduling

Another example of AI's impact is demonstrated in automated scheduling systems used in safety net clinics. These systems allocate appointment times based on historical data, aiming to balance patient loads. However, research revealed that these systems tended to overbook appointments for black patients. The algorithms unwittingly perpetuated racial biases, leading to longer waiting times and inequitable access to healthcare services.
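
One plausible mechanism, sketched below with invented numbers, is a scheduler that double-books patients with high predicted no-show rates (an assumption of this sketch, not a detail stated above). If those predictions are learned from historical attendance data that correlates with race via barriers like transport and shift work, the overbooking, and the extra waiting when both patients do show up, lands disproportionately on black patients.

```python
# Toy sketch of how a load-balancing scheduler can produce racially
# skewed overbooking. All names, groups, and probabilities are
# hypothetical; the no-show mechanism is an assumption for illustration.

patients = [
    {"name": "A", "group": "white", "predicted_no_show": 0.05},
    {"name": "B", "group": "white", "predicted_no_show": 0.10},
    {"name": "C", "group": "black", "predicted_no_show": 0.30},
    {"name": "D", "group": "black", "predicted_no_show": 0.35},
]

def schedule(patients, overbook_threshold=0.25):
    """Assign high-predicted-no-show patients to shared (overbooked) slots."""
    plan = {}
    for p in patients:
        slot = "shared" if p["predicted_no_show"] > overbook_threshold else "dedicated"
        plan[p["name"]] = slot
    return plan

plan = schedule(patients)
# Patients in shared slots wait longer whenever both show up, so a
# disparity in predicted no-shows becomes a disparity in waiting time.
print(plan)
```

The scheduler never sees race as an input, yet the outcome is racially skewed because its training signal encodes unequal circumstances.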

Racism in Automated Risk Detection

Automated risk detection programs designed to allocate patients to high-risk management programs have also revealed bias. Research by Ziad Obermeyer and colleagues, discussed by Ruha Benjamin in an accompanying commentary, demonstrated that black patients with the same risk score as white patients tended to be much sicker. Because the tool used predicted healthcare costs as a proxy for health needs, and historically less money is spent on black patients with the same level of need, it systematically underestimated the needs of already underserved black patients.
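
The proxy problem can be shown with two hypothetical patients (the numbers below are invented for illustration): equal medical need, but unequal access to care means unequal historical spending, so a cost-based score ranks the lower-spending patient as lower risk.

```python
# Minimal sketch of the cost-as-proxy failure. All figures are
# hypothetical; "chronic_conditions" stands in for true health need.

patients = {
    "white_patient": {"chronic_conditions": 4, "annual_cost": 12000},
    "black_patient": {"chronic_conditions": 4, "annual_cost": 7000},
}

def cost_based_risk(p):
    # Proxy target: predicted spending stands in for health need.
    return p["annual_cost"] / 1000

def need_based_risk(p):
    # Alternative target: measure health directly.
    return p["chronic_conditions"]

for name, p in patients.items():
    print(name, cost_based_risk(p), need_based_risk(p))
```

Both patients are equally sick, but the cost-based score diverges; choosing a target that measures health directly, rather than spending, removes this particular distortion.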

The use of AI in healthcare, despite its promises, can inadvertently perpetuate inequalities and exacerbate existing biases.

Intervention Points for AI Systems in Public Service

As we witness the limitations and unintended consequences of AI systems, it is crucial to identify intervention points that can rectify these issues. Here are three key areas to consider:

Reconsidering Motivations for AI Adoption

Before implementing AI systems, it is essential to reflect on the motivations driving their adoption. Saving money should not be the sole objective. Understanding who has access to necessary public services and who does not, even before AI systems are introduced, is critical for creating equitable solutions.

Analyzing the Purpose of AI Tools

Careful reflection and analysis of AI systems' purpose are paramount in creating fair and effective tools. Decision-making processes based solely on past outcomes can perpetuate existing inequalities. It is essential to consider whether the system was designed to distribute public benefits and services or to boost revenue for for-profit entities.

Evaluation Process for AI Systems

A robust evaluation process is necessary to ensure that AI systems are serving the public interest. Stakeholders from various fields, not just technology experts, should be involved to provide expertise in public health, research, communication, and governance. Meaningful community consultation and the ability to stop the implementation of a system should be integrated into the evaluation process to ensure accountability and fairness.

Further Reading

To deepen your understanding of the implications of AI systems, consider exploring the following books:

  1. "Automating Inequality" by Virginia Eubanks - This book provides an insightful analysis of how automated decision-making systems perpetuate inequality and offers a critical perspective on the role of technology in public service.

  2. "Race After Technology" by Ruha Benjamin - By examining the intersection of race and technology, this book reveals the ways in which biases manifest in AI systems and provides tools for creating more just and equitable technologies.

  3. "Design Justice" by Sasha Costanza-Chock - This book explores how communities can be meaningfully involved in the design and evaluation of technological systems, emphasizing the importance of inclusivity and justice.

Organizations to Follow

Stay informed about the latest developments and discussions by following these organizations:

  • Black in AI: This community brings together black researchers in the field of AI, fostering conversations and promoting inclusivity in AI research and development.

  • Algorithmic Justice League: Founded by Joy Buolamwini from the MIT Media Lab, this organization combines technology and activism to address the biases and harms perpetuated by AI systems.

  • Data for Black Lives: This organization focuses on data justice and works to empower black communities through the responsible and equitable use of data.

By staying engaged with these organizations, you can gain valuable insights and contribute to the ongoing conversation surrounding AI systems and their impact on public service.


Conclusion

AI systems have the potential to transform public service for the better. However, it is essential to critically examine their promises, limitations, and unintended consequences. By reconsidering motivations, analyzing purpose, and implementing robust evaluation processes, we can work towards AI systems that address the needs of all citizens, particularly those who are underserved or disadvantaged. Let us strive for a future where AI serves the public good with equity, compassion, and fairness.


  • AI systems in work and labor and healthcare domains have limitations and unintended consequences.
  • AI recruitment systems can perpetuate biases and invite gaming of the system.
  • AI healthcare systems can erroneously withhold benefits and perpetuate racial biases.
  • Intervention points include reconsidering motivations, analyzing AI systems' purpose, and conducting robust evaluation processes.
  • Recommended books: "Automating Inequality" by Virginia Eubanks, "Race After Technology" by Ruha Benjamin, and "Design Justice" by Sasha Costanza-Chock.
  • Follow Black in AI, Algorithmic Justice League, and Data for Black Lives for more insights on AI and equity in public service.


Q: How can AI recruitment systems perpetuate biases?

A: AI recruitment systems can perpetuate biases when they are trained on biased datasets. For example, if historical resumes submitted to a company predominantly come from male applicants, the system may learn to favor male candidates, inadvertently penalizing resumes with words associated with women.

Q: What are the unintended consequences of AI systems in healthcare?

A: Unintended consequences of AI systems in healthcare include the erroneous withholding of benefits due to system errors, racial biases in scheduling appointments, and automated risk detection systems that disproportionately impact black patients, perpetuating disparities in healthcare outcomes.

Q: What are some intervention points for improving AI systems in public service?

A: Intervention points for improving AI systems in public service include reconsidering motivations for AI adoption, analyzing the purpose of AI tools, and implementing a robust evaluation process that involves stakeholders from various fields and allows for community input and accountability.

Q: Can you recommend organizations to follow for insights on AI and equity in public service?

A: Black in AI, Algorithmic Justice League, and Data for Black Lives are organizations that provide valuable insights and foster discussions on AI and equity in public service.


