Algorithmic Accountability in Public Sector Decision Making: A Framework for Auditing Machine Learning Biases in Social Services

Author: Dr. K Jayaprakash

Abstract:

As governments increasingly deploy Machine Learning (ML) systems to automate high-stakes social service decisions—such as welfare allocation, housing prioritization, and child welfare risk assessment—the risk of algorithmic bias has become a critical concern. While fairness metrics exist, they are often applied ad hoc, without a standardized auditing process suited to the public sector's unique legal and ethical constraints. This paper proposes "CivicAudit," a comprehensive end-to-end framework for auditing ML models in public administration. We define a three-layered approach:

  1. Data Provenance Analysis to detect historical systemic inequalities,
  2. Metric-Based Stress Testing utilizing a novel weighted fairness score, and
  3. Counterfactual Explanation Generation for stakeholder transparency.

We demonstrate the efficacy of CivicAudit through a simulated case study on a housing allocation algorithm, revealing latent biases against marginalized demographics that traditional accuracy metrics failed to capture.
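To make the second layer concrete, the following is a minimal sketch of what a weighted fairness score for Metric-Based Stress Testing might look like. The metric choices (demographic-parity gap and equal-opportunity gap), the weights, and all function names here are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical weighted fairness score in the spirit of CivicAudit's
# Metric-Based Stress Testing layer. All names and weights are
# illustrative assumptions.

def group_rates(y_true, y_pred, groups):
    """Per-group selection rate and true-positive rate."""
    stats = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        sel = sum(y_pred[i] for i in idx) / len(idx)
        pos = [i for i in idx if y_true[i] == 1]
        tpr = sum(y_pred[i] for i in pos) / len(pos) if pos else 0.0
        stats[g] = (sel, tpr)
    return stats

def weighted_fairness_score(y_true, y_pred, groups,
                            w_parity=0.5, w_eo=0.5):
    """Returns 1.0 for perfect parity; lower values signal disparity."""
    stats = group_rates(y_true, y_pred, groups)
    sels = [s for s, _ in stats.values()]
    tprs = [t for _, t in stats.values()]
    parity_gap = max(sels) - min(sels)  # demographic-parity gap
    eo_gap = max(tprs) - min(tprs)      # equal-opportunity gap
    return 1.0 - (w_parity * parity_gap + w_eo * eo_gap)
```

For example, if one demographic group receives positive decisions at a much lower rate than another, the parity gap grows and the score drops even when overall accuracy is unchanged, which is the kind of latent disparity the stress-testing layer is meant to surface.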

Keywords:

Algorithmic Fairness, Public Sector AI, Algorithmic Auditing, Explainable AI (XAI), Bias Mitigation, Social Computing, Digital Governance.