The Hidden Bias in Artificial Intelligence Used for Public Safety Communications
Keywords
Responsible AI, Ethical AI, Disability Inclusion
Abstract
Growing interest in adopting AI to relieve strained public safety communication systems risks exacerbating exclusion for the growing population of people with disabilities (27% of Canadians). This article argues that AI inherits societal biases and flawed design choices, leading to systemic failures for disabled communities. A key flaw is a critical data gap that causes algorithms to treat disability as an outlier, thereby amplifying real-world systemic inequities. Examples include policing AI misinterpreting atypical movements or behaviours, NLP systems failing to handle diverse speech and communication patterns, and crisis tools overlooking accessibility needs, potentially delaying or misdirecting emergency aid. To prevent AI from replicating historical inequities, the article proposes the HUMAN framework for the inclusive design or procurement of AI systems. The conclusion stresses that AI can alleviate public safety challenges only if developed with, not merely for, disabled communities, and urges the industry to choose equitable design so that these systems serve all citizens effectively.
