Research Article

The Hidden Bias in Artificial Intelligence Used for Public Safety Communications

Eyra Abraham

Founder, Lisnen Inc.

All articles published in DPG Open Access journals are distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International licence (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/).

Artificial Intelligence (AI) offers potential operational relief as emergency communication centres experience unprecedented strain from burnout and high turnover. However, in the rush to implement voice recognition and automated surveillance, a critical flaw has emerged: these systems often exclude the 27% of Canadians with disabilities, a population that has grown by 22% since 2017.

My decade of experience developing assistive AI through my startup, Lisnen, combined with my lived experience of disability, reveals a harsh reality. AI doesn’t intentionally discriminate; it inherits exclusion from flawed design choices.

We often reduce the term “disability” to the wheelchair user of the universal accessibility symbol, or envision a fixed medical condition. Disability is much more than that. It’s a fundamental mismatch between human capabilities and the environment.

When we think about disability, we must consider the whole spectrum: temporary disabilities, like a mild concussion; permanent disabilities, such as the age-related conditions common among those over 65; and situational limitations, such as trying to communicate in a loud environment. Anyone facing a temporary, permanent, or situational barrier can struggle to accomplish essential tasks.

By shifting our perspective on what disability truly means, we begin to realize that many barriers are imposed by society, often due to bias or discrimination. These societal barriers can also manifest in the technology we utilize in our daily work. AI systems can inadvertently recreate these barriers in a digital format.

How AI Amplifies Risks in Emergency Communications for Disabled Communities

The Data Gap Crisis

The essence of AI design involves data collection. Data comes from historical information derived from past beliefs, facts, and experiences, and many AI systems also scrape content from the internet to train their models. However, a notable data gap exists: approximately 96% of website home pages contain detectable accessibility failures (1). Because inaccessible sites make it harder for people with disabilities to participate online, the content scraped from the web under-represents their language, experiences, and perspectives, which is precisely the representation developers need to design equitable AI systems.

Another challenge regarding the availability of disability data is disclosure. Many individuals with disabilities are reluctant to disclose their conditions, and approximately 70% of disabilities are invisible, such as hearing loss, epilepsy, or PTSD (2). Consequently, collected and labelled data may carry no indication of a connection to a disability.

AI developers often misrepresent disability as a homogeneous category. Unlike demographic attributes such as age, gender, or race, which datasets typically treat as a small set of discrete values, disability is heterogeneous and spans a spectrum. Within the category of vision loss alone, individuals may have vastly different abilities and experiences: one person might wear eyeglasses, while another may need a white cane. Each experiences the world differently. This data complexity is further compounded by intersectionality, as the experiences of individuals with disabilities also vary across socioeconomic backgrounds and races.

When disability data is minimal, algorithms learn to treat disabilities as statistical outliers. Because they struggle to recognize an under-represented pattern as part of the larger group, they often flag data reflecting the experiences of individuals with disabilities as anomalous or negative. In this way, structural bias translates from the physical world into the digital realm, embedding in AI models the same systemic disadvantage and ableism that people with disabilities have historically faced.
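To make this failure mode concrete, consider a minimal sketch using entirely synthetic, hypothetical data: an off-the-shelf anomaly detector is fit to a dataset in which one group accounts for only 2% of examples, and it flags that group as “anomalous” even though nothing about the group is wrong. The feature names and numbers below are invented for illustration.

```python
# A toy illustration, not a real dispatch system: fit an off-the-shelf
# anomaly detector to data in which one group makes up only 2% of examples.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features (e.g., speaking pace and pause length on a call).
majority = rng.normal(loc=(1.0, 1.0), scale=0.2, size=(980, 2))  # typical callers
minority = rng.normal(loc=(2.0, 2.5), scale=0.2, size=(20, 2))   # atypical speech, 2% of data

X = np.vstack([majority, minority])
detector = IsolationForest(contamination=0.02, random_state=0).fit(X)

flags = detector.predict(X)  # -1 means "flagged as an anomaly"
print(f"minority flagged as anomalous: {(flags[980:] == -1).mean():.0%}")
print(f"majority flagged as anomalous: {(flags[:980] == -1).mean():.0%}")
```

In this toy setup, nearly all of the flagged “anomalies” belong to the under-represented group. The detector behaves exactly as designed; the exclusion is inherited from the skewed training distribution, not from any explicit rule about disability.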

AI Systems Can Perpetuate Negative Stereotypes

There is a real-world risk of public safety communication failures when AI tools are used without the involvement of people with disabilities. Here are three examples of how biases can emerge in new and developing AI systems:

AI Policing Tools

AI policing tools are increasingly integrated into law enforcement to help reduce, prevent, and respond to criminal activity (3). These systems often employ facial or image recognition technology to assist in managing crime scenes and identifying potential criminal behaviour. However, there are significant limitations to these technologies that warrant careful consideration.

A major issue is that many AI systems lack comprehensive datasets featuring a diverse range of facial features, body sizes, and physical differences. This gap can lead to challenges in accurately recognizing individuals who may use assistive devices due to disabilities. Furthermore, AI tools may struggle to interpret atypical motions or gestures, such as those exhibited by individuals with conditions like Parkinson’s disease or those who have limited mobility.

People with disabilities may display a variety of behaviours that these systems can misinterpret, particularly in high-stress situations. For example, they might flee from police, use repetitive movements to alleviate anxiety, avoid eye contact, or not respond to verbal commands because of a hearing impairment.

When AI systems flag these behaviours as “suspicious,” it can trigger unnecessary interventions, increase workload, and produce unintended, sometimes harmful consequences. It’s crucial to address these limitations to ensure that AI policing tools serve all community members fairly and effectively.

Command Centre AI Systems

Modern 911 systems are adopting AI that leverages Natural Language Processing (NLP) models. These systems process voice and text inputs into actionable insights, such as extracting essential keywords from a 911 call and swiftly assigning the most suitable emergency response (4).

Unfortunately, these systems often neglect the unique needs of individuals with disabilities. They may struggle to comprehend diverse speech patterns, accents, or atypical speech characteristics, including variations in pitch, pace, and clarity. Additionally, they often do not account for people who cannot speak or hear, such as deaf individuals who use sign language. The challenges become even more pronounced for people with dyslexia or dementia, who may express themselves using unconventional vocabulary.

This failure to recognize diverse communication styles creates a serious risk: the AI system might misjudge the urgency of a call and lower the emergency’s priority because it cannot grasp the caller’s lived experience. We must advocate for a more inclusive approach so that every person receives the appropriate care and assistance in critical moments.
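As an illustration of how a keyword-extraction step of the kind described above can fail, here is a minimal, hypothetical sketch of keyword-based call prioritization. The keyword list, the triage function, and the sample transcripts are all invented for this example.

```python
# A toy keyword-triage step. The keyword list, function, and transcripts
# are invented for illustration only.
HIGH_PRIORITY_KEYWORDS = {"fire", "gun", "not breathing", "chest pain", "bleeding"}

def triage(transcript: str) -> str:
    """Assign a call priority based on keywords found in the transcript."""
    text = transcript.lower()
    if any(keyword in text for keyword in HIGH_PRIORITY_KEYWORDS):
        return "HIGH"
    return "ROUTINE"

# A fluent caller matches the expected phrasing.
print(triage("My father has chest pain and can't stand up"))   # HIGH

# A caller with aphasia or dementia describing the same emergency in
# unconventional vocabulary is silently down-ranked.
print(triage("His heart... hurting, the middle, he is grey"))  # ROUTINE

# A garbled speech-to-text transcript of atypical speech fails the same way.
print(triage("ches pane cant breeve"))                         # ROUTINE
```

Production systems use far richer models than a keyword list, but the structural problem is identical: when training data lacks atypical speech and vocabulary, the model’s confidence, and therefore the call’s priority, drops for exactly the callers who may need help most.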

AI in Crisis Communication

There are two distinct types of AI-powered communication tools currently available. The first uses AI to analyze social media platforms for situational insights during critical events, such as natural disasters or acts of terrorism (5). The second uses AI, rather than a human, as a media representative (6).

However, these systems often fail to consider the unique communication needs of people with disabilities. For instance, a tool may publish critical updates as images without text alternatives, leaving blind users who rely on screen readers with no way to access the information. There is a pervasive bias in our society regarding how individuals with disabilities communicate, the contexts in which they communicate, and the technologies that help them share or receive information.

If we fail to include these essential considerations in the development of communication tools, we risk exacerbating societal inequities and hindering access to critical information during crises. By prioritizing inclusivity, we can ensure that our communication systems effectively and equitably serve everyone.

The Solution to Building Inclusive AI Systems in Public Safety

The HUMAN Framework

To mitigate risks when developing or procuring AI systems, inclusion must be a structural commitment rather than an afterthought. I propose the HUMAN framework to guide anyone in taking the necessary steps. It is an easy-to-remember framework that compiles the recommendations of Accessibility Standards Canada’s Accessible and Equitable Artificial Intelligence Systems (7) and ForHumanity’s Disability Inclusion and Accessibility audit curriculum (8).

H - Hire and Involve Disabled Experts: Incorporating individuals with disabilities into your workforce is crucial for gaining diverse perspectives and enhancing understanding of the complexities surrounding AI systems. People with disabilities bring unique lived experiences that can help identify knowledge gaps and highlight potential oversights in the deployment of AI technologies. To minimize exclusion, organizations should consider employing experts with disabilities and collaborating with disability advisors when assessing AI tools. These experts can contribute to testing the systems or help establish criteria for procurement, ensuring that products meet the needs of a diverse population. Additionally, investing in training on disability inclusion is essential for cultivating a workplace culture that values and promotes inclusion.

U - Universal Design Standards: Numerous standards play a crucial role in bridging the digital divide. Accessibility Standards Canada offers a variety of standards that make Information and Communication Technology (ICT) products and services, as well as AI systems, accessible. Global benchmarks like the Web Content Accessibility Guidelines (WCAG) 2.2, applied at conformance Level AA, promote a more inclusive web environment. Universal design considers the diverse ways individuals with disabilities engage with AI systems while providing accessible options. This approach also emphasizes validating these systems with assistive technologies, such as screen readers, to ensure that everyone has equitable access to technological advancements.

M - Mitigate Algorithmic Bias: When developing or procuring AI systems, it’s essential to select those that report disability-specific accuracy metrics, have undergone an equity audit, and document intentionally curated, inclusive datasets. Systems that have been tested for accessibility provide greater confidence that they can handle the unique edge cases encountered by people with disabilities. A brief sketch of what disability-specific metrics look like in practice follows this framework.

A - Accountable Oversight: Building responsible AI systems hinges on accountability and oversight, with transparency to the public playing a crucial role. It is important for individuals to be informed when they are interacting with an AI system. They should have the option to opt out of using it. Additionally, users should be aware of any failures in the AI system and the reasons behind those failures. Implementing a transparent incident reporting system that tracks AI failures and outlines processes for resolution is essential for promoting inclusion. This system should be designed to be accessible to individuals with disabilities and presented in plain language to ensure that everyone can easily comprehend the information and take appropriate action.

N - Necessary Alternatives: There is an emerging discourse regarding potential bans on surveillance systems that do not adequately consider the needs of individuals with disabilities. Implementing a policy that favours human-led methods or proven non-AI tools, such as Next Generation 911 (NG911), could foster a more inclusive environment. NG911 allows contact via voice, text, video, and photos, letting people communicate in ways that are natural to them. Its interoperability with devices like smartphones helps overcome communication barriers. The deaf community and many other disability communities strongly advocate for NG911 because it removes slow third-party Video Relay Services, enabling direct, accurate communication with emergency services. Prioritizing equity is the most effective approach to serving the public and enhancing organizational efficiency.
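To illustrate the “M” step above, here is a minimal sketch of disaggregated accuracy reporting: rather than one aggregate score, accuracy is computed separately for each group so that a failure on an under-represented group cannot hide inside a healthy-looking average. The group labels and counts are hypothetical.

```python
# Minimal sketch (hypothetical numbers): disability-specific accuracy metrics.
# An aggregate score can look acceptable while one group fails badly.
from collections import defaultdict

# (group, model_was_correct) pairs from a labelled evaluation set,
# where group membership is self-identified and consented.
results = (
    [("typical_speech", True)] * 46 + [("typical_speech", False)] * 4 +
    [("atypical_speech", True)] * 5 + [("atypical_speech", False)] * 5
)

by_group = defaultdict(list)
for group, correct in results:
    by_group[group].append(correct)

aggregate = sum(correct for _, correct in results) / len(results)
print(f"aggregate accuracy: {aggregate:.0%}")  # 85%: looks fine on its own

for group, outcomes in by_group.items():
    print(f"{group:>15}: {sum(outcomes) / len(outcomes):.0%} (n={len(outcomes)})")
# typical_speech: 92%, atypical_speech: 50%; the average hides the failure.
```

A procurement criterion can then bind to the worst-performing group’s score rather than the aggregate, which directly operationalizes the equity-audit requirement described above.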

Conclusion

AI has the potential to lessen public safety challenges, but this is only possible if it is designed with the involvement of disabled communities. As emergency systems evolve, the industry faces an important ethical decision: to either continue replicating historical exclusions or to develop equitable, inclusive AI solutions. With an increasing number of people experiencing some form of disability, neglecting these communities will deepen the divide in access to public services. The HUMAN framework presented here aims to transform public safety communications, ensuring they are inclusive and accessible to all Canadians.

REFERENCES

1. The WebAIM Million [Internet]. WebAIM. 2024. Available from: https://webaim.org/projects/million/

2. Mutebi N, Kelly R. Invisible Disabilities in Education and Employment [Internet]. POST. 2024. Available from: https://post.parliament.uk/research-briefings/post-pn-0689/

3. Epstein B, Emerson J. Navigating the Future of Policing [Internet]. Police Chief Magazine. 2024. Available from: https://www.policechiefmagazine.org/navigating-future-ai-chatgpt/

4. Philimon W. Part Two: Body-Worn Camera Analytics [Internet]. The Policing Project. 2025. Available from: https://www.policingproject.org/rethinking-response-articles/2025/5/8/part-two-body-worn-camera-analytics-e3zg9

5. Integrating Artificial Intelligence Into Crisis Management [Internet]. Juvare. Available from: https://www.juvare.com/integrating-artificial-intelligence-into-crisis-management

6. Salih MHB. Use of Artificial Intelligence (AI) in Public Relations (A Case Study of Government Offices in Pakistan). SSRN Electronic Journal [Internet]. 2025. Available from: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5412555

7. Accessible and Equitable Artificial Intelligence Systems [Internet]. Accessibility Standards Canada. 2025. Available from: https://accessible.canada.ca/creating-accessibility-standards/asc-62-accessible-equitable-artificial-intelligence-systems

8. ForHumanity [Internet]. Available from: https://forhumanity.center/