In 2021, the American Association of People with Disabilities (AAPD) and the Center for Democracy & Technology (CDT) released a report entitled “Centering Disability in Technology Policy: Issue Landscape and Potential Opportunities for Action.” This represented a significant milestone in a partnership between AAPD and CDT to ensure that people with disabilities are properly represented in the field of technology policy. The report provided technology policy advocates with an overview of tech issues that disproportionately impact people with disabilities, as well as recommendations for how those individuals can include disability perspectives in their advocacy.
Since this release, AAPD and CDT have expanded their partnership and have worked together to bring awareness and provide policy solutions that benefit people with disabilities in their interactions with technology, particularly AI and algorithmic systems. This report (released in tandem with a shorter brief) furthers this important work by specifically providing recommendations for disabled community members, disability rights and justice advocates, government agencies, and private-sector AI practitioners regarding best practices for ensuring that people with disabilities are able to enjoy the benefits of AI and algorithmic technologies while being safeguarded from their risks.
It does this by presenting major areas of concern for people with disabilities when they interact with technologies in the context of several major systems: employment, education, government benefits, information and communications technology (ICT), healthcare, transportation, and the criminal legal system. Some of these systems (including employment, education, law enforcement, and healthcare) were briefly covered in the “Centering Disability” report, and this report expands on that work; other areas are entirely new. These are, of course, not the only rights-impacting areas in which people with disabilities are affected by technologies. However, providing recommendations for including people with disabilities in these high-stakes areas can hopefully serve as a useful resource, building on AAPD and CDT’s earlier work in this area.
In the midst of a significant expansion of anti-DEIA measures and a significant decrease in the regulatory ambition of the federal government, it may seem a strange time for CDT and AAPD to engage in this work, and particularly to focus on federal agency recommendations. However, it remains as important now as it was in 2021 to ensure that people with disabilities are properly considered in the development of AI technologies and regulations. Further, at least some of the recommendations geared towards federal agencies may be applicable to state and local agencies as well. And even if agencies do not act on these recommendations in the short term, they will likely remain useful touchpoints for any future attempts to create a disability-inclusive AI ecosystem.
Disabled people are at specific risk of discrimination when interacting with AI and algorithmic systems, for several reasons. First, many AI and algorithmic tools rely on pattern recognition, making determinations based upon typical patterns within a particular dataset. However, many disabled people (by virtue of their disability) exist outside of typical patterns — they may have gait differences, vocal differences, atypical eye movements, etc. These tools may inadvertently discriminate against people with these sorts of disabilities, particularly when they rely on biometric inputs.
Second, AI and algorithmic technologies create outputs based on inputs, which are again derived from datasets (sometimes referred to as “training data”). Oftentimes, these datasets are not properly inclusive of people with disabilities — they may contain inaccurate data about disability, undersample disabled people, or improperly tag information as being related to disability. Any of these shortcomings can lead to AI tools that discriminate against disabled people and potentially contribute to negative outcomes.
And third, many people with disabilities are multiply marginalized, meaning that they are both disabled and identify as members of another marginalized group (like a disabled person of color, or a disabled LGBTQ+ person). Many AI and algorithmic tools have been shown to pose unique risks to other marginalized groups as well, meaning that multiply marginalized disabled people are at particular risk of facing discriminatory outcomes as a result of their interactions with these tools. For these reasons and more, this partnership is an important step towards mitigating the potential harms of technology-facilitated disability discrimination, while bolstering innovation that allows helpful tech tools for people with disabilities to flourish.
People with disabilities can benefit from AI, algorithmic tools, and other technologies. But these tools can also serve as vectors of discrimination, and concerns over accessibility, bias, and privacy abound, particularly when biometric data is involved. Ensuring that people with disabilities are centered in the creation, deployment, and auditing of these technologies and of the policies that govern them can help ensure that the promise of these tools can eventually be realized for all.
Read the full report by AAPD’s Henry Claypool and CDT’s Ariana Aboulafia here.