Solution Overview & Team Lead Details

What is the name of your solution?

AI Assistant for Accessible Assessments

Provide a one-line summary of your solution.

Leveraging generative AI to transform assessments into plain-language text that all learners can easily read and understand.

What type of organization is your solution team?

Nonprofit (may include universities)

What is the name of the organization that is affiliated with your solution?

Understood

What is your solution?

Most assessments are not as accessible as they could be, especially for students with disabilities that impact reading fluency, comprehension, etc. Assessment questions are often written with complex language; they can be too long or contain words with too many syllables, may contain compound questions, or are otherwise not optimized for all learners. This inaccessibility prevents students from fairly and accurately representing their knowledge and skills.

Our proposed solution will transform any assessment into an accessible assessment, building confidence for those taking it and reducing the confusion and discouragement that can come with a test that is hard to understand. By leveraging generative AI, assessment questions of all types will be rewritten to maximize inclusion. Whether it is a large assessment company creating assessments to be administered at scale or a single teacher creating a short check for understanding, this tool will ensure that assessment questions are delivered in a way that is inclusive of all learners.

The first iteration of the tool will simply determine whether an assessment question meets accessibility standards, and to what extent. The next iteration will provide this determination and suggest alternative language to make the question accessible. We will augment the LLM behind this software to pull from an expert-reviewed knowledge base written in a neutral tone.

Educators would simply submit their assessment questions and answer types, and the technology would reword all text to align with appropriate grade-level parameters, syllable restrictions, sentence length, and related standards. Users can review the revised assessment to ensure that it meets their standards, and we will provide channels for reporting issues. Understood would learn from the types of questions submitted, the tool's output, and any reports filed to continuously refine and improve the tool. The simplicity of this tool and its capabilities have the potential to shorten the time educators must spend making sure assessments can be easily understood by all. For transparency, we will publish the accessibility standards we use.
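To illustrate the kind of first-iteration check described above, here is a minimal sketch of an accessibility screen based on simple thresholds (words per sentence, syllables per word). The function names and thresholds are hypothetical illustrations, not our published standards.

```python
import re

def syllable_count(word: str) -> int:
    """Rough heuristic: count vowel groups. Good enough for screening."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def accessibility_report(question: str,
                         max_words_per_sentence: int = 15,
                         max_syllables_per_word: int = 3) -> dict:
    """Flag sentences and words that exceed simple accessibility thresholds."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", question) if s.strip()]
    long_sentences = [s for s in sentences
                      if len(s.split()) > max_words_per_sentence]
    words = re.findall(r"[A-Za-z]+", question)
    hard_words = [w for w in words
                  if syllable_count(w) > max_syllables_per_word]
    return {
        "long_sentences": long_sentences,
        "hard_words": hard_words,
        "passes": not long_sentences and not hard_words,
    }
```

In practice the thresholds would be adjusted per grade level, and a question that fails the screen would be handed to the LLM for rewording.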

This solution will be the next evolutionary phase of our AI assistant, launching publicly on May 20th. This AI assistant is based on an LLM we are currently augmenting to pull from our extensive content library featuring thousands of evidence-based and expert-vetted resources. This initial tool is the first step toward a vision of an “Intelligent Hub” that will leverage AI in a multitude of ways to support neurodivergent people (and all people). This proposed solution would represent the next iteration of our AI assistant, providing an additional use case to increase impact.

How will your solution impact the lives of priority Pre-K-8 learners and their educators?

Our tool will help educators and administrators rapidly reword assessments to make them more accessible for all students. This is especially important for K-8th grade students who are developing fundamental language and literacy skills that serve as the building blocks for their future success. Accurate assessments will further enable educators to provide more appropriate and personalized support to students.

Students who struggle with literacy skills, are Multilingual Learners, or experience other neurodivergent traits that make it difficult to interact with the written word often find it easier to communicate their knowledge and ideas verbally. Simpler language in assessments means they expend less effort processing questions. This standardization of language matters because it can be matched to the level of language each student is used to, as adjusted by the teacher. Future iterations of this tool would add customization for differences in dialect to further increase students' confidence during testing.

Accessible language increases comprehension for many kinds of students. Many students in the school system are not proficient in English. According to Globo (2022), about 25 million people (8% of the population) speak English “less than very well,” the threshold for limited English proficiency (LEP). Globo also notes that in the 2014-2015 school year, 4.8 million English Language Learners (10% of all students) were registered in Department of Education schools. As these students build their vocabulary and grammar, they will benefit from assessments that consistently use standardized language aligned with the appropriate grade level.

An estimated 68% of fourth graders in the United States were reading below grade level in 2022 (National Assessment of Educational Progress, 2023). Dyslexia, a learning and thinking difference associated specifically with poorer reading skills, is one prevalent factor (Ambitions ABA, 2023).

Research shows that clear and concise text can help students improve their reading skills. Our tool will provide widely accessible, appropriate language that aligns with grade-level reading standards to drive better outcomes, and it will integrate easily into classrooms. The first launch of the tool will simply translate assessments into accessible language as outlined.

How are you and your team (if you have one) well-positioned to deliver this solution?

Understood has a 10-year track record of developing digital resources and interventions designed to foster inclusive environments through open access to our clinician-approved virtual resources, partnerships with mission-aligned organizations, and deep connection to our community. We reach an average of 20 million people per year, and we are committed to continuous improvement to consistently increase impact through depth of connection with our users. Our core mission is anchored in accessibility: we apply Universal Design for Learning in all of our content and product development, and our work is informed by an Expert Network of 80+ neuroscientists, psychologists, neuropsychologists, inclusive technologists, and other relevant subject matter experts.

Some of our impactful offerings include a content library of more than 6,000 expert-vetted articles and videos, a rapidly growing podcast network, engaged and facilitated community groups, and behavioral health exercises available on our app. Particularly relevant, we are about to release (May 20) the first iteration of our AI assistant, which will harness our vast content library to deliver real-time responses to user questions that provide both information and links to specific resources for users to learn more.

Further, we are nearly finished with a partnership project aiming to create a more accessible assessment administered to 125,000 students annually. For over a year, Understood has worked with Equal Opportunity Schools (EOS) to ensure that their annual student survey is accessible and inclusive for students who learn differently. EOS serves 125,000+ students of color from low-income backgrounds through 25,000+ educators teaching in 400+ schools within 80+ districts in 26 states. This partnership exemplified the laborious, manual processes we believe could be automated through our AI tool. For example, experts had to manually review the survey in alignment with project priorities, conduct Cognitive Interviews with students, capture demographic data, and match that data to specific parameters and how students responded. The expert team collaborated with partners for recommendations about customizing the survey, conducted accessibility trainings, and provided recommendations for assistive technology. With our AI tool, we believe we can simplify this process in more immediate and responsive ways.

Which dimension(s) of the challenge does your solution most closely address?

  • Encouraging student engagement and boosting their confidence, for example by including playful elements and providing multiple ‘trial and error’ opportunities

Which types of learners (and their educators) is your solution targeted to address?

  • Grades 1-2 - ages 6-8
  • Grades 3-5 - ages 8-11
  • Grades 6-8 - ages 11-14

What is your solution’s stage of development?

Concept

Please share details about why you selected the stage above.

We are currently in early release of a Generative AI assistant, leveraging Understood expert-vetted resources about learning differences to answer user questions. This has provided us deep insight into Generative AI capabilities, and spurred motivation for this assessment proposal concept.

We have not begun building a prototype; we are still in the concept stage. We know this is a viable concept that our internal teams have the skill to execute, but we are working to secure funding for it.

In what city, town, or region is your solution team headquartered?

New York, NY, USA

In what country is your solution team headquartered?

  • United States

Is your solution currently active (i.e. being piloted, reaching learners or educators) within the US?

No, but we will if selected for this challenge

Who is the Team Lead for your solution?

Rahul Rao

More About Your Solution

What makes your solution innovative?

The experience of neurodivergent students and the potential implications of learning and thinking differences are not adequately considered in many assessment approaches. Using AI to automate and standardize accessible language in assessments will significantly expedite the process and ensure these factors are considered in assessment design and execution. It will also save assessment creators the time and money it costs to hire contractors to address these concerns, while improving the consistency and quality of assessment questions. While other tools evaluate the accessibility of bodies of text, our AI tool would do the work of altering the questions and answers for the creator of the assessment. This innovative application of AI is simple and impactful, significantly improving the way tests are created and revised. Teachers will have a tool designed specifically to help them connect with their students. This proposed solution is far superior to the status quo, which is time-consuming, inconsistent, and expensive, with accessibility typically handled manually by an expert or not at all.

This proposed solution would also catalyze broader positive impacts from others in the space through additional use cases, and by inspiring accountability from other actors. Further development of this technology beyond the initial build has the potential to impact learning in schools as a whole. In the future, this tool could be adapted for use with more than just assessments: administrators could use it to edit text for communications with students, parents, and educators, and even outside the school setting it could be used to edit copy and other externally facing text that would benefit from language standardization. This capability should also become expected rather than extra for any AI-powered resource, tool, or intervention.

This innovation will have significant benefit for neurodivergent students, and will also have significant benefit for all students. Accessibility is good for everyone. We know that inclusive and accessible assessments ensure that students with disabilities are accurately assessed on their knowledge and skills, but inclusive and accessible assessments also ensure that all students are accurately assessed rather than inadvertently penalized because of faulty assessment questions. Application of this principle could drive significant change in the market/landscape as developers come to realize and internalize that accessibility is good for all users and design accordingly.

Describe the core AI and other technology that powers your solution.

Understood has become an early adopter of generative AI technologies, applying these techniques to improve the services and resources we offer our users. Natural Language Processing (NLP), knowledge representation, speech recognition, and machine learning have been at the center of many of our recent initiatives, built on foundation models, large and efficient language models (including closed-source, open-source, and fine-tuned), and vector stores. Leveraging these for retrieval-augmented Q&A, content summarization, language translation, assessing reading comprehension levels, multimedia content, and more has enabled us to reach a wider set of users at scale while also accelerating our educational research areas.
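As a sketch of the retrieval-augmentation pattern mentioned above, here is a minimal bag-of-words version. Our actual pipeline uses vector stores over embeddings; the helper names and example prompt format here are hypothetical illustrations.

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Bag-of-words vector: lowercase word counts."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(count * b[token] for token, count in a.items())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, library, k=2):
    """Return the k library passages most similar to the query."""
    qv = tokenize(query)
    ranked = sorted(library, key=lambda doc: cosine(qv, tokenize(doc)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, library):
    """Ground the model's answer in retrieved, expert-vetted passages."""
    context = "\n".join(retrieve(query, library))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The same retrieval step is what lets the assistant answer from our content library rather than from the base model's general training data.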

Evaluation and assessment approaches have been key components in ensuring our AI offerings are safe, effective, and not harmful to our users. With this proposal, we are hoping to take some of those learnings and provide our expertise more broadly.

How do you know that this technology works?

There are three key factors driving our confidence in this technology, each of which is discussed in more detail below. First, in our work with Equal Opportunity Schools we identified repeatable patterns we believe can be automated by AI. Second, AI adoption, particularly among professional-aged adults, is already high. Third, Understood's long history of implementing Universal Design for Learning makes us well-positioned to design for accessibility.

As indicated earlier, Understood has worked manually to support Equal Opportunity Schools (EOS) on a challenge similar to what we are proposing to automate via AI. Through our learnings, we believe there are repeatable patterns in generating accessible assessments that we hope to validate can be mostly automated. We do not anticipate a 100% accurate solution, but one that can automate common scenarios and highlight questions or passages where expert review is warranted.

Ideally this will free expert time and attention for the more challenging and nuanced areas of any assessment. Research shows that people who use large language models complete tasks more quickly and with greater ease (Microsoft Research, 2023). This is why AI has so quickly become a presence in people's lives, simplifying and expediting tasks. Currently, 40% of millennials engage with digital assistants daily, indicating that professionals are likely to be open to the innovation we are proposing (OvationCXM, 2023). Automating the work of making existing materials more equitable is in line with building for universal understanding and learning.

Universal Design for Learning and accessible language have been shown to create better outcomes for students. In 2007, Proctor, Dalton, and Grisham tested whether universal designs like Universal Literacy Environments (ULE) are beneficial to struggling readers and English language learners; ULE is defined as a “digital approach to supported reading” (Proctor et al., 2007). After evaluating the effects of a series of universal digital interventions conducted with 30 fourth graders of various backgrounds, they concluded that the framework is beneficial for both groups.

What is your approach to ensuring equity and combating bias in your implementation of AI?

At Understood, our mission is to shape the world for difference so that all of us can thrive. Our focus is on ADHD and dyslexia; we are experts in accessibility and nonprofit industry leaders in equity and inclusion for neurodivergent people. We promote neuroinclusion through our content, products, and campaigns, and we recognize that understanding intersectional identity markers and compounded marginalization is critical to achieving our mission. We have done deep research to understand the diverse backgrounds and needs of neurodivergent learners, and we have designed our digital resources and interventions to be research-responsive and evidence-based. Our development and deployment process monitors for and avoids bias through best practices of human-centered, equitable design.

At the most basic level, our goal is for this software to transform question and answer choices into sentences that follow accessibility guidelines, with outputs adjustable by grade level. These accessibility guidelines include, but are not limited to, syllable length, sentence length, sentence structure, and use of active language. Beyond this, we want the tool to be devoid of bias, able to simplify language without changing meaning, and able to remove or flag harmful language. We will measure our success in reaching these goals through continuous review of responses by subject matter experts within Understood's network. In this way we will ensure our tool helps create equitable experiences and addresses current disparities in access to accessible assessments.

Understood is currently building an empathetic chatbot trained on our research and open-access resources to answer questions about learning and thinking differences. To ensure that it is without bias, does not regurgitate harmful information, and gives accurate responses, our engineering team has built in a “thumbs up/thumbs down” rating system that appears for each response. A thumbs-down prompts a request for further information: a multiple-choice question asks whether the response is harmful, biased, or inaccurate, and an open textbox allows more specific feedback. Each of these is reviewed by our experts, and upon completion of the evaluation, edits are made to ensure that hallucinations and/or harmful responses are not duplicated. We could easily replicate this system to evaluate the assessments received, but we will go a step further and complete a series of all-encompassing reviews.
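The feedback loop just described can be sketched as a simple data model; the field and function names here are illustrative, not our actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResponseFeedback:
    """One user rating of a chatbot (or reworded-assessment) response."""
    response_id: str
    thumbs_up: bool
    # Filled in only after a thumbs-down: "harmful", "biased", or "inaccurate".
    issue_category: Optional[str] = None
    # Optional open textbox for more specific feedback.
    comment: str = ""

def expert_review_queue(feedback):
    """Thumbs-down reports are routed to subject matter experts for review."""
    return [f for f in feedback if not f.thumbs_up]
```

Edits made after expert review would then feed back into the knowledge base so the same issue is not reproduced.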

We hope to have educators and experts in mental health and learning and thinking differences available to evaluate. This evaluation could be an iterative event where we first collect Pre-K-8 assessments and ask for teachers to assess the changes the AI made, looking specifically for evidence of bias and harm, inclusive language, sentence structure, jargon, and other relevant markers. 

How many people work on your solution team?

8 full time staff members and 2 contractors

How long have you been working on your solution?

We have just begun, still in the concept stage

Solution Team

 