Learning for Civic Action Challenge
AI Assistant Mediators: New Tools for Better Conversations
What is the name of your solution?
AI Assistant Mediators: New Tools for Better Conversations
Provide a one-line summary of your solution.
AI chat assistants can improve online discourse on divisive topics by providing real-time recommendations that raise conversation quality, reduce polarization, and preserve authenticity, without altering participants' policy attitudes.
What specific problem are you solving?
The problem we aim to solve centers on the escalating polarization amplified by online discourse, a phenomenon with far-reaching social and psychological consequences. While social media platforms initially emerged as unifying tools—connecting people across geographies and enabling real-time sharing of experiences, thoughts, and ideas—they have now inadvertently become potent catalysts for division and societal fragmentation. This has put everyday people in a challenging predicament: choosing between maintaining treasured relationships and escaping the rancor of polarized online exchanges.
Globally, over 4.2 billion people are active social media users. This staggering figure underscores the scale and the universality of the problem. The divisive nature of online conversations has affected not just major global powers but also localized communities, estranging friends, families, and colleagues. On a local level, the issue exacerbates existing community tensions, creating fault lines that traverse neighborhood groups, school boards, and municipal governance.
The divisive landscape of social media is typified by conversations about contentious topics. People who wish to remain connected and engaged with distant friends and family, or to integrate with new acquaintances and community dialogues, often find themselves embroiled in heated, polarizing exchanges. The fallout from these encounters can be profound, prompting users to disconnect from loved ones or deactivate their accounts altogether.
What is your solution?
Our solution performs two core tasks:
1. Creating better conversations: We have designed an AI chat assistant to facilitate more respectful and understanding online conversations. At its core, the tool serves as a real-time mediator, gently guiding conversations in a direction that fosters empathy, mutual respect, and productive exchange. The solution is not intended to censor or alter the content of the discussions but to subtly shift the tone and the manner of the discourse. To do this, the assistant makes repeated, tailored suggestions on how to rephrase specific texts in the course of a live, online conversation, without fundamentally affecting the content of the message.
The technology underpinning our solution is a large language model. It uses machine learning algorithms to assess the tone, context, and potential triggers within an online conversation. Based on its assessment, it provides real-time, evidence-based recommendations to users on how they might rephrase their responses to demonstrate active listening and good-faith engagement to foster a more constructive dialogue. These recommendations appear in real-time, giving users the opportunity to adjust their responses before hitting send.
2. Recruiting unlikely participants to these conversations: The better conversations the AI assistant enables are useful only when people actually join them, and the intervention is most interesting when 'unusual suspects,' those who would not self-select into an intervention like this, are using the tool. We have a tested model of automated partisan categorization based on someone's network and activity on social media. We use this classification system to identify and then invite everyday social media users to be matched in these conversations. Recruitment methods include direct messages, replies to polarizing content, and targeted ad groups.
Who does your solution serve, and in what ways will the solution impact their lives?
Our solution is designed to serve the broad population of online users who engage in digital conversations on social media platforms, messaging apps, and other online forums. This includes everyone from individuals casually browsing social media, to those deeply engrossed in online discussions about political, social, and community issues. Currently, these users often find themselves navigating through polarized and divisive discussions, which can lead to emotional distress, misinformation, and a breakdown in communication and relationships.
Two particular groups within this population are especially served by our solution. First, individuals who want to engage in meaningful conversations online but are discouraged by the toxicity and divisiveness they encounter. These individuals are often left feeling unheard, misunderstood, or even attacked, which can deter them from participating in important discussions about societal issues. Second, individuals who unintentionally contribute to the divisiveness due to a lack of effective communication skills or unawareness of how their words may be perceived. These individuals might unknowingly escalate conflict or shut down productive dialogue.
We expect that serving these target populations will have a downstream effect on their unmediated conversations, leading to broader impacts in shared online communities.
How are you and your team well-positioned to deliver this solution?
Our team is uniquely positioned to deliver this solution because we are not only researchers, technologists, and conflict resolution practitioners but also active participants in the digital communities we aim to serve. As U.S.-based social media users, we each have firsthand experience of the divisiveness and conflict that can occur in online conversations. This personal experience grounds our work and constantly reminds us of the importance and urgency of our mission.
Our team lead, in particular, has spent an extensive amount of time actively engaged in online communities, often in the midst of heated and toxic discussions. Over the course of a three-year pilot project focused on improving online conversations, they spent hundreds of hours – sometimes up to 20 hours a week – manually coaching conversations. This immersive experience provided deep insights into the challenges and nuances of online discourse, and these insights directly inform the design and implementation of our solution.
In developing our solution, we have made it a priority to understand the needs of the diverse communities we serve. We have employed human-centered design approaches, conducting workshops in diverse communities across the U.S., including Kalamazoo, Loveland, Denver, Boston, and Laie. These workshops gave us an opportunity to hear directly from individuals about their experiences and needs when engaging in online conversations. Their input, ideas, and agendas have been instrumental in shaping our solution.
We believe that the combination of our personal experiences as social media users, our extensive engagement in online communities, and our commitment to user-centered design make us uniquely equipped to deliver this solution. We understand the challenges that online users face because we have experienced them ourselves, and we have engaged in deep, meaningful dialogues with the communities we aim to serve. This proximity to the problem, and our commitment to continual engagement with our user communities, will guide us as we refine and implement our solution.
Which dimension of the Challenge does your solution most closely address?
Help learners acquire key civic skills and knowledge, including how to assess credibility of information, engage across differences, understand one’s own agency, and engage with issues of power, privilege, and injustice.
In what city, town, or region is your solution team headquartered?
Kalamazoo, MI
In what country is your solution team headquartered?
United States
What is your solution’s stage of development?
Growth: An organization with an established product, service, or business model that is rolled out in one or more communities
How many people does your solution currently serve?
By October 2022, 1,574 people had completed participation in our AI assistant field experiment.
Why are you applying to Solve?
We are applying to Solve because we are interested in securing funds to bring our solution to its next stage, and to engage in a network of support that will help broaden the impact of our work.
We have found that, while the problem and the need are well recognized, there are limited funding mechanisms for a solution like ours outside of university research and pilot-project funding. We have used both and have a proof of concept; we are now especially interested in growth and larger impact. The financial access and in-kind resources available through Solve can accelerate our development and scaling processes.
We also highly value peer and industry networks that can provide us with strategic advice and mentorship, and potentially open doors to collaborations that could enhance the effectiveness and reach of our solution. We anticipate learning from their experiences and insights and sharing our own in return. Furthermore, we believe that increased visibility through media exposure and conference presentations can not only amplify our reach but also stimulate public discussions around the importance of constructive online dialogues.
In which of the following areas do you most need partners or support?
Who is the Team Lead for your solution?
Julie Hawke
What makes your solution innovative?
Our solution is innovative in its approach because it uses advanced AI technology to address the fundamental issue of online divisiveness in real-time, an angle not thoroughly explored before. While many solutions focus on moderating content after it has been posted or flagging inappropriate content, ours intervenes at the source - during the crafting of the message itself. This approach aims to prevent toxic behavior before it even becomes part of the online conversation.
Utilizing a large language model trained on vast amounts of text data, our AI chat assistant can understand the nuances of a conversation, pick up on potential conflict triggers, and offer real-time suggestions for more empathetic and respectful responses. It's not about censoring content, but fostering a shift in tone and discourse style. This approach respects users' freedom of expression while promoting healthier conversation.
If it proves successful, our solution could inspire a wave of new technologies and strategies aimed at enhancing online conversation quality. We're setting a standard that online platforms can aspire to when designing their community guidelines and moderation practices. The use of AI in this space is relatively new and holds vast potential to reshape how we interact online.
In terms of market impact, our solution could influence how social media platforms are designed and moderated. It could encourage a shift towards more proactive measures for maintaining online civility, rather than the prevalent reactive strategies. It also opens a new market for AI-based conversation facilitation tools that can be integrated into various online platforms - from social media to online classrooms to professional collaboration tools.
By focusing on real-time intervention, inclusivity of unlikely participants, and respect for freedom of expression, our solution brings an innovative approach to a pressing and widespread issue, setting the stage for broader impacts in the online communication sphere.
What are your impact goals for the next year and the next five years, and how will you achieve them?
Over the next year, our immediate goal is to refine our AI chat assistant's capabilities and broaden its user base. We aim to serve at least 5,000 individuals in the first year, increasing the quality of their online conversations by at least 30%. We plan to achieve this by recruiting targeted audiences on various social media platforms where toxic conversations are already present.
In the next five years, our vision is to take steps toward transforming online discourse on a global scale. Our goal is to expand into comparable social media contexts in other countries, beginning with Kenya and Lebanon, significantly reducing divisiveness and enhancing the quality of online conversations. We aim to expand the implementation of our AI chat assistant across multiple platforms and languages, making it accessible to individuals across the globe.
To achieve these ambitious goals, we will invest in technological development to ensure our tool stays at the forefront of AI capabilities. We will also form strategic partnerships with social media companies (we are already in Meta's trusted partner network), educational institutions, and civil society organizations to promote the use of our tool in different online settings.
Importantly, we will continuously engage with our user community and the broader public, incorporating their insights and feedback into our tool's ongoing development. By being user-centric and data-driven in our approach, we aim to create a tool that truly meets the needs of its users and effectively addresses the issue of online divisiveness.
Which of the UN Sustainable Development Goals does your solution address?
16. Peace, Justice, and Strong Institutions
How are you measuring your progress toward your impact goals?
We will use four sets of indicators to measure impact:
Use: Who is using the tool and where: # of individual users, stratified by type (individual, activist, etc.); # of organizational users, stratified by type (size, locality); % increase in conflict contexts where the tool is applied
Influence: What effects use had: % reduction in toxicity in online conversations facilitated by the tool. We will analyze anonymized conversation data and assess toxicity levels before and after the implementation of our AI chat assistant (a sketch of one possible measurement approach follows this list). We aim to see a significant reduction in toxicity levels in online conversations facilitated by our tool.
Engagement Rate: We will measure how often users engage with the AI's suggestions. High engagement rates would suggest users find the tool helpful, and their conversation behavior is being positively influenced.
Partnerships and Collaborations: The number of strategic partnerships and collaborations we form with social media companies, educational institutions, and civil society organizations will also be an indicator of our progress. These partnerships are crucial for the widespread adoption and effectiveness of our tool.
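As one illustration of how the before/after toxicity comparison could be computed, the sketch below uses the open-source Detoxify classifier as a stand-in scoring model. The example messages and the choice of classifier are assumptions introduced here for clarity, not our finalized measurement protocol.

```python
# Illustrative sketch of the before/after toxicity comparison, using the
# open-source Detoxify classifier as a stand-in for whatever scoring model
# is ultimately adopted. The example messages are invented for illustration.
from detoxify import Detoxify

model = Detoxify("original")

def mean_toxicity(messages):
    """Average toxicity score (0-1) over a list of messages."""
    return sum(model.predict(m)["toxicity"] for m in messages) / len(messages)

# Hypothetical samples: messages drafted without and with the assistant.
unassisted = ["You clearly have no idea what you're talking about."]
assisted = ["I see this differently; can you say more about why you think that?"]

baseline = mean_toxicity(unassisted)
treated = mean_toxicity(assisted)
print(f"Relative reduction in toxicity: {(baseline - treated) / baseline:.0%}")
```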
Our indicators specifically align with SDG Target 16.7: "Ensure responsive, inclusive, participatory, and representative decision-making at all levels," given that our solution aims to promote more inclusive and understanding online conversations. Even if current cleavages or platform policies change, digital media and its impact on conflict won't go away. Outcomes in these four areas will contribute to the reduction of violence, the promotion of trust, access to information, and the strengthening of institutions and social cohesion in conflict-affected settings.
What is your theory of change?
If users receive real-time suggestions on how to improve the quality of how they express their opinions online, then their immediate online conversation will improve, fostering a more respectful and understanding dialogue. This enhanced interaction will not only positively influence the current conversation, but also shape the users' future communication behaviors. As a result, the quality of downstream conversations will also improve. Over time, this leads to a broader change in online discourse, contributing to a significant decrease in online toxicity and polarization. Thus, by transforming the way individuals converse online, we can cultivate a more inclusive, respectful, and constructive digital communication environment.
Inputs: Targeted partisan-based social media recruitment and an AI chat assistant that uses a large language model to provide real-time, evidence-based suggestions on how to improve the quality of online conversations.
Activities: Users are matched across lines of difference on an online platform and interact with our AI chat assistant during their online conversation.
Outputs: Users adopt the suggestions of the AI chat assistant, leading to improved tone and content of online conversations.
Outcomes: Over time, users who consistently interact with the AI chat assistant change their online communication behaviors. This results in a significant decrease in online toxicity and polarization.
Impacts: Ultimately, our solution contributes to a more respectful and understanding online discourse, promoting norms and values of social cohesion.
The above theory of change is supported by research showing that the manner and tone of conversation significantly affect the quality of discourse and the level of understanding among participants. Our own user feedback and surveys further support this, showing a positive correlation between the use of our AI chat assistant and the perceived improvement in online conversations.
Describe the core technology that powers your solution.
Our solution utilizes cutting-edge Artificial Intelligence (AI) technology, specifically a large language model, to facilitate more respectful and understanding online conversations. Here's how it works:
The AI chat assistant acts as a real-time mediator during online conversations. It utilizes machine learning algorithms to analyze the tone, context, and potential triggers within the discourse. The AI model has been trained on a massive dataset of diverse conversations, which allows it to understand nuances in language and predict how certain phrases might be received.
When the AI detects a potentially divisive or inflammatory statement, it suggests a more constructive alternative in real-time. These suggestions aim to help the user express the same idea but in a more respectful and empathetic manner. Importantly, the AI does not alter the user's intended message or censor their speech. It simply provides a suggestion for a potentially more constructive phrasing.
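To make the suggestion step concrete, the sketch below shows one way a real-time rephrasing suggestion could be generated with an off-the-shelf chat-completion API. It is a minimal illustration under stated assumptions, not our deployed system: the model name, prompt, and function shown here are placeholders introduced for clarity.

```python
# Illustrative sketch only: the production assistant is more involved, but the
# core suggestion step can be approximated with an off-the-shelf chat API.
# The model name and prompt below are assumptions, not a deployed configuration.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

SYSTEM_PROMPT = (
    "You are a conversation mediator. Given a draft message, suggest a rephrasing "
    "that preserves the author's meaning and position while demonstrating active "
    "listening, validation, and respect. Do not change the substance of the message."
)

def suggest_rephrasing(draft_message: str, conversation_context: str) -> str:
    """Return a suggested rephrasing the user can accept, edit, or ignore."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": f"Conversation so far:\n{conversation_context}\n\nDraft reply:\n{draft_message}",
            },
        ],
    )
    return response.choices[0].message.content

print(suggest_rephrasing("That's a ridiculous take.", "A: I think the new zoning rule is good."))
```

In practice, the suggestion surfaces in the chat interface before the user hits send, and the user remains free to accept, edit, or ignore it.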
In addition, our solution includes a component for recruiting unlikely participants to these conversations. This involves using automated partisan categorization based on someone's network and activity on social media. We use this classification system to identify and then invite everyday social media users to be matched in these conversations.
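For illustration, the sketch below shows what a minimal network-based partisan categorization could look like, using scikit-learn. The account handles, labels, and features are invented for this example; our deployed classifier, which draws on richer network and activity signals, is not reproduced here.

```python
# Hypothetical sketch of network-based partisan categorization.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each training "document" is the space-joined list of accounts a user follows,
# labeled with a known partisan leaning (e.g., from a survey or a seed list).
follow_lists = [
    "outlet_a pundit_b org_c",
    "outlet_a org_c pundit_d",
    "outlet_x pundit_y org_z",
    "outlet_x org_z pundit_w",
]
leanings = ["left", "left", "right", "right"]

classifier = make_pipeline(CountVectorizer(), LogisticRegression())
classifier.fit(follow_lists, leanings)

# Predict a new user's likely leaning from who they follow, then invite users
# with opposing predicted leanings into a matched, mediated conversation.
print(classifier.predict(["pundit_b org_c outlet_a"]))
```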
In essence, our solution distills the science and evidence-based knowledge of mediators and conflict resolution practitioners for how to improve communication and build relationships: namely active listening, validating and perspective taking, and envisioning shared futures.
Which of the following categories best describes your solution?
A new application of an existing technology
Please select the technologies currently used in your solution:
If your solution has a website or an app, provide the links here:
The solution is a mash-up of two existing pilot projects: https://arxiv.org/pdf/2302.07268.pdf and https://howtobuildup.org/programs/digital-conflict/the-commons-project/
In which countries do you currently operate?
In which countries will you be operating within the next year?
What type of organization is your solution team?
Other, including part of a larger organization (please explain below)
If you selected Other, please explain here.
Our solution team is a partnership between a nonprofit (Build Up) and two academic institutions (BYU and the Berkeley Center for Human-Compatible AI (CHAI)).
How many people work on your solution team?
Five full-time staff across the partner institutions, contributing at varying levels of effort to the solution team.
How long have you been working on your solution?
3-5 years independently, under one year as a new partnership
What is your approach to incorporating diversity, equity, and inclusivity into your work?
Build Up was founded by underrepresented populations, and operates as an employee-owned non-profit collective. We will continue to use the diversity of our own team to critically assess processes and impacts on people, and commit to transparency and humility in all aspects of the project’s design and implementation for further accountability.
What is your business model?
Our business model revolves around providing value to both individual users and platforms or social impact organizations that host online conversations.
Individual Users: Our primary service is the AI chat assistant, which users can integrate into their social media and online platforms. The chat assistant is free to use for individuals. The value proposition for these users is that it facilitates more respectful and constructive online conversations. The assistant helps users to express their ideas in ways that are less likely to incite conflict or misunderstanding. This enhances the quality of online interactions and reduces the likelihood of users disengaging from platforms due to negative experiences.
Online Platforms: Our secondary users are the platforms and organizations that host online conversations. These could include social media sites, forums, and other digital communities. With further development, we could offer these users a service that lets them integrate our AI chat assistant into their platforms to improve the quality of interactions among their users. By reducing the prevalence of toxic and divisive conversations, platforms can enhance user satisfaction and retention.
In terms of distribution, our AI chat assistant is a digital product that can be easily distributed and integrated into existing platforms. We plan to market our product through direct outreach to platforms, digital marketing campaigns, and partnerships with organizations that advocate for healthier online conversations. As a non-profit, we do not aim to make a profit with this service, but rather to cover service costs and re-invest any additional resources into further development.
Do you primarily provide products or services directly to individuals, to other organizations, or to the government?
Individual consumers or stakeholders (B2C)
What is your plan for becoming financially sustainable?
Our plan for financial sustainability is a multi-faceted approach combining grants, service contracts, and, potentially, a B2B model in the future.
Grants and Donations: As a non-profit organization, we will continue to apply for grants and seek donations from individuals and institutions that align with our mission of fostering more respectful online discourse. We aim to secure sustained funding from a variety of sources to support our operations and further development of our AI chat assistant.
Service Contracts: We aim to establish service contracts with government entities, educational institutions, and NGOs that are interested in using our AI chat assistant to improve the quality of online conversations within their respective platforms. These contracts will provide a steady stream of income while also expanding the reach and impact of our solution.
B2B Model: We are open to exploring this additional revenue stream, which would come from offering our AI chat assistant as a paid service to online platforms. While the assistant would be free for individual users, platforms could pay to integrate the tool into their systems. This could help platforms enhance user satisfaction and retention, which could, in turn, lead to increased revenue for these platforms. Our pricing model would be designed to cover our service costs and any additional revenue will be re-invested into further development and scaling of our solution.
Our Organization
Build Up