A candid exploration of how feminist organisations can navigate the ethical complexities of AI while staying true to their values and serving their communities.
A conversation with Francesca Jarvis from Chayn, hosted by One Future Collective on the Feminist Leadership Hub.
August 12, 2025
About the #NoRightAnswers Series
#NoRightAnswers is a conversation series hosted on the Feminist Leadership Hub by Vandita Morarka (Founder & CEO, One Future Collective). This is a space for feminists to come together and hold space for the questions that don’t have easy answers, especially those that are cyclically stuck or often avoided in feminist work. While the series is called No Right Answers, the idea is not that we won’t find any answers but rather that we move away from the burden of needing to be perfect or to always offer solutions that meet every single goal. Each session leads to a short reflection or insights blog to share key learnings with the wider community.
In this edition of #NoRightAnswers, Francesca Jarvis from Chayn joined Vandita Morarka for a deep exploration of what it means to build and use AI technology with feminist values at the center. Together, we unpacked the contradictions, possibilities, and practical challenges of working with AI as feminist organisations.
“Can we build or use AI in ways that centre survivors, care, and community trust? How do we balance pragmatism with values in technology choices?”
These were some of the core questions that guided our latest conversation. This session held space for the nuances, tensions, and possibilities of integrating AI into feminist work, not just the technology itself, but how we approach it with integrity, transparency, and care.
Technology as a Space for Healing, Not Just Harm
Francesca opened by sharing Chayn’s foundational belief that challenges much of how we think about technology in feminist spaces:
“We’re trying to really center the idea that technology doesn’t have to just be a space for harm, but that it can also be a space for healing.” – Francesca Jarvis
As a feminist tech nonprofit creating trauma-informed digital tools for survivors of gender-based violence globally, Chayn’s approach moves us beyond the binary of embracing or rejecting AI entirely. Instead, they ask: How can we use these tools intentionally to extend our capacity to support survivors?
This reframe became central to our conversation, shifting from whether AI is inherently good or bad to how we can approach it with feminist values and genuine care for the communities we serve.
Creating Space for Complexity: Chayn’s Internal Process
Before building any AI tools, Chayn did something remarkable: they created a structured space for their team to grapple with their feelings about big tech and AI. During their organisational retreats, they introduced “cross-cultural learning” sessions focused on big tech, complete with:
- Diverse resources from around the world covering both progress and harm.
- Open-ended prompts like “What words or feelings do you associate with big tech?”
- Team responses ranging from “potential for good if directed well” to “evil data extraction.”
- Facilitated discussions led by different team members with varying perspectives.
The result? A rich landscape of tensions and contradictions that they chose to name rather than resolve:
“We believe that if we want to practice feminist values, then we have to have these sorts of transparent and open discussions, and they have to be foundational in the work that we do so that there’s space for disagreement, for tension, for opportunity, possibility, and hope.” – Francesca Jarvis
Building in Context: The Contradictions Aren’t Theoretical
Francesca spoke with honesty about the context they’re building in, one where big tech’s data colonialism, exploitation of Global South workers, and complicity in the genocide in Gaza and Palestine are not abstract concerns but current realities.
“When we talk about building feminist AI, we’re building in this context, so the contradictions aren’t theoretical; they’re built into the current structure of the technology that we all depend on and that we all use.” – Francesca Jarvis
From Theory to Practice: Survivor AI
Chayn’s approach came to life with the development of Survivor AI, a tool that helps survivors of image-based abuse generate formal takedown requests to platforms. The idea was deeply personal, rooted in Francesca’s previous experience writing exhausting, often intimidating legal letters for survivors navigating the criminal justice system, and in recognising how difficult it was for survivors to “speak the language of a system and get a system to listen.”
Their process was grounded in feminist principles:
- Co-design with survivors: Two in-depth consultations, with participants paid for their time and offered therapy sessions for debriefing
- Trauma-informed design: Building on their principles of Safety, Agency, Equity, Privacy, Accountability, Plurality, Power Sharing, and Hope
- Focus on empowerment: Creating tools that put survivors’ needs at the center rather than stripping agency
- Transparency about data: Being clear about what happens to user data and the limitations of their approach
Two core hypotheses guided their work:
- Platforms are more likely to remove non-consensual images when they receive formal, platform-specific letters
- These letters can be generated via AI without needing a lawyer
Importantly, Francesca noted: “I’m not going to tell you that we’ve proved these hypotheses yet, because we have not.” This commitment to ongoing experimentation rather than claiming premature success felt refreshingly honest.
The Tensions We’re All Sitting With
Our breakout rooms surfaced tensions that resonated across organisations:
- Environmental Impact: Participants, especially those from Latin America, spoke about the tension of using AI while being acutely aware of its environmental cost, a tension made particularly painful by how environmental exploitation by big companies disproportionately affects their regions.
- Labor Rights and Economic Justice: At a time when early-career workers already face economic insecurity, how do we navigate using tools that might replace human work? The conversation challenged us to think beyond efficiency toward broader implications for labor justice.
- The Humanisation of AI: Companies naming AI agents and treating them as equivalent to human employees raised questions about the meaning and pitfalls of excessive humanisation.
- Trust and Hypocrisy: The observation that wealthy AI developers often don’t use these technologies in their own households, yet promote their use widely, highlighted questions about accountability and responsibility.
Vandita offered a reflection on Perfect Solutions vs. Continued Harm that resonated with many participants:
“What harm do we allow to continue to exist because we just did not decide to do something, or we remain stuck in trying to find that perfect solution?”
Scaling Care, Not Replacing Humans
A crucial insight emerged around reframing AI’s role. Instead of asking “How can AI replace human work?” we explored “Where can technology be the best solution to help our work go further?”
Survivor AI exemplifies this approach. Rather than replacing advocates or counselors, it scales access to a specific type of support, formal takedown requests, that many survivors could not otherwise obtain.
“…a way to scale a certain minimum quality of care to a much larger audience which otherwise would not have access to said care.” – one participant
This connects to broader possibilities for the sector. Instead of 50 organisations duplicating the same work, what if we could share resources, templates, and tools that allow each organisation to focus on its unique contributions?
Holding Space for All Approaches
Perhaps the most important insight was Francesca’s observation that “all approaches have value”:
“Some organisations are going to embrace AI and others will resist. Some experiment cautiously, and for me, I think we need all of those approaches… rather than trying to get territorial or judgmental about each other’s choices, we can learn from the different strategies and think about what works and what doesn’t.”
This felt like a crucial reframe for movements often caught up in ideological purity. Different organisations can take different approaches while still learning from each other.
The Questions That Guide Us Forward
Rather than ending with solutions, our conversation left us with guiding questions:
- How might your organisation create structured space for honest conversations about AI and technology choices?
- Where in your work could technology extend care rather than replace it?
- How do you balance the urgency of community needs with concerns about imperfect tools and systems?
- How do we create space within organisations for differing views on these complex issues?
What Happens When We Name Contradictions?
As our conversation drew to a close, we returned to a fundamental principle that echoed our previous #NoRightAnswers session on resourcing:
“When we name contradictions, we move toward more honest and liberatory practices.”
Chayn uses Claude while questioning big tech power structures. They encourage AI use while worrying about environmental impact. They work within systems they critique because they believe survivors deserve these tools now.
Rather than seeing these as failures of ideological consistency, naming these tensions creates space for more nuanced, honest approaches to complex challenges.
Moving Forward Together
The future of feminist AI isn’t predetermined; it’s being shaped by conversations like this one, by organisations like Chayn willing to experiment transparently, and by movements that refuse to choose between technological progress and social justice.
What matters isn’t finding the “right” answer, but approaching these decisions with intentionality, transparency, and commitment to our values and communities.
“These things are hard, and there are no easy answers. But I think talking about it can really help us figure out the direction that we want to go.” – Francesca Jarvis
The power lies not in perfect solutions, but in creating spaces for honest dialogue, learning from different approaches, and building collectively rather than in silos.
What conversation have you been avoiding because it’s risky, unpopular, or hits too close to home?
Some of the resources referenced during this conversation, along with additional reading, are listed here. If you find yourself navigating such complex and layered questions, do join us for our next conversation. You can join the Feminist Leadership Hub by clicking here to receive regular updates on upcoming conversations in this series.