The Nonprofit Sector Is Not Facing an "AI Adoption Challenge." It Is Facing a Systems Transition

Charlotte Kilpatrick | February 4, 2026
On March 23, The Conduit convened a cross-sector group of funders, nonprofit leaders, technologists, and policy experts in New York to explore the role of AI in philanthropy. What emerged was not simply a conversation about tools, but about systems.
From the outset, it was clear that this is not a question of whether organizations should adopt AI. The sector is navigating a far more fundamental shift in how it allocates resources, collaborates, governs, and ultimately delivers impact.
Across the discussion, a set of underlying tensions came into focus, each pointing to the same conclusion: AI’s potential is inseparable from the systems it sits within.

AI enables prevention, but funding still rewards reaction
Participants shared examples of AI already being used to anticipate crises — from predictive models that flag flood risk before extreme weather events, to systems that identify emerging food insecurity patterns by combining climate, health, and economic data. These tools make it increasingly possible to act earlier, targeting interventions before problems escalate.
And yet, the funding landscape remains overwhelmingly reactive. Capital continues to flow toward visible, urgent need, rather than toward the systems that could prevent those needs from materializing in the first place. Several participants described a form of “pilot fatigue,” where promising early-warning systems struggle to secure sustained funding because their success is measured in crises avoided rather than crises responded to. The result is a growing disconnect between what technology makes possible and what financial structures allow.

AI rewards coordination, but the sector remains fragmented
AI delivers its greatest value when organizations operate as part of a coordinated system — pooling data, aligning interventions, and orchestrating responses across geographies and actors. In practice, however, the social sector remains highly fragmented, characterized by siloed data, duplicated effort, and collaboration that is often more aspirational than operational.
One example discussed was The Conduit’s FloodAction coalition in the UK, which brings together dozens of organizations to restore wetlands as a form of flood mitigation. AI models have supported decisions on land use, funding allocation, and intervention timing — enabling coordination on a scale no single organization could achieve alone. The lesson is clear: without structures for orchestration, AI’s ability to scale impact remains constrained.
Realizing this potential will require new forms of shared infrastructure — from data platforms and interoperable systems to governance models that allow organizations to collaborate while maintaining trust and autonomy. It also requires funding mechanisms that reward collective outcomes, rather than individual organizational performance.

AI requires trust, but governance is underdeveloped
The effectiveness of AI depends not only on capability, but on legitimacy. Models are only useful if users trust their outputs, and the public will only accept their use if appropriate safeguards are in place. Yet governance frameworks in the social sector remain underdeveloped, particularly in comparison to the pace of technological advancement.
Participants pointed to emerging practices — such as third-party audits, bias testing, and responsible AI assurance layers — but these remain unevenly applied. In many cases, organizations are building and deploying systems without clear standards for accountability or oversight. This creates both ethical risk and practical constraint: without trust, adoption will stall.
Designing governance into systems from the outset — rather than attempting to retrofit it later — will be critical to ensuring that AI can be used responsibly and at scale.

AI creates efficiency, but risks reinforcing inequity
AI is already improving efficiency across many parts of the sector. Organizations are using it to automate administrative tasks, accelerate grant writing, streamline reporting, and surface insights that inform strategic decisions. For some, this is freeing up significant staff time to focus on mission-critical work.
But these gains are not evenly distributed. Organizations with access to flexible capital, technical expertise, and robust data infrastructure are moving quickly, while smaller or under-resourced nonprofits risk falling further behind. One participant described how a predictive public health tool performed effectively in data-rich urban environments, but struggled to generate accurate insights in under-resourced rural communities where data was sparse.
There is also a broader concern that AI systems, if not carefully designed, may replicate or amplify existing biases — particularly where the communities most affected by social challenges are least represented in underlying datasets.
Addressing these risks will require deliberate investment in capacity-building, inclusive data practices, and tools that are accessible beyond a small subset of well-resourced organizations. The challenge is not to slow adoption, but to ensure that efficiency gains do not come at the expense of equity.
Taken together, these tensions point to a larger conclusion: the sector is not facing a technology adoption challenge. It is navigating a system-wide transition.
AI is not simply a new tool to be integrated into existing models. It is a forcing function — exposing the limits of current funding structures, coordination mechanisms, governance systems, and organizational capacity.
The opportunity, then, is not just to adopt AI, but to redesign the system around it.
This will require shifts in how capital is deployed: toward more flexible, capacity-oriented funding that enables organizations to adapt in real time. It will require new models of coordination, including backbone organizations and shared infrastructure that allow actors to operate as part of a broader ecosystem. And it will require governance frameworks that build trust, accountability, and legitimacy into AI systems from the outset.
A forthcoming whitepaper from The Conduit will explore these themes in greater depth, including practical models for funding, coordination, and governance that can enable the sector to move from reactive to anticipatory systems.
The conversations at the Philanthropy Futures Forum made clear that both the appetite and the building blocks for this shift already exist. What remains is alignment and the willingness to act ahead of visible need.
Note: As part of our Expert in Residence Programme, we are hiring a Solutions Lead – AI for Good. Details can be found here.