Can AI reshape insurance without breaking it?
By Charlotte Kilpatrick
Predicting the future use of any technology and its side effects on society requires an understanding of both humanity and science that no machine can yet compute.
If somebody had been asked 40 years ago to predict how the ubiquity of the internet would affect our lives, it would have been near impossible to imagine a world in which the average person spends 88 days a year looking at a small electronic rectangle called a mobile phone, reported loneliness skyrockets, or the number of democracies in the world declines. Each of these surprising turns can be partly explained by how life online has made us more isolated, polarised our news, and sapped our attention spans.
But although the ramifications of AI are just as difficult to predict, that doesn’t mean there’s nothing to be gained from imagining the future. When we contemplate the uses of a new technology, we aren’t so much imagining the evolution of the technology itself, but rather considering how we as humans should use it to create the type of future that reflects the best of our values.
Last month at The Conduit, policymakers, industry leaders, and technologists gathered to hear a debate on this very question. The motion put before them: will artificial intelligence make the world uninsurable? The discussion did not attempt to predict the future with certainty. Instead, it explored the competing forces already reshaping insurance, inviting the audience to consider both the risks and opportunities that new technologies may bring.
The speakers also made clear what they were not addressing. The discussion touched on the environmental impact of AI through its growing demand for energy and the linked challenges for insurability, but what the audience found most urgent was how AI is being deployed within insurance systems today. One thing was clear by the end of the hour: even if opinions in the room diverged on how the future will unfold, we all share responsibility for deploying technologies in a way that shapes a more stable and inclusive world.
Siddarth Shrikanth, Investment Director, Just Climate (For the Motion)

Opening for the motion, Siddarth Shrikanth argued that artificial intelligence threatens the fundamental structure that allows insurance to function as a system of shared risk. He defined insurability not simply as the availability of policies, but as a collective arrangement that depends on uncertainty and broad participation. According to his argument, AI weakens this arrangement by making risk increasingly predictable at the level of the individual.
At the centre of his reasoning was the idea that insurance relies on what he described as a “veil of ignorance” of who will suffer losses. AI, he said, reduces that uncertainty by enabling insurers to identify high-risk individuals with increasing precision. While this might appear to improve efficiency, it changes the incentives within the system. Insurers can avoid covering those most likely to claim, or price them out of the market altogether. Over time, this leads to smaller and less diverse risk pools, undermining both fairness and financial stability.
Shrikanth extended his critique to the use of AI in claims processing. Automated systems, he argued, can deny claims with limited transparency, leaving policyholders with fewer avenues for recourse. He cited an alarming statistic: 92% of AI-denied claims are overturned on appeal, yet only 0.2% are ever appealed. For Shrikanth, this demonstrates how power asymmetries and access barriers lead to widespread unfair outcomes.
He also placed AI within the context of climate change, which he described as already eroding insurability in many regions. Rather than addressing this trend, AI may accelerate it by allowing insurers to identify and withdraw from high-risk areas more effectively. As an example, he pointed to the devastating LA fires of 2025. Insurance premiums in the city had risen by 80% in the year before the fires broke out, and one in five policyholders had dropped their coverage because insuring against fire had become too expensive. The result is that the government becomes the insurer of last resort.
“We see the evidence today in the number of uninsured people in the UK. Today homes are only covered by flood insurance because of a political subsidy called Nature,” Shrikanth argued.
“And as we socialise more of the risks that we can no longer insure, I would argue that’s actually not a particularly insurable world.” If such a world comes to pass, he argued, many of the people in the audience would have to find new jobs.
In his conclusion, Shrikanth described a future in which private insurance plays a diminished role, with governments increasingly stepping in as insurers of last resort. This shift, he argued, represents a breakdown of the existing system, with fewer risks covered privately and greater reliance on public support.
David Carlin, Founder, D.A. Carlin and Company (Against the Motion)

Arguing against the motion, David Carlin began his rebuttal by framing exactly what is meant by AI in insurance. Instead of pigeonholing AI as a destabilising force that will use ChatGPT to automatically deny all claims, Carlin argued that we should consider all the ways the technology can be used as a tool. For Carlin, the real risk to insurability is climate change, not AI, and the growing scale and complexity of risks require more sophisticated methods of analysis, not less.
Carlin challenged the assumption that AI eliminates uncertainty. While it can improve predictions, the world remains inherently unpredictable. AI allows insurers to process large volumes of data, including climate information, and to translate that complexity into more informed decisions. This capability is essential in a context where risks are becoming more interconnected and harder to assess.
Addressing concerns about exclusion, Carlin argued that these issues are not unique to AI. Problems such as discrimination and unequal access have long existed within the insurance industry. The appropriate response, he said, is to strengthen governance and regulation. AI should be understood as a tool whose impact depends on how it is used, rather than as a force that inevitably leads to negative outcomes.
“[Just because] AI is a powerful tool doesn’t mean it needs to be exploitative. We already have all the information we need to be exploitative,” he argued. “What we are saying is AI is going to help us get to a safer world, a more personalised world and one that is ultimately going to provide coverage facing this insurance crisis.”
Carlin concluded that without AI, insurers would be less capable of understanding and managing the risks they face. In that scenario, the retreat from certain markets could be even more pronounced. From his perspective, AI is not the cause of declining insurability but a means of preserving it under difficult conditions.
Ed Dowding, technologist and consultant (For the Motion)

Ed Dowding reminded the audience that, contrary to the opposition’s claims that AI would make insurance better for consumers, “the insurance industry exists for profit.” He then expanded on the argument that AI introduces structural pressures that could undermine the insurance model, suggesting that the industry risks becoming so efficient that it undermines its broader purpose of providing a communal safety net.
He also expressed scepticism about whether technological innovation necessarily benefits policyholders. In the case of parametric insurance, he suggested that clearer conditions for payouts may also limit the circumstances in which claims are paid. This can improve efficiency for insurers while narrowing coverage for customers.
“[The opposition] spoke of climate models giving us a better granular understanding of where the risk is going to happen. But that doesn’t help us solve the problem, this merely helps us better the market for the retreat from coverage,” he added.
In addition, he raised concerns about the use of AI in claims processing, where automation may reinforce existing imbalances between insurers and policyholders. If individuals are less likely to challenge decisions, insurers may benefit from efficiencies that come at the cost of fairness.
Dowding concluded by warning that the industry could lose its social legitimacy if it becomes too exclusionary. “The social contract is breaking down here and the licence to operate is breaking down if the insurance industry does not change its structure and how it comports itself.”
As governments take on a larger role in providing coverage, the relevance of private insurance may decline, raising questions about its long-term future. He warned, “insurance will be the victim of its own success.”
Geneva Roy, Chairperson, World Schools Debating Championship Ltd (Against the Motion)

Geneva Roy began her rebuttal by questioning the assumption that AI will remove the “veil of ignorance” that protects the social contract underpinning the insurance sector. “The problem with this argument is that the veil of ignorance was long gone [before AI]. This doesn’t exist anymore,” she argued. Roy then asked why the opposition believed governments could not regulate the use of AI as they have other technologies.
Her second point concerned the framing of the debate, which she argued did not properly address the purpose of insurance: “Insurability is about whether or not you account for risk in a way in which two people will engage in a contract and form that [policy].” She simplified her side’s case with a logical argument: AI is widely accessible, so both insurers and customers will have access to the technology. This means policyholders will also be able to use AI if their claims are denied, or use it to see how companies determined their policies.
A budding lawyer, Roy built her third argument around the law and how AI serves as a gateway for transferring future harms into enforceable insurance relationships. “I think insurability can be understood quite simply as the question of whether or not parties can lawfully create and enforce a contract by transferring future risk, and then whether courts can handle disputes that arise.”
A central part of her argument was that AI improves insurability by making risk more legible within a legal framework. It can help clarify who is responsible for a loss, what caused it, and how damages should be quantified. These improvements strengthen the enforceability of insurance contracts and increase the likelihood of fair outcomes in disputes. Roy emphasised that insurance is fundamentally about managing uncertainty, not eliminating it, and AI, in her view, provides better tools for navigating that uncertainty.
At the conclusion of the debate, the audience indicated its position by a show of hands, voting in favour of the motion.