Artificial intelligence is reshaping industries at remarkable speed. In financial services, healthcare and other regulated sectors, AI innovation is advancing within strict governance frameworks aligned to the UK's principles-based regulations. Every model is vetted, every data flow scrutinised and every output reviewed for fairness and compliance, and each initiative seeking to monetise a competitive advantage must do so within that framework.
Yet, while firms inside the regulatory perimeter are building safeguards, a growing number of AI tools are proliferating outside it, directly into the hands of customers. AI can play a vital role in increasing inclusion and accessibility, but how do we all create an environment in which customers can be confident that, at the point of use, the information they’re accessing is safe, accurate and appropriate to their needs?
For many vulnerable customers, this point of use is exactly where the risks begin. AI-generated content is polished, precise and deeply personalised, making it hard to distinguish legitimate information from what is simply the output of probabilistic algorithms. The surface confidence of the response masks the fact that prices, eligibility checks or recommendations may be nothing more than guesses.
With AI tools now capable of offering what appears to be personalised financial advice to UK consumers, particularly in areas such as unsecured lending and credit cards, the potential for poor customer outcomes is increasing. We already see examples of these persuasive recommendations being made without the safeguards, suitability checks or accountability that regulated firms must uphold. For a financially savvy consumer, this may trigger healthy scepticism; for someone vulnerable, the distinction between a regulated customer journey or advice and an AI-generated suggestion can become almost invisible.
If we think about this landscape against the four pillars of consumer protection, can we really be confident that 1) access is universal and real, 2) charges and costs are fair, transparent and proportionate, 3) redress is accessible and binding, and 4) the policing of the landscape is effective and enforced? No doubt, there are many organisations wrestling with each of these pillars in the context of AI and, although we can see clearer routes through to some of them, I worry particularly about redress and policing.
It’s not difficult to imagine a future mis-selling incident triggered not by a rogue adviser, but by an unregulated algorithm dispensing advice and exposing unwitting consumers to financial harm. And in that scenario, it is the vulnerable customer, perhaps someone already anxious, already unsure, who is least likely to understand they were dealing with an unregulated tool. Many may assume that if something looks like a financial recommendation, it must be protected by the same rules as everything else online. We should ask ourselves whether it’s reasonable to expect any customer to understand they’re dealing with an unregulated broker, and what that means if things go wrong: who do they believe is protecting them?
The risk is not only to the impacted customer, but also to trust in certain markets more generally, something which, in financial services, Consumer Duty has been central in protecting. Being principles-based, Consumer Duty can adapt to changing contexts; the question is how we protect the principles regulated firms have already embraced for the benefit of all customers.
This is not a new pattern. We often see similar trends as new technologies emerge. Now, anyone with an internet connection can access models capable of generating investment advice, analysing legal contracts, or processing sensitive personal data, all without compliant audit trails, security controls, or governance frameworks.
The intent behind such activities is understandable, and often attractive: experimenting, increasing productivity, simplifying the complex, saving time. But the potential consequences are serious.
The goal, surely, must be that all customers can use AI to generate information that makes life easier, without worrying about how the information reaches them or what happens ‘behind the scenes’. But it must then be up to regulators and regulated firms to work together to ensure the ‘behind the scenes’ steps are compliant and grounded in the right regulatory framework, and that we all understand who is ultimately accountable if something goes wrong.
The compliance paradox
Within regulated firms, the use of AI is already subject to a complex web of obligations: data protection, fairness, explainability, operational resilience, and auditability.
But outside that ecosystem, there are few guardrails. Consumers can use a multitude of accessible AI tools to make high-impact decisions, ranging from financial planning to medical queries, without the protections that regulated entities are required to uphold. This unregulated space is precisely where vulnerable customers face the greatest exposure. When a customer is presented with a number, a comparison or a suggested course of action, it feels authoritative, even when the model is simply guessing with sophisticated confidence.
This creates a compliance paradox: the safest AI environments may exist within firms most constrained by rules, while the most dangerous uses proliferate among those least bound by them. Regulation is succeeding at the firm level, but the risk is showing up at a market level.
The wider implications
If unchecked, this imbalance in regulatory requirements could erode public trust in AI generally. A single high-profile misuse, such as an AI-generated investment scam, a misleading medical chatbot or a data privacy breach, could undermine confidence across the ecosystem, including among responsible actors.
From an economic perspective, it also risks creating an uneven playing field for firms. Those that invest heavily in compliance and governance face higher costs and slower deployment timelines, while unregulated players can move faster, free from equivalent accountability. Over time, this could discourage responsible innovation, unless regulators and industry leaders address the issue collectively.
Toward ecosystem responsibility
The solution is not to slow AI progress, far from it, but to extend responsible practices beyond the regulated perimeter. That means creating shared standards and a common goal, so that customers can trust the information they are receiving, whether it relates to their finances, their health or anything else where the implications are important.
Regulated firms can play a leadership role by helping customers understand the risks of using open or unverified AI tools, by sharing best practices, and by advocating for common governance frameworks. Policymakers, in turn, should consider how to encourage safe use of AI, whether through labelling, certification, or simplified guidance that promotes informed adoption rather than stifling innovation.
The safe financial layer
Although, at Compare the Market, we are actively considering the big questions and opportunities AI presents for our business, our partners and our customers, our focus is specifically on household financial products, where most of our customers interact with us.
For more than 20 years, we’ve been a cornerstone of the UK’s world-leading insurance and personal finance sectors, markets renowned for their competitiveness, responsiveness, and focus on good customer outcomes. By pioneering digital price comparison, we gave millions of consumers the power of choice, bringing transparency and fairness to complex, regulated markets, building customer confidence in knowing what they’re buying.
As AI-search technology reshapes how people find and understand information, that same mission has never been more relevant. AI promises speed, simplicity, and personalised results, but in regulated industries, accuracy, fairness, and full-market visibility still matter most. These are high-stakes decisions: insurance, mortgages, medical cover. They demand precision and trust, not probability and assumption.
Compare the Market is uniquely positioned to meet this challenge. Our platform connects consumers to over 900 UK insurers, generating 8.8 billion quotes a year and saving customers £2.4 billion annually. Behind this scale sits deep expertise, a team of 150 data scientists working with leading academic institutions such as Cambridge, Oxford, and Imperial, and a growing suite of AI models already enhancing marketing, pricing, and customer experience.
Our innovations, like AutoSergei, have transformed journeys that once took nine minutes into experiences of less than a minute, so that customers can focus on making the best decision for their needs. We’ve spent 20 years earning consumer trust. Now, we’re using that foundation to build the next generation of trusted marketplaces, ensuring that, in the AI era, technology serves people, not the other way around.
Ultimately, AI safety is a whole ecosystem challenge. The tools are already too powerful and too accessible for responsibility to stop at the boundaries of regulation. We need to think not only about how firms regulate themselves, but how the wider community of users can operate safely and confidently, especially for vulnerable customers where Consumer Duty has been highly effective. Building this shared trust perimeter, one that protects customers while harnessing the huge positive potential of AI, means recognising that customers cannot be expected to understand when an AI tool is speculating, when it is generating rather than retrieving, or when it lacks the safeguards they assume. Protecting them must be a collective effort across government, regulatory bodies, firms and consumer groups alike.
Whatever your specialist area, wherever you are in your journey, we’d love to hear from you.