February 1, 2026

Marloo Response to FCA Call for Input: The Impact of AI on Financial Services

Our submission to the FCA's consultation on the impact of AI on financial services, drawing on our direct experience deploying AI inside the financial advice process.

February 2026 — Submitted by Marloo (www.gomarloo.com)

Introduction

Marloo is an AI-native platform built from the ground up to power the financial advice workflow. Founded by former builders of Sharesies (New Zealand's largest retail investing platform, with 20% population penetration and £6B AUM) and Lightyear (launched across 22 European countries), we bring deep experience in building technology that democratises access to financial services while maintaining rigorous regulatory compliance.

Our platform sits at the top of the advice workflow: we capture every client-adviser conversation, generate compliant documentation including Suitability Reports and Annual Review Letters, and build searchable client knowledge bases that compound in value over time. We save advisers 3–5+ hours per week on administrative tasks and cut document generation from a two-week, £300-per-document outsourced process to three to four minutes of AI generation plus 30 minutes of adviser review.

We operate at the sharp end of AI in regulated financial advice. Our technology helps advisers augment the service they provide clients through enhanced, multimodal data capture and document generation, directly shaping client outcomes. This submission draws on our direct, daily experience deploying AI inside the financial advice process, informed by five former financial advisers on our team, including our Head of Product Development who has personally produced thousands of advice documents.

A central theme runs through this submission: the UK's financial advice gap is large, structural, and worsening. Approximately 91% of the UK population does not currently access financial advice. There are fewer than 500 registered financial advisers under the age of 25 in the country. The adviser workforce is ageing, and the pipeline of new entrants is not keeping pace with retirements. Previous regulatory interventions to close this gap, including the introduction of targeted advice, have not achieved their objectives. AI represents the most credible pathway to closing the advice gap at scale, by fundamentally changing the unit economics of advice delivery. But this will only happen if the regulatory framework enables it.

We believe the FCA's consultation arrives at a pivotal moment. AI is already reshaping how financial advice is delivered. The regulatory choices made in the next two to three years will determine whether that transformation benefits consumers, expands access to advice, or creates new vectors of harm.

Theme 1: Future Evolution of AI Technology

1.1 Transformative AI Technologies for UK Retail Financial Services

Which emerging or maturing AI technologies do you expect will most transform UK retail financial services from 2030 onwards, and why? Please cite evidence, pilots, or data where possible.

From our vantage point building AI that operates within the financial advice workflow, we see three technology clusters that will most transform UK retail financial services from 2030 onwards.

First, large language models applied to domain-specific document generation. This is our core business. We have invested heavily in prompt engineering, retrieval-augmented generation, and template-guided compliance to produce Suitability Reports that meet regulatory standards. Our approach is built on two layers: a compliance style layer (what language is appropriate, how things should be framed, aligned to the regulatory environment through hidden QA workflows) and a specific compliant text layer (mandatory sections, disclosures, and structure carried implicitly through templates). Firms bring their own compliance programmes to life within Marloo rather than conforming to a generic standard. The technology today produces first-draft documents that advisers review and approve, reducing preparation from weeks to minutes. By 2030, we expect these systems to reach a level of reliability where they produce near-final documents with minimal human amendment, though always with human oversight and sign-off.

Second, multimodal real-time capture and natural language understanding. Advisers meet clients in person, on Zoom, on Teams, on the phone, at clients' homes, and in the office. Supporting all of these modes is what makes AI tooling sticky and universal rather than a narrow plugin for a single platform. We have built multimodal capture from day one. Transcription accuracy has improved materially over the past eighteen months, and we expect continued gains, particularly in domain-specific financial vocabulary.

Third, longitudinal knowledge systems that build compounding client profiles from unstructured conversation data. Our platform constructs searchable client knowledge bases from every meeting transcript, document, and (imminently) five years of historical email correspondence. This transforms the adviser-client relationship by ensuring continuity of service regardless of staff turnover and enables more personalised and proactive advice grounded in the client's full history. Advisers and support staff tell us that the ability to ask Marloo to prepare them for a meeting, drawing on years of accumulated client context, is one of the most valuable capabilities we offer.

Looking further ahead, we see these three clusters converging into proactive intelligence systems. To give a concrete example: the Bank of England raises rates, and the next morning Marloo shows an adviser how much every client is impacted, which clients have previously expressed concerns about rates, who is holding excess cash, and suggested actions for each. This kind of pattern recognition and opportunity matching across an entire client base represents work that was simply never done before because nobody had the context or the time. AI makes it not only possible but automatic.

1.2 Agentic AI: Potential and Implications

What do you see as the future potential and direction of agentic AI? What are the implications for retail finance over the coming decade (including accountability, assurance, and market structure)?

Agentic AI represents both the greatest opportunity and the greatest regulatory challenge for retail financial services. Our perspective is informed by the trajectory we are building toward.

In the near term, we see agentic AI being deployed to orchestrate the administrative workflow around financial advice: scheduling reviews, populating client files, drafting follow-up communications, preparing pre-meeting briefs, and generating compliant documentation. These are high-value applications with relatively contained risk, because they augment rather than replace adviser judgment. We are building this today.

In the medium term, agentic capabilities extend into substantive workflow steps. Marloo will be able to recommend product suitability from an adviser's approved list, a core paraplanning activity. It will surface client opportunities and flag compliance risks proactively. The system moves from capturing and documenting the advice process to actively supporting the analytical and research steps within it. Every persona in the firm benefits: the adviser sees paperwork eliminated and client opportunities surfaced; support staff see document generation and task management automated; the head of compliance sees 100% conversation coverage versus the current industry standard of roughly 1% sampling; the managing director sees efficiency metrics and firm-wide intelligence. The shift is from every person being a doer to every person being a reviewer.

The implications for closing the advice gap are direct. If AI can handle the administrative overhead that currently consumes a large share of an adviser's time, each adviser can serve significantly more clients. With fewer than 500 advisers under 25 entering the profession and an ageing workforce, the UK cannot train its way out of the advice gap. It must productise its way out. Agentic AI is the mechanism.

The key implications for the FCA are threefold. On accountability, any agentic system that operates in the advice chain must have a clearly identified responsible human under SM&CR. On assurance, agentic systems will require new testing and validation methodologies; traditional software testing is insufficient for systems whose outputs are probabilistic. We invest significant engineering effort in evaluating our outputs against compliance standards, including parallel QA and assurance workflows on every document, inserting explicit placeholders where information is insufficient rather than hallucinating content. On market structure, agentic AI could fundamentally alter the economics of financial advice, making it viable to serve the 91% of the UK population who currently go without.

1.3 AI Combined with Other Digital Technologies

How do you anticipate AI combining with other digital technologies, resources, and infrastructures through 2030? What specific markets, products, controls, standards, and risks could emerge?

We anticipate AI combining with several adjacent technologies in ways directly relevant to retail financial services.

Calendar and CRM integration allows AI to operate contextually, understanding not just what was said in a meeting but who the client is, what their portfolio looks like, and when their next review is due. We already integrate with calendar platforms and are building deeper connections with practice management systems. Our experience is that calendar connection is a strong predictor of ongoing user engagement.

Open banking and open finance data, combined with AI, will enable advisers to build more complete pictures of client circumstances without manual data gathering. This has significant implications for suitability assessments and fact-finding. When combined with the longitudinal client knowledge base that platforms like ours build, the result is a dramatically richer information environment for advice delivery.

Email integration represents a near-term convergence point. We are launching an integration that pulls five years of historical client emails against each client profile, instantly giving the AI far more context for suggestions, reminders, and document generation. For existing clients, this bridges the gap between historical relationships built before AI adoption and the AI-native workflow going forward.

Cloud compute infrastructure continues to reduce the cost of running AI models, which directly affects the accessibility of AI-powered tools for smaller advisory firms. When we launched in 2024, the cost of processing a single meeting was materially higher than it is today. We expect this trend to continue, making sophisticated AI tooling available to sole practitioners and small firms that could never have afforded dedicated paraplanning resource. This cost trajectory is a direct enabler of the advice gap closing: every reduction in the cost of AI infrastructure translates into a lower floor for the economics of advice delivery.

1.4 Impact on Operating Models

How will AI change your operating model, operating environment, and dependencies from 2030? How might you respond to wider adoption of AI?

AI is already changing the operating models of the advisory firms we serve. The most significant shift is from a world where documentation is a bottleneck to one where it is near-instantaneous.

To put specific numbers on this: in the UK, outsourced document generation can currently cost in excess of £300 per document, with a two-week turnaround. A typical adviser produces six to ten documents per month, representing over £20,000 per year in outsourced costs alone. We reduce generation time to three to four minutes and review time to approximately 30 minutes. A single document pays for the monthly subscription.

The paraplanner-to-adviser ratio in the industry varies from 0.5:1 to 2:1 depending on firm size. Where paraplanners are in place, the residual administrative burden for advisers has historically centred on meeting note-taking, which blocks downstream workflows because support staff cannot act until notes exist. We remove that blocker instantly: the conversation is recorded, anyone on the team can search and generate notes from the transcript, and the rest of the firm can act on them straight away. This is why the product has penetrated the market so effectively.

From 2030, we expect the distinction between firms that use AI and firms that do not to become a meaningful competitive divider. Our longer-term vision is to become the operating system for financial advice: handling meeting capture, documentation, client intelligence, product research, compliance monitoring, and proactive opportunity identification so that all the adviser has to do is engage with their clients and deliver advice. We want to reduce the cost and complexity of running an advice practice to the point where a single adviser, supported by AI infrastructure, can serve a client base that today would require a full support team. This is not about replacing the adviser. It is about making each adviser dramatically more productive at a time when the supply of advisers is structurally constrained.

1.5 The UK's Position

What are the UK's comparative advantages and gaps in AI including compute, data, talent, standards, and regulation relative to other jurisdictions? Which targeted actions might most improve competitiveness?

The UK has genuine comparative advantages in AI for financial services: a deep pool of financial services talent, a sophisticated regulatory framework respected globally, and strong academic institutions producing AI research. The FCA's principles-based approach is well-suited to governing AI, in contrast to more prescriptive regimes that risk being quickly outdated.

The UK's primary gap is in compute infrastructure and foundational AI model development. However, for the application layer where Marloo and similar firms operate, the UK is well-positioned. The UK regulatory environment is increasingly seen as supportive of AI tooling because it helps advisers meet their obligations and encourages better customer outcomes. This is a competitive advantage the FCA should protect and build upon.

We would highlight one specific gap: regulatory clarity on AI-generated documentation. Advisory firms want to use tools like Marloo to generate high-quality draft Suitability Reports for adviser review, but face uncertainty about where AI-assisted drafting sits within existing regulatory expectations. We have observed that this uncertainty slows adoption. Clear guidance from the FCA on the standards AI-generated documents must meet, and the human oversight required, would significantly accelerate adoption and strengthen the UK's competitive position. The opportunity for the UK is in enabling AI-native approaches that rethink the workflow from the ground up.

Theme 2: Future Impact of AI on Markets and Firms

2.1 Market Structure and Customer Passthrough

How might AI change concentration in your market? What are the current drivers of concentration, and which could AI disrupt or reinforce? Do you expect AI to increase or decrease barriers to entry? Do you expect AI to increase concentration, reduce it, or reshuffle who the dominant players are? Where there are cost reductions, do you expect these to be passed on to customers as lower prices?

AI has the potential to meaningfully reduce concentration in the financial advice market, and we are building our business on this thesis.

Today, the financial advice market is concentrated among larger firms partly because of the administrative overhead of compliance. Generating Suitability Reports, maintaining client files, and managing regulatory documentation requires significant back-office resource. This creates economies of scale that favour larger firms with dedicated compliance and paraplanning teams. Smaller firms either absorb this burden directly (with advisers working evenings to complete documentation) or outsource at significant cost.

AI tools like Marloo flatten this advantage. A sole practitioner using our platform can generate compliant documentation at a quality level comparable to a large firm with a full paraplanning team. We have deliberately priced and structured our product to ensure it does not itself become a barrier to entry.

We expect cost savings from AI adoption to be substantially passed through to consumers, primarily in the form of advisers being able to serve more clients (increasing supply), reducing waiting times for documentation, and spending more time on advice quality rather than administration. When document turnaround goes from two weeks to the same day, the client experience improves materially. More fundamentally, AI reduces the minimum viable cost of delivering advice, which directly expands the population of consumers for whom professional advice is economically accessible. This is the most important passthrough effect: not lower prices for existing clients, but the creation of viable advice services for the millions who are currently priced out entirely.

2.2 Self-Reinforcing Dynamics

What evidence do you see of 'winner takes most' dynamics in AI, such as data feedback loops, economies of scale, or network effects, that could entrench market positions? Conversely, could AI reduce switching costs and increase competition? Please distinguish between dynamics you observe today and those you anticipate.

We observe several self-reinforcing dynamics that deserve regulatory attention, and it is important to distinguish between the foundational model layer and the application layer.

At the foundational model layer, a small number of AI model providers are building increasingly dominant positions through data feedback loops and massive capital requirements. This is largely outside the FCA's direct remit but has significant implications for the financial services firms that depend on these providers.

At the application layer, platforms that accumulate large datasets of financial advice interactions can improve their outputs over time. Marloo benefits from this dynamic: the more meetings we process and the more documents we generate across multiple jurisdictions, the better our models become at understanding financial advice conversations and producing appropriate outputs. The meeting is where all the context surfaces. By capturing it, we earn the right to power everything downstream. This data advantage compounds: every meeting, every note, every document, every email, all structured against the client profile, makes every subsequent capability more powerful.

However, we believe this dynamic at the application layer is broadly healthy and pro-competitive. Switching costs remain relatively low. An advisory firm can change its meeting intelligence platform without losing its underlying client relationships. We have multiple examples of firms switching to Marloo from competitors after one-week pilots, including firms that had been on competing platforms for six months or more. The competitive advantage accrues to firms that build genuinely useful products, not to those that lock customers in.

The more significant concentration risk is at the model provider layer, and this is where the FCA should focus its monitoring.

2.3 Control of the Customer Relationship

Who do you expect will control the primary customer relationship by 2030 onwards: incumbent FS firms, Big Tech, specialist AI intermediaries, or consumers' own AI agents? Do you see parallels with mobile wallets, where value is captured without becoming a traditional regulated provider? What would this shift mean for customers and for competition?

We have a strong view on this question: the primary customer relationship in financial advice should remain with the adviser, augmented by AI rather than replaced by it.

Our product is designed to strengthen the adviser-client relationship, not to intermediate it. We provide tools that make advisers more effective, more responsive, and better documented. The adviser remains the regulated professional who owns the client relationship and bears responsibility for the advice given. Our deliberate decision not to charge for support staff seats reflects this philosophy: we want the entire firm oriented around serving clients better, not optimising our short-term revenue per user.

We are concerned about scenarios where Big Tech platforms or specialist AI intermediaries attempt to capture the customer relationship by offering AI-powered financial guidance that is functionally equivalent to advice. The mobile wallet parallel is apt: these platforms could capture significant value without becoming regulated advice providers, while consumers may not understand the distinction between guidance and advice.

By 2030, we expect to see consumer-facing AI agents that manage household finances, suggest product switches, and prompt users to seek advice at appropriate moments. Whether these agents serve the consumer's interest or the platform's commercial interest will depend heavily on the regulatory framework.

Our own trajectory illustrates the competitive dynamic. We are building toward becoming the system of action for financial advice: if we handle meeting capture, documentation, client intelligence, product research, and compliance, we effectively bookend and then displace the traditional CRM. This is a fundamentally different proposition from a Big Tech platform seeking to intermediate the customer relationship. We make the adviser more powerful; we do not seek to replace them. The FCA should consider how the regulatory framework can distinguish between these two models.

2.4 Regulatory Perimeter

Could AI systems provide services functionally equivalent to regulated activities such as advice or intermediation, while remaining outside the regulatory perimeter? How might this occur in your market, and what proportion of value could migrate to such unregulated services?

This is perhaps the most important question in the consultation. We see a clear and present risk that AI systems will provide services functionally equivalent to regulated advice while remaining outside the regulatory perimeter.

This is already happening in rudimentary forms: chatbots that suggest investment allocations, robo-advisers that blur the line between guidance and advice, and AI-powered comparison tools that make implicit recommendations. As these systems become more sophisticated, the distinction between information, guidance, and advice will become increasingly difficult to maintain.

The lack of a clear, enforceable definition of where the perimeter of advice sits creates a growing risk. Generic large language models from major technology companies are already being asked financial questions by millions of consumers. These models provide responses that are, in substance, financial guidance and in some cases cross into personal recommendation. They do so without any of the regulatory safeguards that apply to authorised advice: no suitability assessment, no know-your-client obligation, no professional indemnity insurance, and no accountability under SM&CR.

This risk is compounding. In recent months, major generic LLM providers have introduced sponsored content and commercial partnerships that influence model outputs. When a consumer asks a generic AI assistant which investment platform to use or how to structure their pension, the response may be shaped by undisclosed commercial relationships between the AI provider and financial services firms. This creates a conflict of interest that is entirely invisible to the consumer. It is, in effect, undisclosed financial promotion embedded within what the consumer perceives as impartial guidance. The FCA's financial promotion regime was not designed for this medium, and the gap is widening rapidly.

There are two related but distinct structural concerns around transparency in AI-assisted advice.

The first is data locality. UK financial services legislation requires that advice is visible and local in its provision. Many AI tools process data in jurisdictions outside the UK and retain conversational data in ways that are opaque to the consumer. The FCA should provide clear guidance on data residency expectations for AI tools operating within the advice framework, so that firms can make informed decisions about the services they use.

The second is auditability. Consumers have a right to know how advice was arrived at, and regulators have a right to inspect the process. Many generic AI tools provide no audit trail of the reasoning behind their outputs. The source documents and logic chain that informed a recommendation are not visible to the consumer, the adviser, or the regulator. This is fundamentally incompatible with the transparency and accountability requirements of the UK advice framework. Purpose-built AI tools like Marloo maintain full audit trails and produce outputs where the source material and reasoning are visible and inspectable.

We would urge the FCA to adopt a functional approach: if a service walks like advice and talks like advice, it should be regulated as advice, regardless of whether it is delivered by a human or an AI system. The alternative, allowing a growing share of de facto advice to migrate outside the perimeter, would undermine consumer protection and create an uneven playing field for regulated firms.

Our own approach to this boundary is instructive. We are building capabilities that will eventually support product suitability recommendations from an adviser's approved list, a core paraplanning activity. We are deliberately building the infrastructure to enable AI-assisted advice delivery when regulation allows. But we believe it would be irresponsible to deploy autonomous advice capabilities outside a regulated framework, and we would welcome clear FCA guidance that prevents others from doing so.

Without clear regulatory action, we estimate that a meaningful proportion of what is functionally personal financial advice could migrate to unregulated AI services within the next five to seven years. Worse, the quality of that unregulated advice may be compromised by undisclosed commercial interests. This would not serve consumers well.

Theme 3: Future Consumer Trends

3.1 Benefits and Risks

How might consumers benefit from AI-enabled retail finance from 2030 and what do you foresee as the greatest risks for consumers?

The greatest benefit AI can deliver for consumers in retail financial services is the democratisation of high-quality financial advice. Today, comprehensive financial advice is effectively rationed. Approximately 91% of the UK population does not currently access financial advice. Advisers can serve a limited number of clients, and the economics of advice delivery mean that lower-wealth individuals are systematically underserved.

This supply constraint is structural and worsening. There are fewer than 500 registered financial advisers under the age of 25 in the UK. The adviser workforce is ageing, and the pipeline of new entrants is not keeping pace with retirements. Without a step-change in adviser productivity, the advice gap will widen regardless of consumer demand. AI is not simply an efficiency tool in this context; it is essential infrastructure for maintaining and expanding the supply of financial advice.

Previous regulatory efforts to close the advice gap have had limited success. The FCA's introduction of targeted advice was intended to make it easier for firms to offer focused, lower-cost advice on specific needs without triggering the full suitability requirements of holistic advice. The regime demands specificity in its application, asking firms to advise on a narrowly defined question, yet its purpose is to broaden general access to guidance. Firms have found this tension difficult to navigate: the risk of inadvertently straying beyond the scope of the targeted advice engagement, and the compliance burden of demonstrating they did not, have deterred widespread adoption. The result is a well-intentioned initiative that has not materially narrowed the advice gap. AI offers a fundamentally different approach. Rather than trying to create a lighter-touch category of advice, AI reduces the cost and complexity of delivering full, compliant advice. The question becomes not "how do we give people less advice more cheaply" but "how do we give people better advice more efficiently."

AI changes this equation fundamentally. By automating the administrative overhead of advice delivery, AI enables advisers to serve more clients and improve quality. Our data shows that advisers using Marloo save 3–5+ hours per week on administration. Document turnaround drops from weeks to hours. These are not marginal improvements; they represent a step-change in adviser capacity. When an adviser can serve 20% more clients at the same quality level because AI handles documentation and preparation, the supply of advice increases without any dilution of standards, while maintaining compliance with the Consumer Duty and any future frameworks.

The greatest risk, in our view, is a bifurcation between consumers who receive AI-augmented human advice (high quality, regulated, with clear accountability) and those who rely on unregulated AI tools that provide something that feels like advice but lacks the safeguards of the regulated advice process. The FCA's approach to the regulatory perimeter will be the decisive factor in whether this bifurcation materialises.

3.2 Inclusion versus Exclusion

Which consumer segments might 'win' or 'lose' in this new world of AI-enabled retail finance?

AI-enabled financial advice has the potential to be genuinely inclusive, but only if the regulatory framework supports it.

Consumers who stand to benefit most are those currently underserved by the advice market: younger adults, lower-wealth individuals, and those with financial needs who cannot justify the cost of traditional advice. We are working toward a future where the unit economics of advice are so fundamentally improved by AI that a single adviser, supported by AI infrastructure, can profitably serve client segments that are uneconomic under the current model. This is not a distant aspiration. The tools we are building today, from automated documentation to proactive client intelligence, are laying the groundwork.

The consumers most at risk of exclusion are those who are digitally disengaged or who lack the confidence to interact with AI-mediated services. There is also a risk that AI systems, if trained on biased data, could systematically disadvantage certain demographic groups.

AI also offers a significant opportunity to improve outcomes for vulnerable consumers. One of the most powerful capabilities of AI applied to client conversations is the ability to surface vulnerabilities that might otherwise go undetected. When AI analyses the full transcript of a meeting, it can identify indicators of vulnerability, including cognitive difficulty, signs of undue influence from third parties, inconsistencies that may suggest a client does not fully understand the advice being given, or changes in circumstance that a busy adviser might not register in the flow of conversation. Human advisers, operating under time pressure, inevitably miss some of these signals. AI does not get tired, does not rush to the next meeting, and can flag patterns across multiple interactions that a human reviewer working from memory alone would not catch. This capability has direct implications for the Consumer Duty's requirement to deliver good outcomes for vulnerable customers.

3.3 Changes to Products and Services

How might AI drive changes and personalisation in products and services, and what impact will evolving consumer expectations have?

AI will drive personalisation across financial products and services. In the advice context, this means more tailored recommendations, more responsive communication, and documentation that more accurately reflects the client's specific circumstances.

We already see this in our own product. Our Suitability Report generation draws on the client's full interaction history to produce documents specific to their situation, rather than relying on generic templates. The compliance structure is maintained (mandatory sections, disclosures, regulatory framing) but the substance is deeply personalised. This is a step-change in documentation quality that benefits both the adviser and the client.

There is a deeper dynamic at play here that we believe the FCA should consider carefully. Despite the Consumer Duty's focus on outcomes, and despite advice documentation nominally being addressed to the client, the practical reality is that Suitability Reports and similar documents are overwhelmingly written with a different reader in mind. Advisers and firms draft these documents on the assumption that the ultimate reader will be an FCA investigator, an internal compliance reviewer, or a Financial Ombudsman Service adjudicator. The documents are therefore optimised for regulatory defensibility: exhaustive in their detail, laden with industry jargon, and structured to demonstrate compliance rather than to communicate clearly with the client.

This is a rational response to the regulatory environment. The FCA's primary mechanism for assessing advice quality is the review of advice documentation. This focus on the document, rather than on the actual client outcome, has created an industry-wide incentive to over-engineer documentation for a professional audience. The result is that the very documents intended to help consumers understand the advice they have received are frequently too complex, too long, and too full of technical language for the average consumer to meaningfully engage with. The Consumer Duty aspires to good consumer outcomes, but the enforcement mechanism inadvertently prioritises the design of documentation over the substance of those outcomes.

AI is uniquely well-positioned to break this cycle. A well-designed AI system can simultaneously satisfy both audiences in a single document: generating content that is rigorous, complete, and defensible for compliance and regulatory review, while also being clear, plain-language, and genuinely comprehensible to the client. This is not a theoretical capability. We build this today. Our document generation produces outputs that meet the structural and substantive requirements of compliance review while maintaining readability for the consumer. AI can modulate tone, complexity, and emphasis in ways that would be prohibitively time-consuming for a human drafter trying to serve two audiences at once. This dual-audience capability is one of the most practically significant contributions AI can make to improving consumer outcomes in financial advice.

Looking ahead, AI will enable a shift from reactive to proactive advice. Rather than waiting for scheduled reviews, AI systems will continuously monitor client circumstances against market conditions and flag when action is warranted. An adviser will come in each morning to a prioritised list of clients who need attention, with suggested actions and supporting context for each. This transforms the advice model from periodic reviews to continuous service.

Consumer expectations will be a significant driver of this shift. As consumers become accustomed to personalised, AI-driven experiences in other sectors, they will increasingly expect their financial adviser to deliver a similarly responsive and tailored service. Firms that cannot meet this expectation will lose clients to those that can.

3.4 Agency and Understanding

With the balance shifting between consumer agency and delegation to AI, how might this affect consumer understanding, financial literacy and vulnerability?

The delegation of financial decision-making to AI systems raises legitimate concerns about consumer understanding and financial literacy. If consumers rely on AI to manage their finances without understanding the underlying decisions, they may be poorly equipped to identify errors or to make informed choices when AI systems produce unexpected recommendations.

Our approach is to keep the human adviser at the centre of the decision-making process. AI handles the administrative burden; the adviser exercises professional judgment. The documents we generate are designed to be clear, comprehensive, and comprehensible to the client, while remaining robust enough to withstand regulatory scrutiny. The adviser is always the reviewer and signatory. Where our system does not have sufficient information to make a determination, it inserts an explicit placeholder rather than generating a plausible-sounding fabrication. This design principle, leaving gaps rather than hallucinating, is fundamental to maintaining trust and consumer understanding.

As we noted in our response on changes to products and services, AI can materially improve consumer understanding by producing advice documentation that is genuinely written for the consumer rather than for a compliance reviewer. If the FCA's objective is informed consumers who understand the advice they receive, AI-generated documentation that is simultaneously compliant and comprehensible represents a meaningful advance on the status quo.

We believe the FCA should encourage models where AI augments human advice rather than replacing it, particularly for complex or consequential financial decisions. The Consumer Duty's focus on consumer understanding is well-aligned with this approach. Over time, as AI capabilities mature and the regulatory framework evolves, the appropriate boundary between AI-assisted and AI-autonomous advice may shift, but it should do so deliberately and with clear consumer safeguards.

3.5 Fraud

How could AI-driven fraud evolve as consumers increasingly delegate decisions to AI, and what would this mean for consumer agency, harm, and protection in retail financial services?

AI-driven fraud is an escalating threat that the FCA must address proactively. Deepfake technology is already capable of impersonating individuals in voice and video, and we expect these capabilities to become increasingly accessible.

In the financial advice context, specific risks include the impersonation of advisers to obtain client information or authorise transactions, the fabrication of documents that mimic legitimate advice correspondence, and the use of AI to craft highly personalised phishing attacks that reference real details from a client's financial situation.

Paradoxically, AI is also one of the most effective defences against AI-driven fraud. Platforms like ours that maintain authenticated, timestamped records of every client conversation create an auditable trail that makes impersonation and document fabrication significantly harder. Real-time analysis of communication patterns, document authentication, and anomaly detection are all areas where AI can provide significant protection. The FCA should consider how to encourage the deployment of defensive AI capabilities alongside its focus on AI-driven fraud risks.

3.6 Trust

What might help make AI-driven decisions more understandable and trusted by customers, including how the use of AI may be monetised?

Trust in AI-driven financial services will be built through transparency, accuracy, and accountability.

From our experience, the most effective way to build trust is to demonstrate that AI augments rather than replaces human judgment. Our clients trust our platform because they review every document before it reaches the consumer. The consumer trusts the process because they know a qualified professional has reviewed and approved the output. This model, where AI does the heavy lifting and a human professional provides the quality assurance and judgment layer, is the right framework for building trust during this transitional period.

Our compliance approach reinforces this. We do not impose a compliance standard on firms. We build strong default templates for each jurisdiction and advice type, and we enable firms to upload their own existing templates which we faithfully reproduce. The firm's compliance programme, their regulatory obligations, and their house style are all embedded in how they structure their documents. We run parallel QA and assurance workflows on every output. This approach, where AI conforms to the firm's compliance framework rather than imposing its own, is essential for trust.

We believe the FCA should encourage clear disclosure of where and how AI is used in the advice process, without requiring disclosures so onerous that they undermine the efficiency gains AI delivers. A simple, standardised disclosure that AI tools were used in the preparation of documentation, subject to human review and approval, would be a proportionate approach.

On monetisation, we are transparent with our clients about how our product works and how it is priced. We do not monetise client data or use it for purposes beyond providing our service. We would support regulatory expectations that require similar transparency from all AI providers in the financial services sector. As we noted in our response on the regulatory perimeter, the emergence of sponsored content within generic LLM responses to financial questions represents a significant trust risk. Consumers cannot currently distinguish between impartial AI-generated guidance and responses influenced by undisclosed commercial relationships. The FCA should consider how its financial promotion and disclosure frameworks apply to AI-generated financial content, and should require that any commercial influence on AI outputs is disclosed to consumers in a clear and prominent manner.

Theme 4: Future Regulatory Approach

4.1 Outcomes-Based Regulation

What are the opportunities and challenges for the FCA in ensuring an outcomes-based approach to retail regulation in an AI-enabled FS industry?

The FCA's outcomes-based approach is, in our view, the right framework for regulating AI in financial services. Technology-specific rules would quickly become outdated as AI capabilities evolve. Outcomes-based regulation allows firms to innovate in how they achieve regulatory objectives while maintaining clear expectations about what those objectives are.

The Consumer Duty is particularly well-suited to this purpose. Its focus on good outcomes for customers provides a technology-neutral standard against which AI-enabled services can be assessed. Whether a Suitability Report is drafted by a paraplanner or generated by AI is less important than whether it meets the Duty's requirements for clear communication, appropriate recommendations, and fair value.

However, we would offer a candid observation. Despite the FCA's stated commitment to outcomes-based regulation, the practical reality of how advice quality is assessed often falls short of this aspiration. The FCA's primary supervisory mechanism for assessing advice quality is the review of advice documentation. This creates a de facto input-based regime: the quality of the document becomes a proxy for the quality of the outcome. Firms are assessed not on whether the client achieved a good outcome, but on whether the Suitability Report was sufficiently detailed, correctly structured, and adequately evidenced.

This has produced a perverse dynamic. The industry has developed an obsession with the design and detail of documentation itself, rather than with the actual consumer outcomes the documentation is meant to support. Advisers invest disproportionate time and resource in producing documents that will withstand regulatory scrutiny, at the expense of time spent actually advising clients. The irony is that an outcomes-focused regime has, through its enforcement mechanisms, produced inputs-focused behaviour across the industry.

AI presents an opportunity to resolve this tension. If AI can produce documentation that is both rigorously compliant and genuinely useful to the consumer, the time advisers currently spend on defensive documentation can be redirected to actual advice delivery. Furthermore, AI enables the FCA itself to move toward genuine outcomes-based supervision. Rather than sampling a small percentage of documents and assessing their technical compliance, the FCA could use AI to analyse patterns across large volumes of advice, assessing whether clients are achieving good outcomes at a systemic level. This would be a meaningful shift from assessing documents to assessing outcomes, and would bring supervisory practice into alignment with the stated regulatory philosophy.

The key challenge is supervisory capacity. Outcomes-based regulation requires regulators to assess outputs rather than prescribe inputs, which demands a different skill set and potentially more resource. AI itself can help here. A regulator equipped with AI tools could review a far larger sample of advice documentation than is currently possible. Today, compliance teams typically review roughly 1% of client conversations. AI enables 100% coverage. The FCA should invest in its own AI capabilities to support supervision of AI-enabled firms at scale.

4.2 Regulatory Levers

Are the key FS regulatory levers (Consumer Duty, Operational Resilience, SM&CR, Critical Third Party regime etc) suitable to manage future risks and to enable firms to fully take advantage of AI?

We believe the existing regulatory toolkit is broadly suitable, with some areas where clarification or extension would be valuable.

The Consumer Duty provides a strong foundation for assessing AI-enabled advice. SM&CR provides a clear accountability framework for decisions made with AI assistance, provided there is guidance on how responsibility attaches when AI tools are used in the advice process. Operational Resilience requirements are directly relevant to firms' dependence on AI systems and their underlying model providers.

The Critical Third Party regime is particularly important. A small number of foundational model providers underpin a growing share of financial services AI. If one of these providers experiences a significant outage or changes its terms of service, the impact on financial services firms could be systemic. The FCA should consider whether foundational AI model providers should be designated as critical third parties.

We would also highlight the need to extend existing frameworks around data locality, processing transparency, and auditability to cover AI tools used in the advice process.

On data locality, UK legislation requires that advice provision is visible and local. The FCA should consider providing clear guidance on data residency expectations for AI tools operating within the advice framework, so that firms can make informed decisions about the services they use and how client data is handled.

On auditability, firms using AI tools that do not provide transparent audit trails, or that cannot demonstrate the reasoning chain behind their outputs, may be in tension with existing regulatory requirements, even if the AI tool itself sits outside the regulatory perimeter. The FCA should consider guidance that clarifies expectations on firms regarding the AI tools they use, including whether the source material behind AI-generated outputs is visible and inspectable, and how firms satisfy their record-keeping obligations when AI is involved in the advice process.

One further gap we would highlight is the absence of clear guidance on AI-assisted regulatory documentation. Firms need to know whether Suitability Reports, advice documents, and other regulatory outputs drafted with AI assistance and reviewed by an adviser meet the FCA's expectations, and what level of human oversight is required. Across the jurisdictions we serve, advice legislation is becoming more complex and the obligations on liable parties more stringent. This is a tailwind for AI tooling, but only if firms have confidence that AI-assisted documents are compliant with regulatory requirements. Clear guidance would accelerate responsible adoption.

4.3 Supervisory and Enforcement Approach

Do you have views on the way the FCA should improve or develop its approach to supervision and/or enforcement to respond to increased AI use in the future, including using AI itself?

We would encourage the FCA to adopt a supervisory approach that distinguishes between AI that augments regulated professionals (lower risk, should be encouraged) and AI that replaces regulated processes (higher risk, requires closer scrutiny).

Marloo is an example of the former. We help advisers do their job more efficiently, but every output is reviewed by a qualified professional before it reaches the client. The adviser is always the reviewer and signatory. This model preserves accountability and should be treated differently from an AI system that provides advice directly to consumers without human oversight.

On enforcement, the FCA should use AI itself to monitor for unregulated AI services that are providing de facto financial advice. As we noted in our response on the regulatory perimeter, this risk is significant, and traditional enforcement approaches may be insufficient to identify and address it at scale. The emergence of sponsored content within generic AI responses to financial questions adds urgency to this. The FCA's financial promotion enforcement capability needs to evolve to address AI-delivered content that may constitute promotion without being labelled as such.

We would also encourage the FCA to consider how its supervisory approach can keep pace with AI-native entrants that are fundamentally different from legacy technology providers. The firms that are successfully deploying AI in financial services are not bolting AI onto existing systems. They are rethinking workflows from the ground up. Supervisory frameworks designed for traditional technology vendors may not capture the distinctive characteristics of these AI-native platforms.

4.4 Growth and Competitiveness

In what ways can the FCA continue to support growth and competitiveness in an AI-driven financial services industry in the future?

The most impactful thing the FCA can do to support growth and competitiveness is to provide regulatory clarity that enables firms to invest confidently in AI adoption.

Uncertainty about whether AI-generated documents meet regulatory standards, about what level of human oversight is expected, and about how liability attaches when AI tools are used in the advice process slows adoption and disadvantages UK firms relative to competitors in jurisdictions with clearer frameworks. We have observed this directly across the markets we serve. Australia's ASIC has published more detailed guidance on AI in financial advice, and adoption is further advanced in that market partly as a result.

We would suggest the FCA consider a safe harbour approach: clear standards for AI-generated documentation that, if met, provide firms with confidence that they are meeting their regulatory obligations. This would not eliminate the need for human oversight but would provide a framework for responsible adoption that encourages investment and innovation.

The FCA should also recognise that enabling AI adoption in financial advice is not solely a technology policy question. It is a workforce and access policy question. With an ageing adviser workforce and fewer than 500 registered advisers under the age of 25 in the country, the UK's ability to maintain and expand access to financial advice depends on making each adviser dramatically more productive. Regulatory frameworks that slow AI adoption are, in effect, policies that constrain the supply of advice at a time when the advice gap is already critical. The FCA has the opportunity to position regulatory clarity on AI as a direct intervention to improve consumer access to advice.

Additionally, the FCA should consider how its regulatory sandbox and innovation pathway programmes can be specifically tailored to support AI-enabled financial services firms. These programmes have been valuable in other contexts and could accelerate responsible AI innovation in the UK. The UK has an opportunity to become the global leader in AI-enabled financial advice. The regulatory framework is the critical enabler.

4.5 Frameworks for Inspiration

Are there other regulatory frameworks (UK or international, other non-FS sectors) which the FCA might consider or emulate to respond to increased AI use in retail financial services?

We would draw the FCA's attention to several frameworks that offer useful precedents.

The EU AI Act takes a risk-based approach that categorises AI systems by their potential for harm. While more prescriptive than the UK's preferred approach, the risk categorisation framework is a useful conceptual tool. Financial advice AI would likely fall into the high-risk category, which is appropriate given its potential impact on consumers.

Australia's approach to AI governance in financial services is instructive. ASIC has published detailed guidance on the use of AI in financial advice, including expectations around human oversight, testing, and documentation. This guidance has been valuable for firms like ours operating in the Australian market and could serve as a model for FCA guidance.

More broadly, the UK's own approach to regulating autonomous vehicles offers a useful analogy. The framework distinguishes between assisted driving (human retains ultimate responsibility) and autonomous driving (the system bears responsibility). A similar distinction between AI-assisted advice (adviser retains responsibility, reviews and signs off every output) and AI-autonomous advice (the system makes and delivers decisions independently) could provide a practical, durable framework for financial services regulation. We operate firmly in the former category today and believe the regulatory framework should clearly distinguish between the two.

Conclusion

Marloo occupies a distinctive position in the AI and financial services landscape. We are not a theoretical contributor to this consultation. We build and deploy AI that operates inside the regulated financial advice process every day, across multiple jurisdictions, serving hundreds of firms. Our perspective is grounded in the practical realities of making AI work reliably, compliantly, and at scale in financial services.

Our central message to the FCA is this: AI is already transforming financial advice, and the transformation is overwhelmingly positive for consumers when deployed responsibly. The advisers who use our platform are not being replaced by AI. They are becoming more effective, more responsive, and better supported by rigorous documentation. Their clients receive faster, more personalised, and more comprehensive advice as a result. Document turnaround drops from weeks to hours. Compliance coverage goes from 1% sampling to 100%. Vulnerable consumers are identified more reliably. And the cost savings create headroom to serve the 91% of the population that currently cannot access advice.

The advice gap is the defining challenge of UK retail financial services. It will not be closed by regulatory initiatives that create lighter categories of advice. It will not be closed by an ageing workforce in which fewer than 500 registered advisers are under the age of 25. It will be closed by technology that makes full, compliant, high-quality advice dramatically more efficient to deliver. That technology exists today, and it is improving rapidly.

The regulatory choices the FCA makes now will determine whether this transformation continues to serve consumers well. We would urge the FCA to prioritise three things: clarity on the standards AI-generated documentation must meet, so firms can invest with confidence; vigilance on the regulatory perimeter as AI services become more sophisticated, particularly regarding undisclosed commercial interests in generic AI platforms, so consumers remain protected; and a proportionate approach that encourages responsible AI adoption rather than creating barriers to innovation.

We welcome the opportunity to engage further with the FCA on these questions and would be happy to demonstrate our platform and discuss our approach to compliance, testing, and human oversight in more detail.

Contact: For further discussion of any points raised in this submission, please contact Millie at millie@gomarloo.com
