Finance has always evolved around trust. From the first instruments of credit to the emergence of central banks, from clearing houses to electronic trading, the story has been one of institutions slowly absorbing new technologies, sometimes with enthusiasm, often with resistance. Each transformation brought greater speed or efficiency but also new risks, new duties, and in many cases new failures. Artificial intelligence does not fit easily into this pattern. It unsettles something more fundamental: how financial reasoning itself is produced, communicated, and relied upon.
What we call financial analysis has never been a mechanical reading of numbers. Every analyst note, every rating, every investment memorandum is a structured argument about the future, rooted in premises about the present and assumptions about how long those premises will hold. We sometimes disguise this under the convenient labels of bullish or bearish sentiment, but finance has always been argumentative at its core. A claim about future margins or risk persuades because of how it links premises together and because of the confidence we attach to its horizon.
Recent developments in AI reveal something unsettling about this process. Machines can now extract these argumentative structures from financial texts, distinguish claims from premises, attach time horizons to forecasts, and generate their own reasoned analyses.
This is not sentiment analysis or simple pattern recognition. It is argument mining. When a system can map the logical chains that drive financial reasoning, it can also simulate them and eventually produce them independently. At that point we no longer speak of a tool but of a participant in the same reasoning space once reserved for analysts, managers, and regulators.
I have written before about how our digital wallets are beginning to outgrow us, anticipating choices and making payments before we reflect on them. That represents more than efficiency. It marks a transfer of agency. When a system infers our risk appetite and acts accordingly, we cease to be the sole authors of our financial decisions.
What we observe in financial services broadly is the extension of that same logic. AI systems now play different roles in simulated trading environments: one takes the part of the analyst, another the trader, another the head trader. Together they approximate the operation of a real trading desk. Scale it up and you simulate a market. Scale further and you simulate the very process of persuasion itself.
The danger here is obvious yet underexplored. Persuasion is not the same as truth. I have witnessed too often how polished presentations can carry weak analyses past sceptical audiences.
Machine-generated reports already persuade human experts without demonstrating superior accuracy. This matters profoundly. Fluency without accuracy becomes dangerous when deployed at scale. In financial markets, it amounts to algorithmic mis-selling. If reports can be produced rapidly with confident tone but without sound reasoning, we risk not only individual losses but systemic erosion of the trust that markets require to function.
This connects directly to concerns I have about the invisible prison we build when we treat algorithmic outputs as if they possessed independent authority.
The European Union has begun constructing regulatory frameworks for this landscape through the AI Act, the Markets in Crypto-Assets Regulation (MiCA), and the Digital Operational Resilience Act (DORA), yet none speaks directly to arguments themselves. None demands that AI systems disclose their premises, their assumed scenarios, or the time horizons attached to their forecasts. We regulate the infrastructure but not the reasoning that flows through it.
This gap points toward necessary extensions of our regulatory approach. Just as prospectuses must disclose material risks, machine-generated financial analyses should disclose their reasoning chains. We need standards that require AI systems to make explicit their assumptions and the scenarios under which their conclusions might fail. This is about maintaining the connection between persuasion and accountability that markets depend upon.
The question of sovereignty emerges directly from this analysis. We cannot afford to import opaque reasoning systems built elsewhere and accept their conclusions without scrutiny. As I argued in examining how European law extends beyond its borders, financial authorities need the capacity to certify, audit, and if necessary reject AI agents whose decision-making processes cannot be adequately examined. Financial sovereignty has always meant more than controlling currency. It means retaining the ability to set standards, enforce accountability, and contest the reasoning that shapes economic decisions.
This challenge will likely require new professional competencies. We may need specialists whose role is auditing reasoning rather than arithmetic, examining not only whether calculations are correct but whether logical chains are coherent and premises justified. Regulators might impose reasoning disclosure requirements similar to current financial disclosure mandates. This will prove uncomfortable because it exposes the weaknesses in analyses that hide behind confident presentation, but it represents a necessary evolution if we mean to keep persuasion anchored to truth.
The fundamental tension running through these developments is between liberation and surrender. AI agents offer genuine liberation: they can simulate complex scenarios, identify risks human analysts might miss, and streamline compliance processes. Yet they also invite surrender, often without our noticing. We hand over the act of reasoning itself, the weighing of assumptions, the framing of forecasts. As with the smart wallet, the convenience is real, but so is the gradual erosion of our own analytical capabilities.
Financial services in the age of AI will not be determined by which institution deploys the largest models. The decisive factor will be whether trust can survive when arguments are produced by machines rather than humans. Trust will endure only if persuasion remains anchored in verifiable reasoning and clear accountability. If we fail to insist on these requirements, we risk discovering that the arguments shaping our markets are no longer truly ours. We will not have lost control in a dramatic crisis, but gradually, by surrendering to the convenience of systems that speak with confidence but demonstrate little concern for truth.
Article by Dr Ian Gauci
This article was first published in The Times of Malta on 21 September 2025.
Photo credits: Times of Malta