What if AI can stop us getting another Liz Truss?

After seeing the TV debate between Rishi Sunak and Sir Keir Starmer, I was struck by how acceptable it has become for leaders to make unfounded or at best dubious financial claims. Chief among these was Sunak’s accusation that a Starmer government would cost every working family an extra £2,000 in taxes. As the BBC reported, this was based on assumptions made by Conservative-appointed special advisers, not the Treasury.

Not a single one of Sunak’s advisers, I confidently predict, will have raised concerns that this claim may have crossed a regulatory line with consequences to follow. And that’s because, despite being more influential than any single financial institution, the office of the prime minister, working in tandem with the Treasury, is not regulated by the Financial Conduct Authority (FCA) or any other body – even though it oversees more than £2.7tn in economic activity per annum. In fact, although the FCA is an independent financial regulator, it reports to the Treasury and Parliament.

Of course, in a democracy there is a strong argument that government institutions cannot operate within the regulatory constraints required of commercial financial institutions such as banks and insurance companies. But it is also an oddity that various forms of dishonesty, from outright lying through to wilful misrepresentation, face no guardrails of any kind. Indeed, as the rise of populism attests, we have seen numerous examples of how dishonesty pays politicians – across the political spectrum – in spades.

So what, if anything, can be done? As in so many other spheres, the AI spotlight is now falling on politics. Before we can meaningfully discuss countermeasures to questionable pledges or accusations, we must find ways to call them out in real time. It is all very well for media outlets to clarify such matters after the fact, but in almost every case the time delay, coupled with the likelihood of a smaller audience for those corrections – the Sunak/Starmer debate itself was watched by five million people – means the strategy will continue for as long as the benefits outweigh the consequences.

Advances in AI will make it possible, I believe, to call out such inaccuracies in real time, before the roots of the dishonesty take hold. If industry-wide protocols can be agreed, a framework would then exist to consider meaningful consequences for bad conduct. Indeed, AI could also play a role in policy and past performance assessment, providing the public with a central reservoir of data to assess the records of different politicians and parties across the different portfolios.

This is a problem that can be fixed – so long as the voting public agrees on two things: that it is not OK to lie about binary, provable matters (as distinct from policy differences), and that all politicians, especially front benchers and their shadow equivalents, should be held to the same professional standards as doctors, lawyers and other professionals, who face the very real prospect of being struck off should they engage in similar conduct.

And it is a major problem. Many will recall a certain verifiable lie, printed on a bus, about £350m a week going to the NHS if the UK left the EU. Others might cite Liz Truss pressing a big red button at the same time as asking: “What’s this for?” In the spirit of honesty, I must declare there was no button and (probably) no such remark was made, but there may as well have been, such was the disastrous immediacy of the 2022 mini-budget’s impact on the bond markets and the broader economy.

As well as calling out inaccuracies and untruths, I foresee future elections in which AI will be able to codify the records of individual parties and candidates – much as a credit ratings agency does for borrowers – and make this data available to the public, as opposed to the slogan fest which currently characterises our elections.

From this we might conceive a body equivalent to the General Medical Council or the Bar Standards Board to oversee key political canons. It could be composed of cross-party politicians with good records, perhaps biased towards those who have retired and are not operating in the polarised maelstrom of the daily Commons dogfight. Operating with the very latest technology and a set of agreed behavioural parameters, the body could direct those most adept in the dark arts – depending on its mandate – to leave public life. In this way, a public body could play a role similar to that of the FCA for banks in holding those seeking elected office to account.

If AI can prompt this shift, why not? It might just save us all from the next Liz Truss.

By Ronan Donohue, founder of Q4 Capital Advisors.
