Breakingviews – Bank of England’s AI approach will need to harden

LONDON (Reuters Breakingviews) – The Bank of England last year chose artificial intelligence pioneer Alan Turing as the face of its new 50-pound note. It was a smart choice, given that the central bank will need to quickly find a way to ensure that the spread of intelligent machines in finance is a positive force rather than a destabilizing one.

A tour bus passes the Bank of England in London, Britain, August 1, 2018.

Computer programs with human-like problem-solving skills are already an integral part of the UK financial system. Two-thirds of companies polled by the BoE use such systems, Tom Mutton, the central bank’s fintech director, told Breakingviews in an interview. Banks and insurers are more advanced than most: they run anti-money laundering programs that scan and interpret documents, and use AI to help assess the risk of underwriting loans or insurance policies.

The central bank welcomes such efforts. One reason is that deploying AI could lower the costs of financial services, with the savings passed on to consumers. The Boston Consulting Group, for example, estimates that by 2027 AI software could reduce the tedious administrative work required of staff by 2.4 hours per employee per day in banking and 2.9 hours in capital markets. Another reason is that machines have proven more efficient than humans at detecting suspicious transactions. AI could help reduce the £1.2 billion stolen through fraud and scams each year, according to data from UK Finance.

Artificial intelligence can also help people dependent on informal credit regain access to the banking system. Consumers with uneven credit histories may be turned away by lenders and forced to resort to high-cost payday loans. But AI could offer a fairer representation of their credit history and give those people a chance to take out a normal loan, according to Mutton.

There is evidence to support this. AI models can assess creditworthiness better than traditional ones, according to a recent Bank for International Settlements study based on data from a Chinese fintech firm. Because AI models can incorporate a wider range of data, such as phone bill payments and other transactions, they can help people who might otherwise fall through the cracks of traditional credit scoring.
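The idea can be illustrated with a toy scoring function. This is a minimal, entirely hypothetical sketch – the fields, weights and blending rule are invented for illustration and do not represent any lender’s or the BIS study’s actual model. It shows how a “thin file” applicant with no loan history can look less risky once alternative data, such as a phone-bill payment record, is blended in:

```python
# Hypothetical sketch: blending alternative data into a credit score.
# All fields and weights are illustrative, not any real lender's model.

def credit_score(on_time_loans, total_loans, on_time_phone, total_phone,
                 use_alternative_data=False):
    """Return a score in [0, 1]; higher means lower assessed risk."""
    if total_loans == 0:
        traditional = 0.5  # thin credit file: no repayment history to go on
    else:
        traditional = on_time_loans / total_loans
    if not use_alternative_data or total_phone == 0:
        return traditional
    alternative = on_time_phone / total_phone  # e.g. phone-bill payment record
    return 0.6 * traditional + 0.4 * alternative  # illustrative blend

# A thin-file applicant with no loan history but a near-perfect
# record of paying phone bills on time:
without_alt = credit_score(0, 0, 23, 24)                            # 0.5
with_alt = credit_score(0, 0, 23, 24, use_alternative_data=True)    # ~0.68
```

With only traditional data the applicant sits at the neutral thin-file prior; the alternative data lifts the score, which is the mechanism Mutton describes.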

ROBOT OVERLORDS

But for all the potential benefits, AI comes with worrying risks. These will likely be aired at a forum bringing together the public and private sectors, which the BoE and the Financial Conduct Authority will launch in March. One potential problem is the extent to which AI models influence human decision-making and appetite for risk. That would interest regulators regardless of the technology involved. But extra attention is warranted if, for example, banks increasingly defer credit assessments to machines and become lulled into a false sense of security about the safety of their loans.

Another possible problem is what Mutton called “procyclicality and herding”. Different AI trading programs can be developed using similar data sets. The resulting algorithms could then buy and sell the same securities at the same time, amplifying asset price swings and exacerbating liquidity shortages in a market panic. A 2017 Financial Stability Board report warned that AI algorithms can make decisions humans find incomprehensible, and therefore hard to unpick once they have been deployed.
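The herding mechanism can be sketched in a few lines. The simulation below is a hypothetical toy, not a model of any real market: every trader applies the same momentum rule (as if trained on the same data), so their orders land on the same side of the market simultaneously and their combined impact feeds back into the price:

```python
import random

def momentum_signal(prices, lookback=3, threshold=0.01):
    """Sell (-1) on a sharp recent fall, buy (+1) on a sharp rise, else hold (0)."""
    ret = (prices[-1] - prices[-lookback]) / prices[-lookback]
    if ret < -threshold:
        return -1
    if ret > threshold:
        return 1
    return 0

def simulate(n_traders=10, steps=50, impact=0.05, seed=0):
    """Price path driven by random shocks plus the traders' combined order flow."""
    rng = random.Random(seed)
    prices = [100.0, 100.0, 100.0]
    for _ in range(steps):
        # Trained on similar data, every algorithm applies the same rule,
        # so all n_traders orders arrive on the same side at once.
        net_order = sum(momentum_signal(prices) for _ in range(n_traders))
        prices.append(prices[-1] + rng.gauss(0, 0.5) + impact * net_order)
    return prices
```

Because the signals are identical, a shock big enough to trip one algorithm trips all of them, and the `impact * net_order` term pushes the price further in the same direction – the amplification the BoE worries about.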

Mutton also says AI algorithms could engage in what regulators call “market abuse”, for example through front-running or spoofing. These are illegal attempts to gain an advantage in financial markets by trading ahead of others’ orders or by feigning interest in a security to stoke demand. Equally disturbing were the findings of researchers at the University of Bologna, who examined AI algorithms taught to set prices for consumer goods so as to maximize profits. Through trial and error, and without communicating directly, the algorithms ended up colluding to restrict competition and boost their returns.
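The Bologna setup used reinforcement-learning price-setters. The sketch below only illustrates the mechanics of that kind of experiment – two Q-learning agents repeatedly pricing against each other on an invented linear demand curve. The price grid, demand function and parameters are all assumptions for illustration; this toy does not reproduce the study’s collusive result:

```python
import random

PRICES = [1.0, 1.5, 2.0]   # illustrative discrete price grid
COST = 0.5                 # illustrative unit cost

def demand(own, rival):
    # Invented linear demand: undercutting the rival wins more sales.
    return max(0.0, 2.0 - own + 0.5 * rival)

def profit(own, rival):
    return (own - COST) * demand(own, rival)

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Each agent's Q-table maps (rival's last price, own price) -> value.
    q = [{(s, a): 0.0 for s in PRICES for a in PRICES} for _ in range(2)]
    state = [PRICES[0], PRICES[0]]  # last observed prices
    for _ in range(episodes):
        acts = []
        for i in range(2):
            s = state[1 - i]  # observe the rival's last price
            if rng.random() < eps:
                a = rng.choice(PRICES)            # explore
            else:
                a = max(PRICES, key=lambda p: q[i][(s, p)])  # exploit
            acts.append(a)
        for i in range(2):
            s = state[1 - i]
            r = profit(acts[i], acts[1 - i])
            s2 = acts[1 - i]  # rival's new price becomes the next state
            best = max(q[i][(s2, p)] for p in PRICES)
            q[i][(s, acts[i])] += alpha * (r + gamma * best - q[i][(s, acts[i])])
        state = acts
    return q, state
```

Each agent only sees prices and its own profit, yet both learn reactions to the other’s pricing – the channel through which the Bologna researchers found tacit coordination can emerge without any direct communication.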

Finally, there are ethical questions, such as how to prevent AI credit-scoring models from importing their programmers’ racial or gender biases, or even developing new ones. Without close supervision, bank managers might struggle to explain why some borrowers were denied a loan. Regulators can help by asking tough questions about artificial intelligence as early as possible.

Breakingviews

Reuters Breakingviews is the world’s leading source of agenda-setting financial insight. As the Reuters brand for financial commentary, we dissect the big business and economic stories around the world every day. A global team of about 30 correspondents in New York, London, Hong Kong and other major cities provides expert analysis in real time.

Sign up for a free trial of our full service at https://www.breakingviews.com/trial and follow us on Twitter @Breakingviews and at www.breakingviews.com. All opinions expressed are those of the authors.


