
This is no way to run a financial system

The micro-cracks are turning into fissures, soon to be gaping crevasses as (finally) the obsolescence of our industrial age banking system plays itself out in spectacular front page headlines. Meanwhile it would seem that our society and our leaders are (mostly) frozen in some kind of macabre trance – eating popcorn and mesmerized by the inevitable Crash.

If you look at the LIBOR scandal in the context of the technology of the fast emerging information economy, it is absolutely mind-boggling that such an anachronistic process even exists in the world of 2012. In a world where every financial flow is digitized and only really exists as an entry in a database. In a world where truly enormous real-time data sets (ones that make the underlying data required for a true LIBOR look puny) are routinely captured and analyzed in the time it takes to read this sentence. In a world where millions (soon billions) of people have enough processing power in their pocket to compute complex algorithms. In a world where a high school hacker can store terabytes of data in the cloud.  In this world, we continue to produce one of the most important inputs into global financial markets using the equivalent of a notebook and a biro… WTF???

You think I’m joking? Libor is defined as:

The rate at which an individual Contributor Panel bank could borrow funds, were it to do so by asking for and then accepting inter-bank offers in reasonable market size, just prior to 11.00 London time.

For each (of 10) currencies, a panel of 7-18 contributing banks is asked to submit its opinion (yes, you read that right) each morning on what each rate (by maturity) should be. The published rate is then the “trimmed arithmetic mean”: basically they throw out the highest and lowest submissions and average the rest. No account is taken of the size, creditworthiness or funding position of each bank, and the sample size after the “trimming” for each calculation is between 4 and 10 banks. However, the BBA assures us that this calculation method means that:

…it is out of the control of any individual panel contributor to influence the calculation and affect the bbalibor quote.

You don’t need to be a banker, a quantitative or statistical genius, an expert in sociology, or even particularly clever to figure out that this is a pretty sub-optimal way to calculate any sort of index, let alone one that affects the pricing and outcomes of trillions of dollars’ worth of contracts…
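To see just how simple the mechanics are, here is a minimal sketch in Python of the trimmed arithmetic mean described above (the panel submissions and the exact trim fraction are made-up illustrations, not the official BBA specification):

```python
# A minimal sketch of the "trimmed arithmetic mean" described above:
# discard the top and bottom slice of submissions, average the rest.
# The submissions below are invented numbers for illustration only.

def trimmed_mean_fixing(submissions, trim_fraction=0.25):
    """Drop the highest and lowest fractions of submissions, average what remains."""
    ordered = sorted(submissions)
    k = int(len(ordered) * trim_fraction)          # how many to drop at each end
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)

# Hypothetical 3-month panel: one opinion per bank, no sizes, no credit data.
panel_submissions = [0.452, 0.455, 0.460, 0.461, 0.463, 0.470, 0.471, 0.520]
print(round(trimmed_mean_fixing(panel_submissions), 5))
```

That is essentially the whole methodology: a handful of opinions, sorted and averaged.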

In the 1980s when LIBOR was invented – and (lest the angry mob now try to throw the baby out with the bathwater) it should be said it was an important and good invention – this methodology might well have been acceptable as the “best practical solution available given the market and technological context.” Banks used to have to physically run their bids in Gilt auctions to the Bank of England (which is why historically banks were located in the City; it was tough to compete on that basis from the West End or Canary Wharf, at least without employing a few Kenyan middle-distance Olympians…) But you know what? And this is shocking, I know… They don’t do it that way anymore!!!

So if LIBOR is important (and it is), how should we be calculating this in the 21st century? Here are a few ideas:

  • include all banks participating in the market – and not necessarily just those in London – how about G(lobal)IBOR??
  • collect and maintain (in quasi-real time) important meta-data for each contributing bank (balance sheet size and currency breakdown of same by both deposits and loans, credit rating, historical interbank lending positions, volatility/consistency of submissions, derivative exposure to LIBOR rates, etc.)
  • collect rates and volumes for all realized interbank trades and live (executable) bids and offers (from say 9-11am GMT each day)
  • build robust, complex (but completely transparent and auditable) algorithms for computing a sensible LIBOR fixing from this data; consider open-sourcing this using the Linux model (you might even get a core LIBOR and then forks that consenting counterparties might choose to use for their transactions, which is fine as long as the calculation inputs and algorithms are totally transparent and subject to audit upon request¹) – a toy sketch of such an algorithm follows below
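As a toy illustration of that last point, here is what a transaction-based, volume-weighted fixing might look like in Python. The trade record fields, the 9-11am window and the volume-weighted median are my own placeholder choices, not a specification of how a real newLIBOR model should work:

```python
# A toy sketch (not a real methodology) of a transaction-based fixing:
# take all realized interbank trades in the collection window and compute
# a volume-weighted median rate, so no single bank's quote can dominate.
from dataclasses import dataclass

@dataclass
class InterbankTrade:            # hypothetical trade record
    lender: str
    borrower: str
    rate: float                  # annualised rate, in percent
    volume: float                # notional, in millions

def volume_weighted_median(trades):
    """Return the rate at or below which half of the traded volume was done."""
    ordered = sorted(trades, key=lambda t: t.rate)
    total = sum(t.volume for t in ordered)
    running = 0.0
    for t in ordered:
        running += t.volume
        if running >= total / 2:
            return t.rate
    return ordered[-1].rate

trades = [
    InterbankTrade("Bank A", "Bank B", 0.455, 250),
    InterbankTrade("Bank C", "Bank A", 0.462, 600),
    InterbankTrade("Bank D", "Bank E", 0.470, 150),
]
print(volume_weighted_median(trades))   # -> 0.462
```

The point is not this particular formula but that the fixing would be computed from actual trades and volumes, by code anyone can read and audit, rather than from a survey of opinions.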

This is not only possible, but in fact relatively trivial today. Indeed companies like the Climate Corporation*, Zoopla*, Metamarkets*, Palantir, Splunk (and dozens and dozens more, including newcomers like Indix* and Premise Data Corp) regularly digest, analyze and publish analogous datasets that are at least as big and complex as (and almost certainly far bigger and more complex than) the newLIBOR I’m suggesting.

Indeed, the management of this process could easily be outsourced to one – or better, many – big data companies, with a central regulatory authority playing the role of guardian of standards (the heavy lifting of which could actually be outsourced to other smart data processing auditors…) In theory this “standards guardian” could continue to be the BBA (the “voice of banking and financial services”), but the political and practical reality is that it should almost certainly be replaced in this role, perhaps by the Bank of England. Given the global importance of this benchmark, though, I think it is also worth thinking creatively about what institution could best play this role. Perhaps the BIS? Or ISO? Or a new agency along the lines of ICANN or the ITU, call it the International Financial Benchmarks Standards Institute (IFBSI)? The role of this entity would be to set the standards for data collection, storage and computation, and to vet and safekeep the calculation models and the minimum standards (including the power to audit at any time) required to be an accredited calculation agent (a kitemark of sorts).

Under this model, you could have multiple organizations – both private and public – publishing the calculation, and if done correctly they should in principle all get the same answer (same data in + same model = same benchmark rate). A pretty basic “many eyes” principle to improve robustness and quickly identify corrupt data or models.
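A rough sketch of that “many eyes” check, assuming a shared (hypothetical) open-sourced model version and a common audited input feed; the agent names, the stand-in fixing function and the fingerprint scheme are purely illustrative:

```python
# Several independent calculation agents run the same published algorithm
# over the same input feed and publish a fingerprint of (algorithm version,
# inputs, result); any disagreement flags corrupt data or a corrupt model.
import hashlib, json

ALGO_VERSION = "newlibor-0.1"    # hypothetical open-sourced model version

def fixing(rates):
    return round(sum(rates) / len(rates), 5)   # stand-in for the real model

def publish(agent, rates):
    result = fixing(rates)
    payload = json.dumps({"algo": ALGO_VERSION, "inputs": rates,
                          "result": result}, sort_keys=True)
    return agent, result, hashlib.sha256(payload.encode()).hexdigest()

feed = [0.455, 0.462, 0.470]     # same audited input feed for every agent
reports = [publish(a, feed) for a in ("Agent A", "Agent B", "Agent C")]
assert len({(r, h) for _, r, h in reports}) == 1, "agents disagree: audit!"
```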

As my friend (and co-founder of Metamarkets and now Premise Data Corporation) David Soloff points out:

TRUST ONLY THE MACHINES.

And it’s not just LIBOR, as Gillian Tett highlights in the FT:

If nothing else, this week’s revelations show why it is right for British political figures, such as Alistair Darling, to call for a radical overhaul of the Libor system. They also show why British policy makers, and others, should not stop there. For the tale of Libor is not some rarity; on the contrary, there are plenty of other parts of the debt and derivatives world that remain opaque and clubby, and continue to breach those basic Smith principles – even as bank chief executives present themselves as champions of free markets. It is perhaps one of the great ironies and hypocrisies of our age; and a source of popular disgust that chief executives would now ignore at their peril.

Rather than join the wailing crowd of doomsayers, I remain optimistic. The solutions to this – and to other similar issues in global finance – either exist or are emerging at a tremendous pace. I know this because this is what we do here at Anthemis. But I’m clear-headed enough to know that we have only a tiny voice. It would seem that our long-predicted Financial Reformation is starting to climb up the J-curve. I just hope that, if Mr. Cameron does launch some sort of parliamentary commission, voices that understand both finance and technology are heard and listened to. Excellent, robust, technology-enabled solutions are entirely within our means; I’m just not confident that the existing players have the willingness to bring these new ideas to the table.

* Disclosure: I have an equity interest, either directly or indirectly in these companies.


¹ There may exist some good reasons for keeping some of the underlying data anonymous, but I think it would be perfectly possible to find a good solution whereby the data was made available to all for calculation purposes but the actual contributor names associated with each price, volume and metadata submission were kept anonymous and known only to the central systemic guardian. Of course you’d have to do more than just replace the bank name with some static code; it would need to change dynamically, with different keys for different calculation agents, etc. – but all very doable I’m sure. You’d be amazed what smart kids can do with computers these days.
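For what it’s worth, a rough sketch of how such dynamically changing codes might work, using per-agent, per-day keys held only by the central guardian; the key scheme and names here are my own assumptions, not a worked-out design:

```python
# The guardian replaces each contributor's name with a pseudonym derived
# from a secret key that differs per calculation agent and per day, so the
# codes can't be correlated across agents or over time. Key handling is
# illustrative only; a real scheme would need proper key management.
import hmac, hashlib

def pseudonym(bank_name, agent_id, date, master_secret):
    key = f"{master_secret}:{agent_id}:{date}".encode()
    return hmac.new(key, bank_name.encode(), hashlib.sha256).hexdigest()[:12]

# Same bank, different agents -> unlinkable codes; only the guardian,
# who holds master_secret, can map them back to the contributor.
print(pseudonym("Bank A", "Agent 1", "2012-07-06", "guardian-secret"))
print(pseudonym("Bank A", "Agent 2", "2012-07-06", "guardian-secret"))
```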

