01: The Evolution of eFX

David, you began your career in FX in 1987. Tell me a little bit about your experience.

David: We were a group of currency options traders, and it was at Dresdner where we really started off. Anything we had to do with eFX came very late in our time at Dresdner. They had built a thing called ‘Piranha’, which was a pretty slick-looking front end with very little sitting behind it in terms of rate formation or risk management; it was real shop window kind of stuff. The limitations of that, the lack of pricing sophistication or any risk management, were starting to become evident to my business partner and me. We had started to address that when Barclays came calling, and they wanted to get us in there to run the options business.

Our boss, Ivan Ritossa, mentioned it might be a good idea if we looked at the FX pricing and risk management at BARX. It was a similar story, to be fair: they had a great-looking shop window and, looking back, it wasn't that big a deal, but they had a few customers on it and a bit of flow going through, enough to get us some data and so on. We started having a wee look at it and, predictably enough, we took the same sort of approach to rate formation and risk running as we did with the currency options business, which was very much data-driven market making: efficient market theory, portfolio of risk, skew and all that kind of stuff. And we progressively built out some simple pricing and risk management.

At the same time, we had some smart friends of ours whom we had hired, and they were building what we thought was going to be an analytics system. But because they were bloody clever, they built it around a complex event processing (CEP) engine, and that became the foundation of everything we did going forward. The CEP engine was awesome, because you could just try stuff: an instrument, a particular node, and it would give you an update on how it was doing, and we could isolate what was happening. If it wasn't going so well, we would rip it out and try something different. It was a real process of Darwinism: try stuff out and just adapt. That was how we did it, and then it was just a process from there on of improving the thing and running it super data-driven, being really diligent about instrumentation. As much as anything, it was really just about trying things out and isolating what didn't work. If you do that for long enough, what's left is stuff that does work, and that's really what BARX was about.
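
[Editor's note: a minimal sketch of the pattern David describes, not the BARX engine itself; names like `Node` and `record` are hypothetical. The point is that every unit in the event-processing graph is small, swappable, and instrumented, so an underperforming idea can be isolated and ripped out.]

```python
from collections import deque
from statistics import mean

class Node:
    """One pluggable node in an event-processing graph."""

    def __init__(self, name, fn, window=1000):
        self.name, self.fn = name, fn
        self.scores = deque(maxlen=window)   # rolling performance samples

    def on_event(self, event):
        # Transform one event; nodes chain together into the graph.
        return self.fn(event)

    def record(self, score):
        # Attribute a performance sample (e.g. realized markout) to this node.
        self.scores.append(score)

    def health(self):
        # Rolling report of how the node is doing; poor health -> rip it out.
        return mean(self.scores) if self.scores else None
```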


What were the biggest changes you experienced back then? 

I suppose the big thing was that when we got started at Barclays there was a very limited number of consistent sources of electronic price discovery. The dominant environments were EBS and Reuters, and then there were the challenger ECNs, like Hotspot, Currenex and whatnot, and obviously those things have taken off. So now you've got a lot of admittedly fragmented price discovery. There are clearly things you can do to try and get an edge; we always thought about rate source manipulation.

So we would be back at Barclays streaming to market-user banks, who would themselves have access to Reuters or EBS. It was possible, and some did it, that they would manipulate the rate, you know, bid it up in order to sell it and stuff. Yeah, they could do that. Latency arbitrage was just a fact of life, one of the things that we were terrible at dealing with initially, but you quickly learn by being kicked in the head often enough. And then there's dealing not full. I'm not 100% sure those things were not exploited by voice traders back in the day; I mean, spoofing was pretty endemic, dealing not full, sweeping. That was a fact of life, you know. Going on calls, that was exactly what you did: make me a market in 10, at the same time as you're selling to 10 other guys in 10. That translated to the electronic space.

These things present challenges to electronic market makers, but because we've got so much more data to deal with, you have to acknowledge that it's a vulnerability in the system. If you've calibrated to work in 1 and somebody is selling 10 across the market, that's probably not going to work out that well. But you can use data to isolate when this stuff is going on; you get a number of samples, and the picture becomes pretty clear. And you modify the quality of service that that particular consumer receives.
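
[Editor's note: a minimal sketch of the kind of analysis David describes; the field names, thresholds and the post-fill markout horizon are all illustrative, not BARX code.]

```python
from collections import defaultdict
from statistics import mean

def markout_pips(fill, mid_after):
    # Maker's signed P&L, in pips, measured some interval after the fill.
    # Consistently negative markouts suggest the flow is running ahead of
    # the rate (latency arbitrage, sweeping across the market).
    sign = 1 if fill["side"] == "buy" else -1     # client buys -> maker is short
    return sign * (fill["price"] - mid_after) * 1e4

def quality_of_service(fills, mids_after, toxic_pips=-0.1, min_samples=50):
    # Bucket clients by average markout once there are enough samples
    # for the picture to become pretty clear.
    per_client = defaultdict(list)
    for fill, mid_after in zip(fills, mids_after):
        per_client[fill["client"]].append(markout_pips(fill, mid_after))
    return {
        client: ("defend" if mean(ms) < toxic_pips else "normal")
        for client, ms in per_client.items()
        if len(ms) >= min_samples
    }
```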

So I guess the biggest change is everything just becoming data-driven, right? Rather than being driven by anecdote, it became driven by data, because the machine was producing the rate and you could customize that rate; you were able to tune prices so that you delivered the price that a particular customer deserved. I think what's relevant here is the whole debate about last look, right? Last look really exists as protection against people who are systematically taking advantage of the turn of the rate. If you've got clients who don't do that, then the likelihood is competitive pressure means that no one will last-look their business. If, on the other hand, you've got somebody where the only time they trade is when they're trying to capture the turn of the price, by which I mean going from, you know, 18.2, 18.6 to, I don't know, whatever, 17.8, 18.2, then if that's the only time you're looking to trade, you're probably going to get either punitive, defensive last look parameters, or the maker is going to say, well, I don't really want to deal with a client who's only doing that kind of business.
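
[Editor's note: the mechanics of last look, reduced to a sketch; the symmetric price check and per-client tolerance are illustrative of how the window is commonly described, not of any particular venue's implementation.]

```python
def last_look_accept(quoted_price, current_mid, client_side, tolerance_pips):
    # Compare the rate now with the rate that was quoted. Reject only if
    # the move against the maker exceeds a tolerance calibrated to the
    # client: loose (or disabled) for benign flow, tight for flow that
    # only arrives on the turn of the price.
    move_pips = (current_mid - quoted_price) * 1e4
    against_maker = move_pips if client_side == "buy" else -move_pips
    return against_maker <= tolerance_pips    # True -> fill the trade
```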

So the ability to customize the client experience was key in eFX. It wasn't the kind of thing you could do as a human once you started dealing in milliseconds and microseconds, and it was never really an issue in the voice space. So you had a real revenge-of-the-nerds thing going on, where it went from the crafty trader veteran to the guys who understood network topology. It's not a static environment, and it's not static now.

Considering your background in banking, what advice would you give to the guys in the retail space?

The retail market is very much on its way to looking like the institutional space has looked for a long time: spread compression. The tighter spreads get, the more predatory customers, people who want to take advantage of latency arbitrage, or the ability to deal not full, or to deal for effect if you like, can play those kinds of games, right?

Let's go back to a fantasy land where your EURUSD was three pips wide. You're going to struggle to capture the turn of a price against that kind of spread. In other words, the spread covers up a multitude of sins. You start getting spreads down to 0.1, 0.2 of a pip and whatnot, and it becomes kind of trivial for people to trump what little spread there is with information superior to what you've got.
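
[Editor's note: the arithmetic, as a rough illustration with made-up round numbers.]

```python
def breakeven_move_pips(spread_pips):
    # A taker who crosses the spread needs the stale quote to be wrong by
    # more than the half-spread, before the maker reprices, just to break even.
    return spread_pips / 2

for spread in (3.0, 0.5, 0.2, 0.1):
    print(f"spread {spread:>3} pips -> need a stale move of "
          f"> {breakeven_move_pips(spread):.2f} pips to profit")
```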

The classic sort of model of banging stuff into the B book starts to get seriously challenged when spreads have compressed to the point that they have now. So it's becoming institutional in character; once you get retail spreads to institutional levels, the distinction between retail and institutional really becomes very blurred. And so I guess you've got to start applying the same sort of institutional techniques, discrimination and judgment about what you want to put in your B book and what you don't.

So how can retail brokers improve the way they run their business? 

They are effectively institutional market makers, right? In a lot of cases, they've outsourced the rate formation to prime of primes or heavily aggregated price streams, which pretty much guarantees that their rate evolution is going to be slow. If you get a move, let's say down, in a price, and you have 15 or 20 LPs in your pool, one guy is going to move fastest, and the other guys move in succession, like cars pulling away from traffic lights. And we all know, right, that the accordion effect means the last guy to move away from the lights does it quite a long time after the lights have changed. That's what happens with prices on the turn in a heavily aggregated environment, because of phase differences in when the LPs receive the information, differences in the speed with which they process it, and throttles on how fast they can publish it. By putting more liquidity providers into the mix, you guarantee that you're predicating your best bid or offer, which is generally what people use to make their B book decision, not on the first car to get away from the lights, but on the last guy, in his crappy old Skoda.
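
[Editor's note: a toy simulation of the traffic-light effect; the exponential lags and their scale are entirely illustrative. After a move, the stale best bid/offer in the aggregate survives until the slowest LP reprices, i.e. the maximum of the lags, which grows with the number of LPs.]

```python
import random

def ms_until_all_repriced(n_lps, trials=10_000, mean_lag_ms=20.0):
    # Expected time until every LP has repriced after a move. Each LP's
    # lag is drawn independently; the stale top of book lives until the
    # slowest one moves.
    total = 0.0
    for _ in range(trials):
        total += max(random.expovariate(1.0 / mean_lag_ms) for _ in range(n_lps))
    return total / trials

for n in (1, 5, 15, 20):
    print(f"{n:2d} LPs: stale best bid/offer survives ~{ms_until_all_repriced(n):.0f} ms")
```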

So instead, you need to listen to the information earlier in the piece, as soon as you practically can. You need to be able to display a rate that represents your business, one that is crafted to match the different client sets that a B book broker faces, and perhaps incorporate some skew to encourage the portfolio to clear itself.

If you're using other people's rates, obviously whatever skew, whatever positional information is on them, is appropriate to their position, not yours. If you're running risk, it seems to me you have a preference for one side of the rate or the other, so that you can achieve greater internalization, lower ambient amounts of risk, less VaR utilization, and less volatile P&L as a result. So a little bit of skew is good, and the only way you can incorporate it is to have the pricing be aware of what your position is. If you're taking responsibility for pricing different client types, you need different models, different behaviors, to map to the different classes of client; obviously you probably want a different model, with different characteristics, to face off against very pure retail versus, say, B2B business. You end up needing technology that allows you to listen to the source information as quickly as possible, to create bespoke models appropriate to the classes of client you face, and to incorporate skew according to your position.
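
[Editor's note: a minimal sketch of position-aware skew; the linear lean and the coefficient are assumptions for illustration, and real pricing engines shape this far more carefully.]

```python
def skewed_quote(mid, half_spread, position_mio, skew_pips_per_mio):
    # Lean the pricing mid against the position so the risk-reducing side
    # is the more attractive one. Long (position > 0) -> shade both sides
    # lower: buyers hit our cheaper ask and take the position off our hands.
    lean = -skew_pips_per_mio * position_mio * 1e-4
    pricing_mid = mid + lean
    return pricing_mid - half_spread, pricing_mid + half_spread

# e.g. long 25m EURUSD, mid 1.08500, 0.2 pip half-spread, 0.004 pips skew per million
bid, ask = skewed_quote(1.08500, 0.2e-4, 25, 0.004)
print(f"bid {bid:.5f} / ask {ask:.5f}")
```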

Basically, you need to adopt the same technology the institutional guys have had for a while, and that's kind of what Mahi is about: the ability to drop institutional-grade pricing and risk management technology, with Compass, right into the business which is originating it, with instances separate and distinct for each client, so that it can become something which reflects that business. It isn't a one-size-fits-all proposition.

What do you think will be the outcome for brokers that choose not to operate on a more institutional level?

Take the B book broker first, right? If they're deciding what goes into the book on a heavily aggregated stack, we see it in the analyses we run: a decent proportion of the flow going into the B book is going in on the wrong sort of mid at inception. Now, that's going to destroy value. Back in the day you would take the view that being offside at inception isn't going to be too big a deal, because there's a behavioral overlay where they will stop themselves out. That's not true once you get into the institutional space; when the spreads are so tight, people can be in and out of positions, which basically locks the loss inside the B book. The profitability of the B book will diminish as spreads compress if you're not filtering out the stuff which is at the wrong price. So that's the first thing; you've kind of got to do that. The second thing that doesn't make sense to me, and never has, is to run gargantuan amounts of risk. At the point of inception, you've got the best proportion of spread to mid versus risk that you're ever going to have. So the ratio, if you like, of signal to noise, at the birth of the trade, is at its absolute best. And then very quickly, as time goes by and you hold that position, your edge is gone and all you're left with is noise. So the ratio of signal to noise just gets worse and worse and worse. In other words, the Sharpe ratio, the volatility of the P&L, just gets worse by virtue of holding the risk. So what you actually want to be doing is encouraging the portfolio to self-clear, judiciously hedging at the fringes of your risk, and that ends up with greater predictability of returns, higher quality of returns. And if you're doing it well, there's no diminution in returns. In fact, returns generally become greater, because in most of the yield profiles we ever see, there is more yield early on in the piece than there is later in the piece. So in other words, if you can take your holding periods to the left, shorten them, not only do you improve Sharpe ratios and whatnot, but you actually end up making more cash.
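
[Editor's note: the signal-to-noise intuition written down as a stylized model; the numbers are arbitrary. The edge is fixed at inception at roughly half the spread, while the noise grows with the square root of the holding time.]

```python
import math

def edge_to_noise(half_spread_pips, vol_pips_per_sqrt_sec, holding_secs):
    # Edge is banked at birth (half the spread); price noise scales like
    # vol * sqrt(t), so the ratio only deteriorates as the hold lengthens.
    return half_spread_pips / (vol_pips_per_sqrt_sec * math.sqrt(holding_secs))

for t in (1, 10, 60, 600):
    print(f"hold {t:4d}s: edge/noise = {edge_to_noise(0.1, 0.05, t):.2f}")
```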

If you take your holding period to zero, then you're an A book broker, but at that point you're paying somebody else's spread, so that's probably not an optimal model either. What it ends up being is what we believe is intelligent market making, where you're looking to keep holding periods as short as possible, to get the highest yield for the least amount of risk and the highest quality of returns. The guys who do this most successfully do it with some degree of network effect, because as the holding periods become shorter and shorter, the quality of the returns gets higher and higher. They can therefore either enjoy better profits or, if they wish, because the elasticity of all of this makes sense, they can actually reduce costs for their customers and acquire more business. So I think, in the end, the people who are going to grow the business are going to end up looking a lot like institutional market makers, and they're going to achieve the network effect, which ends up with a greater amount of consolidation.


