The Avellaneda-Stoikov algorithm is a popular market making model. It has recently seen wide adoption in crypto, both by individual traders and by larger organizations such as Elixir.
Hummingbot has been a very popular open source implementation of said algorithm.
However, the options out there are far from robust:
While the resulting equations from the paper are concise, and the concepts used to develop them are sound, the algorithm has significant limitations that must be addressed to make it effective in practice.
Assumptions too perfect
The algorithm relies too heavily on theoretical concepts and assumes too much about the market. It may work well in a perfect world, but in real-life markets, things are never perfect. Therefore, the algorithm requires a lot of customization to make it work effectively in specific markets. This rigidity makes the algorithm less adaptable and challenging to apply.
Too many first-order approximations
It also makes many first-order approximations, which can make the model very inaccurate when input values fall far from the expansion point, outside the radius of convergence.
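As a toy illustration (not from the paper), compare the first-order approximation e^x ≈ 1 + x against the true function: it is accurate near the expansion point x = 0 but degrades quickly away from it.

```python
import math

def first_order_exp(x: float) -> float:
    # First-order Taylor expansion of e^x around x = 0.
    return 1.0 + x

# Relative error near the expansion point vs. far from it.
near = abs(math.exp(0.1) - first_order_exp(0.1)) / math.exp(0.1)
far = abs(math.exp(2.0) - first_order_exp(2.0)) / math.exp(2.0)

print(f"relative error at x=0.1: {near:.4f}")  # well under 1%
print(f"relative error at x=2.0: {far:.4f}")   # more than half the true value
```

The same thing happens when a linearized market model is fed a regime it was not linearized around.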
Trying to optimize global profit over extended periods of time
The equations provided give optimal conditions for utility over the proposed time frame. However, as practitioners know, market forces are rarely, if ever, stationary, and they exhibit a large degree of heteroskedasticity.
To address this issue, I propose taking inspiration from game development. Game developers know the closed-form position equation under constant acceleration (p = p0 + v0·t + ½at²), but they do not assume a perfect system and try to predict the future with it. Instead, they update position and velocity with a small incremental step each frame. This approach allows them to respond to changing conditions in real time.
The trading equivalent would be to compute the marginal cost of actions rather than their global cost. Therefore, we could take a similar approach with the Avellaneda Stoikov algorithm. Rather than assuming a perfect system and trying to predict the future, we can look at the model incrementally and try to replicate what the data tells us about what just happened in our system.
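The game-loop idea above can be sketched in a few lines. This is a minimal illustration (the numbers are made up): state is integrated with small per-step increments (semi-implicit Euler) rather than a closed-form prediction, so the force could change at any step and the simulation would simply respond — the behavior we want from an incremental market making model.

```python
def step(p: float, v: float, a: float, dt: float) -> tuple[float, float]:
    v = v + a * dt  # update velocity from the current force
    p = p + v * dt  # then position from the updated velocity
    return p, v

p, v = 0.0, 1.0
dt = 0.001
for frame in range(1000):  # simulate 1 second in 1ms steps
    a = 2.0                # could be recomputed every frame in a real game
    p, v = step(p, v, a, dt)

# With constant a this converges to the closed form p0 + v0*t + 0.5*a*t^2
print(p, v)
```

In trading terms, each "frame" is a market data tick, and the "force" is whatever the most recent data says about drift, volatility, or fill intensity.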
Inventory volatility is not assumed to be correlated with inventory gain
The risk of inventory is modeled with sigma, as usual. However, every time we take on inventory, the fill is highly correlated with a price move in the ‘bad’ direction for the market maker. Not taking the losses due to this effect into account can render the entire result useless for practical purposes.
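A synthetic simulation (all numbers made up) shows why this matters: at the same sigma, fills that are independent of the next price move break even on average, while fills correlated with an adverse move lose money that a sigma-only risk model never sees.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sigma = 1.0

# Price move in the interval right after each unit buy fill.
independent_moves = sigma * rng.standard_normal(n)

# Adverse selection: conditional on our buy being filled, the expected
# move is negative (someone sold to us for a reason). The -0.3 shift
# is an arbitrary illustrative value.
adverse_moves = sigma * rng.standard_normal(n) - 0.3

pnl_independent = independent_moves.mean()  # ~0: what a sigma-only model expects
pnl_adverse = adverse_moves.mean()          # ~-0.3: what we actually realize

print(pnl_independent, pnl_adverse)
```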
Price impact model not observed empirically
The exponential price impact assumption used in the paper may not be observed empirically in your market of interest; in fact, the power-law model may not be either. In reality, we may observe some curve that has no analytical form. In these cases, it can be better to fit an empirical curve to the data and use it to calculate the rest of the downstream models.
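A sketch of what fitting a fill-intensity curve from data could look like. The synthetic data here happens to be exponential (made-up parameters A = 10, k = 1.5), so a log-linear fit recovers it, but the same binning of observed fills against quote distance works for any shape — you can interpolate the empirical bins directly when no analytical form fits.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observed" fill rates at various quote distances delta,
# with multiplicative noise.
deltas = np.linspace(0.01, 2.0, 50)
observed = 10.0 * np.exp(-1.5 * deltas) * np.exp(0.05 * rng.standard_normal(50))

# Fit log(lambda) = log(A) - k * delta with a least-squares line.
slope, intercept = np.polyfit(deltas, np.log(observed), 1)
k_hat, A_hat = -slope, np.exp(intercept)

print(f"fitted A ~= {A_hat:.2f}, k ~= {k_hat:.2f}")
```

If the log-space residuals of a fit like this are visibly curved, that is evidence the assumed functional form is wrong for your market.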
Variables are too tightly coupled
The variables in the algorithm are also too tightly coupled. This makes it challenging to customize the behavior of each parameter individually, which limits the flexibility of the algorithm. It also makes it difficult to diagnose problems and correct them in a targeted way. Ideally, model parameters should be orthogonal to each other in terms of model behaviour.
In my mind, the two basic concerns of market making are:
Managing inventory
Choosing appropriate spread width
Ideally we want a model in which the trader can directly target these two behaviors themselves.
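A hypothetical sketch of what that could look like (the function and parameter names are mine, not from any paper): `half_spread` controls spread width, and `skew_per_unit` controls how hard inventory is pushed back toward its target. Each parameter maps onto exactly one of the two behaviors, and neither affects the other.

```python
def quotes(mid: float, inventory: float, target_inventory: float,
           half_spread: float, skew_per_unit: float) -> tuple[float, float]:
    # Shift the reservation price against excess inventory so quotes
    # lean toward unloading it; then apply the spread symmetrically.
    reservation = mid - skew_per_unit * (inventory - target_inventory)
    return reservation - half_spread, reservation + half_spread

# Long 5 units against a target of 0: both quotes shift down,
# making our ask more likely to fill and our bid less likely.
bid, ask = quotes(mid=100.0, inventory=5.0, target_inventory=0.0,
                  half_spread=0.10, skew_per_unit=0.02)
print(bid, ask)
```

Doubling `half_spread` widens the quotes without changing the inventory lean, and doubling `skew_per_unit` leans harder without touching the width — the orthogonality we want.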
Closing
Of course, I understand these papers are meant to illustrate simple concepts, and pieces are to be taken by practitioners and applied with care. However, much of the recent literature on market making still focuses on finding solutions to the global utility version of the market making problem.
My work over the past few months has been on a practical market making model, and I will be writing about it soon.