Market Making with the Help of AI


In this article, we discuss market making with the help of AI.

How does AI change market making?

“Artificial intelligence is to trading what fire was to the cavemen.” 

The Problem:

Market makers serve important functions in financial markets, providing liquidity and immediacy of trade execution. We are designing algorithms for automated market making under different market conditions, such as competitive versus monopolistic dealer markets, and studying outcomes in terms of indicators of market quality such as the bid–ask spread.
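The core mechanic behind liquidity provision can be sketched in a few lines: a market maker quotes a bid below and an ask above the mid-price, and profits from the spread when both sides trade. This is a minimal illustration, not the paper's model; all prices and names are hypothetical.

```python
# Minimal sketch of liquidity provision: quote both sides of the
# mid-price and earn the bid-ask spread on a round trip.
# All prices here are hypothetical.

def make_quotes(mid_price: float, half_spread: float) -> tuple[float, float]:
    """Quote a bid below and an ask above the mid-price."""
    return mid_price - half_spread, mid_price + half_spread

bid, ask = make_quotes(mid_price=100.0, half_spread=0.05)

# If a seller hits the bid and a buyer lifts the ask, the market maker
# captures the full spread on the round trip.
round_trip_pnl = round(ask - bid, 2)
```

In practice the half-spread is a decision variable: quoting tighter wins more flow but earns less per trade, which is exactly the trade-off the learning agents discussed later must manage.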

How does AI help trading?

To understand market making with the help of AI, you should first understand how AI helps trading in general.

According to a recent study by U.K. research firm Coalition, electronic trades account for almost 45 percent of revenues in cash equities trading.

And while hedge funds are more reluctant when it comes to automation, many of them use AI-powered analysis to generate investment ideas and build portfolios.

“Machine learning is evolving at an even quicker pace and financial institutions are one of the first adapters,” Anthony Antenucci, vice president of global business development at Intelenet Global Services, recently said.

When Wall Street statisticians realized they could apply AI to investment trading applications, he explained, “they could effectively crunch millions upon millions of data points in real time and capture information that current statistical models couldn’t.”

Companies also use AI to develop quantitative trading and investment strategies, combining evolutionary intelligence technologies with deep learning algorithms.

These algorithms translate model output into market actions with seamless execution, answering the current call to action from the AI space.

The study begins by introducing a set of benchmarks: a group of fixed “spread-based” strategies, and an online learning approach based on the work of Abernethy and Kale [2013], to anchor the performance in the literature.
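A fixed spread-based benchmark of this kind can be sketched very simply: the strategy always quotes a constant number of ticks either side of the mid-price, regardless of market state. The function and tick size below are illustrative assumptions, not the paper's exact parameterization.

```python
# Sketch of a fixed "spread-based" benchmark strategy: always quote a
# constant number of ticks either side of the mid-price, ignoring
# market state. Tick size and offsets are hypothetical.

TICK_SIZE = 0.01

def fixed_spread_quotes(mid: float, ticks: int) -> tuple[float, float]:
    """Quote `ticks` price increments either side of the mid-price."""
    offset = ticks * TICK_SIZE
    return round(mid - offset, 2), round(mid + offset, 2)

# A family of benchmarks is obtained by varying the tick offset:
benchmarks = {k: fixed_spread_quotes(100.00, k) for k in (1, 2, 5)}
```

Because such strategies never adapt, they make a natural baseline: any learning agent should at least beat the best fixed offset out-of-sample.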

Next, a pair of basic agents is presented, using an agent-state representation, a non-dampened PnL reward function, and Q-learning (QL) or SARSA.

These basic agents represent the best attainable results using “standard techniques” and are closely related to those introduced by Chan and Shelton [2001].
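The tabular updates underlying these basic agents can be sketched as follows. The state and action encodings (inventory bucket, quote offset) and the reward values are placeholder assumptions standing in for the paper's agent-state representation and non-dampened PnL reward; only the Q-learning and SARSA update rules themselves are standard.

```python
# Hedged sketch of tabular Q-learning and SARSA updates for a
# market-making agent. State/action encodings and rewards below are
# placeholders, not the paper's actual representation.
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.99      # learning rate, discount factor
Q = defaultdict(float)        # Q[(state, action)] -> estimated value

def q_learning_update(state, action, reward, next_state, actions):
    """Off-policy Q-learning step: bootstrap from the greedy next action."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def sarsa_update(state, action, reward, next_state, next_action):
    """On-policy SARSA step: bootstrap from the action actually taken."""
    Q[(state, action)] += ALPHA * (
        reward + GAMMA * Q[(next_state, next_action)] - Q[(state, action)]
    )

# Example step: state = (inventory bucket, spread bucket),
# action = quote offset in ticks, reward = PnL of the step.
actions = (0, 1, 2)
q_learning_update(state=(0, 1), action=1, reward=0.05,
                  next_state=(1, 1), actions=actions)
sarsa_update(state=(1, 1), action=0, reward=-0.02,
             next_state=(1, 2), next_action=2)
```

The only difference between the two agents is the bootstrap target: Q-learning uses the greedy next action, SARSA the one the policy actually took, which makes SARSA more conservative under exploration.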

The study then moves on to an assessment of the proposed extensions, each of which is evaluated and compared to the basic implementation. This gives an idea of how each modification performs separately. Finally, a consolidated agent combining the most effective methods is presented.

This approach is shown to produce the best out-of-sample performance relative to the benchmarks and the basic agent, and strong performance in absolute terms.

Research at the University of Liverpool investigated different learning algorithms, reward functions, and state representations, and consolidated the best techniques into a single agent shown to produce superior risk-adjusted performance.

Based on these results, the key areas for future research are:

(1) Apply more advanced learning algorithms, including true online variants that provide convergence guarantees with linear function approximation, options learning, and direct policy search methods.
(2) Explore deep reinforcement learning, in particular recurrent neural networks, which should be well-suited to the sequential nature of the problem.
(3) Introduce parametric action spaces as an alternative to discrete action sets.
(4) Extend to action spaces with multiple orders and variable order sizes.
(5) Investigate the impact of market frictions such as rebates, fees and latency on the agent’s strategy.
(6) Use sequential Bayesian methods for better order book reconstruction and estimation of order queues.