Don't beat the market; beat the bots: Adversarial networks in finance
Automated investing has brought an immense amount of stability to the market, but it has also brought predictability. Garrett Lander and Al Kari examine whether an adversarial network can game the behavior of automated investors by learning the patterns in market activity to which they are most vulnerable.
|Talk Title||Don't beat the market; beat the bots: Adversarial networks in finance|
|Speakers||Garrett Lander (Manceps), Al Kari (Manceps)|
|Conference||O’Reilly TensorFlow World|
|Location||Santa Clara, California|
|Date||October 28-31, 2019|
One of the amazing things about AI is how it can appear simultaneously superior and inferior to human intelligence: a self-driving car can react instantly to an accident ahead of it yet get confused by a pedestrian walking a bicycle across the street. The ML community has undertaken a massive effort to peer into the black box and understand why and how AI models make the decisions they do. Unfortunately, the human brain hits a wall when it tries to comprehend a billion-parameter function approximation. The only real candidate for understanding how an AI works is another AI.

Garrett Lander and Al Kari use a sample of a financial market to build a simulation in which the players (a mix of erratic human traders and predictable automated traders) attempt to predict the activity of the market by buying and selling their holdings. The automated traders are trained on historical data of the holdings, while the humans trade reactively, driven by biases and by the successes or failures of their previous actions. TensorFlow 2.0's robust new reinforcement learning tools are then used to construct an adversarial network, and the fun begins. With only limited capital, the adversarial network learns to exploit the patterns of the other players and manipulate the market, either for gain (maximizing its own holdings) or for anarchy (maximizing market volatility).

Not only will you watch this unfold through a live visualization, you'll also gain firsthand experience with the newest imperative in machine learning: F1, accuracy, root-mean-square error (RMSE), and the like are meaningless if your model isn't robust to the exploitability of its own pattern recognition.
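To make the setup concrete, here is a minimal, library-free sketch of the kind of market simulation the talk describes: a handful of predictable momentum-following bots, a handful of erratic "human" noise traders, and a toy price-impact rule that moves the price with net order flow. All class names, parameters, and the linear impact model are hypothetical illustrations, not the speakers' actual implementation.

```python
import random


class MomentumBot:
    """Predictable automated trader (hypothetical): buys when the latest
    price is above its recent moving average, sells otherwise."""

    def __init__(self, window=5):
        self.window = window

    def act(self, prices):
        if len(prices) < self.window:
            return 0  # not enough history yet
        avg = sum(prices[-self.window:]) / self.window
        return 1 if prices[-1] > avg else -1


class NoiseTrader:
    """Erratic 'human' trader (hypothetical): buys, sells, or holds at random."""

    def __init__(self, rng):
        self.rng = rng

    def act(self, prices):
        return self.rng.choice([-1, 0, 1])


def step_market(price, orders, impact=0.01):
    """Toy linear price-impact model: net order flow moves the price."""
    return max(0.01, price * (1 + impact * sum(orders)))


def run_simulation(steps=50, seed=0):
    """Run the toy market and return the price path."""
    rng = random.Random(seed)
    players = [MomentumBot() for _ in range(3)] + [NoiseTrader(rng) for _ in range(3)]
    prices = [100.0]
    for _ in range(steps):
        orders = [p.act(prices) for p in players]
        prices.append(step_market(prices[-1], orders))
    return prices
```

In a reinforcement-learning version of this sketch, the adversarial network would simply be one more player whose per-step order is its action, with a reward of either its portfolio gain (manipulation for profit) or the realized volatility of the price path (manipulation for anarchy).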