Machine Learning for Trading


Reinforcement Learning

Overview

Until now, we've built models that forecast price changes and then used these models to buy or sell the stocks with the most significant predicted change. This approach ignores some important issues, such as the certainty of the prediction and when to exit a position. We are going to shift our focus now to reinforcement learning. Reinforcement learners create policies that provide specific direction on which action to take given a particular set of parameters.

The RL Problem

When we talk about reinforcement learning, we are talking about a problem, not a solution. In the same way that linear regression is one solution to the supervised regression problem, there are many algorithms that solve the reinforcement learning problem.

To demonstrate one possible algorithm, let's consider a robot interacting with the environment.

First, the robot observes the environment; more specifically, it reads in some representation of the environment. We call this representation the state, which we denote as $S$.

The robot has to process $S$ to determine what action to take and does so by consulting a policy, which we denote as $\pi$. The robot considers $S$ with regard to $\pi$ and outputs an action $a$.

The action $a$ changes the environment in some way, and we can use a transition function, denoted as $T$, to derive a new environment from the current environment and a particular action.

Naturally, the robot reads in the updated environment as a new $S$, against which it consults $\pi$ to generate a new $a$. We refer to this circular process as the sense, think, act cycle.

To understand how the robot arrives at the policy $\pi$, we have to consider another component of this cycle: the reward, $r$. For any action that the robot takes, in any particular state, there is an associated reward.

As an example, if a navigation robot senses a cliff ahead and chooses to accelerate, the associated reward might be very negative. If, however, the robot chooses to turn around, the reward might be very positive.

In our case, we can imagine that our robot has a little wallet where it keeps its cash. The reward for a particular action might be how much cash that action adds to the wallet. The objective of this robot is to execute actions that maximize this reward.

Somewhere within the robot, there is an algorithm that synthesizes information about $S$, $r$, and $a$ over time to generate $\pi$.

To recap, $S$ is the state of the environment that the robot "senses", and it uses a policy $\pi$ to determine which action $a$ to take. Each action comes with an associated reward $r$, and the robot tries to refine $\pi$ over time to maximize $r$.
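As a rough sketch of this sense, think, act loop in code - the integer state, the toy reward, and the random placeholder policy below are illustrative assumptions, not anything from the course:

```python
import random

# A minimal sketch of the sense, think, act cycle. The integer state, the toy
# reward, and the random placeholder policy are illustrative assumptions only.

class ToyEnvironment:
    """The state is a single integer; actions nudge it up or down."""

    def __init__(self):
        self.state = 0

    def sense(self):
        return self.state                 # the robot reads in S

    def step(self, action):
        self.state += action              # the action changes the environment
        reward = -abs(self.state)         # r: staying near zero pays better
        return self.state, reward


def policy(state):
    """A placeholder pi that ignores the state and acts randomly."""
    return random.choice([-1, +1])


env = ToyEnvironment()
for _ in range(5):
    s = env.sense()            # sense: observe S
    a = policy(s)              # think: consult pi to choose a
    s_new, r = env.step(a)     # act: a updates the environment and yields r
    print(s, a, s_new, r)
```

A real learner would use the observed rewards to improve the policy over time; this sketch only shows the shape of the loop.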

In terms of trading, the environment is the market, and the available actions correspond to the actions that we can take in the market, such as buying, selling, or holding. $S$ corresponds to different factors about our current portfolio that we might observe, and $r$ is the return we get for making specific trades.

Trading as an RL Problem Quiz

We want to use reinforcement learning algorithms to trade; to do so, we have to translate the trading problem into a reinforcement learning problem.

Consider the following items. For each item, select whether the item corresponds to a component of the external state $S$, an action $a$ we might take within the environment, or a reward $r$ that we might use to inform our policy $\pi$.

Trading as an RL Problem Quiz Solution

Buying and selling stock are both actions that we execute upon our environment.

Holding long and Bollinger value are both parts of the state. Holding long tells us about our position in a particular stock, and Bollinger value tells us about the current price characteristics of a stock. Both of these pieces of information might drive subsequent action.

The return from a trade is our reward. If our return is positive, so is our reward. On the other hand, a negative reward indicates that we lost money on the position. Daily return could be either a component of the state - a factor we consider before generating an action - or a reward.

Mapping Trading to RL

Let's think about how we can frame stock trading as an RL problem. In trading, our environment is the market. The state that we process considers, among other factors: market features, price information, and whether we are holding a particular stock. The actions we can execute within the environment are, generally, buy, sell, or do nothing.

In the context of buying a particular stock, we might process certain technical features - like Bollinger Bands - to determine what action to take. Suppose that we decide to buy a stock. Once we buy a stock, that holding becomes part of our state.

We have transitioned from a state of not holding to holding long through executing the buy action. We can transition from a state of holding long to not holding by executing the sell action.

Suppose the price increases, and we decide to execute a sell. The money that we made, or lost, is the reward that corresponds to the actions we took. We can use this reward and its relationship to the actions and the state to refine our policy, thus adjusting how we act in the environment.
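Here is a hedged sketch of this mapping in code; the features, the holding flag, and the reward calculation below are illustrative assumptions rather than the course's actual implementation:

```python
# An illustrative mapping of trading onto state, action, and reward. The
# features, the holding flag, and the reward calculation are assumptions.

ACTIONS = ["BUY", "SELL", "HOLD"]    # actions we can execute in the market

def make_state(bollinger_value, holding_long):
    """State: price/market features plus whether we currently hold the stock."""
    return (round(bollinger_value, 1), holding_long)

def trade_reward(entry_price, exit_price, shares):
    """Reward: the return realized when we close the position."""
    return (exit_price - entry_price) * shares

# Not holding -> BUY -> holding long -> SELL -> realized reward
state = make_state(bollinger_value=-1.2, holding_long=False)
action = "BUY"                                    # pi chooses to buy
state = make_state(bollinger_value=-0.4, holding_long=True)
reward = trade_reward(entry_price=100.0, exit_price=105.0, shares=10)
print(state, action, reward)                      # reward: +50.0
```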

Markov Decision Problems

Let's formalize some of the components that we have been discussing.

We have a set of states, $S$, which corresponds to all of the different states that our robot can receive as input. Additionally, we have a set of actions, $A$, which enumerates all of the potential actions we can execute within the environment.

We also have a transition function, $T$. This function takes in three arguments: a current state, $S_c$, an action, $A_i$, and a future state, $S_f$. Given these three values, the transition function returns the probability that $A_i$ applied to $S_c$ brings about $S_f$.

For a particular $S_c$ and $A_i$, we are going to end up in some new state. As a result, the sum of the probabilities of state transitions from $S_c$ to every possible $S_f$ in $S$ for $A_i$ must equal 1; that is, $\sum_{S_f \in S} T(S_c, A_i, S_f) = 1$.

Finally, we have the reward function $R$, which receives an action, $A_i$, and a state, $S_i$, as input and returns the reward for executing that action in that state.

A problem parameterized by these four components is known as a Markov decision process.
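To make the four components concrete, here is a small, made-up MDP written out as Python dictionaries; the two states, two actions, and all of the numbers are assumptions chosen purely for illustration:

```python
# A tiny, made-up MDP with two states and two actions.
S = ["not_holding", "holding"]
A = ["buy", "sell"]

# T[(s, a)][s'] = probability that taking action a in state s leads to s'.
T = {
    ("not_holding", "buy"):  {"holding": 1.0, "not_holding": 0.0},
    ("not_holding", "sell"): {"holding": 0.0, "not_holding": 1.0},
    ("holding", "buy"):      {"holding": 1.0, "not_holding": 0.0},
    ("holding", "sell"):     {"holding": 0.0, "not_holding": 1.0},
}

# R[(s, a)] = reward for executing action a in state s (made-up numbers).
R = {
    ("not_holding", "buy"):  0.0,
    ("not_holding", "sell"): 0.0,
    ("holding", "buy"):      0.0,
    ("holding", "sell"):     5.0,
}

# For each (s, a), the transition probabilities over all s' must sum to 1.
for (s, a), next_states in T.items():
    assert abs(sum(next_states.values()) - 1.0) < 1e-9
```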

The problem for a reinforcement learning algorithm is to find a policy $\pi$ that maximizes reward over time. We refer to the theoretically optimal policy, which the learning algorithm may or may not find, as $\pi^*$.

There are several algorithms that we can deploy to find $\pi$ - assuming we know $T$ and $R$ - such as policy iteration and value iteration. In this class, however, we don't know $T$ or $R$ ahead of time, so we can't use either of these algorithms directly.
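For reference, here is a minimal value iteration sketch that would apply to the toy $T$ and $R$ above; it assumes we know both functions, which, as just noted, we generally do not in this class:

```python
# Minimal value iteration sketch; it is only usable when T and R are known.
# T[(s, a)] maps next states to probabilities, R[(s, a)] gives the reward.
def value_iteration(S, A, T, R, gamma=0.95, iterations=100):
    V = {s: 0.0 for s in S}
    for _ in range(iterations):
        V = {
            s: max(R[(s, a)] + gamma * sum(p * V[s2] for s2, p in T[(s, a)].items())
                   for a in A)
            for s in S
        }
    # Extract a greedy policy pi(s) from the converged value estimates.
    return {
        s: max(A, key=lambda a: R[(s, a)]
               + gamma * sum(p * V[s2] for s2, p in T[(s, a)].items()))
        for s in S
    }

# Applied to the toy S, A, T, and R from the previous sketch, this yields
# {'not_holding': 'buy', 'holding': 'sell'}.
```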

Unknown Transitions and Rewards

Most of the time, we don't know the transition function, $T$, or the reward function, $R$, a priori. As a result, the reinforcement learning agent has to interact with the world, observe what happens, and work with the experience it gains to build an effective policy.

Let's say our agent observed state $s_1$ and took action $a_1$, which resulted in state $s'_1$ and reward $r_1$. We refer to the object $(s_1, a_1, s'_1, r_1)$ as an experience tuple.

The agent iterates through this sense-think-act cycle over and over again, gathering experience tuples along the way. Once we have collected a significant number of these tuples, we can use two main types of approaches to find the appropriate policy, $\pi$.

The first set of approaches is model-based reinforcement learning. In model-based reinforcement learning, we look at the experience tuples over time and build a model of $T$ and $R$ by examining the transitions statistically. For example, for each state-action pair $(s_i, a_i)$, we can look at the distributions of the observed next state and reward, $s'_i$ and $r_i$, respectively. Once we have these models, we can use value iteration or policy iteration to find the policy.
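Here is a hedged sketch of that counting approach; the experience tuples below are made up, and the layout follows the $(s, a, s', r)$ form described above:

```python
from collections import Counter, defaultdict, namedtuple

# Experience tuple in the (s, a, s', r) form described above.
Experience = namedtuple("Experience", ["s", "a", "s_prime", "r"])

# Made-up experiences for illustration.
experiences = [
    Experience("not_holding", "buy", "holding", 0.0),
    Experience("holding", "sell", "not_holding", 5.0),
    Experience("not_holding", "buy", "holding", 0.0),
    Experience("holding", "sell", "not_holding", -2.0),
]

# Count how often each (s, a) led to each s', and track the rewards observed.
transition_counts = defaultdict(Counter)
reward_totals = defaultdict(float)
visits = Counter()
for e in experiences:
    transition_counts[(e.s, e.a)][e.s_prime] += 1
    reward_totals[(e.s, e.a)] += e.r
    visits[(e.s, e.a)] += 1

# Normalize the counts into an estimated T, and average the rewards into an
# estimated R; these models can then feed value iteration or policy iteration.
T_hat = {sa: {s2: n / visits[sa] for s2, n in nexts.items()}
         for sa, nexts in transition_counts.items()}
R_hat = {sa: reward_totals[sa] / visits[sa] for sa in visits}

print(T_hat)    # e.g. ('holding', 'sell') -> {'not_holding': 1.0}
print(R_hat)    # e.g. ('holding', 'sell') -> 1.5
```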

The second set of approaches is model-free reinforcement learning. Model-free reinforcement learning methods derive a policy directly from the data. We are going to focus on Q-learning in this class, which is a specific type of model-free learning.
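Q-learning is covered in detail later; as a rough preview, a model-free method like it applies an update directly to each experience tuple, roughly of the following shape (the Q-table, learning rate, and discount rate here are assumptions for the sketch):

```python
from collections import defaultdict

# Rough shape of a model-free update applied to each experience tuple
# (s, a, s', r). The Q-table, learning rate, and discount rate are assumptions.
Q = defaultdict(float)            # Q[(s, a)]: estimated value of a taken in s
alpha, gamma = 0.2, 0.9
actions = ["buy", "sell", "hold"]

def q_update(s, a, s_prime, r):
    best_next = max(Q[(s_prime, a2)] for a2 in actions)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (r + gamma * best_next)

q_update("not_holding", "buy", "holding", 0.0)   # one update per experience
```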

What to Optimize

We haven't gone into much detail about what exactly we are trying to optimize; currently, we have a vague idea that we are trying to maximize the sum of our rewards. To make our discussion of optimization more concrete, consider our robot navigating the following maze.

This maze has a reward of $1 two spaces away and a much higher reward of $1 million eight spaces away. The $1 reward is unique in that the robot can receive it multiple times; for instance, entering, exiting, and re-entering the $1 square results in a $2 reward. The robot can only receive the $1 million reward once.

The red squares are obstacles, and the robot cannot occupy those spaces. The other squares are marked with their respective rewards or, in the case of negative numbers, penalties.

Let's say we wanted to optimize the sum of the robot's rewards from now until infinity. We refer to this timeframe as an infinite horizon. In this case, we don't particularly care if the robot exclusively grabs the $1 ad infinitum, or if it first grabs the $1 million before returning to the $1. With regard to the reward, either strategy is sufficient since both result in an infinite reward.

More often than not, we have a finite horizon. We might, for example, want to optimize our reward over the next three moves. If we take three steps towards the $1 million reward, we are going to experience three penalties of -1, for a total penalty of -3. If we take three steps towards the $1, however, we experience rewards of $0, $1, and $0, for a total reward of $1.

If we extend our horizon out to eight moves, we see that the optimal path changes. Instead of retrieving the $1 reward four times, we can incur seven penalties of -1 in exchange for the $1 million reward.
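We can check that arithmetic with a quick sketch; the per-step rewards below simply encode the two paths as described above:

```python
# Per-step rewards for the two strategies, as described above.
dollar_path_3  = [0, 1, 0]                    # horizon 3, heading for the $1
million_path_3 = [-1, -1, -1]                 # horizon 3, heading for the $1M

dollar_path_8  = [0, 1, 0, 1, 0, 1, 0, 1]     # horizon 8: grab the $1 four times
million_path_8 = [-1] * 7 + [1_000_000]       # horizon 8: seven -1 steps, then $1M

for name, rewards in [("$1 path, n=3", dollar_path_3),
                      ("$1M path, n=3", million_path_3),
                      ("$1 path, n=8", dollar_path_8),
                      ("$1M path, n=8", million_path_8)]:
    print(name, sum(rewards))   # 1, -3, 4, 999993
```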

We can now articulate the expression that we want to maximize. Given a horizon of size $n$ - and note that $n$ can equal infinity - we want to maximize the expression

$$\sum_{i=1}^{n} r_i$$

Recall one of our earlier lessons, in which we talked about the present value and future value of money. One of the main points that we arrived at was that a dollar tomorrow is worth less than a dollar today.

We can think about rewards in a similar fashion: a reward of one dollar today is more valuable than a reward of one dollar tomorrow. We can reformulate our expression above to incorporate the discounting of future rewards. That is, given a discount rate, $\gamma$, we want to maximize

$$\sum_{i=1}^{\infty} \gamma^{i-1} r_i$$

Note: this expression is almost identical to the expression we used to calculate the present value of all future corporate dividend payments.

We can look at the first few terms of this expression to understand what is happening. The value of an immediate reward, $r_1$, is equal to $\gamma^0 r_1 = r_1$. The value of the next most immediate reward, $r_2$, is equal to $\gamma r_2$. The value of the third most immediate reward, $r_3$, is equal to $\gamma^2 r_3$. These first few steps demonstrate how we use $\gamma$ to devalue rewards that are further out in the future.

The value of $\gamma$ is greater than zero and less than or equal to one. The closer it is to one, the more we value rewards in the future (the less we discount them). The closer it is to zero, the less we value rewards in the future (the more we discount them).
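As a small sketch, the discounted sum can be computed directly from a sequence of future rewards; the rewards and discount rates below are arbitrary examples:

```python
def discounted_return(rewards, gamma):
    """Sum of gamma**(i-1) * r_i over the reward sequence r_1, r_2, ..."""
    return sum(gamma ** i * r for i, r in enumerate(rewards))

rewards = [1.0, 1.0, 1.0]                        # arbitrary example rewards
print(discounted_return(rewards, gamma=1.0))     # 3.0: no discounting
print(discounted_return(rewards, gamma=0.5))     # 1.75: later rewards count less
```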

Which Approach Gets $1M Quiz

Which of the following approaches leads our robot to a policy that causes it to reach the $1 million reward?

Which Approach Gets $1M Quiz Solution

With an infinite horizon, the robot may exclusively grab the $1 ad infinitum, or it might first grab the $1 million before returning to the $1. As a result, obtaining the $1 million is possible with an infinite horizon, but it is not guaranteed.

With a finite horizon of length four, the robot does not reach the $1 million. A journey towards the $1 million results in four penalties, whereas heading towards the $1 results in a positive reward. However, if we increase the horizon to ten, the robot does reach the $1 million.

In the discounted scenario, each reward in the future is successively devalued by 5%. Even so, the $1 million reward is so large that seeking this reward is still the optimal strategy.
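We can sanity-check that claim with a quick calculation, assuming $\gamma = 0.95$ to match the 5% devaluation, seven -1 steps before the $1 million, and, for comparison, a robot that oscillates on the $1 square forever:

```python
# Assumes gamma = 0.95 (a 5% devaluation per step), seven -1 penalties before
# the $1 million, and, for comparison, a robot oscillating on the $1 forever.
gamma = 0.95

million_path = [-1] * 7 + [1_000_000]
value_million = sum(gamma ** i * r for i, r in enumerate(million_path))

# Oscillating on the $1 yields rewards 0, 1, 0, 1, ...; this geometric series
# sums to gamma / (1 - gamma**2).
value_dollar = gamma / (1 - gamma ** 2)

print(round(value_million))    # about 698,331
print(round(value_dollar, 2))  # about 9.74
```

Even after discounting, the $1 million path is worth vastly more, which is why seeking it remains the optimal strategy.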
