In the previous post we introduced inverse reinforcement learning. We defined the problem associated with this field, namely that of reconstructing a reward function from a set of demonstrations, and we saw what the ability to do this implies. We also came across some classification results, as well as convergence guarantees for selected methods that were only referred to in passing. There were some challenges with the classification results that we discussed, and although there were attempts to deal with these, there is still quite a lot that we did not talk about.
Maximum Entropy Inverse Reinforcement Learning
We shall now introduce a probabilistic approach based on what is known as the principle of maximum entropy. It provides a well-defined, globally normalised distribution over decision sequences, while offering the same performance guarantees as the previously mentioned methods. This probabilistic view allows us to reason about uncertainty in the inverse reinforcement learning setting, and its assumptions further narrow the space in which we search for solutions, which, as we saw last time, is quite massive. A rather important point of concern is that demonstrations are prone to noise and imperfect behaviour; the presented approach provides a principled way of dealing with this uncertainty.
In what follows, we state the context in which we are working and the assumptions that characterise the 'maximum entropy inverse reinforcement learning' approach, then present the algorithm and show how to compute the quantities that will be new to readers who are just starting to explore inverse reinforcement learning.
We have a demonstrator, as before, whose behaviour will be characterised by trajectories of states and actions, i.e.

$$\tau = (s_1, a_1, s_2, a_2, \ldots, s_T, a_T).$$
We are once again going to assume that everything is finite, to simplify both the notation and the concepts themselves.
Define

$$R_\psi(\tau) = \sum_{t=1}^{T} r_\psi(s_t),$$

where $r_\psi$ is a parameterisation of the usual reward function, and $R_\psi(\tau)$ is the total reward along a trajectory $\tau$.
Then define

$$\mathcal{D} = \{\tau_1, \tau_2, \ldots, \tau_N\}$$

to be the set of expert demonstrations.
The assumption is that the learner observes these demonstrations, and is trying to model or imitate the demonstrator.
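Before moving on, here is a small sketch of these objects in code, assuming a small finite MDP and a reward that is linear in a made-up feature map `phi`; none of these concrete choices are forced by the setup above, they are just one convenient instantiation.

```python
import numpy as np

# A minimal sketch of the objects defined above, assuming a small finite MDP
# and a reward that is linear in a made-up feature map phi; these concrete
# choices are illustrative assumptions, not part of the formulation itself.

n_states, n_features = 5, 3
rng = np.random.default_rng(0)

phi = rng.random((n_states, n_features))   # phi[s]: feature vector of state s
psi = np.zeros(n_features)                 # reward parameters psi

def reward(s, psi):
    """Parameterised reward r_psi(s)."""
    return phi[s] @ psi

def trajectory_return(tau, psi):
    """Total reward R_psi(tau) = sum_t r_psi(s_t) along a trajectory of states."""
    return sum(reward(s, psi) for s in tau)

# A tiny stand-in for the demonstration set D = {tau_1, ..., tau_N};
# here the "demonstrations" are just random state sequences.
demos = [rng.integers(0, n_states, size=4).tolist() for _ in range(10)]
```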
Maximum Entropy Formulation
The maximum entropy approach essentially moves forward by defining the probability of a certain trajectory under the expert as being

$$p_\psi(\tau) = \frac{1}{Z}\exp\big(R_\psi(\tau)\big).$$
This means that high reward trajectories are more likely to be sampled from the expert, and low reward trajectories are less likely.
The inference of the reward function in this case then essentially comes down to maximising the log-likelihood of the set of demonstrations with respect to the parameters of the reward function, i.e.

$$\max_\psi \; \mathcal{L}(\psi) = \max_\psi \; \frac{1}{N}\sum_{\tau \in \mathcal{D}} \log p_\psi(\tau).$$
In this formulation,

$$Z = \int \exp\big(R_\psi(\tau)\big)\, d\tau,$$
and evaluating this integral, known as the partition function, turns out to be one of the harder things to do in high-dimensional spaces.
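Since everything is finite here, the integral becomes a sum, and for a toy problem we can compute it by brute force. The sketch below enumerates every length-$T$ state sequence and normalises; the sizes, the feature map and the linear reward are illustrative assumptions, and the enumeration is only feasible because the example is tiny.

```python
import itertools
import numpy as np

# Brute-force the partition function Z = sum_tau exp(R_psi(tau)) for a toy
# problem in which a "trajectory" is any length-T state sequence. The sizes,
# the feature map and the linear reward are all illustrative assumptions.

n_states, n_features, T = 4, 2, 3
rng = np.random.default_rng(1)
phi = rng.random((n_states, n_features))
psi = np.array([0.5, -0.2])

def trajectory_return(tau):
    return sum(phi[s] @ psi for s in tau)

trajectories = list(itertools.product(range(n_states), repeat=T))
returns = np.array([trajectory_return(tau) for tau in trajectories])

Z = np.exp(returns).sum()            # the partition function
probs = np.exp(returns) / Z          # p_psi(tau) for every trajectory

# Sanity check: higher-return trajectories get exponentially more probability mass.
assert trajectories[np.argmax(probs)] == trajectories[np.argmax(returns)]
```

In high-dimensional or continuous settings this enumeration is hopeless, which is precisely why the partition function is the hard part. Next we give the algorithm for this approach.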
Maximum Entropy Inverse Reinforcement Learning Algorithm:
1. Initialise $\psi$, and get a set of demonstrations $\mathcal{D}$.
2. Obtain an optimal policy, $\pi(a \mid s)$, with regard to $r_\psi$ (a tabular sketch of this step follows the list).
3. Obtain the state visitation frequencies $p(s \mid \psi)$.
4. Compute the gradient $\nabla_\psi \mathcal{L} = \frac{1}{N}\sum_{\tau \in \mathcal{D}} \nabla_\psi R_\psi(\tau) - \int p(s \mid \psi)\, \nabla_\psi r_\psi(s)\, ds$.
5. Update $\psi$ with one gradient step using this gradient.
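Step 2 requires solving the forward problem for the current reward. In a small tabular MDP with known dynamics, one option, in the spirit of the backward pass of Ziebart et al. (2008), is a soft value iteration. The sketch below is only meant to show the shape of the computation; the transition tensor `P[s, a, s']`, the discount and the random toy inputs are assumptions.

```python
import numpy as np
from scipy.special import logsumexp

# A sketch of step 2 for a small tabular MDP with known dynamics.
# P[s, a, s2] is the transition probability and r[s] the current reward r_psi(s);
# both are assumed inputs. The soft Bellman backup here is in the spirit of the
# backward pass of Ziebart et al. (2008), not a literal transcription.

def soft_value_iteration(r, P, gamma=0.95, n_iters=200):
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    for _ in range(n_iters):
        Q = r[:, None] + gamma * (P @ V)     # Q[s, a] = r(s) + gamma * E[V(s')]
        V = logsumexp(Q, axis=1)             # soft maximum over actions
    return np.exp(Q - V[:, None])            # stochastic policy pi(a | s)

# Toy usage with random dynamics, just to show the shapes involved.
rng = np.random.default_rng(2)
P = rng.dirichlet(np.ones(4), size=(4, 2))          # 4 states, 2 actions
pi = soft_value_iteration(rng.normal(size=4), P)    # shape (4, 2), rows sum to 1
```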
In step 3, $p(s \mid \psi)$ is the state visitation frequency, i.e. the expected frequency with which state $s$ is visited under a (near-)optimal policy for the parameterised reward. This quantity is central to the gradient computation, so we would like a way of recovering it.
In order to do this, we define $\mu_t(s)$ as the probability of visiting state $s$ at time $t$, so that

$$\mu_1(s) = p(s_1 = s),$$

and for $t = 1, \ldots, T-1$,

$$\mu_{t+1}(s') = \sum_{s}\sum_{a} \mu_t(s)\, \pi(a \mid s)\, p(s' \mid s, a),$$

where the policy $\pi$ needs to be optimal or near-optimal.

This means that

$$p(s \mid \psi) = \sum_{t=1}^{T} \mu_t(s).$$
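In code, this recursion is a short forward pass over time. A minimal sketch, assuming the policy `pi[s, a]`, the dynamics `P[s, a, s']`, the initial state distribution `p0` and the horizon `T` are all given:

```python
import numpy as np

# A sketch of the forward recursion for the state visitation frequencies.
# pi[s, a] is the (near-)optimal policy, P[s, a, s2] the known dynamics,
# p0 the initial state distribution and T the horizon; all are assumed inputs.

def state_visitation_frequencies(pi, P, p0, T):
    n_states = P.shape[0]
    mu = np.zeros((T, n_states))
    mu[0] = p0                                   # mu_1(s) = p(s_1 = s)
    for t in range(T - 1):
        # mu_{t+1}(s') = sum_{s, a} mu_t(s) * pi(a|s) * p(s'|s, a)
        mu[t + 1] = np.einsum('s,sa,sak->k', mu[t], pi, P)
    return mu.sum(axis=0)                        # p(s | psi) = sum_t mu_t(s)

# Toy usage: uniform start, random policy and dynamics for 4 states, 2 actions.
rng = np.random.default_rng(3)
P = rng.dirichlet(np.ones(4), size=(4, 2))
pi = rng.dirichlet(np.ones(2), size=4)
p_s = state_visitation_frequencies(pi, P, np.full(4, 0.25), T=5)   # sums to T
```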
Looking into the business of inference, we have the following situation. Let $N$ be the number of demonstrations. Then observe that

$$\mathcal{L}(\psi) = \frac{1}{N}\sum_{\tau \in \mathcal{D}} \log p_\psi(\tau) = \frac{1}{N}\sum_{\tau \in \mathcal{D}} R_\psi(\tau) - \log Z.$$

Taking the usual gradients, we obtain

$$\nabla_\psi \mathcal{L} = \frac{1}{N}\sum_{\tau \in \mathcal{D}} \nabla_\psi R_\psi(\tau) - \frac{1}{Z}\int \exp\big(R_\psi(\tau)\big)\, \nabla_\psi R_\psi(\tau)\, d\tau.$$

Observe, once again, that the second term can be simplified as follows:

$$\frac{1}{Z}\int \exp\big(R_\psi(\tau)\big)\, \nabla_\psi R_\psi(\tau)\, d\tau = \mathbb{E}_{\tau \sim p_\psi}\big[\nabla_\psi R_\psi(\tau)\big] = \int p(s \mid \psi)\, \nabla_\psi r_\psi(s)\, ds.$$

All of this allows us to compute $\nabla_\psi \mathcal{L}$ as required, which gets us to a reward function as needed.
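For a reward that is linear in state features, $r_\psi(s) = \psi^\top \phi(s)$, this gradient reduces to a difference of feature expectations: the empirical feature counts of the demonstrations minus the feature counts induced by the current reward. A minimal sketch of step 4 under that assumption, with a made-up feature matrix `phi`, a list of demonstrated state trajectories `demos`, and visitation frequencies `p_s` of the kind produced by the recursion above:

```python
import numpy as np

# A sketch of the gradient for a linear reward r_psi(s) = psi . phi(s).
# phi[s] is a made-up feature vector, demos a list of state trajectories and
# p_s the state visitation frequencies; with a linear reward, grad_psi R_psi(tau)
# is simply the summed feature vector of the trajectory.

def log_likelihood_gradient(phi, demos, p_s):
    # First term: (1/N) * sum over demonstrations of grad_psi R_psi(tau).
    mu_demo = np.mean([phi[list(tau)].sum(axis=0) for tau in demos], axis=0)
    # Second term: sum_s p(s|psi) * grad_psi r_psi(s).
    mu_model = phi.T @ p_s
    return mu_demo - mu_model

# Toy usage: with the gradient in hand, step 5 is a single ascent step on psi.
rng = np.random.default_rng(4)
phi = rng.random((4, 2))
demos = [[0, 1, 2], [0, 2, 3]]
p_s = np.array([0.7, 0.9, 0.8, 0.6])   # e.g. output of the recursion above (T = 3)
psi = np.zeros(2)
psi += 0.1 * log_likelihood_gradient(phi, demos, p_s)
```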
One might notice that, in some sense, the definition of the trajectory distribution that we have given carries a deterministic element to it, since it does not explicitly account for stochastic dynamics, and one can do more than this (see Ziebart, B. D., et al. 2008).
Here, it is clear that one has to solve for the (near-)optimal policy in the inner loop in order to find the state visitation frequencies, so one has to know the dynamics of whatever environment one works in, and one has to work in a low-dimensional space in order to be able to compute the policy and visitation frequencies at every iteration. In the next post, we shall talk about how one would go about removing these issues, and find a nice link between methods in inverse reinforcement learning and generative adversarial networks.
If one would like to immediately see a practical implementation of these ideas, then I would suggest visiting: Max-Entropy-Git-Notebook.
Depending on how we do for time, and on what would preferably still be covered as far as concepts go, we might see some very specific applications at the end.
Resources:
1. Ziebart, B. D., et al. Maximum Entropy Inverse Reinforcement Learning. Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008).
2. Wulfmeier, M., et al. Maximum Entropy Deep Inverse Reinforcement Learning. arXiv:1507.04888v3 [cs.LG] (2016).