Deriving the PPO Loss from First Principles
I have been trying to wrap my head around reinforcement learning methods like DPO, GRPO, and RLVR for a while now, especially with all the recent work showing how effective they can be for LLM post-training. Since I am still pretty new to RL, I figured the best place to start was Proximal Policy Optimization (PPO), the algorithm OpenAI used to show how reinforcement learning could meaningfully improve LLM alignment (InstructGPT paper). My hope is that getting comfortable with PPO will give me the right mental model for the policy-gradient side of things and make it easier to understand the newer LLM-specific RL methods built on similar ideas.
If you start learning RL, you quickly realize it involves a lot of math! So I decided to lean into that and do a few (possibly annoying) derivation sessions to really understand the PPO objective by building it up from first principles, similar to how Umar Jamil does in his video.
A huge shoutout to Umar Jamil's video on RLHF and PPO: it was incredibly helpful for building intuition and understanding the math behind the PPO loss.
Below is my attempt at the derivation based on the original PPO and InstructGPT papers and Umar Jamil’s video.
I: Reinforcement Learning: Core Definitions
| Concept | General RL Definition | LLM Context (RLHF) |
|---|---|---|
| Reinforcement Learning | A learning setup where an agent learns to act in an environment to maximize expected cumulative reward. | Fine-tuning a language model to generate responses that better match human preferences using reward-based feedback. |
| Environment | Everything outside the agent that it interacts with and that produces observations and rewards. | The prompt distribution and interaction loop and the reward signal from a reward model evaluating generated responses. |
| Agent | The learner/decision-maker that observes states, takes actions, and receives rewards. | The language model generating text token by token. |
| Action ($a_t$) | A choice made by the agent, usually conditioned on the state $s_t$. | Picking the next token at each step of generation. |
| State ($s_t$) | The information available to the agent at a given time step. | The prompt plus the response generated so far (the current token context). |
| Reward ($r_t$) | A scalar signal telling the agent how good or bad an outcome was. | A score from the reward model (trained on preference data) that judges how good or bad a response is. |
| Policy ($\pi_\theta$) | A stochastic mapping from states to a distribution over actions. | The model's probability distribution over the next token given the context. |
| Goal | Find an optimal policy that maximizes expected cumulative reward over time. | Update (align) the model so it tends to generate responses with higher reward-model scores. |
II: Reward Model in RLHF for LLMs
A Reward Model (RM) is a neural network that takes a prompt and a response as input and outputs a scalar reward indicating how "good" or "aligned" that response is according to human preferences.
Policy-gradient methods (including PPO) require a scalar reward signal to update the policy parameters. In standard RL, the environment provides this signal. However, for language generation there is no natural environment giving us rewards for "good" responses. Having humans rate every output is impractical, and gradient-based optimization needs a scalar signal that can be queried automatically and cheaply for every sampled response. Thus, we require an inexpensive proxy for human preferences during RL training. A learned RM provides exactly this.
How is the Reward Model Trained?
The standard procedure for training the reward model is:
- Sample prompts ($x$)
- Generate multiple candidate completions ($y_1, y_2, \ldots$) from a baseline policy (often an SFT model).
- Ask humans to compare candidates (pairwise preferences are easier than absolute scoring).
- Train the RM ($r_\phi(x, y)$) to predict those preferences.
Architecturally, the reward model is typically:
- Initialized from a pretrained language model (often the SFT model itself)
- The final unembedding layer (which projects hidden states to the vocabulary) is removed
- A linear head is added that projects the hidden state of the last token to a single scalar output (a minimal sketch follows below)
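To make the architecture concrete, here is a minimal PyTorch sketch (not the exact InstructGPT implementation): `backbone` is a hypothetical module that returns per-token hidden states, and the reward head projects the final token's hidden state to a scalar.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Sketch of an RM: an LM backbone plus a linear layer that maps the
    last token's hidden state to a scalar. `backbone` is assumed to return
    hidden states of shape (batch, seq_len, hidden_dim)."""

    def __init__(self, backbone: nn.Module, hidden_dim: int):
        super().__init__()
        self.backbone = backbone
        self.reward_head = nn.Linear(hidden_dim, 1)  # replaces the vocab projection

    def forward(self, input_ids: torch.Tensor, last_token_idx: torch.Tensor) -> torch.Tensor:
        hidden = self.backbone(input_ids)                       # (B, T, H)
        batch_idx = torch.arange(hidden.size(0), device=hidden.device)
        last_hidden = hidden[batch_idx, last_token_idx]         # (B, H) final-token states
        return self.reward_head(last_hidden).squeeze(-1)        # (B,) scalar reward per sequence
```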
Reward Model Loss Function
The reward model is trained using the Bradley-Terry model for pairwise comparisons. The probability that response $y_w$ (preferred) is preferred over $y_l$ (less preferred) for a prompt $x$ is modeled as:

$$P(y_w \succ y_l \mid x) = \sigma\bigl(r_\phi(x, y_w) - r_\phi(x, y_l)\bigr)$$

where $\sigma$ is the sigmoid function:

$$\sigma(z) = \frac{1}{1 + e^{-z}}$$

The negative log-likelihood loss is:

$$\mathcal{L}_{RM}(\phi) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}\Bigl[\log \sigma\bigl(r_\phi(x, y_w) - r_\phi(x, y_l)\bigr)\Bigr]$$
One can verify that this loss forces the reward model to assign higher rewards to preferred responses (see InstructGPT paper or Umar Jamil's video for a detailed walkthrough).
There are two key insights here:
- We don't need absolute scores; we only need the reward model to correctly rank responses.
- The loss depends only on the difference $r_\phi(x, y_w) - r_\phi(x, y_l)$, so it is invariant to adding a constant to all rewards. This will be useful later when we discuss the PPO loss (a short code sketch of this loss follows).
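As a minimal PyTorch sketch of the pairwise loss (tensor shapes are my assumption): given scalar RM scores for a preferred and a less-preferred response, the loss is essentially one line.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss: -log sigmoid(r_w - r_l).
    r_chosen / r_rejected: scalar RM outputs for the preferred and
    less-preferred responses, shape (batch,)."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```

Note that adding the same constant to both `r_chosen` and `r_rejected` leaves the loss unchanged, which is exactly the invariance mentioned above.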
The reward model serves as a learned proxy for human preferences, converting the intractable problem of getting human feedback on every generation into a tractable supervised learning problem. Once trained, it provides the scalar signal needed to optimize our policy (LLM) using RL algorithms like PPO.
III: Trajectories and Returns
Trajectory
A trajectory (also called a rollout or episode) is a sequence of states ($s_t$), actions ($a_t$), and rewards ($r_t$) generated by an agent interacting with an environment:

$$\tau = (s_0, a_0, r_0,\, s_1, a_1, r_1,\, \ldots,\, s_T, a_T, r_T)$$
In the context of LLMs, a trajectory corresponds to the entire sequence of token generations. It is the prompt followed by all generated tokens until the end-of-sequence token.
Note that state transitions are stochastic and can be represented as $s_{t+1} \sim P(s_{t+1} \mid s_t, a_t)$. Given a stochastic policy $\pi_\theta$, the probability of a trajectory $\tau$ is the product of:
- The initial state distribution $\rho_0(s_0)$
- The stochastic policy $\pi_\theta(a_t \mid s_t)$
- The environment transition dynamics $P(s_{t+1} \mid s_t, a_t)$

$$P(\tau \mid \theta) = \rho_0(s_0)\, \prod_{t=0}^{T} \pi_\theta(a_t \mid s_t)\, P(s_{t+1} \mid s_t, a_t) \tag{III.I}$$
Return
The return is the cumulative reward collected over the full trajectory $\tau$. The simplest form is the undiscounted return:

$$R(\tau) = \sum_{t=0}^{T} r_t$$

More generally, we use the discounted return:

$$R(\tau) = \sum_{t=0}^{T} \gamma^t\, r_t$$

where $\gamma \in (0, 1]$ is the discount factor. The discount factor serves a couple of purposes:
- It ensures the return is finite for infinite-horizon tasks (when $\gamma < 1$).
- It prioritizes immediate rewards over distant ones.
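To make the discounted return concrete, here is a small Python sketch (my own toy example, not from the papers) that computes the discounted sums from each timestep; the value at index 0 is $R(\tau)$, and the later entries are the "rewards-to-go" that reappear in Section V.

```python
from typing import List

def discounted_returns(rewards: List[float], gamma: float = 0.99) -> List[float]:
    """Compute G_t = sum_{k>=t} gamma^(k-t) * r_k for one trajectory.
    Working backwards keeps it O(T)."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Example: a sparse LLM-style reward where only the final token is scored.
print(discounted_returns([0.0, 0.0, 0.0, 1.0], gamma=0.9))  # ≈ [0.729, 0.81, 0.9, 1.0]
```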
IV: Policy Gradient Optimization and REINFORCE Algorithm
The goal of reinforcement learning is to find a policy $\pi_\theta$ that maximizes the expected return over all possible trajectories:

$$J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\bigl[R(\tau)\bigr]$$

This is our objective function, and we want to find parameters $\theta^*$ such that:

$$\theta^* = \arg\max_\theta\, J(\theta)$$

To maximize $J(\theta)$ using gradient-based methods, we need to compute $\nabla_\theta J(\theta)$ and perform gradient ascent:

$$\theta_{k+1} = \theta_k + \alpha\, \nabla_\theta J(\theta)\big|_{\theta_k}$$
This policy gradient looks simple in equation form, but it is intractable to compute exactly. The expectation is over trajectories sampled from $\pi_\theta$, which itself depends on $\theta$, and we can't simply enumerate all possible trajectories. This is computationally intractable for any reasonably sized state-action space (and certainly not possible for LLMs!).
Thus, as a next step we need to derive a reasonable and tractable approximation for $\nabla_\theta J(\theta)$. We do this by using the log-derivative trick.
This expectation can be written as an integral:

$$\nabla_\theta J(\theta) = \nabla_\theta \int P(\tau \mid \theta)\, R(\tau)\, d\tau$$

Bringing the gradient inside the integral:

$$\nabla_\theta J(\theta) = \int \nabla_\theta P(\tau \mid \theta)\, R(\tau)\, d\tau$$

Now we apply the log-derivative trick:

$$\nabla_\theta \log P(\tau \mid \theta) = \frac{\nabla_\theta P(\tau \mid \theta)}{P(\tau \mid \theta)}$$

Rearranging gives $\nabla_\theta P(\tau \mid \theta) = P(\tau \mid \theta)\, \nabla_\theta \log P(\tau \mid \theta)$, and substituting back, we get:

$$\nabla_\theta J(\theta) = \int P(\tau \mid \theta)\, \nabla_\theta \log P(\tau \mid \theta)\, R(\tau)\, d\tau$$

which can also be written as the following expectation:

$$\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\bigl[\nabla_\theta \log P(\tau \mid \theta)\, R(\tau)\bigr]$$

Note that the gradient is now the expectation of the gradient of the log-probability of the trajectory. This can be simplified further using the trajectory probability expression (III.I):

$$P(\tau \mid \theta) = \rho_0(s_0)\, \prod_{t=0}^{T} \pi_\theta(a_t \mid s_t)\, P(s_{t+1} \mid s_t, a_t)$$

Taking the log:

$$\log P(\tau \mid \theta) = \log \rho_0(s_0) + \sum_{t=0}^{T} \Bigl[\log \pi_\theta(a_t \mid s_t) + \log P(s_{t+1} \mid s_t, a_t)\Bigr]$$

When we take $\nabla_\theta$, only the policy term depends on $\theta$:

$$\nabla_\theta \log P(\tau \mid \theta) = \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)$$

The initial state distribution and transition dynamics are independent of $\theta$, so their gradients vanish. Substituting back, we obtain the policy gradient theorem:

$$\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, R(\tau)\right]$$
This is a remarkable result. We can compute the gradient of our objective without differentiating through the environment dynamics and only need gradients of the log-probabilities of our policy.
Since we cannot compute the expectation exactly, we approximate it with a sample mean over $N$ sampled trajectories:

$$\nabla_\theta J(\theta) \approx \hat{g} = \frac{1}{N} \sum_{i=1}^{N} \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta\bigl(a_t^{(i)} \mid s_t^{(i)}\bigr)\, R\bigl(\tau^{(i)}\bigr) \tag{IV.V}$$
This gives us the REINFORCE algorithm:
1. Initialize: Start with a pretrained or supervised fine-tuned (SFT) language model
2. Sample prompts: Draw a batch of prompts $x$ from a dataset
3. Generate trajectories: For each prompt $x$, generate a response $y$ by sampling tokens from the policy $\pi_\theta$. Each trajectory is the sequence of states (prompt + generated tokens so far) and actions (selected tokens).
4. Compute log-probabilities: For each trajectory, compute the log-probability $\log \pi_\theta(a_t \mid s_t)$ of each generated token given its context
5. Compute rewards: Score each complete (prompt, response) pair using the reward model: $R(\tau) = r_\phi(x, y)$
6. Estimate policy gradient: Compute the gradient estimate $\hat{g}$ using (IV.V)
7. Update policy: Perform a gradient ascent step: $\theta \leftarrow \theta + \alpha\, \hat{g}$ (see the code sketch after this list)
8. Repeat: Go back to Step 2 and iterate until convergence
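In code, steps 4 through 7 amount to maximizing $\frac{1}{N}\sum_i \log \pi_\theta(\tau^{(i)})\, R(\tau^{(i)})$, since autograd then produces exactly the estimator (IV.V). A minimal PyTorch-style sketch (the tensor shapes and masking convention are my assumptions):

```python
import torch

def reinforce_loss(token_logprobs: torch.Tensor,
                   trajectory_returns: torch.Tensor,
                   mask: torch.Tensor) -> torch.Tensor:
    """Minimal REINFORCE surrogate (negated so an optimizer can minimize it).

    token_logprobs:      (B, T) log pi_theta(a_t | s_t) for generated tokens
    trajectory_returns:  (B,)   R(tau), e.g. a reward-model score per response
    mask:                (B, T) 1 for generated tokens, 0 for prompt/padding

    Minimizing this loss performs gradient *ascent* on (IV.V).
    """
    logprob_sum = (token_logprobs * mask).sum(dim=1)      # sum_t log pi(a_t | s_t)
    return -(logprob_sum * trajectory_returns).mean()
```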
While REINFORCE provides an unbiased gradient estimate, it suffers from two critical issues that make it impractical for LLM training:
High Variance: The gradient estimate varies strongly with the particular trajectories sampled, which leads to noisy gradients and unstable training.
If you look again at (IV.V), the gradient estimate for each action is weighted by the return of the entire trajectory $R(\tau)$. This means that even if an action was good, it might receive a negative gradient update simply because other actions in the trajectory led to poor outcomes (or vice versa). Over many samples, the noise introduced by this coupling can be substantial, leading to high variance.
On-Policy Constraint (Sample Inefficiency): REINFORCE requires trajectories sampled from the current policy $\pi_\theta$. Thus, after every gradient update, previously collected trajectories must be discarded and new ones sampled from the updated policy. For LLMs, where each trajectory requires a full forward pass through a multi-billion-parameter model, this is prohibitively expensive, especially when we need many small gradient steps to train effectively.
V: Reducing Variance and the Advantage Function
The REINFORCE algorithm provides an unbiased gradient estimate (IV.V). However, this estimator suffers from high variance.
Replacing Full-Trajectory Return with Reward-to-Go (using causality)
A first variance reduction comes from noticing that the action taken at time $t$ cannot influence rewards received before time $t$. This is a fundamental consequence of causality. These past reward terms add variance without contributing any signal. Thus, we can remove them and consider only the rewards-to-go $\hat{R}_t$:

$$\hat{R}_t = \sum_{t'=t}^{T} \gamma^{t'-t}\, r_{t'}$$

This gives us a lower-variance estimator:

$$\nabla_\theta J(\theta) \approx \frac{1}{N} \sum_{i=1}^{N} \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta\bigl(a_t^{(i)} \mid s_t^{(i)}\bigr)\, \hat{R}_t^{(i)}$$

where $\hat{R}_t^{(i)}$ is the rewards-to-go of trajectory $\tau^{(i)}$ starting from time $t$.
Subtracting a Baseline
A second, complementary technique for variance reduction is to subtract a baseline from the rewards. The key insight is that we can subtract any function $b(s_t)$ that does not depend on the action from our reward signal without changing the expected value of the gradient.
Thus we can subtract a state-dependent baseline $b(s_t)$ from our rewards-to-go to yield an unbiased gradient estimator:

$$\nabla_\theta J(\theta) \approx \frac{1}{N} \sum_{i=1}^{N} \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta\bigl(a_t^{(i)} \mid s_t^{(i)}\bigr)\, \Bigl(\hat{R}_t^{(i)} - b\bigl(s_t^{(i)}\bigr)\Bigr)$$
Value Functions: $V^\pi(s)$ and $Q^\pi(s, a)$
The baseline $b(s_t)$ is still an arbitrary function. To make it more systematic and concrete, we turn to two fundamental functions from RL theory.
State Value Function: The state value function $V^\pi(s)$ is the expected return when the agent is in state $s$ and acts according to policy $\pi$:

$$V^\pi(s) = \mathbb{E}_\pi\left[\sum_{k=0}^{\infty} \gamma^k\, r_{t+k} \,\middle|\, s_t = s\right]$$

Intuitively, $V^\pi(s)$ tells us "How good is this state on average?" and is used as the baseline $b(s_t) = V^\pi(s_t)$.
Action Value Function (Q-function): The action value function $Q^\pi(s, a)$ is the expected return when starting in state $s$, taking action $a$, and then acting according to policy $\pi$:

$$Q^\pi(s, a) = \mathbb{E}_\pi\left[\sum_{k=0}^{\infty} \gamma^k\, r_{t+k} \,\middle|\, s_t = s,\, a_t = a\right]$$

Intuitively, $Q^\pi(s, a)$ tells us "How good is this specific action in this state?", and in RL the rewards-to-go $\hat{R}_t$ serves as a single-sample estimate of $Q^\pi(s_t, a_t)$.
In the LLM context:
- $V^\pi(s_t)$ estimates the expected reward for a given prompt + partial response, assuming the model continues generating according to its current policy.
- $Q^\pi(s_t, a_t)$ estimates the expected reward if, from the current prompt + partial response, the model generates a specific next token $a_t$ and then continues according to its policy.
Advantage Function
The advantage function measures how much better (or worse) a specific action is compared to the average action under the policy:

$$A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s)$$
The advantage function directly tells us: "How much better is this particular action compared to what we would typically do in this state?" This is precisely the signal we want for policy improvement. We want to increase the probability of actions with positive advantage and decrease the probability of actions with negative advantage.
From Umar Jamil's video:
In the LLM context consider a state where the prompt is "Where is Shanghai?" and the model has generated "Shanghai is". From this state:
- If the model samples the token "in" (leading toward "Shanghai is in China"), this action likely has positive advantage. This is because it is better than the average token the model might produce.
- If the model samples the token "delicious" (leading toward an incoherent response), this action likely has negative advantage. This is because it is worse than the average token the model might produce.
Advantage-Weighted Policy Gradient
Substituting the rewards-to-go (which estimates $Q^\pi$) and the value function $V^\pi(s_t)$ as the baseline, we get the following form of the policy gradient:

$$\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \bigl(Q^\pi(s_t, a_t) - V^\pi(s_t)\bigr)\right]$$

which can be written as:

$$\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, A^\pi(s_t, a_t)\right]$$

and for sample-based approximation:

$$\nabla_\theta J(\theta) \approx \frac{1}{N} \sum_{i=1}^{N} \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta\bigl(a_t^{(i)} \mid s_t^{(i)}\bigr)\, \hat{A}_t^{(i)} \tag{V.IV}$$

where $\hat{A}_t^{(i)}$ is an estimate of the advantage function at time $t$ in trajectory $\tau^{(i)}$. This is the form of the policy gradient often used.
In practice, $\hat{A}_t$ can be estimated as follows:
1. Learn a value function: Train a neural network $V_\psi$ (often called the "critic" or "value head") to approximate $V^\pi(s_t)$. In LLM fine-tuning, this is often a linear layer on top of the same transformer backbone used for the policy.
2. Estimate from samples: Given a trajectory, the rewards-to-go $\hat{R}_t$ provides an unbiased (but high-variance) estimate of $Q^\pi(s_t, a_t)$.
3. Compute advantage estimates:

$$\hat{A}_t = \hat{R}_t - V_\psi(s_t)$$
More sophisticated methods like Generalized Advantage Estimation (GAE) interpolate between high-variance, low-bias estimates and low-variance, high-bias estimates by using a weighted combination of multi-step returns. See the GAE paper for more details.
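For completeness, here is a minimal sketch of the GAE recursion from that paper, $\hat{A}_t = \delta_t + \gamma\lambda\, \hat{A}_{t+1}$ with $\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$; the tensor layout is an assumption on my part.

```python
import torch

def gae_advantages(rewards: torch.Tensor, values: torch.Tensor,
                   gamma: float = 0.99, lam: float = 0.95) -> torch.Tensor:
    """Generalized Advantage Estimation for a single trajectory (a sketch).

    rewards: (T,)   per-step rewards r_t
    values:  (T+1,) value estimates V(s_0..s_T); values[-1] is the bootstrap
                    value of the state after the last action (0 if terminal).
    """
    T = rewards.shape[0]
    advantages = torch.zeros_like(rewards)
    gae = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # TD residual
        gae = delta + gamma * lam * gae                          # backward recursion
        advantages[t] = gae
    return advantages
```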
VI: Importance Sampling and Off-Policy Policy Gradients
Note: In RL literature, "off-policy" typically refers to methods where the behavior policy (the one generating data) can be arbitrarily different from the target policy (the one being optimized), for example when transitions from policies thousands of updates old are reused. In this section, what we will call "off-policy" should more precisely be called "locally off-policy".
The advantage-weighted policy gradient (V.IV) requires trajectories sampled from the current policy $\pi_\theta$. This creates a fundamental inefficiency: after each gradient update, all previously collected trajectories become "stale", and we must discard them and sample new ones from the updated policy.
For LLMs, where each trajectory requires a full forward pass through a multi-billion-parameter model, this is prohibitively expensive, especially when we need many small gradient steps to train effectively.
We need a way to reuse the same trajectories for multiple gradient updates. Importance sampling provides the mathematical machinery to do exactly this!
Importance Sampling
Importance sampling is a technique for estimating expectations under one probability distribution using samples drawn from a different distribution. Consider an expectation under a distribution $p$:

$$\mathbb{E}_{x \sim p}\bigl[f(x)\bigr] = \int p(x)\, f(x)\, dx$$

We can rewrite this by multiplying and dividing by another distribution $q$ (with $q(x) > 0$ wherever $p(x) > 0$):

$$\mathbb{E}_{x \sim p}\bigl[f(x)\bigr] = \int q(x)\, \frac{p(x)}{q(x)}\, f(x)\, dx = \mathbb{E}_{x \sim q}\left[\frac{p(x)}{q(x)}\, f(x)\right]$$

The ratio $\frac{p(x)}{q(x)}$ is called the importance weight. This identity tells us:
We can now estimate the expectation under using samples from as long as we reweight each sample by the ratio of probabilities.
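A tiny numerical illustration (my own toy example): estimate $\mathbb{E}_p[x^2]$ for $p = \mathcal{N}(1, 1)$ using samples from $q = \mathcal{N}(0, 1)$, reweighting each sample by $p(x)/q(x)$. The true value is $2$.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_pdf(x):  # density of N(1, 1)
    return np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2 * np.pi)

def q_pdf(x):  # density of N(0, 1)
    return np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)

f = lambda x: x ** 2            # E_p[x^2] = Var + mean^2 = 2 for N(1, 1)

x = rng.normal(0.0, 1.0, size=200_000)   # samples from q
weights = p_pdf(x) / q_pdf(x)            # importance weights p(x) / q(x)
print(np.mean(weights * f(x)))           # ≈ 2.0
```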
Applying Importance Sampling to Policy Gradients
We can apply this technique to the policy gradient setting. The on-policy advantage-weighted gradient (V.IV), written per timestep, is:

$$\nabla_\theta J(\theta) = \mathbb{E}_{(s_t,\, a_t) \sim \pi_\theta}\bigl[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{A}_t\bigr]$$

To apply importance sampling, we work at the time-step level rather than the trajectory level (full-trajectory importance weights have extremely high variance). Using importance sampling with samples from the old policy $\pi_{\theta_{old}}$:

$$\nabla_\theta J(\theta) \approx \mathbb{E}_{(s_t,\, a_t) \sim \pi_{\theta_{old}}}\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{old}}(a_t \mid s_t)}\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{A}_t\right]$$

Now we apply the log-derivative identity $\pi_\theta(a_t \mid s_t)\, \nabla_\theta \log \pi_\theta(a_t \mid s_t) = \nabla_\theta \pi_\theta(a_t \mid s_t)$, which gives us a surrogate objective whose gradient equals this importance-weighted policy gradient. This importance-weighted surrogate objective, also known as the Conservative Policy Iteration (CPI) objective, is:

$$L^{CPI}(\theta) = \hat{\mathbb{E}}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{old}}(a_t \mid s_t)}\, \hat{A}_t\right]$$

We also define the probability ratio as:

$$r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{old}}(a_t \mid s_t)}$$

Note that $r_t(\theta_{old}) = 1$ by construction. Thus, the CPI objective can be written as:

$$L^{CPI}(\theta) = \hat{\mathbb{E}}_t\bigl[r_t(\theta)\, \hat{A}_t\bigr]$$

where $\hat{A}_t$ is the estimated advantage at timestep $t$, and $\hat{\mathbb{E}}_t$ denotes the empirical average over a batch of samples collected under $\pi_{\theta_{old}}$.
This objective has a clear interpretation:
- If $\hat{A}_t > 0$ (action better than average), we want to increase $r_t(\theta)$, i.e., make the new policy more likely to take this action.
- If $\hat{A}_t < 0$ (action worse than average), we want to decrease $r_t(\theta)$, i.e., make the new policy less likely to take this action.
The corresponding sample-based approximation is:

$$L^{CPI}(\theta) \approx \frac{1}{N} \sum_{i=1}^{N} \sum_{t=0}^{T} \frac{\pi_\theta\bigl(a_t^{(i)} \mid s_t^{(i)}\bigr)}{\pi_{\theta_{old}}\bigl(a_t^{(i)} \mid s_t^{(i)}\bigr)}\, \hat{A}_t^{(i)}$$
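As a sketch of how this looks per token in an LLM setting (shapes and masking are my assumptions), the ratio is computed from stored old log-probabilities, so only `logprobs_new` carries gradients:

```python
import torch

def cpi_objective(logprobs_new: torch.Tensor,
                  logprobs_old: torch.Tensor,
                  advantages: torch.Tensor,
                  mask: torch.Tensor) -> torch.Tensor:
    """Sample-based CPI surrogate E_t[r_t(theta) * A_t] (a sketch).

    logprobs_new: (B, T) log pi_theta(a_t|s_t), with gradients
    logprobs_old: (B, T) log pi_theta_old(a_t|s_t), stored at rollout time
    advantages:   (B, T) advantage estimates computed under theta_old
    mask:         (B, T) 1 for generated tokens, 0 otherwise
    """
    ratio = torch.exp(logprobs_new - logprobs_old)     # r_t(theta)
    surrogate = ratio * advantages
    return (surrogate * mask).sum() / mask.sum()       # masked mean over tokens
```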
Off-Policy Learning: Reusing Trajectories
The CPI objective enables off-policy learning: we can sample trajectories from $\pi_{\theta_{old}}$, store them, and then perform multiple gradient updates on $\theta$ using the same batch of data. The typical workflow becomes:
- Collect: Sample trajectories from the current policy $\pi_{\theta_{old}}$
- Compute: Calculate advantages $\hat{A}_t$ and log-probabilities $\log \pi_{\theta_{old}}(a_t \mid s_t)$
- Store: Save the trajectories along with their advantages and old log-probabilities
- Optimize: Perform multiple gradient ascent steps on $L^{CPI}(\theta)$ using mini-batches from the stored data
- Repeat: Set $\theta_{old} \leftarrow \theta$ and return to step 1
This dramatically improves sample efficiency. Instead of discarding trajectories after a single gradient step, we can extract multiple updates from each batch of expensive LLM rollouts.
The Instability Problem
While the CPI objective improves sample efficiency, unconstrained optimization of $L^{CPI}(\theta)$ is unstable. The core issue is that importance sampling becomes unreliable when $\pi_\theta$ drifts far from $\pi_{\theta_{old}}$:
- Extreme probability ratios: The ratio $r_t(\theta)$ can become arbitrarily large or small, destabilizing gradient estimates.
- Stale advantages: The estimates $\hat{A}_t$ were computed under $\pi_{\theta_{old}}$ and become inaccurate as $\pi_\theta$ diverges. The optimizer may exploit these stale estimates, making updates that appear beneficial but are actually harmful.
In practice, unconstrained maximization of $L^{CPI}(\theta)$ often leads to excessively large policy updates that cause catastrophic performance collapse.
LLM Context (from Umar Jamil): Suppose we have a trajectory where the model generated "Shanghai is in China" with high advantage. Unconstrained optimization might dramatically upweight "China" as the next token given "Shanghai is in"—but this could simultaneously cause unintended probability shifts elsewhere, perhaps making the model overly likely to say "China" in completely unrelated contexts, or disrupting the probability mass across the entire vocabulary in unpredictable ways.
We need a mechanism that keeps $\pi_\theta$ from deviating too far from $\pi_{\theta_{old}}$, keeping the ratio $r_t(\theta)$ close to 1, while still allowing meaningful policy improvement.
VII: Trust Region Policy Optimization (TRPO)
The CPI objective is attractive because it lets us reuse data via importance ratios, but unconstrained optimization is unstable. When $\pi_\theta$ drifts far from $\pi_{\theta_{old}}$, the probability ratios become extreme and the advantage estimates become stale and can be exploited by the optimizer.
The key insight of Trust Region Policy Optimization (TRPO) is that the surrogate objective is only a valid approximation to the true objective within a local neighborhood of $\theta_{old}$. The TRPO paper formalized this by proving that policy performance is guaranteed to improve as long as the KL divergence between consecutive policies remains bounded (see the TRPO paper for the formal proof).
TRPO converts this insight into a constrained optimization problem that ensures the policy update stays within a "trust region" where the surrogate objective remains reliable:

$$\max_\theta\ \hat{\mathbb{E}}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{old}}(a_t \mid s_t)}\, \hat{A}_t\right] \quad \text{subject to} \quad \hat{\mathbb{E}}_t\Bigl[D_{KL}\bigl(\pi_{\theta_{old}}(\cdot \mid s_t)\,\|\,\pi_\theta(\cdot \mid s_t)\bigr)\Bigr] \le \delta \tag{VII.I}$$
The hyperparameter $\delta$ defines the trust region size, i.e., the maximum allowed divergence between consecutive policies. This constraint ensures that $r_t(\theta)$ remains close to 1, keeping our importance-weighted estimates reliable.
Solving (VII.I) requires second-order optimization. TRPO approximates the objective linearly and the KL constraint quadratically (using the Fisher Information Matrix) and then solves the resulting problem via the conjugate gradient algorithm followed by a line search to ensure constraints are satisfied.
For large-scale LLM training, this approach is impractical:
- Computational overhead: Each policy update requires multiple conjugate gradient iterations and line search steps, significantly more expensive than standard gradient descent.
- Memory requirements: Computing Fisher-vector products adds substantial memory overhead for multi-billion-parameter models.
The theory behind TRPO actually suggests using a KL penalty rather than a hard constraint, which would be easier to implement and more computationally efficient.
However, choosing a penalty coefficient that works across different problems or even across different training stages is notoriously difficult. This motivates Proximal Policy Optimization (PPO): a first-order method that achieves TRPO's stability through a clipped surrogate objective rather than explicit constraints.
VIII: Proximal Policy Optimization (PPO)
Proximal Policy Optimization (PPO) achieves TRPO's stability guarantees using only first-order optimization. Instead of explicitly constraining the KL divergence, PPO modifies the objective function itself to discourage large policy updates through a clipping mechanism. It implicitly limits how far the policy can move, providing a "soft" trust region using only standard gradient descent.
Clipped Surrogate Objective
Recall the CPI objective and probability ratio from Section VI:

$$L^{CPI}(\theta) = \hat{\mathbb{E}}_t\bigl[r_t(\theta)\, \hat{A}_t\bigr], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{old}}(a_t \mid s_t)}$$

The problem with $L^{CPI}$ is that nothing prevents $r_t(\theta)$ from becoming arbitrarily large or small. PPO addresses this by clipping the probability ratio to stay within $[1 - \epsilon,\, 1 + \epsilon]$:

$$L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\Bigl[\min\bigl(r_t(\theta)\, \hat{A}_t,\ \text{clip}\bigl(r_t(\theta),\, 1 - \epsilon,\, 1 + \epsilon\bigr)\, \hat{A}_t\bigr)\Bigr] \tag{VIII.I}$$

where $\epsilon$ is a hyperparameter ($\epsilon = 0.2$ in the PPO paper) and the clip function is defined as:

$$\text{clip}(r,\, 1 - \epsilon,\, 1 + \epsilon) = \begin{cases} 1 - \epsilon & \text{if } r < 1 - \epsilon \\ r & \text{if } 1 - \epsilon \le r \le 1 + \epsilon \\ 1 + \epsilon & \text{if } r > 1 + \epsilon \end{cases}$$
The $\min$ operator in (VIII.I) is important. It ensures we take the more pessimistic (lower) estimate between the clipped and unclipped objectives. This creates different behavior depending on the sign of the advantage:
Case 1: Positive Advantage ($\hat{A}_t > 0$)
When an action is better than average, we want to increase its probability, which means increasing $r_t(\theta)$. The objective becomes:

$$\min\bigl(r_t(\theta),\, 1 + \epsilon\bigr)\, \hat{A}_t$$

- If $r_t(\theta) \le 1 + \epsilon$: The objective is $r_t(\theta)\, \hat{A}_t$, so gradient ascent increases $r_t(\theta)$
- If $r_t(\theta) > 1 + \epsilon$: The objective becomes the constant $(1 + \epsilon)\, \hat{A}_t$, whose gradient with respect to $\theta$ is zero
The clipping removes the incentive to increase $r_t(\theta)$ beyond $1 + \epsilon$.
Case 2: Negative Advantage ($\hat{A}_t < 0$)
When an action is worse than average, we want to decrease its probability, which means decreasing $r_t(\theta)$. Since $\hat{A}_t < 0$, multiplying by a smaller $r_t(\theta)$ makes the product less negative (larger). The objective becomes:

$$\max\bigl(r_t(\theta),\, 1 - \epsilon\bigr)\, \hat{A}_t$$

(The $\min$ over the two products becomes a $\max$ over the ratios when $\hat{A}_t$ is negative.)
- If $r_t(\theta) \ge 1 - \epsilon$: The objective is $r_t(\theta)\, \hat{A}_t$, so gradient ascent decreases $r_t(\theta)$
- If $r_t(\theta) < 1 - \epsilon$: The objective becomes the constant $(1 - \epsilon)\, \hat{A}_t$, whose gradient with respect to $\theta$ is zero
The clipping removes the incentive to decrease $r_t(\theta)$ below $1 - \epsilon$.
The takeaway here is that PPO provides a pessimistic lower bound on $L^{CPI}$. We ignore updates when they would make things "too good to be true."
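Here is a minimal sketch of the clipped surrogate, written as a loss to minimize (sign flipped relative to (VIII.I)); the shapes and masking follow the earlier sketches and are my assumptions:

```python
import torch

def ppo_clip_loss(logprobs_new: torch.Tensor,
                  logprobs_old: torch.Tensor,
                  advantages: torch.Tensor,
                  mask: torch.Tensor,
                  clip_eps: float = 0.2) -> torch.Tensor:
    """Negated clipped surrogate objective (VIII.I), averaged over tokens."""
    ratio = torch.exp(logprobs_new - logprobs_old)                       # r_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    per_token = torch.min(unclipped, clipped)                            # pessimistic bound
    return -(per_token * mask).sum() / mask.sum()
```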
LLM Context (from Umar Jamil Video): In language model fine-tuning, the policy is the probability the model assigns to token $a_t$ given the context (prompt + previously generated tokens). The probability ratio $r_t(\theta)$ measures how much more or less likely the updated model is to generate a particular token compared to the old policy $\pi_{\theta_{old}}$. Clipping removes any incentive to push a token's probability ratio outside $[1 - \epsilon,\, 1 + \epsilon]$ in a single update iteration, preventing the model from "overreacting" to high-advantage tokens.
PPO Objective
In practice, PPO combines the clipped policy objective with two additional terms:
1. Value Function Loss ($L^{VF}$): Recall from Section V that we need a value function to compute advantage estimates. The value function is trained to minimize the squared error between its predictions and the actual returns:

$$L^{VF}(\psi) = \hat{\mathbb{E}}_t\Bigl[\bigl(V_\psi(s_t) - \hat{R}_t\bigr)^2\Bigr]$$

where $\hat{R}_t$ is typically the discounted return-to-go. When the policy and value function share parameters (common in LLM fine-tuning, where both use the same transformer backbone), this loss is subtracted from the objective (hence the negative sign, since we maximize $L^{CLIP}$ but minimize $L^{VF}$).
2. Entropy Bonus ($S$): To encourage exploration and prevent premature convergence to deterministic policies, PPO adds an entropy bonus:

$$S[\pi_\theta](s_t) = -\sum_{a} \pi_\theta(a \mid s_t)\, \log \pi_\theta(a \mid s_t)$$

Combining the three terms, the PPO objective (to be maximized) is:

$$L^{PPO}(\theta) = \hat{\mathbb{E}}_t\Bigl[L_t^{CLIP}(\theta) - c_1\, L_t^{VF}(\psi) + c_2\, S[\pi_\theta](s_t)\Bigr]$$

Here, the coefficients $c_1$ and $c_2$ control the regularization strength.
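Putting the three terms together, a sketch of the combined loss might look like the following; the coefficient values `vf_coef=0.5` and `ent_coef=0.01` are commonly used defaults I chose for illustration, not values taken from the papers above.

```python
import torch

def ppo_total_loss(policy_loss: torch.Tensor,     # -L^CLIP, e.g. from ppo_clip_loss above
                   values: torch.Tensor,          # (B, T) value-head predictions V_psi(s_t)
                   returns: torch.Tensor,         # (B, T) discounted returns-to-go R_t
                   entropy: torch.Tensor,         # (B, T) per-token policy entropy
                   mask: torch.Tensor,
                   vf_coef: float = 0.5,
                   ent_coef: float = 0.01) -> torch.Tensor:
    """Combine the three PPO terms into one loss to minimize (a sketch).
    Minimizing this performs gradient ascent on L^CLIP - c1*L^VF + c2*S."""
    vf_loss = (((values - returns) ** 2) * mask).sum() / mask.sum()
    ent_bonus = (entropy * mask).sum() / mask.sum()
    return policy_loss + vf_coef * vf_loss - ent_coef * ent_bonus
```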
IX: Complete PPO Objective with KL Penalty
When fine-tuning an LLM with "vanilla" PPO, the policy learns to maximize rewards from the reward model. However, the reward model is an imperfect proxy for human preferences. It is a neural network trained on limited data that can be exploited. Without constraints, the policy may discover adversarial outputs that achieve high reward scores while producing text that:
- Degenerates into repetitive or nonsensical patterns that "fool" the reward model
- Drifts far from natural language, losing fluency and coherence
- Exploits spurious correlations learned by the reward model
This phenomenon is called reward hacking. The policy finds a way to "game" the reward model rather than genuinely improving response quality.
To prevent reward hacking, the InstructGPT paper adds a KL divergence penalty that regularizes the policy to stay close to a reference model (typically the SFT model before RL fine-tuning).
From Section VIII, the PPO objective (to be maximized via gradient ascent) consists of three terms:

$$L^{PPO}(\theta) = \hat{\mathbb{E}}_t\Bigl[L_t^{CLIP}(\theta) - c_1\, L_t^{VF}(\psi) + c_2\, S[\pi_\theta](s_t)\Bigr]$$
Now, we don't use raw reward model scores directly. Instead, we define a KL-penalized reward that regularizes the policy to stay close to a reference model $\pi_{ref}$:

$$\tilde{r}_t = r_t - \beta\, D_{KL}\bigl(\pi_\theta(\cdot \mid s_t)\,\|\,\pi_{ref}(\cdot \mid s_t)\bigr)$$

where:
- $r_t$ is the reward signal at timestep $t$
- $\beta$ is the KL penalty coefficient
- $\pi_{ref}$ is the frozen reference model
At each token position, the KL divergence simplifies to:

$$D_{KL}\bigl(\pi_\theta(\cdot \mid s_t)\,\|\,\pi_{ref}(\cdot \mid s_t)\bigr) = \mathbb{E}_{a \sim \pi_\theta(\cdot \mid s_t)}\left[\log \frac{\pi_\theta(a \mid s_t)}{\pi_{ref}(a \mid s_t)}\right]$$

In practice we estimate this expectation with the sampled token $a_t$, yielding:

$$\tilde{r}_t = r_t - \beta\, \log \frac{\pi_\theta(a_t \mid s_t)}{\pi_{ref}(a_t \mid s_t)}$$

Note that the reward model produces a single scalar $r_\phi(x, y)$ for the complete response. This score is assigned only at the final token $t = T$, while the KL penalty applies at every token.
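A sketch of how these per-token KL-penalized rewards might be assembled during rollout, assuming response-only, right-padded tensors; `beta=0.02` is an illustrative value I picked, not one from the InstructGPT paper.

```python
import torch

@torch.no_grad()  # rewards are computed during rollout; no gradients needed
def kl_penalized_rewards(logprobs: torch.Tensor,      # (B, T) log pi_theta(a_t|s_t)
                         ref_logprobs: torch.Tensor,  # (B, T) log pi_ref(a_t|s_t)
                         rm_scores: torch.Tensor,     # (B,)   reward-model score per response
                         mask: torch.Tensor,          # (B, T) 1 for response tokens (right-padded)
                         beta: float = 0.02) -> torch.Tensor:
    """Per-token rewards r~_t = -beta * (log pi - log pi_ref), with the scalar
    RM score added at the last response token of each sequence (a sketch)."""
    rewards = -beta * (logprobs - ref_logprobs) * mask
    last_idx = mask.sum(dim=1).long() - 1                        # index of final response token
    batch_idx = torch.arange(rewards.size(0), device=rewards.device)
    rewards[batch_idx, last_idx] += rm_scores
    return rewards
```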
The KL penalty serves two purposes:
- Prevents reward hacking: The policy cannot drift arbitrarily far from natural language
- Maintains fluency: Outputs remain similar in distribution to the well-trained SFT model
The KL penalty modifies the advantage estimates used in PPO through these modified per-token rewards. However, it is approximately equivalent (and often more efficient in implementation) to add the KL term directly to the objective. The PPO objective with KL penalty is:

$$\hat{\mathbb{E}}_t\Bigl[L_t^{PPO}(\theta)\Bigr] - \beta\, \hat{\mathbb{E}}_t\left[\log \frac{\pi_\theta(a_t \mid s_t)}{\pi_{ref}(a_t \mid s_t)}\right]$$

The first term is exactly what vanilla PPO optimizes using the clipped surrogate. The KL penalty term appears as a separate additive component that penalizes divergence from the reference model. Substituting the PPO clipped surrogate for the first term:

$$\hat{\mathbb{E}}_t\Bigl[\min\bigl(r_t(\theta)\, \hat{A}_t,\ \text{clip}\bigl(r_t(\theta),\, 1 - \epsilon,\, 1 + \epsilon\bigr)\, \hat{A}_t\bigr)\Bigr] - \beta\, \hat{\mathbb{E}}_t\left[\log \frac{\pi_\theta(a_t \mid s_t)}{\pi_{ref}(a_t \mid s_t)}\right]$$

Combining all components, the complete PPO objective with KL penalty (to be maximized) is:

$$L(\theta, \psi) = \hat{\mathbb{E}}_t\left[\min\bigl(r_t(\theta)\, \hat{A}_t,\ \text{clip}\bigl(r_t(\theta),\, 1 - \epsilon,\, 1 + \epsilon\bigr)\, \hat{A}_t\bigr) - c_1\bigl(V_\psi(s_t) - \hat{R}_t\bigr)^2 + c_2\, S[\pi_\theta](s_t) - \beta\, \log \frac{\pi_\theta(a_t \mid s_t)}{\pi_{ref}(a_t \mid s_t)}\right]$$
Here, each term serves a distinct purpose:
| Term | Role |
|---|---|
| Policy Objective | Improves the policy while preventing destructive updates via clipping |
| Value Loss | Trains the critic for accurate advantage estimation (subtracted to minimize) |
| Entropy Bonus | Encourages exploration and prevents premature convergence |
| KL Penalty | Prevents reward hacking and maintains language quality |
It is important to distinguish the two KL-related mechanisms in the complete loss. The PPO clipping mechanism acts as a short-term anchor that constrains how much the policy can change in a single update, while the KL penalty is a long-term anchor that constrains how far the policy can drift from its starting point across all of training.
Finally done...
And that's the full derivation! What I find satisfying is that every term in the final loss has a specific purpose. Each one exists because we ran into a specific problem along the way and needed to fix it. I will admit it was not easy to understand all the math and concepts behind the loss. I still do not fully understand every detail but I understand it far better than I did a few days ago.
I hope this was useful. If you spot any errors in derivation (which I'm sure there are) or have suggestions, feel free to reach out.
References
Video:
- Umar Jamil's video on RLHF and PPO: A comprehensive and must-watch video covering RLHF and PPO concepts.
Papers:
- Proximal Policy Optimization Algorithms: The foundational PPO paper introducing the clipped surrogate objective.
- Training language models to follow instructions with human feedback: The InstructGPT paper demonstrating PPO with KL penalty to mitigate reward hacking in LLM fine-tuning.
- Trust Region Policy Optimization: The TRPO paper that motivates the trust region constraints used in PPO.
- High-Dimensional Continuous Control Using Generalized Advantage Estimation: GAE paper introducing the exponentially-weighted advantage estimator for variance reduction in policy gradients.