TD3 builds on the DDPG algorithm for reinforcement learning, adding three modifications aimed at reducing overestimation bias in the value function: clipped double Q-learning, delayed updates of the policy and target networks, and target policy smoothing. Target policy smoothing resembles a SARSA-style update: by adding noise to the target action, it assigns higher value to actions that are robust to small perturbations, which makes the value estimate safer.
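A minimal NumPy sketch of how these pieces combine when computing the critic's TD target (function and parameter names are illustrative, not from the paper's codebase; the delayed-update schedule is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

def td3_target(q1, q2, policy, next_states, rewards, dones,
               gamma=0.99, noise_std=0.2, noise_clip=0.5, act_limit=1.0):
    """Compute TD3 critic targets.

    q1, q2 : the two target critics, callables (states, actions) -> values
    policy : the target actor, callable states -> actions
    """
    # Target policy smoothing: perturb the target action with clipped
    # noise, so the bootstrapped value is robust to small action changes.
    a = policy(next_states)
    noise = np.clip(noise_std * rng.standard_normal(a.shape),
                    -noise_clip, noise_clip)
    a = np.clip(a + noise, -act_limit, act_limit)
    # Clipped double Q-learning: bootstrap from the minimum of the two
    # critics to counteract overestimation bias.
    q = np.minimum(q1(next_states, a), q2(next_states, a))
    return rewards + gamma * (1.0 - dones) * q
```

In the full algorithm, the actor and the target networks are updated only once every few critic updates (the "delayed" part), which lets the value estimate settle before the policy chases it.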
Source: Addressing Function Approximation Error in Actor-Critic Methods
Task | Papers | Share |
---|---|---|
Reinforcement Learning (RL) | 59 | 40.69% |
Continuous Control | 26 | 17.93% |
OpenAI Gym | 8 | 5.52% |
Decision Making | 7 | 4.83% |
Autonomous Driving | 5 | 3.45% |
Offline RL | 4 | 2.76% |
Meta-Learning | 3 | 2.07% |
Benchmarking | 3 | 2.07% |
D4RL | 2 | 1.38% |