Adaptive Temporal Difference Learning With Linear Function Approximation

This paper revisits the temporal difference (TD) learning algorithm for policy evaluation tasks in reinforcement learning. Typically, the performance of TD(0) and TD(λ) is very sensitive to the choice of stepsizes. Oftentimes, TD(0) suffers from slow convergence. Motivated by the tight link between the TD(0) learning algorithm and stochastic gradient methods, we develop a provably convergent adaptive projected variant of the TD(0) learning algorithm with linear function approximation, which we term AdaTD(0). In contrast to TD(0), AdaTD(0) is robust to, or less sensitive to, the choice of stepsizes. Analytically, we establish that to reach an ϵ accuracy, the number of iterations needed is [Formula: see text] in the general case, where ρ represents the speed at which the underlying Markov chain converges to the stationary distribution. This implies that the iteration complexity of AdaTD(0) is no worse than that of TD(0) in the worst case. When the stochastic semi-gradients are sparse, we provide a theoretical acceleration result for AdaTD(0). Going beyond TD(0), we develop an adaptive variant of TD(λ), which we refer to as AdaTD(λ). Empirically, we evaluate the performance of AdaTD(0) and AdaTD(λ) on several standard reinforcement learning tasks, and the results demonstrate the effectiveness of our new approaches.
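To make the adaptive idea concrete, the following is a minimal sketch of TD(0) policy evaluation with linear function approximation using an AdaGrad-style per-coordinate stepsize and an l2-ball projection. The callables reset, step, and featurize, the projection radius, and the specific adaptive rule are illustrative assumptions, not the paper's exact AdaTD(0) update.

import numpy as np

def adaptive_td0(reset, step, featurize, dim, gamma=0.99, alpha=1.0,
                 eps=1e-8, radius=10.0, num_steps=10_000):
    """TD(0) policy evaluation with linear function approximation and an
    AdaGrad-style per-coordinate stepsize (hypothetical sketch, not the
    paper's exact AdaTD(0) rule).

    reset() -> initial state; step(s) -> (reward, next_state, done) under
    the fixed policy being evaluated; featurize(s) -> feature vector phi(s).
    """
    theta = np.zeros(dim)      # weights of the linear value estimate V(s) = phi(s) @ theta
    grad_sq = np.zeros(dim)    # running sum of squared semi-gradient coordinates
    s = reset()
    for _ in range(num_steps):
        r, s_next, done = step(s)
        phi = featurize(s)
        v_next = 0.0 if done else featurize(s_next) @ theta
        delta = r + gamma * v_next - phi @ theta        # TD(0) error
        g = delta * phi                                  # stochastic semi-gradient direction
        grad_sq += g ** 2
        theta += alpha * g / (np.sqrt(grad_sq) + eps)    # per-coordinate adaptive step
        # project onto an l2 ball of the given radius (projected TD variant)
        norm = np.linalg.norm(theta)
        if norm > radius:
            theta *= radius / norm
        s = reset() if done else s_next
    return theta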

Detailed Description

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence. - 1979. - 44(2022), 12, 14 Dec., pages 8812-8824
First author: Sun, Tao (Author)
Other authors: Shen, Han; Chen, Tianyi; Li, Dongsheng
Format: Online article
Language: English
Published: 2022
Access to the parent work: IEEE transactions on pattern analysis and machine intelligence
Keywords: Journal Article; Research Support, Non-U.S. Gov't
LEADER 01000caa a22002652c 4500
001 NLM331891573
003 DE-627
005 20250302141134.0
007 cr uuu---uuuuu
008 231225s2022 xx |||||o 00| ||eng c
024 7 |a 10.1109/TPAMI.2021.3119645  |2 doi 
028 5 2 |a pubmed25n1106.xml 
035 |a (DE-627)NLM331891573 
035 |a (NLM)34648431 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Sun, Tao  |e verfasserin  |4 aut 
245 1 0 |a Adaptive Temporal Difference Learning With Linear Function Approximation 
264 1 |c 2022 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Date Completed 09.11.2022 
500 |a Date Revised 19.11.2022 
500 |a published: Print-Electronic 
500 |a Citation Status MEDLINE 
520 |a This paper revisits the temporal difference (TD) learning algorithm for policy evaluation tasks in reinforcement learning. Typically, the performance of TD(0) and TD(λ) is very sensitive to the choice of stepsizes. Oftentimes, TD(0) suffers from slow convergence. Motivated by the tight link between the TD(0) learning algorithm and stochastic gradient methods, we develop a provably convergent adaptive projected variant of the TD(0) learning algorithm with linear function approximation, which we term AdaTD(0). In contrast to TD(0), AdaTD(0) is robust to, or less sensitive to, the choice of stepsizes. Analytically, we establish that to reach an ϵ accuracy, the number of iterations needed is [Formula: see text] in the general case, where ρ represents the speed at which the underlying Markov chain converges to the stationary distribution. This implies that the iteration complexity of AdaTD(0) is no worse than that of TD(0) in the worst case. When the stochastic semi-gradients are sparse, we provide a theoretical acceleration result for AdaTD(0). Going beyond TD(0), we develop an adaptive variant of TD(λ), which we refer to as AdaTD(λ). Empirically, we evaluate the performance of AdaTD(0) and AdaTD(λ) on several standard reinforcement learning tasks, and the results demonstrate the effectiveness of our new approaches 
650 4 |a Journal Article 
650 4 |a Research Support, Non-U.S. Gov't 
700 1 |a Shen, Han  |e verfasserin  |4 aut 
700 1 |a Chen, Tianyi  |e verfasserin  |4 aut 
700 1 |a Li, Dongsheng  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t IEEE transactions on pattern analysis and machine intelligence  |d 1979  |g 44(2022), 12 vom: 14. Dez., Seite 8812-8824  |w (DE-627)NLM098212257  |x 1939-3539  |7 nnas 
773 1 8 |g volume:44  |g year:2022  |g number:12  |g day:14  |g month:12  |g pages:8812-8824 
856 4 0 |u http://dx.doi.org/10.1109/TPAMI.2021.3119645  |3 Volltext 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |d 44  |j 2022  |e 12  |b 14  |c 12  |h 8812-8824