A Tutorial on Linear Function Approximators for Dynamic Programming and Reinforcement Learning (Foundations and Trends® in Machine Learning)

ISBN-13: 9781601987600
ISBN-10: 1601987609
Authors: Thomas J. Walsh, Alborz Geramifard, Stefanie Tellex
Publication date: 2013
Publisher: Now Publishers
Format: Paperback 92 pages

Summary

A Tutorial on Linear Function Approximators for Dynamic Programming and Reinforcement Learning (Foundations and Trends® in Machine Learning) (ISBN-13: 9781601987600; ISBN-10: 1601987609), by Thomas J. Walsh, Alborz Geramifard, and Stefanie Tellex, was published by Now Publishers in 2013. With an overall rating of 4.4 stars, it is a notable title among AI & Machine Learning (Introductory & Beginning, Programming, Computer Science) books. You can purchase or rent A Tutorial on Linear Function Approximators for Dynamic Programming and Reinforcement Learning (Paperback) from BooksRun, along with many other new and used AI & Machine Learning books and textbooks. If you're looking to sell your copy, our current buyback offer is $0.30.

Description

A Markov Decision Process (MDP) is a natural framework for formulating sequential decision-making problems under uncertainty. In recent years, researchers have greatly advanced algorithms for learning and acting in MDPs. This book reviews such algorithms, beginning with well-known dynamic programming methods for solving MDPs such as policy iteration and value iteration, then describes approximate dynamic programming methods such as trajectory-based value iteration, and finally moves to reinforcement learning methods such as Q-Learning, SARSA, and least-squares policy iteration. It describes the algorithms in a unified framework, giving pseudocode together with memory and iteration complexity analysis for each. Empirical evaluations of these techniques, with four representations across four domains, provide insight into how these algorithms perform with various feature sets in terms of running time and performance. This tutorial provides practical guidance for researchers seeking to extend DP and RL techniques to larger domains through linear value function approximation. The practical algorithms and empirical successes outlined also form a guide for practitioners trying to weigh computational costs, accuracy requirements, and representational concerns. Decision making in large domains will always be challenging, but with the tools presented here this challenge is not insurmountable.
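To give a flavor of the dynamic programming methods the description mentions, here is a minimal value iteration sketch on a toy tabular MDP. The two-state MDP, its transition lists, and rewards are invented for illustration; they are not an example from the book, and the book's focus is on the linear function approximation extensions of such tabular methods.

```python
# Minimal value iteration sketch on a toy 2-state MDP.
# P[s][a] is a list of (probability, next_state) pairs; R[s][a] is the
# immediate reward. The MDP here is a made-up illustration, not from the book.

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Iterate the Bellman optimality backup until values stop changing."""
    n = len(P)
    V = [0.0] * n
    while True:
        V_new = [
            max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                for a in range(len(P[s])))
            for s in range(n)
        ]
        if max(abs(x - y) for x, y in zip(V, V_new)) < tol:
            return V_new
        V = V_new

# Toy example: in state 0 you may "stay" (reward 0) or "go" to state 1
# (reward 1); state 1 is absorbing with reward 0.
P = [[[(1.0, 0)], [(1.0, 1)]],   # state 0: actions "stay", "go"
     [[(1.0, 1)]]]               # state 1: single self-loop action
R = [[0.0, 1.0], [0.0]]
V = value_iteration(P, R)
print(V)  # state 0 is worth 1.0 (take "go" once); state 1 is worth 0.0
```

Policy iteration and the reinforcement learning methods covered in the book (Q-Learning, SARSA, least-squares policy iteration) build on this same Bellman backup, replacing the exact tabular values with learned or linearly approximated ones.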
