Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems

Bubeck, Sébastien; Cesa-Bianchi, Nicolò

ISBN 10: 1601986262 ISBN 13: 9781601986269
Published by Now Pub, 2012
New Soft cover

From GreatBookPrices, Columbia, MD, U.S.A.

AbeBooks Seller since 6 April 2009

This book is no longer available.

About this Item

Seller Inventory # 19193988-n

Synopsis:

A multi-armed bandit problem - or, simply, a bandit problem - is a sequential allocation problem defined by a set of actions. At each time step, a unit resource is allocated to an action and some observable payoff is obtained. The goal is to maximize the total payoff obtained in a sequence of allocations. The name bandit refers to the colloquial term for a slot machine (a "one-armed bandit" in American slang). In a casino, a sequential allocation problem is obtained when the player is facing many slot machines at once (a "multi-armed bandit"), and must repeatedly choose where to insert the next coin.

Multi-armed bandit problems are the most basic examples of sequential decision problems with an exploration-exploitation trade-off. This is the balance between staying with the option that gave highest payoffs in the past and exploring new options that might give higher payoffs in the future. Although the study of bandit problems dates back to the 1930s, exploration-exploitation trade-offs arise in several modern applications, such as ad placement, website optimization, and packet routing.

Mathematically, a multi-armed bandit is defined by the payoff process associated with each option. In this book, the focus is on two extreme cases in which the analysis of regret is particularly simple and elegant: independent and identically distributed payoffs and adversarial payoffs. Besides the basic setting of finitely many actions, it also analyzes some of the most important variants and extensions, such as the contextual bandit model. This monograph is an ideal reference for students and researchers with an interest in bandit problems.
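The exploration-exploitation trade-off described in the synopsis can be made concrete with a small sketch. The following is a minimal UCB1-style strategy for the i.i.d. (stochastic) case with Bernoulli payoffs; the arm means, horizon, and function name are illustrative assumptions, not taken from the monograph itself:

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Minimal UCB1 sketch for i.i.d. Bernoulli payoffs (illustrative only).

    At each step, pull the arm maximizing empirical mean + exploration bonus;
    returns the total payoff and the pull count for each arm."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k       # number of times each arm has been pulled
    sums = [0.0] * k       # cumulative payoff observed per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1    # initialization: pull each arm once
        else:
            # empirical mean plus an optimism bonus that shrinks as an
            # arm is pulled more often -- this drives exploration
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total, counts

# Hypothetical instance: three slot machines with different payoff rates.
total, counts = ucb1([0.2, 0.5, 0.8], horizon=2000)
# (Pseudo-)regret relative to always playing the best arm:
regret = 0.8 * 2000 - total
```

Over a long enough horizon, the strategy concentrates its pulls on the best arm while still sampling the others occasionally, which is exactly the regret behavior the monograph analyzes.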

About the Author: Nicolò Cesa-Bianchi is Professor of Computer Science at the University of Milan, Italy. His research interests include learning theory, pattern analysis, and worst-case analysis of algorithms. He is an action editor of the Machine Learning journal.


Bibliographic Details

Title: Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems
Publisher: Now Pub
Publication Date: 2012
Binding: Soft cover
Condition: New
