
Publications

[1] Tor Lattimore and Csaba Szepesvári.
Bandit Algorithms.
Cambridge University Press (draft), 2018.

[2] Tor Lattimore.
Refining the confidence level for optimistic bandit strategies.
Journal of Machine Learning Research, 19(20):1–32, 2018.

[3] Laurent Orseau, Levi Lelis, Tor Lattimore, and Theophane Weber.
Single-agent policy tree search with guarantees.
In Proceedings of the 31st Conference on Neural Information Processing Systems, 2018.

[4] Tor Lattimore, Branislav Kveton, Shuai Li, and Csaba Szepesvári.
TopRank: A practical algorithm for online stochastic ranking.
In Proceedings of the 31st Conference on Neural Information Processing Systems, 2018.

[5] Branislav Kveton, Chang Li, Tor Lattimore, Ilya Markov, Maarten de Rijke, Csaba
Szepesvári, and Masrour Zoghi.
BubbleRank: Safe online learning to rerank.
arXiv preprint, 2018.

[6] Tor Lattimore and Csaba Szepesvári.
Cleaning up the neighbourhood: A full classification of finite
adversarial partial monitoring.
Technical report, 2018.

[7] Ruitong Huang, Tor Lattimore, András György, and Csaba Szepesvári.
Following the leader and fast rates in online linear prediction: Curved constraint sets and other regularities.
Journal of Machine Learning Research, 18(145):1–31, 2017.

[8] Joel Veness, Tor Lattimore, Avishkar Bhoopchand, Agnieszka Grabska-Barwinska,
Christopher Mattern, and Peter Toth.
Online learning with gated linear networks.
Technical report, 2017.

[9] Christoph Dann, Tor Lattimore, and Emma Brunskill.
Unifying PAC and regret: Uniform PAC bounds for episodic reinforcement learning.
In Proceedings of the 30th Conference on Neural Information Processing Systems, 2017.

[10] Laurent Orseau, Tor Lattimore, and Shane Legg.
Soft-Bayes: Prod for mixtures of experts with log-loss.
In Proceedings of the 28th International Conference on Algorithmic Learning Theory, 2017.

[11] Tor Lattimore.
A scale free algorithm for stochastic bandits with bounded kurtosis.
In Proceedings of the 30th Conference on Neural Information Processing Systems, 2017.

[12] Tor Lattimore and Csaba Szepesvári.
The end of optimism? An asymptotic analysis of finite-armed linear bandits.
In Aarti Singh and Jerry Zhu, editors, Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 728–737, Fort Lauderdale, FL, USA, 20–22 Apr 2017. PMLR.

[13] Tor Lattimore.
Regret analysis of the anytime optimally confident UCB algorithm.
Technical report, 2016.

[14] Sébastien Gerchinovitz and Tor Lattimore.
Refined lower bounds for adversarial bandits.
In Proceedings of the 29th Conference on Neural Information Processing Systems (NIPS), 2016.

[15] Finnian Lattimore, Tor Lattimore, and Mark Reid.
Causal bandits: Learning good interventions via causal inference.
In Proceedings of the 29th Conference on Neural Information Processing Systems (NIPS), 2016.

[16] Ruitong Huang, Tor Lattimore, András György, and Csaba Szepesvári.
Following the leader and fast rates in linear prediction: Curved constraint sets and other regularities.
In Proceedings of the 29th Conference on Neural Information Processing Systems (NIPS), 2016.

[17] Aurélien Garivier, Emilie Kaufmann, and Tor Lattimore.
On explore-then-commit strategies.
In Proceedings of the 29th Conference on Neural Information Processing Systems (NIPS), 2016.

[18] Jan Leike, Tor Lattimore, Laurent Orseau, and Marcus Hutter.
Thompson sampling is asymptotically optimal in general environments.
In Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI), 2016.

[19] Tor Lattimore.
Regret analysis of the finite-horizon Gittins index strategy for
multi-armed bandits.
In Proceedings of Conference On Learning Theory (COLT), 2016.

[20] Yifan Wu, Roshan Shariff, Tor Lattimore, and Csaba Szepesvári.
Conservative bandits.
In Proceedings of the International Conference on Machine Learning (ICML), 2016.

[21] Tor Lattimore.
The Pareto regret frontier for bandits.
In Proceedings of the 28th Conference on Neural Information Processing Systems (NIPS), 2015.

[22] Tor Lattimore.
Optimally confident UCB: Improved regret for finite-armed bandits.
Technical report, 2015.

[23] Tor Lattimore, Koby Crammer, and Csaba Szepesvári.
Linear multi-resource allocation with semi-bandit feedback.
In Proceedings of the 28th Conference on Neural Information Processing Systems (NIPS), 2015.

[24] Tor Lattimore and Marcus Hutter.
On Martin-Löf (non-)convergence of Solomonoff's universal mixture.
Theoretical Computer Science, 2014.

[25] Tor Lattimore and Marcus Hutter.
Asymptotics of continuous Bayes for non-i.i.d. sources.
Technical report, 2014.

[26] Tor Lattimore and Marcus Hutter.
Bayesian reinforcement learning with exploration.
In Proceedings of the 25th Conference on Algorithmic Learning Theory (ALT), 2014.

[27] Tor Lattimore and Rémi Munos.
Bounded regret for finite-armed structured bandits.
In Proceedings of the 27th Conference on Neural Information Processing Systems (NIPS), 2014.

[28] Tor Lattimore, András György, and Csaba Szepesvári.
On learning the optimal waiting time.
In Proceedings of the 25th Conference on Algorithmic Learning Theory (ALT), 2014.

[29] Tor Lattimore, Koby Crammer, and Csaba Szepesvári.
Optimal resource allocation with semi-bandit feedback.
In Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence (UAI), 2014.

[30] Tom Everitt, Tor Lattimore, and Marcus Hutter.
Free lunch for optimisation under the universal distribution.
In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), 2014.

[31] Tor Lattimore and Marcus Hutter.
General time consistent discounting.
Theoretical Computer Science, 519:140–154, 2014.

[32] Tor Lattimore, Marcus Hutter, and Peter Sunehag.
The sample-complexity of general reinforcement learning.
In Proceedings of the 30th International Conference on Machine Learning, 2013.

[33] Tor Lattimore, Marcus Hutter, and Peter Sunehag.
Concentration and confidence for discrete Bayesian sequence predictors.
In Sanjay Jain, Rémi Munos, Frank Stephan, and Thomas Zeugmann, editors, Proceedings of the 24th International Conference on Algorithmic Learning Theory, pages 324–338. Springer, 2013.

[34] Tor Lattimore and Marcus Hutter.
PAC bounds for discounted MDPs.
In Nader Bshouty, Gilles Stoltz, Nicolas Vayatis, and Thomas Zeugmann, editors, Proceedings of the 23rd International Conference on Algorithmic Learning Theory, volume 7568 of Lecture Notes in Computer Science, pages 320–334. Springer Berlin / Heidelberg, 2012.

[35] Laurent Orseau, Tor Lattimore, and Marcus Hutter.
Universal knowledge-seeking agents for stochastic environments.
In Sanjay Jain, Rémi Munos, Frank Stephan, and Thomas Zeugmann, editors, Proceedings of the 24th International Conference on Algorithmic Learning Theory, volume 8139 of Lecture Notes in Computer Science, pages 158–172. Springer Berlin Heidelberg, 2013.

[36] Tor Lattimore and Marcus Hutter.
On Martin-Löf convergence of Solomonoff’s mixture.
In T-H. Hubert Chan, Lap Chi Lau, and Luca Trevisan, editors, Theory and Applications of Models of Computation, volume 7876 of Lecture Notes in Computer Science, pages 212–223. Springer Berlin Heidelberg, 2013.

[37] Tor Lattimore, Marcus Hutter, and Vaibhav Gavane.
Universal prediction of selected bits.
In Jyrki Kivinen, Csaba Szepesvári, Esko Ukkonen, and Thomas Zeugmann, editors, Proceedings of the 22nd International Conference on Algorithmic Learning Theory, volume 6925 of Lecture Notes in Computer Science, pages 262–276. Springer Berlin / Heidelberg, 2011.

[38] Tor Lattimore and Marcus Hutter.
No free lunch versus Occam’s razor in supervised learning.
In David Dowe, editor, Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence, volume 7070 of Lecture Notes in Computer Science, pages 223–235. Springer Berlin Heidelberg, 2013.

[39] Tor Lattimore and Marcus Hutter.
Asymptotically optimal agents.
In Jyrki Kivinen, Csaba Szepesvári, Esko Ukkonen, and Thomas Zeugmann, editors, Proceedings of the 22nd International Conference on Algorithmic Learning Theory, volume 6925 of Lecture Notes in Computer Science, pages 368–382. Springer Berlin / Heidelberg, 2011.

[40] Tor Lattimore and Marcus Hutter.
Time consistent discounting.
In Jyrki Kivinen, Csaba Szepesvári, Esko Ukkonen, and Thomas Zeugmann, editors, Proceedings of the 22nd International Conference on Algorithmic Learning Theory, volume 6925 of Lecture Notes in Computer Science, pages 383–397. Springer Berlin / Heidelberg, 2011.