Nathan Lichtlé

PhD student in deep learning, UC Berkeley


About me

8/22—now
(exp. 8/25)
I'm a PhD candidate in EECS at UC Berkeley, focusing on deep learning and reinforcement learning. I'm advised by Alexandre Bayen at Berkeley AI Research (BAIR), and I will graduate in Summer 2025.

I've worked on RL for autonomous driving and traffic optimization, culminating in CIRCLES, the largest traffic-smoothing field test to date, in which I designed and trained RL agents to control 100 autonomous vehicles in live highway traffic during rush hour. More broadly, I've applied AI to control multi-agent systems, built fast data-driven simulators for RL, and integrated PDE-inspired models with neural networks for accurate long-term sequential traffic forecasting. I am also interested in the potential of language models in control systems.
10/21—12/24
Joint PhD with Amaury Hayat at École nationale des ponts et chaussées, Institut Polytechnique de Paris, in the CERMICS research center, where I worked on reinforcement learning, control, and partial differential equations.
9/17—8/21
Previously, I completed my B.S. and M.S. (MVA Master) at École Normale Supérieure (ENS) Paris-Saclay, in the CS department.

Research

Sequential traffic prediction: CNNs for accurate long-horizon autoregressive traffic forecasting on hyperbolic PDEs and highway traffic data. [code and paper coming soon]
CIRCLES: 100-car field test on I-24, using RL to improve traffic flow. [website] [blog] [code] [paper]
Largest-ever comparison of deep RL algorithms for imperfect-information games. [cool demo] [code] [paper]
Nocturne: driving benchmark with human-like partial observability. Written in C++ for speed with a Python interface for RL. [code] [paper] [vids]
Stabilizing a differential-equation model with no previously known control using deep RL, then extracting an explicit control law from the trained network. [paper] [code coming soon]
Stabilization of the viscous Saint-Venant equations using Lyapunov function theory.
FLOW: deep RL framework for mixed autonomy in microsimulations. The goal is to minimize fuel consumption for all vehicles by controlling a small proportion of autonomous vehicles. [code] [demo] [paper]
An extension of FLOW: using multi-agent RL with a small proportion of autonomous vehicles to maximize throughput in a road bottleneck scenario. [paper] [code]

Publications

Traffic Control via Connected and Automated Vehicles (CAVs): An Open-Road Field Experiment with 100 CAVs. IEEE CSM 2025
Jonathan W. Lee*, Han Wang*, Kathy Jang*, Nathan Lichtlé*, Amaury Hayat*, Matthew Bunting*, Arwa Alanqary, William Barbour, Zhe Fu, Xiaoqian Gong, et al.
On Supervised vs. Unsupervised Learning for First Order Hyperbolic Nonlinear PDEs. NeuS 2025 under review
Alexi Canesse*, Zhe Fu*, Nathan Lichtlé*, Hossein Nick Zinat Matin*, Zihe Liu, Maria Laura Delle Monache, and Alexandre M. Bayen.
Reevaluating Policy Gradient Methods for Imperfect-Information Games. ICML 2025 under review
Max Rudolph*, Nathan Lichtlé*, Sobhan Mohammadpour*, Alexandre Bayen, J Zico Kolter, Amy Zhang, Gabriele Farina, Eugene Vinitsky, and Samuel Sokota.
Reinforcement Learning-Based Oscillation Dampening: Scaling Up Single-Agent Reinforcement Learning Algorithms to a 100-Autonomous-Vehicle Highway Field Operational Test. IEEE CSM 2025
Kathy Jang*, Nathan Lichtlé*, Eugene Vinitsky, Adit Shah, Matthew Bunting, Matthew Nice, Benedetto Piccoli, Benjamin Seibold, Daniel B. Work, Maria Laura Delle Monache, et al.
From Sim to Real: A Pipeline for Training and Deploying Traffic Smoothing Cruise Controllers. T-RO 2024
Nathan Lichtlé*, Eugene Vinitsky*, Matthew Nice*, Rahul Bhadani, Matthew Bunting, Fangyu Wu, Benedetto Piccoli, Benjamin Seibold, Daniel B. Work, Jonathan W. Lee, et al.
A Novel Approach to Feedback Control with Deep Reinforcement Learning. SCL 2024 under revision
Kala Agbo Bidi*, Jean-Michel Coron*, Amaury Hayat*, and Nathan Lichtlé*.
Optimizing Mixed Autonomy Traffic Flow with Decentralized Autonomous Vehicles and Multi-Agent Reinforcement Learning. ACM TCPS 2023
Eugene Vinitsky*, Nathan Lichtlé*, Kanaad Parvate, and Alexandre M. Bayen.
Traffic Smoothing Controllers for Autonomous Vehicles Using Deep Reinforcement Learning and Real-World Trajectory Data. ITSC 2023
Nathan Lichtlé, Kathy Jang, Adit Shah, Eugene Vinitsky, Jonathan W. Lee, and Alexandre M. Bayen.
Reinforcement Learning in Control Theory: A New Approach to Mathematical Problem Solving. NeurIPS (Math-AI) 2023
Kala Agbo Bidi*, Jean-Michel Coron*, Amaury Hayat*, and Nathan Lichtlé*.
Nocturne: a scalable driving benchmark for bringing multi-agent learning one step closer to the real world. NeurIPS 2022
Eugene Vinitsky*, Nathan Lichtlé*, Xiaomeng Yang*, Brandon Amos, and Jakob Foerster.
Deploying Traffic Smoothing Cruise Controllers Learned from Trajectory Data. ICRA 2022
Nathan Lichtlé*, Eugene Vinitsky*, Matthew Nice*, Benjamin Seibold, Dan Work, and Alexandre M. Bayen.
The I-24 Trajectory Dataset. Dataset 2021
Matthew Nice, Nathan Lichtlé, Gracie Gumm, Michael Roman, Eugene Vinitsky, Safwan Elmadani, Matt Bunting, Rahul Bhadani, Kathy Jang, George Gunter, et al.
Fuel Consumption Reduction of Multi-Lane Road Networks using Decentralized Mixed-Autonomy Control. ITSC 2021
Nathan Lichtlé, Eugene Vinitsky, George Gunter, Akash Velu, and Alexandre M. Bayen.
Integrated Framework of Vehicle Dynamics, Instabilities, Energy Models, and Sparse Flow Smoothing Controllers. DI-CPS 2021
Jonathan W. Lee, George Gunter, Rabie Ramadan, Sulaiman Almatrudi, Paige Arnold, John Aquino, William Barbour, Rahul Bhadani, Joy Carpio, ..., Nathan Lichtlé, et al.
Beliefs and Level-k Reasoning in Traffic. NeurIPS (EmeCom) 2020
Eugene Vinitsky, Angelos Filos, Nathan Lichtlé, Kevin Lin, Nicholas Liu, Alexandre Bayen, Anca Dragan, Rowan McAllister, and Jakob Foerster.
Optimizing Traffic Bottleneck Throughput using Cooperative, Decentralized Autonomous Vehicles. NeurIPS (Deep RL) 2020
Eugene Vinitsky, Nathan Lichtlé, Kanaad Parvate, and Alexandre M. Bayen.
Inter-Level Cooperation in Hierarchical Reinforcement Learning. Preprint 2019
Abdul Rahman Kreidieh, Samyak Parajuli, Nathan Lichtlé, Yiling You, Rayyan Nasr, and Alexandre M. Bayen.

*equal first author

Talks & Podcasts

Podcast guest, Ingenius, 2023 (in French)
Invited speaker, Traffic and Autonomy Conference (Maiori, Italy), 2023
NeurIPS 2022, New Orleans
ICRA 2022

Posters

NeurIPS Math-AI 2023
ITSC 2021

Misc projects

Tactics: a turn-based tactical combat game environment for RL inside PufferLib. [code]
Play the demo at puffer.ai/ocean.html.

Replicating stop-and-go waves on small car robots, then smoothing out the oscillations with a single controlled car.