
Terminal-Bench: Benchmarking Agents on Hard, Realistic Tasks in Command Line Interfaces

Abstract

AI agents may soon become capable of autonomously completing valuable, long-horizon tasks in diverse domains. Current benchmarks either do not measure real-world tasks or are not sufficiently difficult to meaningfully evaluate frontier models. To address this gap, we present Terminal-Bench 1.5: a carefully curated, hard benchmark composed of 74 tasks in computer terminal environments inspired by problems from real workflows. Each task features a unique environment, a human-written solution, and comprehensive tests for verification. We show that frontier models and agents score less than 50% on the benchmark and conduct an error analysis to identify areas for model and agent improvement. We publish the dataset and evaluation harness to assist developers and researchers in future work.
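As a purely illustrative sketch of how such verification can work (not the published Terminal-Bench harness or task format), a task might pair a containerized terminal environment with a pytest-style script that checks the state the agent is supposed to leave behind; the file paths, endpoint, and checks below are hypothetical.

```python
# Hypothetical verification tests for a terminal-environment task.
# Paths, filenames, and the health endpoint are illustrative assumptions,
# not the actual Terminal-Bench task format.
import subprocess
from pathlib import Path


def test_report_exists():
    # The (hypothetical) task asks the agent to produce /app/report.csv.
    assert Path("/app/report.csv").exists(), "agent did not create the expected report"


def test_report_contents():
    # The report should have a header row and at least one data row.
    lines = Path("/app/report.csv").read_text().strip().splitlines()
    assert lines[0] == "name,count"
    assert len(lines) >= 2


def test_service_responds():
    # The agent was also asked to start a service; check it answers locally.
    result = subprocess.run(
        ["curl", "-sf", "http://localhost:8080/health"],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0
```

In this sketch, the tests would run inside the task's container after the agent finishes, so a task passes only if the agent leaves the environment in the required end state rather than merely emitting plausible-looking output.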

Conference Paper

International Conference on Learning Representations (ICLR)

Publication Date

2026

Last Modified

2026-01-29