HDSI Tutorial | Large Language Models

Science and Engineering Complex, 150 Western Ave., Boston, MA 02134

LLMs in 5 Formulas

One year after the release of GPT-4, large language models (LLMs) remain the most exciting topic in AI. While much about their qualitative capabilities remains poorly understood, there are some areas where we can quantitatively measure, bound, and forecast their behavior. This tutorial will introduce the topic of LLMs through five key formulas used in their study. The goal is to provide a survey of the area by focusing on diverse efforts to formalize the remarkable nature of observed phenomena.

Instructor:

  • Sasha Rush, Associate Professor, Cornell Tech, Cornell University

Alexander “Sasha” Rush is an Associate Professor at Cornell Tech and a researcher at Hugging Face. His research focuses on the study of language models, with applications in controllable text generation, efficient inference, summarization, and information extraction. In addition to research, he has written several popular open-source software projects supporting NLP research, programming for deep learning, and virtual academic conferences. His projects have received paper and demo awards at major NLP, visualization, and hardware conferences, as well as an NSF CAREER Award and a Sloan Fellowship. He tweets at @srush_nlp.


Session Recording