Keynote Speakers

1. Kevin Ellis - Cornell University

Title: Non-canonical Tasks for Synthesis and Learning

Abstract: The canonical synthesis task is to construct a correct program from a specification of intended program behavior. Learning methods have been very valuable at extending this paradigm, for instance to LLM-powered code completion in IDEs. This talk, however, discusses several tasks beyond the purview of canonical synthesis. These include learning to predict whether a synthesizer should be trusted; using synthesis to generate programming problems (instead of using synthesis to solve them); using synthesis to solve tasks in graphics and visual perception; and, time permitting, tasks in psychology and computational linguistics.

Bio: Kevin Ellis is an Assistant Professor of Computer Science at Cornell University. He works in AI and programming languages. Previously he received his PhD in Cognitive Science from MIT, and was a Research Scientist at Common Sense Machines.

2. Vincent Hellendoorn - Google

Title: Beyond Code Completion: Towards the Next Generation of AI for SE

Abstract: In the span of a decade, the field of software engineering (SE) has undergone a series of transformations with the rise of artificial intelligence (AI). Early work that used machine learning for SE tasks explored a wide range of applications, either with one-off models or by fine-tuning modestly-sized pretrained models. While these efforts showed promising signs of life, they mostly failed to reach production-level quality and generalized poorly to new domains. In recent years, large language models (LLMs) have marked a significant leap forward in model quality. Being generative models, LLMs have primarily seen adoption in code completion and synthesis tasks, and in conversational Q&A. This leaves out many parts of both everyday programming and the SE process. Developers spend much time debugging, maintaining, and improving code; and programming tasks themselves make up just a small part of a software engineer’s work. In this talk, I first discuss promising trends from recent work that uses LLMs for classical SE tasks beyond code completion, such as bug detection and repair, code migration, and supporting code review. Then, I highlight work that shows promise for tasks well beyond the capabilities of previous generations of models, such as generating entire pull requests (PRs) or test suites. Finally, I consider what may be in store in the next five years, discussing the progress towards next-generation AI that can autonomously manage extended parts of the software development process. I also describe the associated challenges, both technical and in terms of the potential impact on software development teams and on software in general. The talk concludes with a call to action to the AI for SE community, along with a series of new and important research directions for the coming years.

Bio: Dr. Hellendoorn conducts research at the intersection of machine learning and software engineering. His work leverages AI to power novel tools that support the many facets of the software development process, and employs empirical SE methods to identify and address shortcomings that impact model success in practice. His work has been published at major conferences in both the AI and SE fields, including ICSE, FSE, ASE, ICLR, and NeurIPS. He has worked as a research consultant for GitHub Next and Microsoft Research, and as an assistant professor at Carnegie Mellon University. He is currently a research scientist at Google.