no code implementations • 3 Feb 2024 • Claudio Spiess, David Gros, Kunal Suresh Pai, Michael Pradel, Md Rafiqul Islam Rabin, Amin Alipour, Susmit Jha, Prem Devanbu, Toufique Ahmed
Our contributions will lead to better-calibrated decision-making in the current use of code generated by language models, and offer a framework for future research to further improve calibration methods for generative models in Software Engineering.
no code implementations • 20 Jun 2023 • Toufique Ahmed, Dian Yu, Chengxuan Huang, Cathy Wang, Prem Devanbu, Kenji Sagae
To understand the extent to which language models can learn some form of meaning, we investigate their ability to capture semantics of code beyond superficial frequency and co-occurrence.
no code implementations • 3 Oct 2020 • David Gros, Hariharan Sezhiyan, Prem Devanbu, Zhou Yu
We carefully examine the underlying assumption here: that the task of generating comments sufficiently resembles the task of translating between natural languages, and so similar models and evaluation metrics could be used.
1 code implementation • 17 Sep 2020 • Prem Devanbu, Matthew Dwyer, Sebastian Elbaum, Michael Lowry, Kevin Moran, Denys Poshyvanyk, Baishakhi Ray, Rishabh Singh, Xiangyu Zhang
The intent of this report is to serve as a potential roadmap to guide future work that sits at the intersection of SE & DL.
no code implementations • 8 Oct 2019 • Casey Casalnuovo, Kevin Lee, Hulin Wang, Prem Devanbu, Emily Morgan
Natural code is known to be very repetitive (much more so than natural language corpora); furthermore, this repetitiveness persists, even after accounting for the simpler syntax of code.
no code implementations • 6 Jun 2018 • Casey Casalnuovo, Kenji Sagae, Prem Devanbu
Code corpora, as observed in large software systems, are now known to be far more repetitive and predictable than natural language corpora.