no code implementations • 6 May 2024 • Evan King, Haoxiang Yu, Sahil Vartak, Jenna Jacob, Sangsu Lee, Christine Julien
We propose an end-to-end framework that leverages formal modeling, automated training data synthesis, and generative language models to create devices that are both capable and thoughtful in the presence of unconstrained user goals and inquiries.
no code implementations • 27 May 2023 • Haoxiang Yu, Jingyi An, Evan King, Edison Thomaz, Christine Julien
From an individual's perspective alone, it can be difficult to differentiate between such activities, as they may appear very similar even though they are markedly different.
no code implementations • 16 May 2023 • Evan King, Haoxiang Yu, Sangsu Lee, Christine Julien
We implement and evaluate Sasha in a hands-on user study, showing the capabilities and limitations of LLM-driven smart homes when faced with unconstrained user-generated scenarios.
1 code implementation • 24 Mar 2023 • Evan King, Haoxiang Yu, Sangsu Lee, Christine Julien
We first explore the feasibility of a system that places an LLM at the center of command inference and action planning, showing that LLMs have the capacity to infer intent behind vague, context-dependent commands like "get ready for a party" and respond with concrete, machine-parseable instructions that can be used to control smart devices.
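The command-inference idea described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual prompt or schema: the prompt text, the JSON action format, and the canned reply standing in for a real LLM call are all assumptions made for demonstration.

```python
import json

def build_prompt(command, devices):
    """Construct a prompt asking an LLM to translate a vague, context-dependent
    command into a concrete, machine-parseable list of device actions.
    (Hypothetical prompt wording, for illustration only.)"""
    return (
        "You control a smart home with these devices: "
        + ", ".join(devices)
        + '. Respond ONLY with a JSON list of {"device": ..., "action": ...} '
        + f'objects that fulfill the user command: "{command}"'
    )

def parse_actions(llm_response, devices):
    """Parse the LLM's reply and keep only actions that target known devices,
    so the output stays machine-parseable and safe to execute."""
    actions = json.loads(llm_response)
    return [a for a in actions if a.get("device") in devices]

# Canned reply standing in for a real LLM call, for illustration only.
example_reply = (
    '[{"device": "lights", "action": "dim to 30%"},'
    ' {"device": "speaker", "action": "play party playlist"}]'
)
actions = parse_actions(example_reply, ["lights", "speaker", "thermostat"])
print(actions)
```

Filtering the parsed reply against a known device list is one simple way to guard against an LLM hallucinating devices that do not exist in the home.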