1 code implementation • Findings (EMNLP) 2021 • Jarana Manotumruksa, Jeff Dalton, Edgar Meij, Emine Yilmaz
While state-of-the-art Dialogue State Tracking (DST) models show promising results, all of them rely on a traditional cross-entropy loss function during the training process, which may not be optimal for improving the joint goal accuracy.
1 code implementation • ICML 2020 • Qiang Zhang, Aldo Lipani, Omer Kirnap, Emine Yilmaz
A common method to do this is the Hawkes process.
no code implementations • EMNLP (intexsempar) 2020 • Priyanka Sen, Emine Yilmaz
Collecting training data for semantic parsing is a time-consuming and expensive task.
no code implementations • 13 May 2024 • Hossein A. Rahmani, Nick Craswell, Emine Yilmaz, Bhaskar Mitra, Daniel Campos
Previous studies demonstrate that LLMs have the potential to generate synthetic relevance judgments for use in the evaluation of IR systems.
no code implementations • 2 Feb 2024 • Hossein A. Rahmani, Xi Wang, Mohammad Aliannejadi, Mohammadmehdi Naghiaei, Emine Yilmaz
Clarifying questions are an integral component of modern information retrieval systems, directly impacting user satisfaction and overall system performance.
1 code implementation • 23 Jan 2024 • Fanghua Ye, Mingming Yang, Jianhui Pang, Longyue Wang, Derek F. Wong, Emine Yilmaz, Shuming Shi, Zhaopeng Tu
The proliferation of open-source Large Language Models (LLMs) from various institutions has highlighted the urgent need for comprehensive evaluation methods.
no code implementations • 30 Dec 2023 • Yuxiang Qiu, Karim Djemili, Denis Elezi, Aaneel Shalman, María Pérez-Ortiz, Emine Yilmaz, John Shawe-Taylor, Sahan Bulathwela
With the advancement and utility of Artificial Intelligence (AI), personalising education to a global population could be a cornerstone of new educational systems in the future.
1 code implementation • 25 Oct 2023 • Xi Wang, Hossein A. Rahmani, Jiqun Liu, Emine Yilmaz
Conversational Recommendation System (CRS) is a rapidly growing research area that has gained significant attention alongside advancements in language modelling techniques.
1 code implementation • 15 Oct 2023 • Fanghua Ye, Meng Fang, Shenghui Li, Emine Yilmaz
Furthermore, we propose distilling the rewriting capabilities of LLMs into smaller models to reduce rewriting latency.
1 code implementation • 26 May 2023 • Yue Feng, Yunlong Jiao, Animesh Prasad, Nikolaos Aletras, Emine Yilmaz, Gabriella Kazai
Further, it employs a fulfillment representation layer for learning how many task attributes have been fulfilled in the dialogue, and an importance predictor component for calculating the importance of task attributes.
1 code implementation • 25 May 2023 • Hossein A. Rahmani, Xi Wang, Yue Feng, Qiang Zhang, Emine Yilmaz, Aldo Lipani
The ability to understand a user's underlying needs is critical for conversational systems, especially with limited input from users in a conversation.
1 code implementation • 23 May 2023 • Yue Feng, Hossein A. Rahmani, Aldo Lipani, Emine Yilmaz
Task-oriented dialogue systems aim at providing users with task-specific services.
2 code implementations • 22 May 2023 • Zhengxiang Shi, Francesco Tonolini, Nikolaos Aletras, Emine Yilmaz, Gabriella Kazai, Yunlong Jiao
Semi-supervised learning (SSL) is a popular setting aiming to effectively utilize unlabelled data to improve model performance in downstream natural language processing (NLP) tasks.
1 code implementation • 21 May 2023 • Fanghua Ye, Zhiyuan Hu, Emine Yilmaz
It assumes that the performance of a dialogue system can be measured by user satisfaction and uses an estimator to simulate users.
1 code implementation • 13 May 2023 • Sahan Bulathwela, Hamze Muse, Emine Yilmaz
We develop EduQG, a novel educational question generation model built by adapting a large language model.
no code implementations • 23 Apr 2023 • Debasis Ganguly, Emine Yilmaz
However, in this paper we argue that the annotation effort can be substantially reduced if the depth of the pool is made a variable quantity for each query, the rationale being that the number of documents relevant to an information need can vary widely across queries.
no code implementations • 24 Jan 2023 • Procheta Sen, Xi Wang, Ruiqing Xu, Emine Yilmaz
Search engines and conversational assistants are commonly used to help users complete everyday tasks such as booking travel, cooking, etc.
no code implementations • 7 Dec 2022 • Hamze Muse, Sahan Bulathwela, Emine Yilmaz
With the boom of digital educational materials and scalable e-learning systems, the potential for realising AI-assisted personalised learning has skyrocketed.
1 code implementation • 22 Oct 2022 • Fanghua Ye, Xi Wang, Jie Huang, Shenghui Li, Samuel Stern, Emine Yilmaz
Experimental results demonstrate that all three schemes can achieve competitive performance.
no code implementations • 21 Oct 2022 • Giorgio Giannone, Serhii Havrylov, Jordan Massiah, Emine Yilmaz, Yunlong Jiao
Advances in deep learning theory have revealed how average generalization relies on superficial patterns in data.
no code implementations • 19 Oct 2022 • Gizem Gezici, Aldo Lipani, Yucel Saygin, Emine Yilmaz
However, search engine results do not necessarily cover all the viewpoints of a search query topic, and they can be biased towards a specific view. This is because results are ranked by relevance, which is calculated using many features and sophisticated algorithms in which search neutrality is not necessarily the focal point.
no code implementations • 22 Jun 2022 • Sahan Bulathwela, Meghana Verma, Maria Perez-Ortiz, Emine Yilmaz, John Shawe-Taylor
This work explores how population-based engagement prediction can address cold-start at scale in large learning resource collections.
no code implementations • 19 Jun 2022 • Peter Hayes, Mingtian Zhang, Raza Habib, Jordan Burgess, Emine Yilmaz, David Barber
We introduce a label model that can learn to aggregate weak supervision sources differently for different datapoints and takes into consideration the performance of the end-model during training.
1 code implementation • 17 May 2022 • Rikaz Rameez, Hossein A. Rahmani, Emine Yilmaz
We collect a dataset of 330k tweets to train ViralBERT and validate the efficacy of our model using baselines from current studies in this field.
no code implementations • ACL 2022 • Yue Feng, Aldo Lipani, Fanghua Ye, Qiang Zhang, Emine Yilmaz
Existing approaches that have considered such relations generally fall short in: (1) fusing prior slot-domain membership relations and dialogue-aware dynamic slot relations explicitly, and (2) generalizing to unseen domains.
1 code implementation • Findings (ACL) 2022 • Fanghua Ye, Yue Feng, Emine Yilmaz
In this paper, instead of improving the annotation quality further, we propose a general framework, named ASSIST (lAbel noiSe-robuSt dIalogue State Tracking), to train DST models robustly from noisy labels.
no code implementations • 10 Jan 2022 • Maria Perez-Ortiz, Sahan Bulathwela, Claire Dormann, Meghana Verma, Stefan Kreitmayer, Richard Noss, John Shawe-Taylor, Yvonne Rogers, Emine Yilmaz
The user questionnaire revealed that participants found the Content Flow Bar helpful and enjoyable for finding relevant information in videos.
no code implementations • 8 Dec 2021 • Sahan Bulathwela, María Pérez-Ortiz, Emine Yilmaz, John Shawe-Taylor
In informational recommenders, many challenges arise from the need to handle the semantic and hierarchical structure between knowledge areas.
no code implementations • NeurIPS 2021 • Qiang Zhang, Jinyuan Fang, Zaiqiao Meng, Shangsong Liang, Emine Yilmaz
Conventional meta-learning considers a set of tasks from a stationary distribution.
no code implementations • 17 Oct 2021 • Aldo Lipani, Florina Piroi, Emine Yilmaz
Information availability affects people's behavior and perception of the world.
1 code implementation • ICLR 2022 • Fangyu Liu, Yunlong Jiao, Jordan Massiah, Emine Yilmaz, Serhii Havrylov
Predominantly, two formulations are used for sentence-pair tasks: bi-encoders and cross-encoders.
Ranked #7 on Semantic Textual Similarity on STS16
no code implementations • 24 Sep 2021 • Emine Yilmaz, Peter Hayes, Raza Habib, Jordan Burgess, David Barber
Labelling data is a major practical bottleneck in training and testing classifiers.
no code implementations • 3 Sep 2021 • Sahan Bulathwela, Maria Perez-Ortiz, Erik Novak, Emine Yilmaz, John Shawe-Taylor
One of the main challenges in advancing this research direction is the scarcity of large, publicly available datasets.
no code implementations • 11 Aug 2021 • Ömer Kırnap, Fernando Diaz, Asia Biega, Michael Ekstrand, Ben Carterette, Emine Yilmaz
There is increasing attention to evaluating the fairness of search system ranking decisions.
no code implementations • 9 May 2021 • Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, Jimmy Lin
Evaluation efforts such as TREC, CLEF, NTCIR and FIRE, alongside public leaderboards such as MS MARCO, are intended to encourage research and track our progress, addressing big questions in our field.
no code implementations • 19 Apr 2021 • Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, Ellen M. Voorhees, Ian Soboroff
The TREC Deep Learning (DL) Track studies ad hoc search in the large data regime, meaning that a large set of human-labeled training data is available.
1 code implementation • SIGDIAL (ACL) 2022 • Fanghua Ye, Jarana Manotumruksa, Emine Yilmaz
The annotations in the training set remain unchanged (same as MultiWOZ 2.1) to elicit robust and noise-resilient model training.
no code implementations • 25 Feb 2021 • Jimmy Lin, Daniel Campos, Nick Craswell, Bhaskar Mitra, Emine Yilmaz
Leaderboards are a ubiquitous part of modern research in applied machine learning.
1 code implementation • 15 Feb 2021 • Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos
This is the second year of the TREC Deep Learning Track, with the goal of studying ad hoc ranking in the large training data regime.
1 code implementation • 22 Jan 2021 • Fanghua Ye, Jarana Manotumruksa, Qiang Zhang, Shenghui Li, Emine Yilmaz
Then a stacked slot self-attention is applied on these features to learn the correlations among slots.
1 code implementation • 2 Nov 2020 • Sahan Bulathwela, Maria Perez-Ortiz, Emine Yilmaz, John Shawe-Taylor
This paper introduces VLEngagement, a novel dataset that consists of content-based and video-specific features extracted from publicly available scientific video lectures and several metrics related to user engagement.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Fanghua Ye, Jarana Manotumruksa, Emine Yilmaz
Semantic hashing is a powerful paradigm for representing texts as compact binary hash codes.
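As background for what such binary codes look like (not the paper's model), a minimal sign-random-projection sketch: project a text's feature vector onto random hyperplanes and keep one bit per projection sign, so that nearby vectors tend to agree on more bits. All names and parameter values here are illustrative assumptions.

```python
import random

def binary_hash(vec, n_bits=16, seed=0):
    """Sign-random-projection hashing (an LSH-style sketch, not the
    paper's learned model): each bit is the sign of the dot product
    with one random Gaussian hyperplane."""
    rng = random.Random(seed)  # fixed seed -> same planes for all texts
    bits = []
    for _ in range(n_bits):
        plane = [rng.gauss(0, 1) for _ in range(len(vec))]
        proj = sum(w * x for w, x in zip(plane, vec))
        bits.append(1 if proj >= 0 else 0)
    return bits

def hamming(a, b):
    # Number of differing bits; small distance suggests similar texts.
    return sum(x != y for x, y in zip(a, b))

# Similar feature vectors tend to share more bits than dissimilar ones.
doc_a = [1.0, 0.9, 0.0, 0.1]
doc_b = [0.9, 1.0, 0.1, 0.0]   # near-duplicate of doc_a
doc_c = [0.0, 0.1, 1.0, 0.9]   # different topic
print(hamming(binary_hash(doc_a), binary_hash(doc_b)))
print(hamming(binary_hash(doc_a), binary_hash(doc_c)))
```

Compact binary codes like these allow similarity search with cheap bitwise operations instead of dense vector comparisons.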
no code implementations • 9 Jun 2020 • Nick Craswell, Daniel Campos, Bhaskar Mitra, Emine Yilmaz, Bodo Billerbeck
Users of Web search engines reveal their information needs through queries and clicks, making click logs a useful asset for information retrieval.
1 code implementation • 31 May 2020 • Sahan Bulathwela, María Pérez-Ortiz, Aldo Lipani, Emine Yilmaz, John Shawe-Taylor
The explosion of Open Educational Resources (OERs) in recent years creates demand for scalable, automatic approaches to process and evaluate OERs, with the end goal of identifying and recommending the most suitable educational materials for learners.
no code implementations • 28 Apr 2020 • Emine Yilmaz, Nick Craswell, Bhaskar Mitra, Daniel Campos
As deep learning based models are increasingly being used for information retrieval (IR), a major challenge is to ensure the availability of test collections for measuring their quality.
2 code implementations • 17 Mar 2020 • Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, Ellen M. Voorhees
The Deep Learning Track is a new track for TREC 2019, with the goal of studying ad hoc ranking in a large data regime.
1 code implementation • 3 Dec 2019 • Sahan Bulathwela, Maria Perez-Ortiz, Emine Yilmaz, John Shawe-Taylor
One of the most ambitious use cases of computer-assisted learning is to build a recommendation system for lifelong learning.
1 code implementation • 21 Nov 2019 • Sahan Bulathwela, Maria Perez-Ortiz, Emine Yilmaz, John Shawe-Taylor
The recent advances in computer-assisted learning systems and the availability of open educational resources today promise a pathway to providing cost-efficient, high-quality education to large masses of learners.
1 code implementation • 12 Oct 2019 • Niklas Stoehr, Emine Yilmaz, Marc Brockschmidt, Jan Stuehmer
While a wide range of interpretable generative procedures for graphs exist, matching observed graph topologies with such procedures and choices for their parameters remains an open problem.
1 code implementation • 17 Jul 2019 • Qiang Zhang, Aldo Lipani, Omer Kirnap, Emine Yilmaz
The proposed method adapts self-attention to fit the intensity function of Hawkes processes.
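As background on what the self-attention module is fitting, a minimal sketch of the classic exponential-kernel Hawkes intensity (the baseline form the paper generalizes; the parameter values below are illustrative assumptions, not from the paper):

```python
import math

def hawkes_intensity(t, event_times, mu=0.2, alpha=0.8, beta=1.0):
    """Classic exponential-kernel Hawkes intensity:
    lambda(t) = mu + sum over past events t_i < t of
    alpha * exp(-beta * (t - t_i)).
    Each past event temporarily raises the rate of future events,
    and the excitation decays exponentially over time."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in event_times if ti < t)

events = [1.0, 1.1, 1.2]              # a burst of three events
print(hawkes_intensity(1.3, events))  # high: just after the burst
print(hawkes_intensity(10.0, events)) # decayed back toward mu
```

A self-attentive variant replaces this fixed parametric kernel with attention weights over past events, letting the influence of each event on the intensity be learned from data.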
no code implementations • 8 Jul 2019 • Bhaskar Mitra, Corby Rosset, David Hawking, Nick Craswell, Fernando Diaz, Emine Yilmaz
Deep neural IR models, in contrast, compare the whole query to the document and are, therefore, typically employed only for late stage re-ranking.
no code implementations • 30 Dec 2018 • Qiang Zhang, Shangsong Liang, Emine Yilmaz
This paper proposes a variational self-attention model (VSAM) that employs variational inference to derive self-attention.
no code implementations • 28 Nov 2018 • Sebastin Santy, Wazeer Zulfikar, Rishabh Mehrotra, Emine Yilmaz
We consider the problem of understanding real world tasks depicted in visual images.
no code implementations • 6 Jun 2017 • Rishabh Mehrotra, Emine Yilmaz
As a result, a significant amount of research has been devoted to extracting proper representations of tasks, both to enable search systems to help users complete their tasks and to provide end users with better query suggestions, recommendations, satisfaction prediction, and improved task-based personalization.