1 code implementation • 24 May 2023 • Marek Kadlčík, Michal Štefánik, Ondřej Sotolář, Vlastimil Martinek
We address this deficiency by creating Calc-X, a collection of datasets that demonstrates the appropriate use of a calculator in reasoning chains.
no code implementations • 23 May 2023 • Michal Štefánik, Marek Kadlčík
Many recent language models (LMs) of the Transformer family exhibit so-called in-context learning (ICL): the ability to modulate their function according to a task described in a natural language input.
1 code implementation • 15 May 2023 • Marek Kadlčík, Adam Hájek, Jürgen Kieslich, Radosław Winiecki
The field of audio captioning has seen significant progress in recent years, driven by the availability of large-scale audio datasets and advances in deep learning techniques.
1 code implementation • 4 Apr 2023 • Michal Štefánik, Marek Kadlčík, Piotr Gramacki, Petr Sojka
Despite the rapid recent progress in creating accurate and compact in-context learners, most recent work focuses on in-context learning (ICL) for tasks in English.
no code implementations • 3 Dec 2022 • Michal Štefánik, Marek Kadlčík
We find that most recent in-context learners cannot consistently benefit from the demonstrated concepts, irrespective of model size.
1 code implementation • 29 Nov 2022 • Michal Štefánik, Marek Kadlčík, Petr Sojka
Domain adaptation allows generative language models to address specific flaws caused by the domain shift of their application.