Search Results for author: Marek Kadlčík

Found 6 papers, 4 papers with code

Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems

1 code implementation • 24 May 2023 • Marek Kadlčík, Michal Štefánik, Ondřej Sotolář, Vlastimil Martinek

We address this deficiency by creating Calc-X, a collection of datasets that demonstrates the appropriate use of a calculator in reasoning chains.

Tasks: Arithmetic Reasoning, GSM8K, +1
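The snippet above describes calculator-augmented reasoning chains. A minimal sketch of how such a chain might be executed follows; the `<gadget>`/`<output>` tag format mirrors the paper's general description, but the regex driver and the toy `run_calculator` helper are assumptions for illustration, not the authors' released interface.

```python
import re

# Hypothetical driver in the spirit of Calc-X/Calcformers: the model emits a
# reasoning chain with marked calculator calls, and the driver evaluates each
# call and splices the result back into the chain.
CALL_RE = re.compile(r'<gadget id="calculator">(.*?)</gadget>')

def run_calculator(expression: str) -> str:
    """Toy calculator for plain arithmetic expressions (illustrative only)."""
    # A real system would use a safe symbolic backend (e.g. sympy), not eval.
    return str(eval(expression, {"__builtins__": {}}, {}))

def resolve_chain(chain: str) -> str:
    """Replace each calculator call in a reasoning chain with its result."""
    def substitute(match: re.Match) -> str:
        result = run_calculator(match.group(1))
        return f"{match.group(0)}<output>{result}</output>"
    return CALL_RE.sub(substitute, chain)

if __name__ == "__main__":
    chain = (
        "Each box holds 12 apples and there are 7 boxes, so we compute "
        '<gadget id="calculator">12 * 7</gadget>. The answer is 84.'
    )
    # Prints the chain with <output>84</output> inserted after the call.
    print(resolve_chain(chain))
```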

Concept-aware Training Improves In-context Learning Ability of Language Models

no code implementations • 23 May 2023 • Michal Štefánik, Marek Kadlčík

Many recent language models (LMs) of the Transformer family exhibit so-called in-context learning (ICL): the ability to modulate their behaviour according to a task described in natural-language input.

Tasks: In-Context Learning
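For readers unfamiliar with ICL, a minimal sketch of "modulating behaviour via a task described in the input" using a generic Hugging Face text-generation pipeline; the gpt2 checkpoint is a placeholder, and small models often fail at such tasks, which is part of what motivates concept-aware training.

```python
from transformers import pipeline

# In-context learning demo: the task is specified only through demonstrations
# in the prompt; the model's weights are never updated.
generator = pipeline("text-generation", model="gpt2")  # placeholder checkpoint

prompt = (
    "Review: The movie was wonderful. Sentiment: positive\n"
    "Review: I hated every minute. Sentiment: negative\n"
    "Review: A delightful surprise. Sentiment:"
)
# A sufficiently capable LM infers the classification task from the two
# demonstrations alone and completes with "positive".
print(generator(prompt, max_new_tokens=3)[0]["generated_text"])
```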

A Whisper transformer for audio captioning trained with synthetic captions and transfer learning

1 code implementation • 15 May 2023 • Marek Kadlčík, Adam Hájek, Jürgen Kieslich, Radosław Winiecki

The field of audio captioning has seen significant advances in recent years, driven by the availability of large-scale audio datasets and progress in deep learning techniques.

Tasks: Audio Captioning, Transfer Learning, +1

Resources and Few-shot Learners for In-context Learning in Slavic Languages

1 code implementation • 4 Apr 2023 • Michal Štefánik, Marek Kadlčík, Piotr Gramacki, Petr Sojka

Despite rapid recent progress in creating accurate and compact in-context learners, most work focuses on in-context learning (ICL) for tasks in English.

Tasks: In-Context Learning

Can In-context Learners Learn a Reasoning Concept from Demonstrations?

no code implementations • 3 Dec 2022 • Michal Štefánik, Marek Kadlčík

We find that most recent in-context learners cannot consistently benefit from the demonstrated concepts, irrespective of model size.

Tasks: Few-Shot Learning, In-Context Learning

Soft Alignment Objectives for Robust Adaptation of Language Generation

1 code implementation • 29 Nov 2022 • Michal Štefánik, Marek Kadlčík, Petr Sojka

Domain adaptation allows generative language models to address specific flaws caused by domain shift in their target application.

Tasks: Domain Adaptation, Machine Translation, +4
