no code implementations • EACL 2021 • Ryuji Kano, Takumi Takahashi, Toru Nishino, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma
We conduct experiments on three summarization models (one pretrained and two non-pretrained) and verify that our method improves performance.
no code implementations • COLING 2020 • Motoki Taniguchi, Yoshihiro Ueda, Tomoki Taniguchi, Tomoko Ohkuma
To assess the difficulty of DA recognition on our corpus, we evaluate several models, including a pre-trained contextual representation model, as our baselines.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Toru Nishino, Ryota Ozaki, Yohei Momoki, Tomoki Taniguchi, Ryuji Kano, Norihisa Nakano, Yuki Tagawa, Motoki Taniguchi, Tomoko Ohkuma, Keigo Nakamura
We propose a novel reinforcement learning method with a reconstructor that improves the clinical correctness of generated reports, enabling the data-to-text module to be trained on a highly imbalanced dataset.
no code implementations • WS 2019 • Yuki Tagawa, Motoki Taniguchi, Yasuhide Miura, Tomoki Taniguchi, Tomoko Ohkuma, Takayuki Yamamoto, Keiichi Nemoto
Knowledge graphs (KGs) are widely used in various NLP tasks.
no code implementations • WS 2019 • Takumi Takahashi, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma
This paper describes our model for the reading comprehension task of the MRQA shared task.
no code implementations • WS 2018 • Motoki Taniguchi, Yasuhide Miura, Tomoko Ohkuma
Information extraction about an event can be improved by incorporating external evidence.
no code implementations • WS 2018 • Motoki Taniguchi, Tomoki Taniguchi, Takumi Takahashi, Yasuhide Miura, Tomoko Ohkuma
A simple entity linking approach with text match is used as the document selection component; it identifies relevant documents for a given claim by using mentioned entities as clues.
no code implementations • EMNLP 2018 • Ryuji Kano, Yasuhide Miura, Motoki Taniguchi, Yan-Ying Chen, Francine Chen, Tomoko Ohkuma
We leverage a popularity measure in social media as a distant label for extractive summarization of online conversations.
no code implementations • COLING 2018 • Yasuhide Miura, Ryuji Kano, Motoki Taniguchi, Tomoki Taniguchi, Shotaro Misawa, Tomoko Ohkuma
We propose a model that integrates discussion structures with neural networks to classify discourse acts.
no code implementations • IJCNLP 2017 • Yasuhide Miura, Tomoki Taniguchi, Motoki Taniguchi, Shotaro Misawa, Tomoko Ohkuma
We propose a hierarchical neural network model for language variety identification that integrates information from a social network.
no code implementations • WS 2017 • Shotaro Misawa, Motoki Taniguchi, Yasuhide Miura, Tomoko Ohkuma
The contributions of this work are (1) verifying the effectiveness of the state-of-the-art NER model for Japanese, and (2) proposing a neural model that predicts a tag for each character using word and character information.
no code implementations • ACL 2017 • Yasuhide Miura, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma
We propose a novel geolocation prediction model using a complex neural network.
no code implementations • WS 2016 • Yasuhide Miura, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma
In the test run of the task, the model achieved an accuracy of 40.91% and a median distance error of 69.50 km in message-level prediction, and an accuracy of 47.55% and a median distance error of 16.13 km in user-level prediction.