Search Results for author: Motoki Taniguchi

Found 13 papers, 0 papers with code

Quantifying Appropriateness of Summarization Data for Curriculum Learning

no code implementations EACL 2021 Ryuji Kano, Takumi Takahashi, Toru Nishino, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma

We conduct experiments on three summarization models: one pretrained model and two non-pretrained models, and verify that our method improves their performance.

Translation

A Large-Scale Corpus of E-mail Conversations with Standard and Two-Level Dialogue Act Annotations

no code implementations COLING 2020 Motoki Taniguchi, Yoshihiro Ueda, Tomoki Taniguchi, Tomoko Ohkuma

To assess the difficulty of DA recognition on our corpus, we evaluate several models, including a pre-trained contextual representation model, as our baselines.

Multi-Task Learning

Integrating Entity Linking and Evidence Ranking for Fact Extraction and Verification

no code implementations WS 2018 Motoki Taniguchi, Tomoki Taniguchi, Takumi Takahashi, Yasuhide Miura, Tomoko Ohkuma

A simple entity linking approach based on text matching is used as the document selection component; it identifies relevant documents for a given claim by using mentioned entities as clues.
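As a rough illustration of this kind of text-match document selection, the following sketch matches entity mentions against document titles. The function name, arguments, and matching rule are hypothetical, not the paper's actual implementation:

```python
def select_documents(claim_entities, document_titles):
    """Toy text-match document selection: return the titles of documents
    whose title contains one of the entities mentioned in the claim.
    (Hypothetical sketch; the paper's component may differ.)"""
    selected = []
    for title in document_titles:
        # Case-insensitive substring match between entity mention and title.
        if any(entity.lower() in title.lower() for entity in claim_entities):
            selected.append(title)
    return selected
```

For example, a claim mentioning "Barack Obama" would select a document titled "Barack Obama" but not one titled "Michelle Obama".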

Entity Linking Natural Language Inference +4

Character-based Bidirectional LSTM-CRF with words and characters for Japanese Named Entity Recognition

no code implementations WS 2017 Shotaro Misawa, Motoki Taniguchi, Yasuhide Miura, Tomoko Ohkuma

The contributions of this work are (1) verifying the effectiveness of the state-of-the-art NER model for Japanese, and (2) proposing a neural model that predicts a tag for each character using word and character information.
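To illustrate the idea of tagging each character while still exposing word-level information, the sketch below pairs every character with the word it belongs to and its position in that word. The function and the B/I position encoding are hypothetical illustrations, not the paper's model:

```python
def char_inputs(words):
    """Toy sketch of per-character inputs combining word and character
    information (hypothetical; the actual model feeds these into a
    bidirectional LSTM-CRF). Each character is paired with its word and
    a position marker: "B" if it begins the word, "I" otherwise."""
    inputs = []
    for word in words:
        for i, ch in enumerate(word):
            inputs.append((ch, word, "B" if i == 0 else "I"))
    return inputs
```

For a segmented sentence like ["東京", "に"], this yields one input per character, so a tagger can emit one label per character without losing word boundaries.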

Named Entity Recognition +2

A Simple Scalable Neural Networks based Model for Geolocation Prediction in Twitter

no code implementations WS 2016 Yasuhide Miura, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma

In the test run of the task, the model achieved an accuracy of 40.91% and a median distance error of 69.50 km in message-level prediction, and an accuracy of 47.55% and a median distance error of 16.13 km in user-level prediction.

Denoising
