no code implementations • SemEval (NAACL) 2022 • Tanuj Shekhawat, Manoj Kumar, Udaybhan Rathore, Aditya Joshi, Jasabanta Patro
This paper describes the system architectures and the models submitted by our team “IISERB Brains” to the SemEval 2022 Task 6 competition.
1 code implementation • 4 Mar 2022 • Tanuj Singh Shekhawat, Manoj Kumar, Udaybhan Rathore, Aditya Joshi, Jasabanta Patro
This paper describes the system architectures and the models submitted by our team "IISERB Brains" to the SemEval 2022 Task 6 competition.
no code implementations • EACL 2021 • Jasabanta Patro, Sabyasachee Baruah
There is a huge difference between a scientific journal reporting 'wine consumption might be correlated to cancer', and a media outlet publishing 'wine causes cancer' citing the journal's results.
no code implementations • ACL 2020 • Srijan Bansal, Vishal Garimella, Ayush Suhane, Jasabanta Patro, Animesh Mukherjee
In this paper we demonstrate how code-switching patterns can be utilised to improve various downstream NLP applications.
no code implementations • IJCNLP 2019 • Jasabanta Patro, Srijan Bansal, Animesh Mukherjee
In this paper we propose a deep learning framework for sarcasm target detection in predefined sarcastic texts.
Aspect-Based Sentiment Analysis (ABSA)
no code implementations • SEMEVAL 2019 • Jasabanta Patro, Nitin Choudhary, Kalpit Chittora, Animesh Mukherjee
We report a bidirectional LSTM model, with the input word embedding formed by concatenating a character-level bidirectional LSTM embedding with a ConceptNet embedding, as the best-performing model, achieving the highest micro-F1 score of 0.7261.
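The input representation described above can be sketched in NumPy as follows. This is a minimal illustration, not the authors' implementation: all weights and embedding tables are random stand-ins, the dimensions are arbitrary, and `char_vec`/`conceptnet_vec` are hypothetical lookups standing in for a learned character table and pretrained ConceptNet embeddings. Each word's input vector concatenates a character-level BiLSTM state with a ConceptNet-style embedding before the word-level BiLSTM runs.

```python
import numpy as np

rng = np.random.default_rng(0)
CHAR_DIM, CN_DIM, CHAR_HID, WORD_HID = 8, 16, 10, 12

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_lstm(in_dim, hid):
    # Random stand-in weights (not trained); gates stacked as [i, f, o, g].
    return (rng.normal(0, 0.1, (4 * hid, in_dim)),
            rng.normal(0, 0.1, (4 * hid, hid)),
            np.zeros(4 * hid))

def lstm_run(xs, params, hid):
    # Standard LSTM recurrence; returns the hidden state at every step.
    W, U, b = params
    h, c = np.zeros(hid), np.zeros(hid)
    outs = []
    for x in xs:
        z = W @ x + U @ h + b
        i, f = sigmoid(z[:hid]), sigmoid(z[hid:2 * hid])
        o, g = sigmoid(z[2 * hid:3 * hid]), np.tanh(z[3 * hid:])
        c = f * c + i * g
        h = o * np.tanh(c)
        outs.append(h)
    return outs

char_fwd = make_lstm(CHAR_DIM, CHAR_HID)
char_bwd = make_lstm(CHAR_DIM, CHAR_HID)

def char_vec(ch):
    # Deterministic random vector per character (stand-in for a learned table).
    return np.random.default_rng(ord(ch)).normal(0, 0.1, CHAR_DIM)

def char_embed(word):
    # Character-level BiLSTM: concat of final forward and backward states.
    xs = [char_vec(c) for c in word]
    f = lstm_run(xs, char_fwd, CHAR_HID)[-1]
    b = lstm_run(xs[::-1], char_bwd, CHAR_HID)[-1]
    return np.concatenate([f, b])

def conceptnet_vec(word):
    # Stand-in for a pretrained ConceptNet embedding lookup.
    return np.random.default_rng(sum(map(ord, word))).normal(0, 0.1, CN_DIM)

IN_DIM = 2 * CHAR_HID + CN_DIM
sent_fwd = make_lstm(IN_DIM, WORD_HID)
sent_bwd = make_lstm(IN_DIM, WORD_HID)

def encode_sentence(tokens):
    # Per-word input = [char-BiLSTM embedding ; ConceptNet embedding].
    xs = [np.concatenate([char_embed(t), conceptnet_vec(t)]) for t in tokens]
    f = lstm_run(xs, sent_fwd, WORD_HID)
    b = lstm_run(xs[::-1], sent_bwd, WORD_HID)[::-1]
    return [np.concatenate(p) for p in zip(f, b)]

reps = encode_sentence(["that", "would", "be", "great"])
print(len(reps), reps[0].shape)  # one vector of size 2*WORD_HID per token
```

In a real system the token representations would then feed a classification layer; here the sketch stops at the encoder.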
no code implementations • EMNLP 2017 • Jasabanta Patro, Bidisha Samanta, Saurabh Singh, Abhipsa Basu, Prithwish Mukherjee, Monojit Choudhury, Animesh Mukherjee
Based on this likeliness estimate, we asked annotators to re-annotate the language tags of foreign words in predominantly native contexts.
no code implementations • 15 Mar 2017 • Jasabanta Patro, Bidisha Samanta, Saurabh Singh, Prithwish Mukherjee, Monojit Choudhury, Animesh Mukherjee
We first propose a context-based clustering method to sample a set of candidate words from the social media data. Next, we propose three novel metrics based on how these words are used by users across different tweets; these metrics were used to score and rank the candidate words by their likeliness of being borrowed.
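As an illustration of such usage-based scoring (a hypothetical metric in the spirit of the abstract, not the paper's actual three metrics), one could measure the fraction of tweets, and of users, that employ a candidate foreign word inside predominantly native-language contexts:

```python
def borrowing_scores(tweets, word, native="hi"):
    """Score a candidate foreign word by how often it appears inside
    predominantly native-language tweets (illustrative metric only).
    Each tweet is a dict with a user id, tokens, and per-token language tags."""
    with_word = native_with_word = 0
    users, native_users = set(), set()
    for t in tweets:
        if word not in t["tokens"]:
            continue
        with_word += 1
        users.add(t["user"])
        # Language tags of the surrounding tokens (excluding the word itself).
        others = [l for tok, l in zip(t["tokens"], t["langs"]) if tok != word]
        if others and sum(l == native for l in others) > len(others) / 2:
            native_with_word += 1
            native_users.add(t["user"])
    if with_word == 0:
        return 0.0, 0.0
    return native_with_word / with_word, len(native_users) / len(users)

# Toy Hindi-English code-switched data with per-token language tags.
tweets = [
    {"user": "u1", "tokens": ["yeh", "glass", "accha", "hai"],
     "langs": ["hi", "en", "hi", "hi"]},
    {"user": "u2", "tokens": ["glass", "of", "water"],
     "langs": ["en", "en", "en"]},
    {"user": "u1", "tokens": ["glass", "toot", "gaya"],
     "langs": ["en", "hi", "hi"]},
]
print(borrowing_scores(tweets, "glass"))  # (0.666..., 0.5)
```

A word like "glass", used mostly inside otherwise-Hindi tweets, scores high on both ratios, flagging it as more likely borrowed than merely code-switched.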