no code implementations • CONSTRAINT (ACL) 2022 • Shivam Sharma, Tharun Suresh, Atharva Kulkarni, Himanshi Mathur, Preslav Nakov, Md. Shad Akhtar, Tanmoy Chakraborty
We present the findings of the shared task "Hero, Villain, and Victim: Dissecting Harmful Memes for Semantic Role Labeling of Entities", held at the CONSTRAINT 2022 Workshop.
no code implementations • 26 Jan 2023 • Shivam Sharma, Atharva Kulkarni, Tharun Suresh, Himanshi Mathur, Preslav Nakov, Md. Shad Akhtar, Tanmoy Chakraborty
A common problem associated with meme comprehension lies in detecting the entities referenced and characterizing the role of each of these entities.
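The role-labeling setup described above can be framed as assigning each referenced entity a role such as hero, villain, or victim. The sketch below is a toy keyword heuristic for illustration only; the cue words and function are hypothetical and do not reflect the models proposed in these papers:

```python
# Toy sketch of entity role labeling for memes: each referenced entity
# receives one of four roles. The cue words are illustrative placeholders,
# not the actual approach used in the papers above.
ROLE_CUES = {
    "hero": {"saves", "protects", "defends"},
    "villain": {"attacks", "mocks", "blames"},
    "victim": {"suffers", "targeted", "blamed"},
}

def label_entity(meme_text: str, entity: str) -> str:
    """Assign a role to `entity` based on cue words in the meme text.

    A real system would model each entity's context separately; this
    toy version only checks which cues co-occur with the entity.
    """
    words = set(meme_text.lower().split())
    if entity.lower() not in meme_text.lower():
        return "other"
    for role, cues in ROLE_CUES.items():
        if words & cues:
            return role
    return "other"

text = "The senator attacks the press in this meme"
print(label_entity(text, "senator"))  # → "villain"
print(label_entity("Cats are cute", "cats"))  # → "other"
```

The actual task is far harder than this heuristic suggests: the same surface text can cast an entity as victim or villain depending on visual context and irony, which is why the papers combine textual and visual signals.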
1 code implementation • 1 Dec 2022 • Shivam Sharma, Siddhant Agarwal, Tharun Suresh, Preslav Nakov, Md. Shad Akhtar, Tanmoy Chakraborty
Here, we introduce a novel task, EXCLAIM: generating explanations for visual semantic role labeling in memes.
no code implementations • 16 Jun 2022 • Qing Meng, Tharun Suresh, Roy Ka-Wei Lee, Tanmoy Chakraborty
Tweets are the most concise form of communication in online social media, wherein a single tweet has the potential to make or break the discourse of the conversation.
no code implementations • 8 Jun 2022 • Aseem Srivastava, Tharun Suresh, Sarah Peregrine Lord, Md. Shad Akhtar, Tanmoy Chakraborty
A structured counseling conversation may contain discussions about symptoms, history of mental health issues, or the discovery of the patient's behavior.
1 code implementation • 27 Apr 2022 • Ayan Sengupta, Tharun Suresh, Md Shad Akhtar, Tanmoy Chakraborty
Learning the semantics and morphology of code-mixed language remains a key challenge, due to the scarcity of data and the unavailability of robust, language-invariant representation learning techniques.
no code implementations • 26 Jan 2022 • Tanmay Garg, Sarah Masud, Tharun Suresh, Tanmoy Chakraborty
While reducing toxicity on online platforms continues to be an active area of research, a systematic study of various biases and their mitigation strategies will help the research community produce robust and fair models.