1 code implementation • ACL 2022 • FatemehSadat Mireshghallah, Kartik Goyal, Taylor Berg-Kirkpatrick
Recent work on controlled text generation has either required attribute-based fine-tuning of the base language model (LM), or has restricted the parameterization of the attribute discriminator to be compatible with the base autoregressive LM.
no code implementations • ACL 2022 • Volkan Cirik, Louis-Philippe Morency, Taylor Berg-Kirkpatrick
AI systems embodied in the physical world face a fundamental challenge of partial observability: they operate with only a limited view and knowledge of the environment.
1 code implementation • EMNLP 2021 • Harsh Jhamtani, Taylor Berg-Kirkpatrick
In this paper, we explore the task of automatically generating natural language descriptions of salient patterns in a time series, such as stock prices of a company over a week.
no code implementations • 18 Feb 2024 • Yasaman Jafari, Dheeraj Mekala, Rose Yu, Taylor Berg-Kirkpatrick
RL-based techniques can be used to search for prompts that, when fed into a target language model, maximize a set of user-specified reward functions.
no code implementations • 22 Jan 2024 • Zachary Novack, Julian McAuley, Taylor Berg-Kirkpatrick, Nicholas J. Bryan
We propose Diffusion Inference-Time T-Optimization (DITTO), a general-purpose framework for controlling pre-trained text-to-music diffusion models at inference-time via optimizing initial noise latents.
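The core idea above — holding a pre-trained sampler frozen and optimizing only the initial noise latent against a target — can be illustrated with a minimal sketch. Everything here is a toy stand-in: `sampler` plays the role of the frozen diffusion sampling chain, and gradients are estimated by finite differences rather than backpropagation through the sampler, so this is an illustration of the control loop, not DITTO's actual implementation.

```python
import numpy as np

def optimize_initial_latent(sampler, target, z0, lr=0.1, steps=200, eps=1e-4):
    """Inference-time optimization of an initial latent z so that a frozen
    sampler's output matches a target. Gradients are estimated with central
    finite differences so the sketch works for any black-box sampler; the
    real method differentiates through the diffusion sampling process."""
    z = z0.copy()

    def loss(z):
        return float(np.sum((sampler(z) - target) ** 2))

    for _ in range(steps):
        grad = np.zeros_like(z)
        for i in range(z.size):  # finite-difference gradient estimate
            dz = np.zeros_like(z)
            dz[i] = eps
            grad[i] = (loss(z + dz) - loss(z - dz)) / (2 * eps)
        z -= lr * grad  # gradient step on the latent only; sampler untouched
    return z, loss(z)
```

With a linear toy sampler the loop drives the output loss to (near) zero while never modifying the "model" itself, which is the defining property of this family of inference-time control methods.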
no code implementations • 7 Dec 2023 • Jarad Forristal, Niloofar Mireshghallah, Greg Durrett, Taylor Berg-Kirkpatrick
Recent work has shown that energy-based language modeling is an effective framework for controllable text generation because it enables flexible integration of arbitrary discriminators.
no code implementations • 25 Oct 2023 • Ayesha Qamar, Chetan Verma, Ahmed El-Kishky, Sumit Binnani, Sneha Mehta, Taylor Berg-Kirkpatrick
Common language model (LM) encoders such as BERT can be used to understand and represent the textual content of webpages.
1 code implementation • 16 Oct 2023 • Zachary Novack, Nikita Srivatsan, Taylor Berg-Kirkpatrick, Julian McAuley
Lead sheets have become commonplace in generative music research, being used as an initial compressed representation for downstream tasks like multitrack music generation and automatic arrangement.
1 code implementation • 12 Oct 2023 • Ivan Lee, Nan Jiang, Taylor Berg-Kirkpatrick
We also measure each architecture's predisposition towards in-context learning when presented with the option to memorize rather than leverage in-context examples.
1 code implementation • 4 Oct 2023 • Xiaohan Fu, Zihan Wang, Shuheng Li, Rajesh K. Gupta, Niloofar Mireshghallah, Taylor Berg-Kirkpatrick, Earlence Fernandes
Large Language Models (LLMs) are being enhanced with the ability to use tools and to process multiple modalities.
1 code implementation • 4 Aug 2023 • Keren Shao, Ke Chen, Taylor Berg-Kirkpatrick, Shlomo Dubnov
In deep learning research, many melody extraction models rely on redesigning neural network architectures to improve performance.
1 code implementation • 3 Aug 2023 • Ke Chen, Yusong Wu, Haohe Liu, Marianna Nezhurina, Taylor Berg-Kirkpatrick, Shlomo Dubnov
Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation.
no code implementations • 16 Jun 2023 • Hao-Wen Dong, Xiaoyu Liu, Jordi Pons, Gautam Bhattacharya, Santiago Pascual, Joan Serrà, Taylor Berg-Kirkpatrick, Julian McAuley
Our results show the effectiveness of the proposed method, and that the pretrained diffusion prior can reduce the modality transfer gap.
no code implementations • 12 Jun 2023 • Nikolai Vogler, Kartik Goyal, Kishore PV Reddy, Elizaveta Pertseva, Samuel V. Lemley, Christopher N. Warren, Max G'Sell, Taylor Berg-Kirkpatrick
Specifically, we focus on matching uniquely damaged character type-imprints in anonymously printed books to works with known printers in order to provide evidence of their origins.
1 code implementation • 29 May 2023 • Justus Mattern, FatemehSadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan, Taylor Berg-Kirkpatrick
To investigate whether this fragility provides a layer of safety, we propose and evaluate neighbourhood attacks, which compare model scores for a given sample to scores of synthetically generated neighbour texts and therefore eliminate the need for access to the training data distribution.
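The comparison described above reduces to a simple score gap, sketched below. The `log_prob` and `make_neighbours` callables are hypothetical stand-ins for the target model and the neighbour generator (in the paper, neighbours come from masked-LM word replacements); the threshold is illustrative.

```python
def neighbourhood_attack_score(log_prob, sample, make_neighbours, n=10):
    """Membership-inference score in the style of neighbourhood attacks:
    compare the model's score for a sample against synthetically generated
    neighbour texts, removing the need for a reference training
    distribution. A large positive gap suggests the sample was memorized."""
    neighbours = make_neighbours(sample, n)
    avg_neighbour = sum(log_prob(x) for x in neighbours) / len(neighbours)
    return log_prob(sample) - avg_neighbour

def classify_member(gap, threshold=0.0):
    """Flag the sample as a training member if its score gap is large."""
    return gap > threshold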
no code implementations • 24 May 2023 • Nikita Srivatsan, Sofia Samaniego, Omar Florez, Taylor Berg-Kirkpatrick
In this work we present an approach for generating alternative text (or alt-text) descriptions for images shared on social media, specifically Twitter.
no code implementations • 17 May 2023 • Niloofar Mireshghallah, Justus Mattern, Sicun Gao, Reza Shokri, Taylor Berg-Kirkpatrick
With the advent of fluent generative language models that can produce convincing utterances very similar to those written by humans, distinguishing machine-generated from human-written text becomes both more challenging and more important: such models could be used to spread misinformation, fake news, and fake reviews, and to mimic particular authors and figures.
1 code implementation • 11 Jan 2023 • Daniel Spokoyny, Tanmay Laud, Tom Corringham, Taylor Berg-Kirkpatrick
The topic of Climate Change (CC) has received limited attention in NLP despite its urgency.
1 code implementation • 21 Dec 2022 • John Wieting, Jonathan H. Clark, William W. Cohen, Graham Neubig, Taylor Berg-Kirkpatrick
Contrastive learning has been successfully used for retrieval of semantically aligned sentences, but it often requires large batch sizes or careful engineering to work well.
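For context, the in-batch contrastive (InfoNCE) objective whose batch-size sensitivity is at issue can be sketched as follows. This is the generic baseline setup, not the paper's proposed method; the temperature value is illustrative.

```python
import numpy as np

def in_batch_contrastive_loss(src, tgt, temperature=0.05):
    """Standard in-batch contrastive (InfoNCE) loss for paired sentence
    embeddings: row i of `src` should retrieve row i of `tgt`, and all
    other rows in the batch serve as negatives -- which is why larger
    batches (more negatives) tend to help."""
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    logits = src @ tgt.T / temperature            # cosine similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs.diagonal().mean()           # aligned pairs on the diagonal
```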
1 code implementation • 14 Dec 2022 • Hao-Wen Dong, Naoya Takahashi, Yuki Mitsufuji, Julian McAuley, Taylor Berg-Kirkpatrick
Further, videos in the wild often contain off-screen sounds and background noise that may hinder the model from learning the desired audio-textual correspondence.
no code implementations • 13 Sep 2022 • FatemehSadat Mireshghallah, Nikolai Vogler, Junxian He, Omar Florez, Ahmed El-Kishky, Taylor Berg-Kirkpatrick
User-generated social media data is constantly changing as new trends influence online discussion and personal information is deleted due to privacy concerns.
no code implementations • 12 Sep 2022 • Nikita Srivatsan, Taylor Berg-Kirkpatrick
In this work we present a new approach for the task of predicting fingerings for piano music.
2 code implementations • 14 Jul 2022 • Hao-Wen Dong, Ke Chen, Shlomo Dubnov, Julian McAuley, Taylor Berg-Kirkpatrick
Existing approaches for generating multitrack music with transformer models have been limited in the number of instruments and the length of the music segments, and suffer from slow inference.
1 code implementation • 25 May 2022 • FatemehSadat Mireshghallah, Archit Uniyal, Tianhao Wang, David Evans, Taylor Berg-Kirkpatrick
Large language models are shown to present privacy risks through memorization of training data, and several recent works have studied such risks for the pre-training phase.
1 code implementation • 29 Apr 2022 • Chunting Zhou, Junxian He, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig
One of the most impressive results of recent NLP history is the ability of pre-trained language models to solve new tasks in a zero-shot setting.
1 code implementation • ACL 2022 • Bodhisattwa Prasad Majumder, Harsh Jhamtani, Taylor Berg-Kirkpatrick, Julian McAuley
In this paper, we propose a post-hoc knowledge-injection technique where we first retrieve a diverse set of relevant knowledge snippets conditioned on both the dialog history and an initial response from an existing dialog model.
no code implementations • 8 Mar 2022 • FatemehSadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri
The wide adoption and application of masked language models (MLMs) on sensitive data (from legal to medical) necessitates a thorough quantitative investigation into their privacy vulnerabilities -- to what extent do MLMs leak information about their training data?
no code implementations • 12 Feb 2022 • Hao-Wen Dong, Cong Zhou, Taylor Berg-Kirkpatrick, Julian McAuley
Music performance synthesis aims to synthesize a musical score into a natural performance.
1 code implementation • 2 Feb 2022 • Ke Chen, Shuai Yu, Cheng-i Wang, Wei Li, Taylor Berg-Kirkpatrick, Shlomo Dubnov
In this paper, we propose TONet, a plug-and-play model that improves both tone and octave perceptions by leveraging a novel input representation and a novel network architecture.
1 code implementation • 2 Feb 2022 • Ke Chen, Xingjian Du, Bilei Zhu, Zejun Ma, Taylor Berg-Kirkpatrick, Shlomo Dubnov
To combat these problems, we introduce HTS-AT: an audio transformer with a hierarchical structure to reduce the model size and training time.
Ranked #4 on Sound Event Detection on DESED
no code implementations • 7 Jan 2022 • Nikolai Vogler, Songlin Li, Yujie Xu, Yujian Mi, Taylor Berg-Kirkpatrick
We show that a simple unsupervised masking objective can approach near supervised performance on abstractive multi-document news summarization.
no code implementations • Findings (NAACL) 2022 • Nikolai Vogler, Jonathan Parkes Allen, Matthew Thomas Miller, Taylor Berg-Kirkpatrick
We present a self-supervised pre-training approach for learning rich visual language representations for both handwritten and printed historical document transcription.
no code implementations • Findings (NAACL) 2022 • Daniel Spokoyny, Ivan Lee, Zhao Jin, Taylor Berg-Kirkpatrick
Physical measurements constitute a large portion of numbers in academic papers, engineering reports, and web tables.
1 code implementation • 15 Dec 2021 • Ke Chen, Xingjian Du, Bilei Zhu, Zejun Ma, Taylor Berg-Kirkpatrick, Shlomo Dubnov
Our approach uses a single model for source separation of multiple sound types, and relies solely on weakly-labeled data for training.
Ranked #1 on Audio Source Separation on AudioSet
1 code implementation • ICLR 2022 • Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig
Furthermore, our unified framework enables the transfer of design elements across different approaches, and as a result we are able to instantiate new parameter-efficient fine-tuning methods that tune fewer parameters than previous methods while being more effective, achieving comparable results to fine-tuning all parameters on all four tasks.
1 code implementation • EMNLP 2021 • Harsh Jhamtani, Varun Gangal, Eduard Hovy, Taylor Berg-Kirkpatrick
Humans often employ figurative language use in communication, including during interactions with dialog systems.
no code implementations • NAACL 2022 • FatemehSadat Mireshghallah, Vaishnavi Shrivastava, Milad Shokouhi, Taylor Berg-Kirkpatrick, Robert Sim, Dimitrios Dimitriadis
As such, these models are often unable to produce personalized responses for individual users based on their data.
1 code implementation • EMNLP 2021 • FatemehSadat Mireshghallah, Taylor Berg-Kirkpatrick
Text style can reveal sensitive attributes of the author (e.g., race or age) to the reader, which can, in turn, lead to privacy violations and bias in both human and algorithmic decisions based on text.
no code implementations • EMNLP 2021 • Nikita Srivatsan, Si Wu, Jonathan T. Barron, Taylor Berg-Kirkpatrick
We propose a deep generative model that performs typography analysis and font reconstruction by learning disentangled manifolds of both font style and character shape.
2 code implementations • EMNLP 2021 • Junxian He, Graham Neubig, Taylor Berg-Kirkpatrick
Non-parametric neural language models (NLMs) learn predictive distributions of text utilizing an external datastore, which allows them to learn through explicitly memorizing the training datapoints.
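The datastore-based prediction described above can be sketched in a few lines. This is a toy illustration of the retrieve-and-weight step in the spirit of kNN-LM-style non-parametric models, not the paper's implementation; key vectors, tokens, and the distance-to-weight mapping are all illustrative.

```python
import numpy as np
from collections import Counter

def knn_next_token_probs(query, datastore_keys, datastore_tokens, k=4,
                         temperature=1.0):
    """Next-token distribution from an external datastore: retrieve the k
    stored training contexts nearest to the query vector and weight their
    recorded next tokens by a softmax over negative distances. In a full
    system this distribution is interpolated with the base LM's softmax."""
    dists = np.linalg.norm(datastore_keys - query, axis=1)
    nearest = np.argsort(dists)[:k]               # indices of k closest keys
    weights = np.exp(-dists[nearest] / temperature)
    weights /= weights.sum()                      # normalize to a distribution
    probs = Counter()
    for idx, w in zip(nearest, weights):
        probs[datastore_tokens[idx]] += w         # aggregate weight per token
    return dict(probs)
```

The "explicit memorization" the abstract refers to is visible here: every training datapoint lives in `datastore_keys`/`datastore_tokens` verbatim, which is also why such models are expensive at test time.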
1 code implementation • 3 Aug 2021 • Sachinda Edirisooriya, Hao-Wen Dong, Julian McAuley, Taylor Berg-Kirkpatrick
Monophonic and homophonic music can be described as homorhythmic, or having a single musical rhythm.
no code implementations • 14 Jul 2021 • Nikita Srivatsan, Jason Vega, Christina Skelton, Taylor Berg-Kirkpatrick
In this work, we present an investigation into the use of neural feature extraction in performing scribal hand analysis of the Linear B writing system.
1 code implementation • 13 Jul 2021 • Hao-Wen Dong, Chris Donahue, Taylor Berg-Kirkpatrick, Julian McAuley
In this paper, we aim to further extend this idea and examine the feasibility of automatic instrumentation -- dynamically assigning instruments to notes in solo music during performance.
no code implementations • ACL (SIGMORPHON) 2021 • Maria Ryskina, Eduard Hovy, Taylor Berg-Kirkpatrick, Matthew R. Gormley
Traditionally, character-level transduction problems have been solved with finite-state models designed to encode structural and linguistic knowledge of the underlying process, whereas recent approaches rely on the power and flexibility of sequence-to-sequence models with attention.
1 code implementation • ACL 2021 • Bodhisattwa Prasad Majumder, Taylor Berg-Kirkpatrick, Julian McAuley, Harsh Jhamtani
Humans often refer to personal narratives, life experiences, and events to make a conversation more engaging and rich.
1 code implementation • Findings (ACL) 2021 • Varun Gangal, Harsh Jhamtani, Eduard Hovy, Taylor Berg-Kirkpatrick
Multiple different responses are often plausible for a given open domain dialog context.
no code implementations • ICLR 2022 • Kartik Goyal, Chris Dyer, Taylor Berg-Kirkpatrick
While recent work has shown that scores from models trained by the ubiquitous masked language modeling (MLM) objective effectively discriminate probable from improbable sequences, it is still an open question if these MLMs specify a principled probability distribution over the space of possible sequences.
no code implementations • NAACL 2021 • FatemehSadat Mireshghallah, Huseyin Inan, Marcello Hasegawa, Victor Rühle, Taylor Berg-Kirkpatrick, Robert Sim
In this work, we introduce two privacy-preserving regularization methods for training language models that enable joint optimization of utility and privacy through (1) the use of a discriminator and (2) the inclusion of a novel triplet-loss term.
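The second regularizer named above is a triplet loss; its generic form is sketched below. The anchor/positive/negative construction and margin are illustrative — in the privacy setting the triplets are arranged so the model cannot retain user-identifying structure in its representations.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Generic triplet margin loss: pull the anchor embedding toward the
    'positive' and push it away from the 'negative' by at least `margin`.
    The loss is zero once the negative is sufficiently farther away than
    the positive."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```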
1 code implementation • 30 Apr 2021 • John Wieting, Kevin Gimpel, Graham Neubig, Taylor Berg-Kirkpatrick
We train these models on large amounts of data, achieving significantly improved performance over the original papers proposing the methods on a suite of monolingual semantic similarity, cross-lingual semantic similarity, and bitext mining tasks.
no code implementations • 16 Apr 2021 • Aashi Jain, Taylor Berg-Kirkpatrick
We conduct an empirical evaluation of extrapolation performance when conditioning on scalar control inputs like desired output length, desired edit from an input sentence, and desired sentiment across three text generation tasks.
1 code implementation • EMNLP 2020 • Daniel Spokoyny, Taylor Berg-Kirkpatrick
We conduct a large scale empirical investigation of contextualized number prediction in running text.
1 code implementation • Findings (EMNLP) 2020 • Harsh Jhamtani, Taylor Berg-Kirkpatrick
Past work on story generation has demonstrated the usefulness of conditioning on a generation plan to generate coherent stories.
1 code implementation • EMNLP 2020 • Bodhisattwa Prasad Majumder, Harsh Jhamtani, Taylor Berg-Kirkpatrick, Julian McAuley
Existing persona-grounded dialog models often fail to capture simple implications of given persona descriptions, something which humans are able to do seamlessly.
2 code implementations • 5 Aug 2020 • Hao-Wen Dong, Ke Chen, Julian McAuley, Taylor Berg-Kirkpatrick
MusPy provides easy-to-use tools for essential components in a music generation system, including dataset management, data I/O, data preprocessing and model evaluation.
1 code implementation • 4 Aug 2020 • Ke Chen, Cheng-i Wang, Taylor Berg-Kirkpatrick, Shlomo Dubnov
Drawing an analogy with automatic image completion systems, we propose Music SketchNet, a neural network framework that allows users to specify partial musical ideas guiding automatic music generation.
1 code implementation • ACL 2020 • Volkan Cirik, Taylor Berg-Kirkpatrick, Louis-Philippe Morency
We propose a novel large-scale referring expression recognition dataset, Refer360°, consisting of 17,137 instruction sequences and ground-truth actions for completing these instructions in 360° scenes.
1 code implementation • NeurIPS 2020 • Junxian He, Taylor Berg-Kirkpatrick, Graham Neubig
While effective, these methods are inefficient at test time as a result of needing to store and index the entire training corpus.
1 code implementation • ACL 2020 • Maria Ryskina, Matthew R. Gormley, Taylor Berg-Kirkpatrick
Informal romanization is an idiosyncratic process used by humans in informal digital communication to encode non-Latin script languages into Latin character sets found on common keyboards.
no code implementations • ACL 2020 • Kartik Goyal, Chris Dyer, Christopher Warren, Max G'Sell, Taylor Berg-Kirkpatrick
We show that our approach outperforms rigid interpretable clustering baselines (Ocular) and overly-flexible deep generative models (VAE) alike on the task of completely unsupervised discovery of typefaces in mixed-font documents.
5 code implementations • ICLR 2020 • Junxian He, Xinyi Wang, Graham Neubig, Taylor Berg-Kirkpatrick
Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes.
1 code implementation • SCiL 2020 • Maria Ryskina, Ella Rabinovich, Taylor Berg-Kirkpatrick, David R. Mortensen, Yulia Tsvetkov
Besides presenting a new linguistic application of distributional semantics, this study tackles the linguistic question of the role of language-internal factors (in our case, sparsity) in language change motivated by language-external factors (reflected in frequency growth).
2 code implementations • EMNLP 2020 • John Wieting, Graham Neubig, Taylor Berg-Kirkpatrick
Semantic sentence embedding models encode natural language sentences into vectors, such that closeness in embedding space indicates closeness in the semantics between the sentences.
1 code implementation • IJCNLP 2019 • Nikita Srivatsan, Jonathan T. Barron, Dan Klein, Taylor Berg-Kirkpatrick
We propose a deep factorization model for typographic analysis that disentangles content from style.
4 code implementations • ACL 2019 • John Wieting, Kevin Gimpel, Graham Neubig, Taylor Berg-Kirkpatrick
We present a model and methodology for learning paraphrastic sentence embeddings directly from bitext, removing the time-consuming intermediate step of creating paraphrase corpora.
1 code implementation • IJCNLP 2019 • Harsh Jhamtani, Sanket Vaibhav Mehta, Jaime Carbonell, Taylor Berg-Kirkpatrick
Existing recurrent neural language models often fail to capture higher-level structure present in text: for example, rhyming patterns present in poetry.
1 code implementation • 14 Sep 2019 • John Wieting, Taylor Berg-Kirkpatrick, Kevin Gimpel, Graham Neubig
While most neural machine translation (NMT) systems are still trained using maximum likelihood estimation, recent work has demonstrated that optimizing systems to directly improve evaluation metrics such as BLEU can substantially improve final translation accuracy.
1 code implementation • IJCNLP 2019 • Bohan Li, Junxian He, Graham Neubig, Taylor Berg-Kirkpatrick, Yiming Yang
In this paper, we investigate a simple fix for posterior collapse which yields surprisingly effective results.
1 code implementation • ACL 2019 • Junxian He, Zhisong Zhang, Taylor Berg-Kirkpatrick, Graham Neubig
The parameters of the source and target models are softly shared through a regularized log-likelihood objective.
no code implementations • NAACL 2019 • Kartik Goyal, Chris Dyer, Taylor Berg-Kirkpatrick
Globally normalized neural sequence models are considered superior to their locally normalized equivalents because they may ameliorate the effects of label bias.
2 code implementations • ICLR 2019 • Junxian He, Daniel Spokoyny, Graham Neubig, Taylor Berg-Kirkpatrick
The variational autoencoder (VAE) is a popular combination of deep latent variable model and accompanying variational learning technique.
Ranked #1 on Text Generation on Yahoo Questions
no code implementations • EMNLP 2018 • Nikita Srivatsan, Zachary Wojtowicz, Taylor Berg-Kirkpatrick
In this paper, we propose a deep, globally normalized topic model that incorporates structural relationships connecting documents in socially generated corpora, such as online forums.
1 code implementation • EMNLP 2018 • Harsh Jhamtani, Taylor Berg-Kirkpatrick
We propose a model that captures visual salience by using a latent variable to align clusters of differing pixels with output sentences.
1 code implementation • EMNLP 2018 • Junxian He, Graham Neubig, Taylor Berg-Kirkpatrick
In this work, we propose a novel generative model that jointly learns discrete syntactic structure and continuous word representations in an unsupervised fashion by cascading an invertible neural network with a structured generative prior.
1 code implementation • ACL 2018 • Harsh Jhamtani, Varun Gangal, Eduard Hovy, Graham Neubig, Taylor Berg-Kirkpatrick
This paper examines the problem of generating natural language descriptions of chess games.
1 code implementation • NeurIPS 2018 • Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, Trevor Darrell
We use this speaker model to (1) synthesize new instructions for data augmentation and to (2) implement pragmatic reasoning, which evaluates how well candidate action sequences explain an instruction.
1 code implementation • NAACL 2018 • Volkan Cirik, Louis-Philippe Morency, Taylor Berg-Kirkpatrick
We present an empirical analysis of the state-of-the-art systems for referring expression recognition -- the task of identifying the object in an image referred to by a natural language expression -- with the goal of gaining insight into how these systems reason about language and vision.
1 code implementation • NeurIPS 2018 • Zichao Yang, Zhiting Hu, Chris Dyer, Eric P. Xing, Taylor Berg-Kirkpatrick
Binary classifiers are often employed as discriminators in GAN-based unsupervised style transfer systems to ensure that transferred sentences are similar to sentences in the target domain.
1 code implementation • 26 May 2018 • Volkan Cirik, Taylor Berg-Kirkpatrick, Louis-Philippe Morency
We introduce GroundNet, a neural network for referring expression recognition -- the task of localizing (or grounding) in an image the object referred to by a natural language expression.
2 code implementations • 23 Nov 2017 • Anant Subramanian, Danish Pruthi, Harsh Jhamtani, Taylor Berg-Kirkpatrick, Eduard Hovy
We propose a novel variant of denoising k-sparse autoencoders that generates highly efficient and interpretable distributed word representations (word embeddings), beginning with existing word representations from state-of-the-art methods like GloVe and word2vec.
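The k-sparse step at the heart of such autoencoders can be sketched with a toy encoder forward pass. `W`, `b`, and `k` here are illustrative parameters, and this omits the denoising training procedure the paper actually uses — it only shows why the resulting codes are sparse and thus easier to interpret dimension-by-dimension.

```python
import numpy as np

def k_sparse_code(x, W, b, k=3):
    """Toy forward pass of a k-sparse autoencoder's encoder: project a
    dense word vector, apply a ReLU, then keep only the k largest
    activations, zeroing the rest so the embedding is sparse."""
    h = np.maximum(0.0, W @ x + b)   # ReLU hidden activations
    idx = np.argsort(h)[:-k]         # indices of all but the top-k activations
    h[idx] = 0.0                     # zero them out -> k-sparse code
    return h
```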
1 code implementation • EMNLP 2017 • Greg Durrett, Jonathan K. Kummerfeld, Taylor Berg-Kirkpatrick, Rebecca S. Portnoff, Sadia Afroz, Damon McCoy, Kirill Levchenko, Vern Paxson
One weakness of machine-learned NLP models is that they typically perform poorly on out-of-domain data.
no code implementations • 1 Aug 2017 • Kartik Goyal, Graham Neubig, Chris Dyer, Taylor Berg-Kirkpatrick
In experiments, we show that optimizing this new training objective yields substantially better results on two sequence tasks (Named Entity Recognition and CCG Supertagging) when compared with both cross entropy trained greedy decoding and cross entropy trained beam decoding baselines.
no code implementations • 1 Jul 2017 • Junxian He, Zhiting Hu, Taylor Berg-Kirkpatrick, Ying Huang, Eric P. Xing
Correlated topic modeling has been limited to small model and problem sizes due to their high computational cost and poor scaling.
no code implementations • ACL 2017 • Maria Ryskina, Hannah Alpert-Abrams, Dan Garrette, Taylor Berg-Kirkpatrick
Compositor attribution, the clustering of pages in a historical printed document by the individual who set the type, is a bibliographic task that relies on analysis of orthographic variation and inspection of visual details of the printed page.
no code implementations • ACL 2017 • Kartik Goyal, Chris Dyer, Taylor Berg-Kirkpatrick
We demonstrate that a continuous relaxation of the argmax operation can be used to create a differentiable approximation to greedy decoding for sequence-to-sequence (seq2seq) models.
3 code implementations • ICML 2017 • Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, Taylor Berg-Kirkpatrick
Recent work on generative modeling of text has found that variational auto-encoders (VAE) incorporating LSTM decoders perform worse than simpler LSTM language models (Bowman et al., 2015).
Ranked #3 on Text Generation on Yahoo Questions
no code implementations • ACL 2016 • Greg Durrett, Taylor Berg-Kirkpatrick, Dan Klein
We present a discriminative model for single-document summarization that integrally combines compression and anaphoricity constraints.
no code implementations • NeurIPS 2014 • Taylor Berg-Kirkpatrick, Jacob Andreas, Dan Klein
We present a new probabilistic model for transcribing piano music from audio to a symbolic form.