no code implementations • 30 Oct 2023 • Keegan Quigley, Miriam Cha, Josh Barua, Geeticka Chauhan, Seth Berkowitz, Steven Horng, Polina Golland
Vision-language pretraining has been shown to produce high-quality visual encoders that transfer efficiently to downstream computer vision tasks.
1 code implementation • 7 Jun 2023 • Miriam Cha, Gregory Angelides, Mark Hamilton, Andy Soszynski, Brandon Swenson, Nathaniel Maidel, Phillip Isola, Taylor Perron, Bill Freeman
The Multimodal Learning for Earth and Environment Workshop (MultiEarth 2023) is the second annual CVPR workshop aimed at the monitoring and analysis of the health of Earth ecosystems by leveraging the vast amount of remote sensing data that is continuously being collected.
no code implementations • 5 Aug 2022 • Keegan Quigley, Miriam Cha, Ruizhi Liao, Geeticka Chauhan, Steven Horng, Seth Berkowitz, Polina Golland
In this paper, we build a data-efficient learning framework that utilizes radiology reports to improve medical image classification performance with limited labeled data (fewer than 1000 examples).
no code implementations • 26 Jul 2022 • Armando Cabrera, Miriam Cha, Prafull Sharma, Michael Newey
This paper explores the use of multi-conditional adversarial networks for SAR-to-EO image translation.
1 code implementation • 14 Jul 2022 • Vijay Gadepally, Gregory Angelides, Andrei Barbu, Andrew Bowne, Laura J. Brattain, Tamara Broderick, Armando Cabrera, Glenn Carl, Ronisha Carter, Miriam Cha, Emilie Cowen, Jesse Cummings, Bill Freeman, James Glass, Sam Goldberg, Mark Hamilton, Thomas Heldt, Kuan Wei Huang, Phillip Isola, Boris Katz, Jamie Koerner, Yen-Chen Lin, David Mayo, Kyle McAlpin, Taylor Perron, Jean Piou, Hrishikesh M. Rao, Hayley Reynolds, Kaira Samuel, Siddharth Samsi, Morgan Schmidt, Leslie Shing, Olga Simek, Brandon Swenson, Vivienne Sze, Jonathan Taylor, Paul Tylkin, Mark Veillette, Matthew L Weiss, Allan Wollaber, Sophia Yuditskaya, Jeremy Kepner
Through a series of federal initiatives and orders, the U.S. Government has been making a concerted effort to ensure American leadership in AI.
no code implementations • 15 Apr 2022 • Miriam Cha, Kuan Wei Huang, Morgan Schmidt, Gregory Angelides, Mark Hamilton, Sam Goldberg, Armando Cabrera, Phillip Isola, Taylor Perron, Bill Freeman, Yen-Chen Lin, Brandon Swenson, Jean Piou
The Multimodal Learning for Earth and Environment Challenge (MultiEarth 2022) will be the first competition aimed at the monitoring and analysis of deforestation in the Amazon rainforest at any time and in any weather conditions.
1 code implementation • 8 Mar 2021 • Ruizhi Liao, Daniel Moyer, Miriam Cha, Keegan Quigley, Seth Berkowitz, Steven Horng, Polina Golland, William M. Wells
We propose and demonstrate a representation learning approach by maximizing the mutual information between local features of images and text.
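The idea of maximizing mutual information between paired image and text features can be sketched with a generic InfoNCE-style contrastive bound. This is a minimal illustration of the principle, not the paper's exact objective; the feature dimensions, temperature, and pairing setup here are assumptions for the sketch:

```python
import numpy as np

def info_nce(img_feats, txt_feats, temperature=0.1):
    """InfoNCE lower bound on mutual information between paired image
    and text features (row i of each matrix is a matched pair)."""
    # Normalize so the dot product is cosine similarity.
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    txt = txt_feats / np.linalg.norm(txt_feats, axis=1, keepdims=True)
    logits = img @ txt.T / temperature           # all pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Matched pairs sit on the diagonal; minimizing this loss
    # maximizes the MI lower bound.
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 32))
matched = info_nce(feats, feats + 0.01 * rng.normal(size=(8, 32)))
random_ = info_nce(feats, rng.normal(size=(8, 32)))
# Matched pairs yield a lower loss (higher MI estimate) than random pairs.
```

In practice the same bound would be applied over local feature maps (image patches against text tokens) rather than whole-example embeddings.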
no code implementations • 12 Dec 2018 • Miriam Cha, Youngjune L. Gwon, H. T. Kung
Instead of selecting random training examples, we perform negative sampling based on the semantic distance from a positive example in the class.
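Distance-based negative sampling of this kind can be sketched as selecting the candidates nearest to a positive example in embedding space ("hard" negatives) instead of sampling uniformly. The function and toy vectors below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def semantic_negatives(positive, candidates, k=2):
    """Return indices of the k candidates closest to the positive
    example in embedding space: semantically hard negatives,
    rather than uniformly random ones."""
    dists = np.linalg.norm(candidates - positive, axis=1)
    return np.argsort(dists)[:k]

positive = np.array([1.0, 0.0])
cands = np.array([[0.9, 0.1],   # close -> hard negative
                  [0.0, 1.0],
                  [5.0, 5.0]])  # far -> easy negative
idx = semantic_negatives(positive, cands, k=1)
# idx -> [0], the semantically closest candidate
```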
no code implementations • 5 Sep 2017 • Miriam Cha, Youngjune Gwon, H. T. Kung
We argue that clustering with word embeddings in the metric space should yield feature representations in a higher-level semantic space appropriate for text regression.
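One generic way to realize "clustering with word embeddings" as a feature for regression is a bag-of-clusters histogram: cluster the embedding space, then represent a document by the distribution of its words over clusters. The k-means routine and feature shape here are assumptions for the sketch, not the paper's exact pipeline:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means over embedding vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def doc_feature(word_vecs, centers):
    """Histogram over embedding clusters: a fixed-length
    bag-of-clusters representation usable by any regressor."""
    labels = np.argmin(((word_vecs[:, None] - centers[None]) ** 2).sum(-1),
                       axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Two well-separated groups of "word embeddings".
X = np.vstack([np.zeros((5, 2)), 10.0 * np.ones((5, 2))])
centers = kmeans(X, k=2)
feat = doc_feature(np.zeros((3, 2)), centers)  # a 3-word "document"
```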
no code implementations • 30 Aug 2017 • Miriam Cha, Youngjune Gwon, H. T. Kung
Recent approaches in generative adversarial networks (GANs) can automatically synthesize realistic images from descriptive text.
no code implementations • 17 May 2016 • Youngjune Gwon, William Campbell, Kevin Brady, Douglas Sturim, Miriam Cha, H. T. Kung
Unsupervised feature learning methods have proven effective for classification tasks based on a single modality.
no code implementations • 19 Nov 2015 • Miriam Cha, Youngjune Gwon, H. T. Kung
In this paper, we present a multimodal framework for learning sparse representations that can capture semantic correlation between modalities.
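A common way to tie sparse codes across modalities is to stack the modality dictionaries so that a single sparse code must reconstruct both observations at once. The sketch below uses generic ISTA sparse coding over such a joint dictionary; the dimensions, dictionaries, and solver are assumptions for illustration, not the paper's method:

```python
import numpy as np

def ista(D, x, lam=0.01, iters=300):
    """Sparse coding of x over dictionary D via ISTA
    (iterative shrinkage-thresholding for the lasso objective)."""
    L = np.linalg.norm(D, 2) ** 2   # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ z - x)
        z = z - grad / L
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return z

rng = np.random.default_rng(1)
D_img = rng.normal(size=(16, 8))   # image-modality dictionary
D_txt = rng.normal(size=(12, 8))   # text-modality dictionary
D = np.vstack([D_img, D_txt])      # joint dictionary: shared code z

z_true = np.zeros(8)
z_true[[1, 5]] = [1.0, -2.0]       # a sparse code shared by both modalities
x = D @ z_true                     # stacked paired observation
z = ista(D, x)                     # recover the shared sparse code
```

Because one code `z` explains both stacked signals, correlation between the modalities is captured in the shared support of the code.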