2 code implementations • 29 Feb 2024 • Shanghua Gao, Teddy Koker, Owen Queen, Thomas Hartvigsen, Theodoros Tsiligkaridis, Marinka Zitnik
However, current foundation models handle sequence data but not time series, which present unique challenges due to the inherently diverse and multi-domain nature of time series datasets, diverging task specifications across forecasting, classification, and other task types, and the apparent need for task-specialized models.
no code implementations • 14 Feb 2024 • Laura Niss, Kevin Vogt-Lowell, Theodoros Tsiligkaridis
Foundation models are presented as generalists that often perform well across a myriad of tasks.
1 code implementation • 6 Feb 2024 • Christopher Liao, Theodoros Tsiligkaridis, Brian Kulis
However, most DG methods assume access to abundant source data in the target label space, a requirement that proves overly stringent for many real-world applications, where acquiring data with the same label space as the target task is prohibitively expensive.
1 code implementation • 5 Feb 2024 • Eric Yang Yu, Christopher Liao, Sathvik Ravi, Theodoros Tsiligkaridis, Brian Kulis
We first show that when an OOD data point is misclassified, the correct class can typically be found among the Top-K predicted classes.
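As a rough illustration of this observation (not the paper's method), the sketch below measures, for a standard PyTorch classifier, how often the true label of a misclassified example still falls in the Top-K predictions; the function name and the random toy inputs are illustrative only.

```python
import torch

def topk_containment(logits: torch.Tensor, labels: torch.Tensor, k: int = 5) -> float:
    """Among misclassified examples, the fraction whose true class
    still appears in the Top-k predicted classes."""
    top1 = logits.argmax(dim=1)
    wrong = top1 != labels                        # mask of misclassified examples
    if wrong.sum() == 0:
        return 1.0
    topk = logits[wrong].topk(k, dim=1).indices   # (n_wrong, k) class indices
    hits = (topk == labels[wrong].unsqueeze(1)).any(dim=1)
    return hits.float().mean().item()

# Toy usage with random logits over 10 classes
logits = torch.randn(128, 10)
labels = torch.randint(0, 10, (128,))
print(f"Top-5 containment among errors: {topk_containment(logits, labels):.2f}")
```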
1 code implementation • 21 Nov 2023 • Christopher Liao, Theodoros Tsiligkaridis, Brian Kulis
A recent study, WaffleCLIP, demonstrated that similar zero-shot accuracy can be achieved with an ensemble of random descriptors.
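A minimal sketch of the random-descriptor idea, assuming the open_clip package is installed; the prompt template, descriptor generator, and ensemble size are illustrative, not WaffleCLIP's exact configuration.

```python
import random
import string
import torch
import open_clip  # assumption: pip install open_clip_torch

model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

def random_descriptor(n_words: int = 2, word_len: int = 5) -> str:
    # Random character strings appended to the prompt, in the spirit of WaffleCLIP.
    return " ".join("".join(random.choices(string.ascii_lowercase, k=word_len))
                    for _ in range(n_words))

@torch.no_grad()
def class_embeddings(classnames, n_desc: int = 8) -> torch.Tensor:
    embs = []
    for name in classnames:
        prompts = [f"a photo of a {name}, {random_descriptor()}." for _ in range(n_desc)]
        feats = model.encode_text(tokenizer(prompts))
        feats = feats / feats.norm(dim=-1, keepdim=True)
        embs.append(feats.mean(dim=0))             # average over the random ensemble
    embs = torch.stack(embs)
    return embs / embs.norm(dim=-1, keepdim=True)  # one unit vector per class
```

Zero-shot prediction then scores an image embedding against these class vectors by cosine similarity.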
1 code implementation • 3 Nov 2023 • Kevin Vogt-Lowell, Noah Lee, Theodoros Tsiligkaridis, Marc Vaillant
To address these gaps, we present a new recipe for few-shot fine-tuning of the popular vision-language foundation model CLIP and evaluate its performance on challenging benchmark datasets with realistic distribution shifts from the WILDS collection.
1 code implementation • 4 Apr 2023 • Piotr Teterwak, Kuniaki Saito, Theodoros Tsiligkaridis, Kate Saenko, Bryan A. Plummer
We also explore the relationship between DG performance and similarity to pre-training data, and find that similarity to pre-training data distributions is an important driver of performance, but that ERM++ with stronger initializations can deliver strong performance even on dissimilar datasets. Code is released at https://github.com/piotr-teterwak/erm_plusplus.
1 code implementation • 6 Feb 2023 • Huan He, Owen Queen, Teddy Koker, Consuelo Cuevas, Theodoros Tsiligkaridis, Marinka Zitnik
Additionally, the label distributions of tasks in the source and target domains can differ significantly, posing difficulties in addressing label shifts and recognizing labels unique to the target domain.
1 code implementation • 4 Oct 2022 • Christopher Liao, Theodoros Tsiligkaridis, Brian Kulis
There is extensive interest in metric learning methods for image retrieval.
1 code implementation • 17 Jun 2022 • Xiang Zhang, Ziyuan Zhao, Theodoros Tsiligkaridis, Marinka Zitnik
Experiments against eight state-of-the-art methods show that TF-C outperforms baselines by 15.4% (F1 score) on average in one-to-one settings (e.g., fine-tuning an EEG-pretrained model on EMG data) and by 8.4% (precision) in challenging one-to-many settings (e.g., fine-tuning an EEG-pretrained model for either hand-gesture recognition or mechanical fault prediction), reflecting the breadth of scenarios that arise in real-world applications.
1 code implementation • 26 May 2022 • Christopher Liao, Theodoros Tsiligkaridis, Brian Kulis
Domain Adaptation (DA) has received widespread attention from deep learning researchers in recent years because of its potential to improve test accuracy using labeled data from an out-of-distribution source domain.
no code implementations • 24 Feb 2022 • Ryan Soklaski, Michael Yee, Theodoros Tsiligkaridis
Diverse data augmentation strategies are a natural approach to improving robustness in computer vision models against unforeseen shifts in data distribution.
2 code implementations • ICLR 2022 • Xiang Zhang, Marko Zeman, Theodoros Tsiligkaridis, Marinka Zitnik
Here, we introduce RAINDROP, a graph neural network that embeds irregularly sampled and multivariate time series while also learning the dynamics of sensors purely from observational data.
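The sketch below conveys only the core idea of message passing across a learned inter-sensor graph, so that sensors missing at a timestamp receive information from observed ones; it is a simplified stand-in, not RAINDROP's actual architecture, and the layer name and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class SensorGraphLayer(nn.Module):
    """Simplified sketch: observed sensor values are projected and exchanged
    across a learned adjacency over sensors, so unobserved sensors are filled
    in by messages from observed ones."""
    def __init__(self, n_sensors: int, d: int):
        super().__init__()
        self.adj = nn.Parameter(torch.randn(n_sensors, n_sensors) * 0.01)
        self.proj = nn.Linear(1, d)
        self.out = nn.Linear(d, d)

    def forward(self, values: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # values: (batch, n_sensors) raw observations at one timestamp
        # mask:   (batch, n_sensors) 1 where observed, 0 where missing
        h = self.proj(values.unsqueeze(-1)) * mask.unsqueeze(-1)  # zero out missing sensors
        w = torch.softmax(self.adj, dim=-1)                       # learned sensor graph
        msg = torch.einsum("ij,bjd->bid", w, h)                   # inter-sensor messages
        return torch.relu(self.out(msg))                          # (batch, n_sensors, d)
```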
no code implementations • ICML Workshop AML 2021 • Theodoros Tsiligkaridis, Jay Roberts
We develop a theoretical framework for adversarial training (AT) with FW optimization (FW-AT) that reveals a geometric connection between the loss landscape and the distortion of $\ell_\infty$ FW attacks (the attack's $\ell_2$ norm).
1 code implementation • 2 Apr 2021 • Theodoros Tsiligkaridis, Athanasios Tsiligkaridis
In this paper, we propose a diverse Gaussian noise consistency regularization method for improving the robustness of image classifiers under a variety of corruptions while maintaining high clean accuracy.
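A minimal sketch of such a training objective, assuming a standard PyTorch classifier: cross-entropy on clean inputs plus a consistency term tying predictions on Gaussian-perturbed inputs of diverse magnitudes to the clean predictions. The noise levels and weight `lam` are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def noise_consistency_loss(model, x, y, sigmas=(0.1, 0.25, 0.5), lam=1.0):
    """Clean cross-entropy plus a KL consistency penalty between predictions
    on clean inputs and on Gaussian-perturbed copies at several noise levels."""
    logits_clean = model(x)
    loss = F.cross_entropy(logits_clean, y)
    logp_clean = F.log_softmax(logits_clean.detach(), dim=1)  # fixed target
    for s in sigmas:
        logits_noisy = model(x + s * torch.randn_like(x))
        logp_noisy = F.log_softmax(logits_noisy, dim=1)
        loss = loss + (lam / len(sigmas)) * F.kl_div(
            logp_noisy, logp_clean, reduction="batchmean", log_target=True)
    return loss
```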
1 code implementation • CVPR 2022 • Theodoros Tsiligkaridis, Jay Roberts
We develop a theoretical framework for adversarial training with FW optimization (FW-AT) that reveals a geometric connection between the loss landscape and the $\ell_2$ distortion of $\ell_\infty$ FW attacks.
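A minimal sketch of an $\ell_\infty$ Frank-Wolfe attack of the kind the framework analyzes: the linear maximization oracle over the $\ell_\infty$ ball is $\epsilon \cdot \mathrm{sign}(\nabla)$, and iterates remain feasible as convex combinations, so no projection step is needed. The step count, $\epsilon$, and the [0, 1] clamp (which assumes image inputs) are illustrative.

```python
import torch

def fw_linf_attack(model, loss_fn, x, y, eps=8 / 255, steps=10):
    """Frank-Wolfe ascent of the loss over the l_inf ball of radius eps."""
    delta = torch.zeros_like(x)
    for t in range(steps):
        delta.requires_grad_(True)
        loss = loss_fn(model(x + delta), y)
        g, = torch.autograd.grad(loss, delta)
        v = eps * g.sign()                  # LMO: a vertex of the l_inf ball
        gamma = 2.0 / (t + 2)               # standard FW step size
        delta = ((1 - gamma) * delta + gamma * v).detach()
    return (x + delta).clamp(0, 1)          # assumes inputs live in [0, 1]
```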
no code implementations • 30 Nov 2020 • Jay Roberts, Theodoros Tsiligkaridis
Diagnosis of COVID-19 at point of care is vital to the containment of the global pandemic.
no code implementations • 19 Oct 2020 • Theodoros Tsiligkaridis
Reliably assessing model confidence in deep learning and predicting likely errors are key elements of safe model deployment, particularly for applications with dire consequences.
no code implementations • 10 Sep 2020 • Theodoros Tsiligkaridis, Jay Roberts
It is shown that using only a single iteration in our regularizer achieves stronger robustness than prior gradient and curvature regularization schemes, avoids gradient obfuscation, and, with additional iterations, achieves strong robustness with significantly lower training time than AT.
no code implementations • 10 Oct 2019 • Theodoros Tsiligkaridis
Precise estimation of uncertainty in predictions for AI systems is a critical factor in ensuring trust and safety.
no code implementations • 28 Jun 2018 • Athanasios Tsiligkaridis, Theodoros Tsiligkaridis
We present a novel efficient object detection and localization framework based on the probabilistic bisection algorithm.
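For reference, a minimal one-dimensional version of the probabilistic bisection algorithm the framework builds on: maintain a posterior over the target location, query at the posterior median, and reweight the posterior using the noisy binary response (correct with probability p). The grid size and oracle reliability are illustrative.

```python
import numpy as np

def probabilistic_bisection(oracle, n_queries=50, grid=1000, p=0.8):
    """1-D probabilistic bisection on [0, 1] with a p-reliable binary oracle."""
    xs = np.linspace(0, 1, grid)
    post = np.full(grid, 1.0 / grid)            # uniform prior over the grid
    for _ in range(n_queries):
        cdf = np.cumsum(post)
        m = xs[np.searchsorted(cdf, 0.5)]       # query the posterior median
        right = oracle(m)                       # noisy: is the target right of m?
        up = np.where(xs > m, p, 1 - p) if right else np.where(xs > m, 1 - p, p)
        post = post * up                        # Bayes update of the posterior
        post /= post.sum()
    return xs[np.argmax(post)]

# Toy usage: target at 0.37, oracle correct with probability 0.8
rng = np.random.default_rng(0)
target = 0.37
noisy = lambda q: (target > q) if rng.random() < 0.8 else (target <= q)
print(probabilistic_bisection(noisy))
```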
no code implementations • 14 Jun 2017 • Theodoros Tsiligkaridis, David Romero
Opportunistic spectrum access is one of the emerging techniques for maximizing throughput in congested bands and is enabled by predicting idle slots in the spectrum.
no code implementations • 21 Aug 2016 • Athanasios Tsiligkaridis, Theodoros Tsiligkaridis
We present a novel distributed probabilistic bisection algorithm using social learning with application to target localization.
no code implementations • 10 Nov 2015 • Theodoros Tsiligkaridis
This paper considers the problem of adaptively searching for an unknown target using multiple agents connected through a time-varying network topology.
no code implementations • NeurIPS 2015 • Theodoros Tsiligkaridis, Keith W. Forsythe
We develop a sequential low-complexity inference procedure for Dirichlet process mixtures of Gaussians for online clustering and parameter estimation when the number of clusters is unknown a priori.
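A minimal single-pass sketch in the same spirit (sequential hard assignments under a Dirichlet process prior): each point either joins the existing cluster with the highest CRP-weighted Gaussian score or opens a new one. The fixed isotropic variances and MAP-style assignment are simplifications; the paper's inference procedure is more elaborate.

```python
import numpy as np

def log_gauss(x, mean, var, d):
    """Log-density of an isotropic Gaussian N(mean, var * I) in d dimensions."""
    return -0.5 * d * np.log(2 * np.pi * var) - np.sum((x - mean) ** 2) / (2 * var)

def sequential_dpmm(X, alpha=1.0, sigma2=0.25, tau2=1.0):
    d = X.shape[1]
    means, counts, labels = [], [], []
    for x in X:
        # Existing clusters: CRP weight (count) times Gaussian likelihood.
        scores = [np.log(c) + log_gauss(x, m, sigma2, d)
                  for m, c in zip(means, counts)]
        # New cluster: weight alpha times the marginal under a N(0, tau2 * I) mean prior.
        scores.append(np.log(alpha) + log_gauss(x, 0.0, sigma2 + tau2, d))
        k = int(np.argmax(scores))
        if k == len(means):
            means.append(x.astype(float).copy())
            counts.append(1)
        else:
            means[k] = (means[k] * counts[k] + x) / (counts[k] + 1)  # running mean
            counts[k] += 1
        labels.append(k)
    return np.array(labels), np.array(means)
```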
no code implementations • 27 Jul 2013 • Kristjan Greenewald, Theodoros Tsiligkaridis, Alfred O. Hero III
To allow a smooth tradeoff between the reduction in the number of parameters (to reduce estimation variance) and the accuracy of the covariance approximation (affecting estimation bias), we introduce a diagonally loaded modification of the sum of Kronecker products representation [1].
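A minimal numerical sketch of the representation (the function name is illustrative): a covariance approximated as a diagonally loaded sum of Kronecker products, with the loading term trading variance reduction against approximation bias.

```python
import numpy as np

def kron_sum_covariance(A_list, B_list, lam):
    """Sigma ~= sum_i A_i (kron) B_i + lam * I, a diagonally loaded
    sum-of-Kronecker-products covariance approximation."""
    p, q = A_list[0].shape[0], B_list[0].shape[0]
    sigma = sum(np.kron(A, B) for A, B in zip(A_list, B_list))
    return sigma + lam * np.eye(p * q)

# Toy usage: a rank-2 Kronecker sum of 3x3 and 4x4 PSD factors, loading 0.1
rng = np.random.default_rng(0)
A = [M @ M.T for M in rng.standard_normal((2, 3, 3))]
B = [M @ M.T for M in rng.standard_normal((2, 4, 4))]
Sigma = kron_sum_covariance(A, B, lam=0.1)
print(Sigma.shape)  # (12, 12)
```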
no code implementations • 12 Feb 2013 • Theodoros Tsiligkaridis, Alfred O. Hero III
We show that a class of block Toeplitz covariance matrices is approximatable by low separation rank and give bounds on the minimal separation rank $r$ that ensures a given level of bias.
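For clarity, the separation rank referred to here is the smallest number r of Kronecker terms needed to represent the covariance exactly; a minimal statement of the form, with factor dimensions assumed from context:

```latex
\Sigma \;=\; \sum_{\gamma=1}^{r} A_\gamma \otimes B_\gamma ,
\qquad A_\gamma \in \mathbb{R}^{p \times p}, \quad B_\gamma \in \mathbb{R}^{q \times q}.
```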
no code implementations • 3 Apr 2012 • Theodoros Tsiligkaridis, Alfred O. Hero III, Shuheng Zhou
The KGlasso algorithm generalizes Glasso, introduced by Yuan and Lin ["Model selection and estimation in the Gaussian graphical model," Biometrika, vol. 94, no. 1, pp. 19-35, 2007].