no code implementations • 6 Mar 2024 • Olukorede Fakorede, Modeste Atsague, Jin Tian
Adversarial Training (AT) effectively improves the robustness of Deep Neural Networks (DNNs) to adversarial attacks.
no code implementations • 20 Jan 2024 • Yuta Kawakami, Manabu Kuroki, Jin Tian
There has been considerable recent interest in estimating heterogeneous causal effects.
no code implementations • 14 Jul 2023 • Olukorede Fakorede, Ashutosh Kumar Nirala, Modeste Atsague, Jin Tian
Adversarial Training (AT) has been found to substantially improve the robustness of deep learning classifiers against adversarial attacks.
no code implementations • 23 Mar 2023 • Yaojie Hu, Jin Tian
NI is the first neural model capable of executing Py150 dataset programs, including library functions without concrete inputs, and it can be trained with flexible code understanding objectives.
no code implementations • 15 Mar 2023 • Olukorede Fakorede, Ashutosh Nirala, Modeste Atsague, Jin Tian
In this paper, we propose integrating hypersphere embedding (HE) into AT with regularization terms that exploit the rich angular information available in the HE framework.
2 code implementations • 11 Oct 2022 • Hyunchai Jeong, Jin Tian, Elias Bareinboim
Identifying the effects of new interventions from data is a significant challenge found across a wide range of the empirical sciences.
no code implementations • 22 Feb 2022 • Tara V. Anand, Adèle H. Ribeiro, Jin Tian, Elias Bareinboim
Finally, we show that C-DAGs are valid for performing counterfactual inferences about clusters of variables.
no code implementations • NeurIPS 2021 • Yonghan Jung, Jin Tian, Elias Bareinboim
We study the problem of estimating the density of the causal effect of a binary treatment on a continuous outcome given a binary instrumental variable in the presence of covariates.
no code implementations • 12 Oct 2021 • Junzhe Zhang, Jin Tian, Elias Bareinboim
This paper investigates the problem of bounding counterfactual queries from an arbitrary collection of observational and experimental distributions and qualitative knowledge about the underlying data-generating model represented in the form of a causal diagram.
no code implementations • 21 Sep 2021 • Xin Du, Subramanian Ramamoorthy, Wouter Duivesteijn, Jin Tian, Mykola Pechenizkiy
Specifically, we propose to leverage causal knowledge by regarding the distributional shifts in subpopulations and deployment environments as the results of interventions on the underlying system.
no code implementations • NeurIPS 2021 • Junzhe Zhang, Elias Bareinboim, Jin Tian
We show that all counterfactual distributions (over finite observed variables) in an arbitrary causal diagram can be generated by a special family of structural causal models (SCMs), compatible with the same causal diagram, where the unobserved (exogenous) variables are discrete, taking values in a finite domain.
no code implementations • 18 Feb 2021 • Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jin Tian, Jia Liu
Our empirical results show that the proposed defenses can substantially reduce the estimation errors of the data poisoning attacks.
no code implementations • NeurIPS 2020 • Yonghan Jung, Jin Tian, Elias Bareinboim
In this paper, we develop a learning framework that marries two families of methods, benefiting from the generality of the causal identification theory and the effectiveness of the estimators produced based on the principle of empirical risk minimization (ERM).
1 code implementation • 8 Jun 2020 • Hebi Li, Qi Xiao, Jin Tian
We propose a novel approach that models the whole DAG structure discovery task as a supervised learning problem.
no code implementations • 2 Jul 2019 • Mojdeh Saadati, Jin Tian
In this paper, we introduce a covariate adjustment formulation for controlling confounding bias in the presence of missing-not-at-random data and develop a necessary and sufficient condition for recovering causal effects using the adjustment.
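For context, the classical covariate adjustment (back-door) identity that such a formulation generalizes to the missing-data setting reads as follows; the notation ($X$ treatment, $Y$ outcome, $Z$ an admissible covariate set) is a standard convention in causal inference, not taken from this listing:

```latex
% Back-door adjustment: the causal effect of X on Y, identified by
% summing over an admissible covariate set Z (fully observed case).
P(y \mid do(x)) = \sum_{z} P(y \mid x, z)\, P(z)
```

The paper's contribution concerns when this kind of adjustment remains valid once the data are missing not at random; the formula above is only the fully observed baseline.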
no code implementations • 26 May 2019 • Hebi Li, Qi Xiao, Shixin Tian, Jin Tian
Machine learning models are vulnerable to adversarial examples.
no code implementations • 15 Nov 2016 • Jin Tian
A probabilistic query may not be estimable from observed data corrupted by missing values if the data are not missing at random (MAR).
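As a reminder of the standard definition this snippet relies on (Rubin's missingness taxonomy; the symbols below are conventional and not taken from this listing): data are missing at random (MAR) when the missingness indicators $R$ depend only on the observed components of the data, not on the missing ones:

```latex
% MAR condition: V_obs = observed components, V_mis = missing
% components, R = missingness indicators.
P(R \mid V_{\mathrm{obs}}, V_{\mathrm{mis}}) = P(R \mid V_{\mathrm{obs}})
```

When this equality fails, the data are missing not at random (MNAR), which is exactly the regime the paper addresses.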
no code implementations • 19 Jan 2015 • Ru He, Jin Tian, Huaiqing Wu
We study the Bayesian model averaging approach to learning Bayesian network structures (DAGs) from data.
no code implementations • 7 Aug 2014 • Yetian Chen, Jin Tian, Olga Nikolova, Srinivas Aluru
Using dynamic programming (DP), the fastest known sequential algorithm computes the exact posterior probabilities of structural features in $O(2(d+1)n2^n)$ time and space, where $n$ is the number of nodes (variables) in the Bayesian network and the in-degree (number of parents) per node is bounded by a constant $d$.
no code implementations • NeurIPS 2013 • Karthika Mohan, Judea Pearl, Jin Tian
We address the problem of deciding whether there exists a consistent estimator of a given relation Q, when data are missing not at random.