Search Results for author: Jang Hyun Cho

Found 7 papers, 5 papers with code

Language-Image Models with 3D Understanding

no code implementations • 6 May 2024 • Jang Hyun Cho, Boris Ivanovic, Yulong Cao, Edward Schmerling, Yue Wang, Xinshuo Weng, Boyi Li, Yurong You, Philipp Krähenbühl, Yan Wang, Marco Pavone

Our experiments on outdoor benchmarks demonstrate that Cube-LLM significantly outperforms existing baselines, by 21.3 points of AP-BEV on the Talk2Car dataset for 3D grounded reasoning and by 17.7 points on the DriveLM dataset for complex reasoning about driving scenarios.

Question Answering • Visual Question Answering

Language-conditioned Detection Transformer

1 code implementation • 29 Nov 2023 • Jang Hyun Cho, Philipp Krähenbühl

We use this detector to pseudo-label images with image-level labels.
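The snippet above describes pseudo-labeling images that carry only image-level labels. A minimal illustrative sketch of the idea, not the paper's implementation (the function name, detection format, and confidence-threshold rule are all assumptions):

```python
def pseudo_label(detections, image_labels, score_thresh=0.5):
    # detections: list of (class_name, score, box) tuples from a trained detector.
    # image_labels: set of image-level class names known to be present.
    # Keep only confident detections whose class matches an image-level label,
    # turning weak image-level supervision into box-level pseudo-labels.
    return [(cls, score, box)
            for cls, score, box in detections
            if cls in image_labels and score >= score_thresh]

dets = [("car", 0.9, (0, 0, 10, 10)),
        ("dog", 0.8, (1, 1, 2, 2)),
        ("car", 0.3, (5, 5, 6, 6))]
# Only the confident "car" detection survives when the image is labeled "car".
boxes = pseudo_label(dets, {"car"})
```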

Pseudo Label

PartDistillation: Learning Parts From Instance Segmentation

1 code implementation • CVPR 2023 • Jang Hyun Cho, Philipp Krähenbühl, Vignesh Ramanathan

PartDistillation transfers the part information of an instance segmentation model into a part segmentation model through self-supervised self-training on a large dataset.
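The abstract describes transfer through self-training: a teacher model labels unlabeled data, and a student is trained on those labels. A generic self-training loop, sketched only to illustrate the pattern (the callables and round count are placeholders, not PartDistillation's actual recipe):

```python
def self_train(teacher, fit, unlabeled_images, rounds=2):
    # teacher: callable mapping an image to a predicted label.
    # fit: callable mapping a list of (image, label) pairs to a new model
    #      (itself a callable image -> label).
    # Each round, the current model pseudo-labels the unlabeled pool and a
    # new student is fit on those pseudo-labels; the student then becomes
    # the teacher for the next round.
    student = teacher
    for _ in range(rounds):
        pseudo = [(img, student(img)) for img in unlabeled_images]
        student = fit(pseudo)
    return student
```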

Instance Segmentation Object +3

NMS Strikes Back

1 code implementation • 12 Dec 2022 • Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl

Our detector, which trains Deformable-DETR with traditional IoU-based label assignment, achieves 50.2 COCO mAP within 12 epochs (1x schedule) with a ResNet50 backbone, outperforming all existing traditional or transformer-based detectors in this setting.

Ranked #2 on Object Detection on COCO-O (using extra training data)

Attribute • object-detection +1

On the Efficacy of Knowledge Distillation

no code implementations • ICCV 2019 • Jang Hyun Cho, Bharath Hariharan

In this paper, we present a thorough evaluation of the efficacy of knowledge distillation and its dependence on student and teacher architectures.
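Knowledge distillation, the technique this paper evaluates, trains a student to match a teacher's temperature-softened output distribution. A minimal sketch of the standard distillation loss (Hinton-style KL on softened softmax outputs, written here in plain Python for illustration; not this paper's evaluation code):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T yields a softer distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across T.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (temperature ** 2) * kl
```

When the student's logits match the teacher's, the loss is zero; any mismatch yields a positive penalty that the student minimizes during training.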

Knowledge Distillation
