no code implementations • 28 Aug 2023 • Shuyi Zhou, Shuxiang Xie, Ryoichi Ishikawa, Ken Sakurada, Masaki Onishi, Takeshi Oishi
INF first trains a neural density field of the target scene using LiDAR frames.
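As a rough intuition for what a density field built from LiDAR looks like, the sketch below replaces the neural field with a coarse voxel grid: LiDAR points are binned into voxels and the counts normalized into per-voxel densities. This is a deliberate simplification of INF's learned field; all names and values are illustrative.

```python
from collections import defaultdict

def voxel_density(points, voxel_size=1.0):
    """Map 3D LiDAR points to a {voxel_index: density} dictionary."""
    counts = defaultdict(int)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        counts[key] += 1
    total = sum(counts.values())
    # Normalize counts so densities over occupied voxels sum to 1.
    return {k: c / total for k, c in counts.items()}

lidar_points = [(0.2, 0.1, 0.0), (0.4, 0.3, 0.1), (5.1, 0.0, 0.2)]
density = voxel_density(lidar_points)
# Two points fall in voxel (0, 0, 0), one in voxel (5, 0, 0).
```

A neural field would replace the dictionary with a continuous function of position, but the supervision signal (where LiDAR returns accumulate) is the same.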
1 code implementation • CVPR 2023 • Ryuhei Hamaguchi, Yasutaka Furukawa, Masaki Onishi, Ken Sakurada
Conventional architectures encode entire scene contents at a fixed rate regardless of their temporal characteristics.
Ranked #5 on Object Detection on GEN1 Detection
1 code implementation • CVPR 2021 • Ryuhei Hamaguchi, Yasutaka Furukawa, Masaki Onishi, Ken Sakurada
This paper proposes a novel heterogeneous grid convolution that builds a graph-based image representation by exploiting heterogeneity in the image content, enabling adaptive, efficient, and controllable computations in a convolutional architecture.
no code implementations • 30 Jul 2020 • Kento Doi, Ryuhei Hamaguchi, Shun Iwase, Rio Yokota, Yutaka Matsuo, Ken Sakurada
To cope with the difficulty, we introduce a deep graph matching network that establishes object correspondence between an image pair.
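The correspondence step can be illustrated without the learned network: given per-object descriptors from each image, find the assignment that minimizes total descriptor distance. The brute-force search below stands in for the deep graph matching network; descriptors and names are hypothetical.

```python
from itertools import permutations

def match_objects(desc_a, desc_b):
    """Return index pairs (i, j) matching objects in image A to image B."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    best, best_cost = None, float("inf")
    # Exhaustively try every assignment (fine for a handful of objects).
    for perm in permutations(range(len(desc_b)), len(desc_a)):
        cost = sum(dist(desc_a[i], desc_b[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return [(i, j) for i, j in enumerate(best)]

a = [(0.0, 1.0), (1.0, 0.0)]
b = [(1.1, 0.0), (0.0, 0.9)]  # same objects, listed in swapped order
pairs = match_objects(a, b)
```

A learned matcher additionally uses graph structure (relations between objects) rather than descriptors alone, which is what makes it robust when appearances are ambiguous.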
no code implementations • ECCV 2020 • Mikiya Shibuya, Shinya Sumikura, Ken Sakurada
This study proposes a privacy-preserving Visual SLAM framework for estimating camera poses and performing bundle adjustment with mixed line and point clouds in real time.
no code implementations • 9 Mar 2020 • Weimin Wang, Shohei Nobuhara, Ryosuke Nakamura, Ken Sakurada
This paper presents a novel semantic-based online extrinsic calibration approach, SOIC (so, I see), for Light Detection and Ranging (LiDAR) and camera sensors.
4 code implementations • 2 Oct 2019 • Shinya Sumikura, Mikiya Shibuya, Ken Sakurada
In this paper, we introduce OpenVSLAM, a visual SLAM framework with high usability and extensibility.
no code implementations • 3 May 2019 • Masaya Kaneko, Ken Sakurada, Kiyoharu Aizawa
We propose a novel and efficient representation for single-view depth estimation using Convolutional Neural Networks (CNNs).
no code implementations • CVPR 2019 • Ryuhei Hamaguchi, Ken Sakurada, Ryosuke Nakamura
The effectiveness of the proposed approach is verified by quantitative evaluations on four change detection datasets, and qualitative analysis shows that the proposed method acquires representations that disentangle rare events from trivial ones.
1 code implementation • 29 Nov 2018 • Ken Sakurada, Mikiya Shibuya, Weimin Wang

A straightforward approach for this task is to train a semantic change detection network directly from a large-scale dataset in an end-to-end manner.
no code implementations • 28 Oct 2018 • Shinya Sumikura, Ken Sakurada, Nobuo Kawaguchi, Ryosuke Nakamura
This paper proposes a novel method of estimating the absolute scale of monocular SfM for a multi-modal stereo camera.
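The core of scale recovery can be sketched simply: monocular SfM depths are correct only up to a global scale, so a robust ratio against metric depths from a second sensor fixes that scale. The median-ratio estimator below is an illustrative stand-in for the paper's method; all values are made up.

```python
from statistics import median

def absolute_scale(sfm_depths, metric_depths):
    """Estimate the scalar s such that s * sfm_depth ~= metric_depth."""
    ratios = [m / s for s, m in zip(sfm_depths, metric_depths) if s > 0]
    return median(ratios)  # median is robust to outlier correspondences

sfm = [1.0, 2.0, 4.0, 0.5]     # up-to-scale depths from monocular SfM
metric = [2.1, 4.0, 8.2, 1.0]  # metric depths from the other modality
scale = absolute_scale(sfm, metric)
```

Multiplying the whole SfM reconstruction by `scale` places it in metric units.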
no code implementations • 8 Dec 2017 • Ken Sakurada, Weimin Wang, Nobuo Kawaguchi, Ryosuke Nakamura
This paper presents a novel method for detecting scene changes from a pair of images with a difference of camera viewpoints using a dense optical flow based change detection network.
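A toy version of the flow-based comparison: warp the image from time t0 into the viewpoint of t1 using the flow field, then flag pixels whose warped value still differs beyond a threshold. Single-channel images as nested lists; the real method learns this comparison end to end, so everything below is an illustrative simplification.

```python
def detect_changes(img_t0, img_t1, flow, thresh=0.2):
    """Return a binary change mask; flow[y][x] = (dx, dy) from t0 to t1."""
    h, w = len(img_t1), len(img_t1[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = flow[y][x]
            sx, sy = x - dx, y - dy          # source pixel back in t0
            if 0 <= sy < h and 0 <= sx < w:
                if abs(img_t0[sy][sx] - img_t1[y][x]) > thresh:
                    mask[y][x] = 1           # appearance changed
    return mask

img0 = [[0.0, 1.0], [0.0, 0.0]]
img1 = [[1.0, 0.0], [0.0, 1.0]]  # bright pixel moved; a new one appeared
flow = [[(-1, 0), (1, 0)], [(0, 0), (0, 0)]]
mask = detect_changes(img0, img1, flow)
```

The warp compensates for the viewpoint difference, so only the genuinely new pixel survives as a change.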
no code implementations • 13 Oct 2017 • Kenji Enomoto, Ken Sakurada, Weimin Wang, Hiroshi Fukui, Masashi Matsuoka, Ryosuke Nakamura, Nobuo Kawaguchi
The networks are trained to output images close to the ground truth, using images synthesized by overlaying clouds on the ground truth as inputs.
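The training-pair construction can be sketched as alpha-blending a synthetic cloud layer over a clear ("ground truth") image to form the network input, with an L1 loss scoring predictions against the clear image. This is a minimal stand-in for the paper's pipeline; the blending model and values are assumptions.

```python
def add_clouds(clear, cloud, alpha):
    """Alpha-blend a cloud layer over the clear image (per-pixel lists)."""
    return [[a * c + (1 - a) * p for p, c, a in zip(rp, rc, ra)]
            for rp, rc, ra in zip(clear, cloud, alpha)]

def l1_loss(pred, target):
    """Mean absolute per-pixel error between two images."""
    n = sum(len(r) for r in target)
    return sum(abs(p - t) for rp, rt in zip(pred, target)
               for p, t in zip(rp, rt)) / n

clear = [[0.2, 0.8], [0.4, 0.6]]
cloud = [[1.0, 1.0], [1.0, 1.0]]
alpha = [[0.5, 0.0], [0.0, 1.0]]   # per-pixel cloud opacity
cloudy_input = add_clouds(clear, cloud, alpha)  # what the network sees
perfect_loss = l1_loss(clear, clear)            # 0.0 at the ground truth
```

Because the clear image is known exactly, the synthesized pairs give dense pixel-level supervision without needing real co-registered cloudy/clear captures.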
Ranked #8 on Cloud Removal on SEN12MS-CR
1 code implementation • 18 Aug 2017 • Weimin Wang, Ken Sakurada, Nobuo Kawaguchi
Once the corners of the chessboard in the 3D point cloud are estimated, the extrinsic calibration of the two sensors is converted to a 3D-2D matching problem.
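One way to score a candidate solution to that 3D-2D matching problem is reprojection error: project the estimated 3D chessboard corners through the camera with candidate extrinsics and measure the pixel distance to the detected 2D corners. The sketch below uses a pinhole model and, for brevity, a translation-only extrinsic; all numbers are illustrative.

```python
def reprojection_error(pts3d, pts2d, fx, fy, cx, cy, t):
    """Mean pixel error for a translation-only extrinsic t = (tx, ty, tz)."""
    err = 0.0
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        Xc, Yc, Zc = X + t[0], Y + t[1], Z + t[2]  # point in camera frame
        u_hat = fx * Xc / Zc + cx                  # pinhole projection
        v_hat = fy * Yc / Zc + cy
        err += ((u_hat - u) ** 2 + (v_hat - v) ** 2) ** 0.5
    return err / len(pts3d)

corners3d = [(0.0, 0.0, 4.0), (1.0, 0.0, 4.0)]
corners2d = [(320.0, 240.0), (445.0, 240.0)]  # consistent with fx=fy=500
err = reprojection_error(corners3d, corners2d, 500, 500, 320, 240, (0, 0, 0))
```

A full calibration would optimize rotation as well as translation (e.g. a PnP solver), minimizing exactly this kind of error.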
no code implementations • CVPR 2013 • Ken Sakurada, Takayuki Okatani, Koichiro Deguchi
The proposed method is compared with methods that use multi-view stereo (MVS) to reconstruct the scene structures at the two time points and then differentiate them to detect changes.