no code implementations • 23 Apr 2024 • Hoang Chuong Nguyen, Tianyu Wang, Jose M. Alvarez, Miaomiao Liu
In the next stage, we use an object network to estimate the depth of those moving objects assuming rigid motions.
no code implementations • 22 Apr 2024 • Jiahao Ma, Miaomiao Liu, David Ahmedt-Aristizabal, Chuong Nguyen
We solve this problem by our HashPoint method combining these two strategies, leveraging rasterization for efficient point searching and sampling, and ray marching for rendering.
no code implementations • 18 Apr 2024 • Jinwu Wang, Wei Mao, Miaomiao Liu
In this paper, we introduce MIDGET, a MusIc conditioned 3D Dance GEneraTion model based on a Dance motion Vector Quantised Variational AutoEncoder (VQ-VAE) model and a Motion Generative Pre-Training (GPT) model, to generate vibrant and high-quality dances that match the music rhythm.
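The VQ-VAE component above rests on a nearest-neighbour codebook lookup at the bottleneck. A minimal NumPy sketch of that quantisation step (the codebook size, dimensionality, and `vector_quantise` helper are illustrative, not MIDGET's actual implementation):

```python
import numpy as np

def vector_quantise(z, codebook):
    """Nearest-neighbour codebook lookup, the core of a VQ-VAE bottleneck:
    each latent vector is replaced by its closest codebook entry."""
    # z: (N, d) encoder outputs; codebook: (K, d)
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K) squared distances
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

rng = np.random.default_rng(2)
codebook = rng.normal(size=(8, 3))
# Latents close to entries 1 and 5 should quantise back to them.
z = codebook[[1, 5]] + 0.01 * rng.normal(size=(2, 3))
q, idx = vector_quantise(z, codebook)
print(idx)
```

The GPT component would then model sequences of these discrete indices, conditioned on music features.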
no code implementations • 1 Oct 2023 • Chaoyue Xing, Wei Mao, Miaomiao Liu
In this paper, we tackle the problem of scene-aware 3D human motion forecasting.
no code implementations • 25 Sep 2023 • Tianyu Wang, Kee Siong Ng, Miaomiao Liu
We tackle the task of scalable unsupervised object-centric representation learning on 3D scenes.
no code implementations • 19 Jul 2023 • Hao Yang, Liyuan Pan, Yan Yang, Richard Hartley, Miaomiao Liu
In this paper, we propose, to the best of our knowledge, the first framework that introduces the contrastive language-image pre-training framework (CLIP) to accurately estimate the blur map from a DP pair in an unsupervised manner.
1 code implementation • 9 Jul 2023 • Jiayu Yang, Enze Xie, Miaomiao Liu, Jose M. Alvarez
In contrast, we propose to use parametric depth distribution modeling for feature transformation.
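One way to read "parametric depth distribution modeling" is that each pixel's depth is described by a low-dimensional distribution whose density weights features along the ray. A toy sketch with a Gaussian parametrisation (the Gaussian form and sampling grid are assumptions, not necessarily the paper's exact formulation):

```python
import numpy as np

def gaussian_depth_weights(mu, sigma, depth_samples):
    """Per-pixel weights over candidate depths from a parametric (here
    Gaussian) depth distribution; weights are normalised to sum to 1."""
    # mu, sigma: (H, W) per-pixel mean/std; depth_samples: (D,) candidates
    d = depth_samples[:, None, None]               # (D, 1, 1)
    logits = -0.5 * ((d - mu) / sigma) ** 2
    logits -= logits.max(axis=0, keepdims=True)    # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=0, keepdims=True)        # (D, H, W)

mu = np.full((2, 2), 1.5)                          # every pixel at depth 1.5
sigma = np.full((2, 2), 0.2)
samples = np.linspace(1.0, 2.0, 5)
w = gaussian_depth_weights(mu, sigma, samples)
print(w.shape)
```

Features warped to each candidate depth could then be aggregated with these weights instead of a uniform plane sweep.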
1 code implementation • CVPR 2023 • Huiyu Gao, Wei Mao, Miaomiao Liu
Different from their works, which sparsify voxels globally with a fixed occupancy threshold, we perform the sparsification on a local feature volume along each visual ray, preserving at least one voxel per ray to retain finer details.
no code implementations • 28 Feb 2023 • Shidi Li, Christian Walder, Alexander Soen, Lexing Xie, Miaomiao Liu
The sparse transformer can reduce the computational complexity of the self-attention layers to $O(n)$, whilst still being a universal approximator of continuous sequence-to-sequence functions.
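The abstract does not specify the sparsity pattern; a sliding-window pattern is one common way to bring self-attention from $O(n^2)$ down to $O(n)$ per layer (window size fixed). A minimal sketch under that assumption:

```python
import numpy as np

def window_attention(q, k, v, w=2):
    """Sliding-window self-attention: each position attends only to
    neighbours within distance w, so cost is O(n * w) rather than O(n^2)."""
    n, d = q.shape
    out = np.zeros_like(v, dtype=float)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)
        weights = np.exp(scores - scores.max())    # local softmax
        weights /= weights.sum()
        out[i] = weights @ v[lo:hi]
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
y = window_attention(x, x, x, w=2)
print(y.shape)
```

Universal-approximation results for sparse transformers additionally rely on patterns that let information propagate globally across layers, which a single local window alone does not capture.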
1 code implementation • ICCV 2023 • Jiayu Yang, Enze Xie, Miaomiao Liu, Jose M. Alvarez
In contrast, we propose to use parametric depth distribution modeling for feature transformation.
no code implementations • CVPR 2023 • Yan Yang, Liyuan Pan, Liu Liu, Miaomiao Liu
It estimates a disparity feature map, which is used to query a trainable kernel set to estimate a blur kernel that best describes the spatially-varying blur.
1 code implementation • 8 Oct 2022 • Wei Mao, Miaomiao Liu, Richard Hartley, Mathieu Salzmann
In this paper, we tackle the task of scene-aware 3D human motion forecasting, which consists of predicting future human poses given a 3D scene and a past human motion.
Ranked #2 on Human Pose Forecasting on GTA-IM Dataset
no code implementations • 24 Aug 2022 • Doris Antensteiner, Silvia Bucci, Arushi Goel, Marah Halawa, Niveditha Kalavakonda, Tejaswi Kasarla, Miaomiao Liu, Nermin Samet, Ivaxi Sheth
In this paper, we present the details of Women in Computer Vision Workshop - WiCV 2022, organized alongside the hybrid CVPR 2022 in New Orleans, Louisiana.
1 code implementation • CVPR 2022 • Wei Mao, Miaomiao Liu, Mathieu Salzmann
We introduce the task of action-driven stochastic human motion prediction, which aims to predict multiple plausible future motions given a sequence of action labels and a short motion history.
1 code implementation • CVPR 2022 • Jiayu Yang, Jose M. Alvarez, Miaomiao Liu
Boundary pixels usually follow a multi-modal distribution as they represent different depths; this assumption therefore results in erroneous depth predictions at the coarser level of the cost volume pyramid, which cannot be corrected at the refinement levels, leading to wrong depth predictions.
no code implementations • 15 Mar 2022 • Shidi Li, Christian Walder, Miaomiao Liu
This paper addresses the problem of unsupervised parts-aware point cloud generation with learned parts-based self-similarity.
no code implementations • 24 Nov 2021 • Shudong Yang, Miaomiao Liu
University evaluation and ranking is an extremely complex activity.
no code implementations • 13 Oct 2021 • Shidi Li, Miaomiao Liu, Christian Walder
We achieve this with a simple modification of the Variational Auto-Encoder which yields a joint model of the point cloud itself along with a schematic representation of it as a combination of shape primitives.
1 code implementation • ICCV 2021 • Wei Mao, Miaomiao Liu, Mathieu Salzmann
Recent progress in stochastic motion prediction, i.e., predicting multiple possible future human motions given a single past pose sequence, has led to producing truly diverse future motions and even providing control over the motion of some body parts.
Ranked #2 on Human Pose Forecasting on AMASS (APD metric)
1 code implementation • 17 Jun 2021 • Wei Mao, Miaomiao Liu, Mathieu Salzmann, Hongdong Li
Whether based on recurrent or feed-forward neural networks, existing learning based methods fail to model the observation that human motion tends to repeat itself, even for complex sports actions and cooking activities.
no code implementations • 10 Jun 2021 • Tianyu Wang, Miaomiao Liu, Kee Siong Ng
Experimental results demonstrate that SPAIR3D has strong scalability and is capable of detecting and segmenting an unknown number of objects from a point cloud in an unsupervised manner.
no code implementations • 20 May 2021 • Kai Han, Kwan-Yee K. Wong, Miaomiao Liu
We present a simple setup that allows us to alter the incident light paths before light rays enter the object by immersing the object partially in a liquid, and develop a method for recovering the object surface through reconstructing and triangulating such incident light paths.
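Reconstructing and triangulating incident light paths through a liquid hinges on refraction at the interface. A small vector-form Snell's-law helper (the function name and the flat-interface toy setup are illustrative, not the paper's calibration procedure):

```python
import numpy as np

def refract(incident, normal, n1, n2):
    """Snell's law in vector form: refract a unit direction crossing from
    medium with index n1 into medium with index n2, given the unit surface
    normal pointing back toward the incoming ray."""
    cos_i = -incident @ normal
    r = n1 / n2
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                              # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return r * incident + (r * cos_i - cos_t) * normal

d = np.array([0.0, -1.0, 0.0])                   # straight down onto the surface
n = np.array([0.0, 1.0, 0.0])
print(refract(d, n, 1.0, 1.33))                  # normal incidence: unchanged
```

Intersecting two such refracted rays per surface point is the basis of triangulation-style recovery.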
1 code implementation • CVPR 2021 • Jiayu Yang, Jose M. Alvarez, Miaomiao Liu
Here, we propose a self-supervised learning framework for multi-view stereo that exploits pseudo labels from the input data.
1 code implementation • 23 Jan 2021 • Kai Han, Miaomiao Liu, Dirk Schnieders, Kwan-Yee K. Wong
This paper addresses the problem of mirror surface reconstruction, and proposes a solution based on observing the reflections of a moving reference plane on the mirror surface.
1 code implementation • CVPR 2021 • Liyuan Pan, Shah Chowdhury, Richard Hartley, Miaomiao Liu, Hongguang Zhang, Hongdong Li
The heavy defocus blur in DP pairs affects the performance of matching-based depth estimation approaches.
3 code implementations • ECCV 2020 • Wei Mao, Miaomiao Liu, Mathieu Salzmann
Human motion prediction aims to forecast future human poses given a past motion.
no code implementations • CVPR 2020 • Liyuan Pan, Miaomiao Liu, Richard Hartley
Then, we consider the special case of image blur caused by high dynamics in the visual environments and show that including the blur formation in our model further constrains flow estimation.
no code implementations • 26 Feb 2020 • Shihao Jiang, Dylan Campbell, Miaomiao Liu, Stephen Gould, Richard Hartley
We address the problem of joint optical flow and camera motion estimation in rigid scenes by incorporating geometric constraints into an unsupervised deep learning framework.
1 code implementation • CVPR 2020 • Jiayu Yang, Wei Mao, Jose M. Alvarez, Miaomiao Liu
We propose a cost volume-based neural network for depth inference from multi-view images.
Ranked #14 on 3D Reconstruction on DTU
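A cost volume of this kind is typically built by warping source-view features to the reference view for each depth hypothesis and scoring their consistency; a variance-based cost, as popularised by MVSNet-style methods, can be sketched as follows (the variance metric and tensor shapes are assumptions, not this paper's exact design):

```python
import numpy as np

def variance_cost_volume(ref_feat, warped_feats):
    """Variance-based matching cost across views per depth hypothesis.
    ref_feat: (C, H, W) reference features; warped_feats: (V, D, C, H, W)
    source features pre-warped to the reference view for D hypotheses."""
    # Include the reference view in the per-hypothesis statistics.
    feats = np.concatenate([
        np.broadcast_to(ref_feat, warped_feats.shape[1:])[None], warped_feats])
    return feats.var(axis=0).mean(axis=1)  # (D, H, W): low = photo-consistent

rng = np.random.default_rng(1)
ref = rng.normal(size=(3, 4, 4))
src = rng.normal(size=(2, 5, 3, 4, 4))
cost = variance_cost_volume(ref, src)
print(cost.shape)
```

In a pyramid variant, this volume is built coarsely first and refined over a narrower depth range at finer levels.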
no code implementations • 6 Oct 2019 • Liyuan Pan, Yuchao Dai, Miaomiao Liu, Fatih Porikli, Quan Pan
Under our model, these three tasks are naturally connected and expressed as the parameter estimation of 3D scene structure and camera motion (structure and motion for the dynamic scenes).
5 code implementations • ICCV 2019 • Wei Mao, Miaomiao Liu, Mathieu Salzmann, Hongdong Li
In this paper, we propose a simple feed-forward deep network for motion prediction, which takes into account both temporal smoothness and spatial dependencies among human body joints.
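Temporal smoothness in this line of work is commonly obtained by representing each joint trajectory with a truncated discrete cosine transform. A sketch of that encoding (the orthonormal DCT-II construction is standard; the `keep` truncation and toy trajectory are illustrative):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size (n, n)."""
    k = np.arange(n)[:, None]
    t = np.arange(n)[None, :]
    m = np.cos(np.pi * k * (2 * t + 1) / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def truncate_trajectory(traj, keep):
    """Drop high-frequency DCT coefficients of a 1-D joint trajectory,
    enforcing temporal smoothness in a compact representation."""
    basis = dct_matrix(len(traj))
    coeffs = basis @ traj
    coeffs[keep:] = 0.0           # keep only the first `keep` frequencies
    return basis.T @ coeffs       # reconstruct a smoothed trajectory

t = np.linspace(0, 1, 20)
traj = np.sin(2 * np.pi * t)
smooth = truncate_trajectory(traj, keep=5)
print(smooth.shape)
```

A network can then regress the retained coefficients of the future trajectory, with spatial dependencies among joints handled separately (e.g., by a graph network over the skeleton).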
1 code implementation • 12 Mar 2019 • Liyuan Pan, Richard Hartley, Cedric Scheerlinck, Miaomiao Liu, Xin Yu, Yuchao Dai
Based on the abundant event data alongside low-frame-rate, easily blurred images, we propose a simple yet effective approach to reconstruct high-quality, high-frame-rate sharp videos.
no code implementations • 1 Mar 2019 • Liyuan Pan, Yuchao Dai, Miaomiao Liu
Camera shake during exposure is a major problem in hand-held photography, as it causes image blur that destroys details in the captured images. In the real world, such blur is mainly caused by both the camera motion and the complex scene structure. While many existing approaches have been proposed based on various assumptions regarding the scene structure or the camera motion, few can handle real 6 DoF camera motion. In this paper, we propose to jointly estimate the 6 DoF camera motion and remove the non-uniform blur caused by camera motion by exploiting their underlying geometric relationships, with a single blurry image and its depth map (either direct depth measurements or a learned depth map) as input. We formulate our joint deblurring and 6 DoF camera motion estimation as an energy minimization problem which is solved in an alternating manner.
1 code implementation • CVPR 2019 • Liyuan Pan, Cedric Scheerlinck, Xin Yu, Richard Hartley, Miaomiao Liu, Yuchao Dai
In this paper, we propose a simple and effective approach, the \textbf{Event-based Double Integral (EDI)} model, to reconstruct a high frame-rate, sharp video from a single blurry frame and its event data.
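The EDI model relates a blurry frame to a latent sharp frame through the exponential integral of event polarities over the exposure. A discretised toy version of that relation (the contrast threshold `c` and the synthetic event stream are assumptions for illustration):

```python
import numpy as np

def edi_sharp_frame(blurry, event_sum, c=0.3):
    """Event-based Double Integral, discretised: recover the latent sharp
    frame at the exposure start from a blurry image and event sums.
    blurry: (H, W) average intensity over the exposure.
    event_sum: (T, H, W) cumulative event polarity from start to each step."""
    # Intensity at step t relative to the start: L(t) = L(0) * exp(c * E(t)),
    # so the blurry frame is L(0) times the mean exponential gain.
    gain = np.exp(c * event_sum).mean(axis=0)
    return blurry / gain

# Toy check: a scene that brightens smoothly during the exposure.
T, H, W = 10, 2, 2
sharp0 = np.full((H, W), 0.5)
events = np.cumsum(np.full((T, H, W), 0.1), axis=0)   # E(t) grows linearly
frames = sharp0 * np.exp(0.3 * events)
blurry = frames.mean(axis=0)
rec = edi_sharp_frame(blurry, events, c=0.3)
print(np.allclose(rec, sharp0))  # True
```

Evaluating the same gain at other time stamps yields the rest of the high-frame-rate video.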
no code implementations • 26 Nov 2018 • Liyuan Pan, Richard Hartley, Miaomiao Liu, Yuchao Dai
The image blurring process is generally modelled as the convolution of a blur kernel with a latent image.
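That formation model, blurry = kernel ∗ latent image, can be written directly as a small 2-D convolution (the zero padding and the box kernel are illustrative choices, not the paper's kernel model):

```python
import numpy as np

def convolve2d_same(image, kernel):
    """Direct 2-D convolution with zero padding ('same' output size),
    modelling blur formation: blurry = kernel convolved with latent image."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            patch = padded[i:i + kh, j:j + kw]
            out[i, j] = (patch * kernel[::-1, ::-1]).sum()  # flip = convolution
    return out

latent = np.zeros((5, 5)); latent[2, 2] = 1.0    # point source
box = np.full((3, 3), 1.0 / 9.0)                 # uniform box blur kernel
blurry = convolve2d_same(latent, box)
print(blurry[2, 2])  # 1/9: energy spread over the 3x3 neighbourhood
```

Deblurring then amounts to inverting this operator, which is ill-posed and requires priors on the kernel or the latent image.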
no code implementations • CVPR 2018 • Miaomiao Liu, Xuming He, Mathieu Salzmann
By contrast, in this paper, we propose to exploit the 3D geometry of the scene to synthesize a novel view.
no code implementations • 27 Nov 2017 • Liyuan Pan, Yuchao Dai, Miaomiao Liu, Fatih Porikli
In this paper, we propose to tackle the problem of depth map completion by jointly exploiting the blurry color image sequences and the sparse depth map measurements, and present an energy minimization based formulation to simultaneously complete the depth maps, estimate the scene flow and deblur the color images.
no code implementations • CVPR 2017 • Wei Zhuo, Mathieu Salzmann, Xuming He, Miaomiao Liu
In particular, while some of them aim at segmenting the image into regions, such as object or surface instances, others aim at inferring the semantic labels of given regions, or their support relationships.
no code implementations • CVPR 2017 • Liyuan Pan, Yuchao Dai, Miaomiao Liu, Fatih Porikli
Unlike the existing approach [31], which used a pre-computed scene flow, we propose a single framework to jointly estimate the scene flow and deblur the image, where the motion cues from scene flow estimation and the blur information reinforce each other and produce better results than conventional scene flow estimation or stereo deblurring methods.
no code implementations • CVPR 2016 • Kai Han, Kwan-Yee K. Wong, Dirk Schnieders, Miaomiao Liu
Unlike previous approaches which require tedious work to calibrate the camera, our method can recover both the camera intrinsics and extrinsics together with the mirror surface from reflections of the reference plane under at least three unknown distinct poses.
no code implementations • 31 May 2016 • Miaomiao Liu, Mathieu Salzmann, Xuming He
Despite much progress, state-of-the-art techniques suffer from two drawbacks: (i) they rely on the assumption that intensity edges coincide with depth discontinuities, which, unfortunately, is only true in controlled environments; and (ii) they typically exploit the availability of high-resolution training depth maps, which often cannot be acquired in practice due to the sensors' limitations.
no code implementations • 19 Nov 2015 • Miaomiao Liu, Mathieu Salzmann, Xuming He
To this end, we first study the problem of depth estimation from a single image.
no code implementations • CVPR 2015 • Wei Zhuo, Mathieu Salzmann, Xuming He, Miaomiao Liu
We tackle the problem of single image depth estimation, which, without additional knowledge, suffers from many ambiguities.
no code implementations • CVPR 2015 • Kai Han, Kwan-Yee K. Wong, Miaomiao Liu
In this paper, we develop a fixed viewpoint approach for dense surface reconstruction of transparent objects based on refraction of light.
no code implementations • CVPR 2014 • Miaomiao Liu, Mathieu Salzmann, Xuming He
The unary potentials in this graphical model are computed by making use of the images with known depth.
no code implementations • CVPR 2013 • Miaomiao Liu, Richard Hartley, Mathieu Salzmann
In such conditions, our differential geometry analysis provides a theoretical proof that the shape of the mirror surface can be uniquely recovered if the pose of the reference target is known.