no code implementations • 26 Sep 2023 • Namiko Saito, Mayu Hiramoto, Ayuna Kubo, Kanata Suzuki, Hiroshi Ito, Shigeki SUGANO, Tetsuya OGATA
We tackled the task of cooking scrambled eggs with real ingredients, in which the robot must perceive the state of the egg and adjust its stirring movements in real time, as the egg is heated and its state changes continuously.
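The entry implies a closed perception-action loop: encode each camera frame, track the continuously changing egg state, and emit the next stirring command. A minimal sketch of that loop is below; the module names, dimensions, and LSTM-based design are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class StirringController(nn.Module):
    """Hypothetical recurrent perception-action loop: encode the camera
    frame, track the egg's evolving state with an LSTM, and output the
    next stirring command. All sizes are illustrative only."""
    def __init__(self, img_feat=256, hidden=128, action_dim=7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, img_feat),
        )
        self.rnn = nn.LSTM(img_feat, hidden, batch_first=True)
        self.policy = nn.Linear(hidden, action_dim)  # e.g. joint-velocity command

    def forward(self, frames):                  # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(B, T, -1)
        h, _ = self.rnn(feats)                  # hidden state tracks the cooking egg
        return self.policy(h)                   # one action per time step
```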
no code implementations • 4 Jun 2021 • Namiko Saito, Tetsuya OGATA, Satoshi Funabashi, Hiroki Mori, Shigeki SUGANO
We also examine the individual contributions of image, force, and tactile data, and show that learning from diverse multimodal information yields rich perception for tool use.
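A hedged sketch of late fusion over the three modalities the entry mentions follows; the per-modality encoders, feature sizes, and concatenation strategy are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    """Illustrative late-fusion encoder for image, force, and tactile
    inputs; dimensions are made up for the example."""
    def __init__(self, force_dim=6, tactile_dim=16, fused=128):
        super().__init__()
        self.img_enc = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(2), nn.Flatten(),
            nn.Linear(8 * 2 * 2, 64),
        )
        self.force_enc = nn.Linear(force_dim, 16)
        self.tactile_enc = nn.Linear(tactile_dim, 32)
        self.fuse = nn.Sequential(nn.Linear(64 + 16 + 32, fused), nn.ReLU())

    def forward(self, img, force, tactile):
        z = torch.cat([self.img_enc(img),
                       self.force_enc(force),
                       self.tactile_enc(tactile)], dim=-1)
        return self.fuse(z)   # shared representation for downstream control
```

Zeroing one modality's features at evaluation time is one simple way to probe each input's contribution, in the spirit of the ablation the entry describes.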
no code implementations • ICLR 2019 • Zhihao LI, Toshiyuki MOTOYOSHI, Kazuma Sasaki, Tetsuya OGATA, Shigeki SUGANO
Current end-to-end deep learning driving models have two problems: (1) poor generalization to unobserved driving environments when the diversity of the training dataset is limited, and (2) a lack of accident-explanation ability when the driving models do not work as expected.
1 code implementation • 28 Sep 2018 • Zhihao Li, Toshiyuki Motoyoshi, Kazuma Sasaki, Tetsuya OGATA, Shigeki SUGANO
Current end-to-end deep learning driving models have two problems: (1) poor generalization to unobserved driving environments when the diversity of the training dataset is limited, and (2) a lack of accident-explanation ability when the driving models do not work as expected.
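One common way to give an end-to-end driving network an inspectable explanation signal is a spatial attention map over image features, which can be visualized after a failure. The sketch below is a generic illustration of that idea, not the model proposed in these two entries.

```python
import torch
import torch.nn as nn

class ExplainableDriver(nn.Module):
    """Generic end-to-end driving net with a spatial attention map that
    can be rendered as a heatmap to inspect what the model attends to.
    Illustrative only; not the paper's architecture."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
        )
        self.attn = nn.Conv2d(32, 1, 1)           # 1-channel attention logits
        self.head = nn.Linear(32, 2)              # e.g. steering, throttle

    def forward(self, img):                       # img: (B, 3, H, W)
        f = self.backbone(img)                    # (B, 32, h, w)
        logits = self.attn(f)                     # (B, 1, h, w)
        a = torch.softmax(logits.flatten(2), dim=-1).view_as(logits)
        pooled = (f * a).sum(dim=(2, 3))          # attention-weighted pooling
        return self.head(pooled), a               # control output + inspectable map
```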
no code implementations • 23 Sep 2018 • Namiko Saito, Kitae Kim, Shingo Murata, Tetsuya OGATA, Shigeki SUGANO
We confirm that the robot can detect features of tools, objects, and actions by learning their effects and executing the task.