TY - GEN
T1 - Method of 3D object reconstruction by fusing vision with touch using internal models with global and local deformations
AU - Yamada, Yasuharu
AU - Ishiguro, Akio
AU - Uchikawa, Yoshiki
PY - 1993
Y1 - 1993
N2 - In recent years, the need to develop sensor fusion systems has grown considerably. This paper introduces a method for fusing vision and touch using an internal model with both global and local deformations. We utilize superquadrics with local deformations as internal models. The proposed method consists of two phases. In the first phase, we recover the object shape parametrically by adjusting the superquadric parameters to fit the visual data. The internal model constructed in this phase is necessarily a rough representation of the object shape, since visual data cover only the visible portion of the object and parametric models such as superquadrics have inherent limitations in shape representation. However, thanks to this parametric model, we can easily identify regions that are invisible and/or have large errors. In the second phase, we therefore have the tactile sensor explore these regions to gather additional information and deform the internal model locally based on defined energy functions. The feasibility of the proposed method is confirmed by simulations.
UR - http://www.scopus.com/inward/record.url?scp=0027309984&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0027309984&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:0027309984
SN - 0818634529
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 782
EP - 787
BT - Proceedings - IEEE International Conference on Robotics and Automation
PB - IEEE
T2 - Proceedings of the IEEE International Conference on Robotics and Automation
Y2 - 2 May 1993 through 6 May 1993
ER -