TY - GEN
T1 - Learning deep representations and detection of docking stations using underwater imaging
AU - Liu, Shuang
AU - Ozay, Mete
AU - Okatani, Takayuki
AU - Xu, Hongli
AU - Lin, Yang
AU - Gu, Haitao
N1 - Funding Information:
This work was partly supported by (1) the China State Key Laboratory of Robotics (No. 2016-Z08), (2) the Council for Science, Technology and Innovation (CSTI), Cross-ministerial Strategic Innovation Promotion Program (Infrastructure Maintenance, Renovation and Management), and (3) the ImPACT Program Tough Robotics Challenge of the Council for Science, Technology, and Innovation (Cabinet Office, Government of Japan).
Publisher Copyright:
© 2018 IEEE
PY - 2018/12/4
Y1 - 2018/12/4
N2 - Underwater docking endows AUVs with the ability to recharge and transfer data. Detection of underwater docking stations is a crucial step required to perform successful docking. We propose a method to detect underwater docking stations using two-dimensional images captured under varying environmental lighting, deformations arising from scale and rotation, different light intensities, and partial observation. To realize the proposed method, we first train Convolutional Neural Networks (CNNs) to learn feature representations and then employ a deep detection network. To analyze the performance of the proposed method, we prepared an image dataset of docking stations using underwater imaging. We then explore the performance of our method using different data augmentation methods. Data augmentation improved the detection AUC by 0.14, yielding an AUC of 0.88. Transfer learning provided a further gain of 0.23 AUC, and we obtained 0.88 AUC on another dataset.
AB - Underwater docking endows AUVs with the ability to recharge and transfer data. Detection of underwater docking stations is a crucial step required to perform successful docking. We propose a method to detect underwater docking stations using two-dimensional images captured under varying environmental lighting, deformations arising from scale and rotation, different light intensities, and partial observation. To realize the proposed method, we first train Convolutional Neural Networks (CNNs) to learn feature representations and then employ a deep detection network. To analyze the performance of the proposed method, we prepared an image dataset of docking stations using underwater imaging. We then explore the performance of our method using different data augmentation methods. Data augmentation improved the detection AUC by 0.14, yielding an AUC of 0.88. Transfer learning provided a further gain of 0.23 AUC, and we obtained 0.88 AUC on another dataset.
KW - CNNs
KW - Detection
KW - Underwater docking
KW - Underwater imaging
UR - http://www.scopus.com/inward/record.url?scp=85060315398&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85060315398&partnerID=8YFLogxK
U2 - 10.1109/OCEANSKOBE.2018.8559067
DO - 10.1109/OCEANSKOBE.2018.8559067
M3 - Conference contribution
AN - SCOPUS:85060315398
T3 - 2018 OCEANS - MTS/IEEE Kobe Techno-Oceans, OCEANS - Kobe 2018
BT - 2018 OCEANS - MTS/IEEE Kobe Techno-Oceans, OCEANS - Kobe 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 OCEANS - MTS/IEEE Kobe Techno-Oceans, OCEANS - Kobe 2018
Y2 - 28 May 2018 through 31 May 2018
ER -