Invited Speaker

Fusaomi Nagata

Professor, Department of Mechanical Engineering, Faculty of Engineering, Sanyo-Onoda City University, Japan
Speech Title: Transfer Learning Based CNN and Visual Feedback Control for a Pick and Place Robot

Abstract: An artificial neural network (ANN) with five or more layers is called a deep neural network (DNN), and DNNs are recognized as among the most powerful machine learning techniques. The convolutional neural network (CNN) has a structure well suited to image recognition, and it has therefore been applied to defect inspection processes in various industrial manufacturing lines. It is also known that the support vector machine (SVM) has a superior ability for binary classification despite having only two layers. The authors have already developed a CNN&SVM design and training tool that simplifies the construction of defect detection systems; its effectiveness and validity have been demonstrated through the design, training, and evaluation of several CNNs [1, 2]. The tool also facilitates the design of CNN models based on the transfer learning concept [3].
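The transfer learning concept behind the tool — keep a pretrained network's early layers fixed and retrain only a new output head for the target task — can be illustrated with a minimal sketch. The feature extractor, toy dataset, and dimensions below are illustrative stand-ins, not the authors' actual AlexNet-based model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "feature extractor": stands in for the pretrained convolutional
# layers (e.g. of AlexNet); its weights are NOT updated during transfer.
W_frozen = rng.standard_normal((16, 4))

def features(x):
    return np.tanh(x @ W_frozen.T)  # fixed learned representation

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy dataset: 4-D inputs whose first component decides a 2-class label
# (standing in for an orientation class of a molded article).
X = rng.standard_normal((200, 4))
y = (X[:, 0] > 0).astype(int)

# New trainable head, re-initialised for the target task; only this
# layer is trained (plain gradient descent on cross-entropy loss).
W_head = np.zeros((2, 16))
F = features(X)
Y = np.eye(2)[y]
for _ in range(300):
    P = softmax(F @ W_head.T)
    grad = (P - Y).T @ F / len(X)
    W_head -= 1.0 * grad

acc = (softmax(features(X) @ W_head.T).argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Freezing the feature extractor is what makes transfer learning data-efficient: only the small head must be fitted to the new task.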
For example, when industrial robots are applied to pick and place tasks for resin molded articles, information on each object's position and orientation is essential. Recognizing and extracting an object's position in an image is not difficult with standard image processing techniques; however, estimating its orientation is, owing to the variety of shapes. In this paper, a pick and place robot is introduced that combines visual feedback control with a transfer learning-based CNN. The visual feedback control removes the need for a complicated calibration between the image and robot coordinate systems, while the transfer learning-based CNN allows the robot to estimate the orientation of target objects. The effectiveness and validity of the system are demonstrated through pick and place experiments using a small articulated robot named DOBOT.
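The claim that visual feedback lets the complicated image-to-robot calibration be omitted can be sketched with a toy simulation: even when the controller only has a rough guess of the pixel-to-motion mapping, closing the loop on the measured image error still drives the gripper onto the target. The camera model, gain, and Jacobian guess below are illustrative assumptions, not the authors' DOBOT setup:

```python
import numpy as np

# True (unknown to the controller) mapping from robot XY motion to image
# pixel motion; a calibrated system would have to identify this precisely.
J_true = np.array([[120.0,   8.0],
                   [-10.0, 115.0]])  # pixels per metre

def observe(robot_xy, object_xy):
    """Camera measurement: pixel offset of the object from the gripper."""
    return J_true @ (object_xy - robot_xy)

# The controller uses only a crude diagonal guess -- no precise calibration.
J_guess_inv = np.linalg.inv(np.diag([100.0, 100.0]))

robot = np.array([0.0, 0.0])
target = np.array([0.05, -0.03])  # object position in the robot frame

gain = 0.5
for _ in range(50):
    error_px = observe(robot, target)       # measured image-space error
    robot += gain * (J_guess_inv @ error_px)  # move to shrink that error

final_err = np.linalg.norm(observe(robot, target))
print(f"final pixel error: {final_err:.6f}")
```

Because each step is driven by the freshly measured pixel error, modelling errors in the guessed mapping only slow convergence rather than biasing the final position, which is the practical advantage of visual feedback control.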

[1] F. Nagata, K. Tokuno, K. Nakashima, A. Otsuka, T. Ikeda, H. Ochi, K. Watanabe, M.K. Habib, Fusion method of convolutional neural network and support vector machine for high accuracy anomaly detection, Procs. of the 2019 IEEE International Conference on Mechatronics and Automation (ICMA 2019), pp. 970-975, Tianjin, China, August 2019.
[2] F. Nagata, K. Tokuno, K. Mitarai, A. Otsuka, T. Ikeda, H. Ochi, K. Watanabe, M.K. Habib, Defect detection method using deep convolutional neural network, support vector machine and template matching techniques, Artificial Life and Robotics, Vol. 24, No. 4, pp. 512-519, 2019.
[3] F. Nagata, K. Miki, Y. Imahashi, K. Nakashima, K. Tokuno, A. Otsuka, K. Watanabe, M.K. Habib, Orientation detection using a CNN designed by transfer learning of AlexNet, Procs. of the 8th IIAE International Conference on Industrial Application Engineering (ICIAE2020), pp. 295-299, 2020.

Keywords: Pick and place robot, Convolutional neural network, Transfer learning, Visual feedback control

Biography: Fusaomi Nagata received the B.E. degree from the Department of Electronic Engineering at Kyushu Institute of Technology in 1985 and the D.E. degree from the Faculty of Engineering Systems and Technology at Saga University in 1999. He was a research engineer with Kyushu Matsushita Electric Co. from 1985 to 1988 and a special researcher with the Fukuoka Industrial Technology Centre from 1988 to 2006. He is currently a professor in the Department of Mechanical Engineering, Faculty of Engineering, Sanyo-Onoda City University, Japan, and also Dean of the Faculty of Engineering. His research interests include deep convolutional neural networks for the visual inspection of resin molded articles, and intelligent control of industrial robots and its application to machining processes. Examples of the latter include a robot sander, a mold polishing robot, a desktop NC machine tool with compliance control capability, a machining robot with a robotic CAM system, and a 3D printer-like data interface for a machining robot, developed for materials such as wood, aluminum PET bottle molds, LED lens molds, and foamed polystyrene.