|Prof. James Ting-Ho Lo|
Department of Mathematics and Statistics, University of Maryland, USA
|Biography: Dr. James Ting-Ho Lo is a Professor of Mathematics and Statistics at the University of Maryland, Baltimore County. He obtained his B.S. from National Taiwan University and his Ph.D. from the University of Southern California. His research interests have included optimal filtering; system control and identification; and machine learning. In 1992, he developed neural filtering, which solved the long-standing, notorious problem of optimal nonlinear filtering in its most general setting and received a best paper award. Subsequently, he developed adaptive, accommodative, and/or robust neural networks for system identification, control, and filtering. He has been developing a convexification method for avoiding nonglobal local minima in training deep neural networks, which is ready for application and is nearing a complete solution of the long-standing, notorious "local minimum problem". In recent years, Dr. Lo has also been developing a low-order model of biological neural networks. It is a logically coherent and computationally feasible model that integrates axonal/dendritic trees, synapses, spiking/nonspiking somas, unsupervised/supervised learning mechanisms, and a maximal generalization scheme into a learning machine. The low-order model explains mathematically, for the first time, how biological neural networks encode, learn, memorize, recall, and generalize. |
Speech Title: Deep Learning and a New Approach for Machine Learning
Abstract: A basis of AI is machine learning, whose state of the art is dominated by the highly publicized deep learning. Owing to their superior performance in visual recognition, deep learning machines have had a wide range of impressive applications. However, their development for higher-level cognitive computing has been stagnant. In this talk, some fundamental shortcomings of deep learning will be examined in connection with big data. A computational model of biological neural networks will then be introduced. It provides a logically coherent explanation of how the brain encodes, learns, memorizes, recalls, and generalizes. Used as a learning machine, the computational model can perform real-time, photographic, hierarchical, and unsupervised learning.
|Dr. Peter Fischer|
IEEE Fellow, Lawrence Berkeley National Laboratory, USA
|Biography: Dr. Peter Fischer received his PhD in Physics (Dr. rer. nat.) from the Technical University of Munich, Germany in 1993 for pioneering work on X-ray magnetic circular dichroism in rare-earth systems, and his Habilitation from the University of Würzburg, Germany in 2000 for his pioneering work on magnetic soft X-ray microscopy. Since 2004 he has been with the Materials Sciences Division (MSD) at Lawrence Berkeley National Laboratory in Berkeley, CA, where he is a Senior Staff Scientist and Principal Investigator in the Non-Equilibrium Magnetic Materials Program and currently also Deputy Division Director. His research program focuses on the use of polarized synchrotron radiation to study fundamental problems in magnetism. Since 2014 he has also been Adjunct Professor of Physics at the University of California, Santa Cruz. Dr. Fischer has published more than 200 peer-reviewed papers and has given about 300 invited presentations at national and international conferences. He was named a Distinguished Lecturer of the IEEE Magnetics Society in 2011. For his achievement of "hitting the 10 nm resolution milestone with soft X-ray microscopy", he received the Klaus Halbach Award at the Advanced Light Source in 2010. Dr. Fischer is a Fellow of the APS and the IEEE.|