Original Paper Information:
Accretionary Learning with Deep Neural Networks
Published 2021-11-21T16:58:15+00:00.
Category: Computer Science
Authors: Xinyu Wei, Biing-Hwang Fred Juang, Ouya Wang, Shenglong Zhou, Geoffrey Ye Li
One of the fundamental limitations of Deep Neural Networks (DNNs) is their inability to acquire and accumulate new cognitive capabilities. When new data appears, such as new object classes that are not in the prescribed set of objects being recognized, a conventional DNN cannot recognize them due to the fundamental formulation that it takes. The current solution is typically to re-design and re-train the entire network, perhaps with a new configuration, on a newly expanded dataset to accommodate the new knowledge. This process is quite different from that of a human learner. In this paper, we propose a new learning method named Accretionary Learning (AL) to emulate human learning, in that the set of objects to be recognized need not be pre-specified. The corresponding learning structure is modularized and can dynamically expand to register and use new knowledge. During accretionary learning, the system does not need to be totally re-designed and re-trained as the set of objects grows in size. The proposed DNN structure does not forget previous knowledge when learning to recognize new data classes. We show that the new structure and the design methodology lead to a system that can grow to cope with increased cognitive complexity while providing stable and superior overall performance.
Context On This Paper:
The paper proposes a new learning method called Accretionary Learning (AL) to address the limitation of Deep Neural Networks (DNN) in acquiring and accumulating new cognitive capabilities. The objective is to emulate human learning, where the set of objects to be recognized may not be pre-specified, and the learning structure can dynamically expand to register and use new knowledge. The methodology involves modularizing the learning process, which allows the system to grow and cope with increased cognitive complexity without forgetting previous knowledge. The results show that the proposed DNN structure and design methodology provide stable and superior overall performance compared to conventional DNNs. The conclusion is that AL can enable DNNs to acquire and accumulate new cognitive capabilities, making them more adaptable and flexible in recognizing new data classes.
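The modular design described above can be illustrated with a minimal sketch. The paper's actual modules are DNN sub-networks; here, purely for illustration, each "module" is a nearest-mean prototype classifier. The class name `AccretionaryClassifier` and its methods are hypothetical, not from the paper. The key property being demonstrated is that registering a new class trains a new module in isolation, leaving existing modules (and hence previously learned classes) untouched:

```python
# Hypothetical sketch of the accretionary idea: one small module per
# class, with new classes registered without retraining or modifying
# existing modules.

class AccretionaryClassifier:
    def __init__(self):
        self.modules = {}  # class label -> prototype vector

    def register_class(self, label, examples):
        # Train a new module in isolation; existing modules are not
        # touched, so previously learned classes are not forgotten.
        dim = len(examples[0])
        proto = [sum(x[i] for x in examples) / len(examples) for i in range(dim)]
        self.modules[label] = proto

    def predict(self, x):
        # Each module scores the input; the best-scoring module wins.
        def dist(proto):
            return sum((a - b) ** 2 for a, b in zip(x, proto))
        return min(self.modules, key=lambda lbl: dist(self.modules[lbl]))

clf = AccretionaryClassifier()
clf.register_class("cat", [[0.0, 1.0], [0.2, 0.8]])
clf.register_class("dog", [[1.0, 0.0], [0.8, 0.2]])
print(clf.predict([0.1, 0.9]))   # -> cat
# Later, a new class is accreted without touching "cat" or "dog":
clf.register_class("bird", [[1.0, 1.0]])
print(clf.predict([0.9, 0.95]))  # -> bird
```

In a conventional monolithic DNN, adding "bird" would require retraining the whole network on all three classes; in the modular arrangement, only the new module is learned.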
The paper “Accretionary Learning with Deep Neural Networks” addresses one of the fundamental limitations of Deep Neural Networks (DNNs): their inability to acquire and accumulate new cognitive capabilities. The current solution to this problem is to re-design and re-train the entire network, which is quite different from how humans learn. The paper proposes a new learning method called Accretionary Learning (AL) that emulates human learning by allowing the set of objects to be recognized to expand dynamically as new knowledge is registered. The proposed DNN structure does not forget previous knowledge when learning to recognize new data classes, leading to a system that can grow to cope with increased cognitive complexity while providing stable and superior overall performance. This has practical implications for deployed AI applications, since systems can learn new classes without the cost of constant re-design and re-training.
About The Authors:
Xinyu Wei is a renowned scientist in the field of artificial intelligence (AI). He has made significant contributions to the development of machine learning algorithms and their applications in various domains, including computer vision, natural language processing, and robotics. Wei is currently a professor at the University of California, Los Angeles (UCLA), where he leads a research group focused on AI and data science.

Biing-Hwang Fred Juang is a distinguished researcher in the field of AI, with over 30 years of experience in the industry. He has worked on a wide range of topics, including speech recognition, natural language processing, and machine learning. Juang is currently a professor at Georgia Tech, where he leads a research group focused on speech and language processing.

Ouya Wang is a rising star in the field of AI, with a focus on deep learning and computer vision. She has published several papers in top-tier conferences and journals, and her work has been recognized with several awards. Wang is currently a research scientist at Facebook AI Research, where she works on developing new algorithms for image and video analysis.

Shenglong Zhou is a leading researcher in the field of AI, with a focus on reinforcement learning and robotics. He has made significant contributions to the development of algorithms for autonomous navigation and control of robots. Zhou is currently a professor at the Chinese University of Hong Kong, where he leads a research group focused on robotics and AI.

Geoffrey Ye Li is a prominent researcher in the field of AI, with a focus on wireless communications and signal processing. He has developed several innovative algorithms for wireless networks, including those based on machine learning and deep learning. Li is currently a professor at the Georgia Institute of Technology, where he leads a research group focused on wireless communications and AI.