
NSF Distinguished Lecture with Geoffrey Hinton, "How to Represent Part-Whole Hierarchies in a Neural Net," to Be Held Feb. 11

Feb. 8, 2021 -- Geoffrey Hinton, University of Toronto, will present “How to Represent Part-Whole Hierarchies in a Neural Net,” part of the National Science Foundation (NSF) Computer & Information Science & Engineering (CISE) Distinguished Lecture Series, on February 11, 2021, from 11:00 AM to 12:30 PM ET.

Geoffrey Hinton received his Ph.D. in Artificial Intelligence from the University of Edinburgh in 1978. After five years as a faculty member at Carnegie Mellon, he became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto, where he is now an emeritus professor. He is also a Vice President and Engineering Fellow at Google and Chief Scientific Adviser at the Vector Institute.

Geoffrey Hinton was one of the researchers who introduced the backpropagation algorithm and the first to use backpropagation for learning word embeddings. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, and deep learning. His research group in Toronto made major breakthroughs in deep learning that revolutionized speech recognition and object classification.

Geoffrey Hinton is a fellow of the UK Royal Society and a foreign member of the US National Academy of Engineering and the American Academy of Arts and Sciences. His awards include the David E. Rumelhart Prize, the IJCAI Award for Research Excellence, the Killam Prize for Engineering, the IEEE Frank Rosenblatt Medal, the NSERC Herzberg Gold Medal, the IEEE James Clerk Maxwell Gold Medal, the NEC C&C Prize, the BBVA award, the Honda Prize and the Turing Award.

Talk Abstract:

I will present a single idea about representation that allows advances made by several different groups to be combined into an imaginary system called GLOM. The advances include transformers, implicit functions, contrastive representation learning, distillation, and capsules. GLOM answers the question: How can a neural network with a fixed architecture parse an image into a part-whole hierarchy which has a different structure for each image? The idea is simply to use islands of identical vectors to represent the nodes in the parse tree. The talk will discuss the many ramifications of this idea. If GLOM can be made to work, it should significantly improve the interpretability of the representations produced by transformer-like systems when applied to vision or language.
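The "islands of identical vectors" idea can be pictured with a small toy sketch. The Python snippet below is an illustration under assumed details, not GLOM itself or any code from the talk; the function names, the attention-style averaging update, and all parameters are hypothetical. It gives each image location an embedding vector, lets similar vectors pull each other together until locations belonging to the same part share a near-identical vector, and then reads off those islands as candidate nodes of a parse.

```python
import numpy as np

def form_islands(embeddings, steps=20, temp=0.1):
    """Toy 'islands of agreement' dynamics (hypothetical sketch, not GLOM).

    embeddings: (n_locations, d) array, one vector per image location.
    On each step, every location moves toward a similarity-weighted
    average of all locations, so similar vectors coalesce into
    near-identical 'islands'.
    """
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    for _ in range(steps):
        sim = x @ x.T                       # cosine similarity between locations
        w = np.exp(sim / temp)              # attention-like weighting
        w /= w.sum(axis=1, keepdims=True)
        x = w @ x                           # pull each vector toward similar neighbors
        x /= np.linalg.norm(x, axis=1, keepdims=True)
    return x

def read_off_nodes(x, tol=1e-2):
    """Group locations with (near-)identical vectors: each group is a
    candidate node in the parse; its shared vector is the node's identity."""
    labels = -np.ones(len(x), dtype=int)
    next_label = 0
    for i in range(len(x)):
        if labels[i] == -1:
            labels[np.linalg.norm(x - x[i], axis=1) < tol] = next_label
            next_label += 1
    return labels

# Demo: two noisy bundles of location vectors collapse into two islands.
rng = np.random.default_rng(0)
part_a = rng.normal([1.0, 0.0, 0.0], 0.05, size=(6, 3))
part_b = rng.normal([0.0, 1.0, 0.0], 0.05, size=(6, 3))
settled = form_islands(np.vstack([part_a, part_b]))
print(read_off_nodes(settled))  # expected grouping: [0 0 0 0 0 0 1 1 1 1 1 1]
```

In a real system the update would be learned and would operate over several levels of representation per location; the sketch only shows how agreement between vectors can stand in for the nodes of a parse tree.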

Please register in advance for this webinar here. After registering, you will receive a confirmation email containing information about joining the webinar.


Source: NSF
