Maurizio Parton
Università degli Studi "G. d'Annunzio" Chieti–Pescara, Pescara, Italy
Introduction to geometric deep learning
The last few years have witnessed the impressive success of deep learning in several very different fields: image recognition, games, biology, and natural language processing, to name a few. However, we are still far from truly understanding the mathematics behind these accomplishments. Besides being of theoretical interest in itself, understanding why these techniques work so well would certainly improve both their performance and their range of application. Geometric deep learning (GDL) is a promising field targeting such a mathematical understanding. GDL proposes a unified approach to explaining why diverse architectures, such as CNNs, LSTMs, Graph Neural Networks, and Transformers, are so successful. The underlying powerful idea is that whenever the function to be approximated is invariant under a group G of symmetries, this G-invariance should be encoded in the architecture. For instance, this is why CNNs, which build in translational symmetry, work so well on image recognition, a task that is naturally translation-invariant.
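To make the symmetry idea concrete, here is a minimal numerical sketch (an illustration, not part of the talk) of the translation symmetry that CNNs encode: a circular 1D convolution commutes with cyclic shifts of the input, so shifting the signal and then convolving gives the same result as convolving and then shifting. The `conv1d` and `shift` helpers below are hypothetical names chosen for this example.

```python
import numpy as np

def conv1d(x, w):
    # Circular 1D cross-correlation: out[i] = sum_k x[(i + k) mod n] * w[k]
    n, k = len(x), len(w)
    return np.array([np.dot(x[(i + np.arange(k)) % n], w) for i in range(n)])

def shift(x, s):
    # Cyclic translation of the signal by s positions
    return np.roll(x, s)

x = np.array([1.0, 2.0, 3.0, 4.0, 0.0])   # toy signal
w = np.array([0.5, -1.0, 0.5])            # toy filter

# Equivariance: convolving a shifted signal equals shifting the convolved signal.
lhs = conv1d(shift(x, 2), w)
rhs = shift(conv1d(x, w), 2)
assert np.allclose(lhs, rhs)
```

Here the relevant group G is the cyclic group of translations; the same check, with a different G, underlies the other architectures mentioned above.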
In this talk I will sketch an introduction to this marvellous topic.