Titouan Parcollet will defend his thesis, entitled “Artificial Neural Networks Based on Quaternion Algebra,” on Tuesday, December 3, 2019, at 2:30 PM in the Blaise Pascal amphitheater (CERI).
The thesis will be presented before a jury composed of:
- Mr. Thierry Artières, Professor, ECM/LIS/CNRS, Aix-Marseille University (Reviewer)
- Mr. Alexandre Allauzen, Professor, LIMSI/CNRS, Paris-Sud University (Reviewer)
- Ms. Nathalie Camelin, Associate Professor, LIUM, Le Mans University (Examiner)
- Mr. Yoshua Bengio, Professor, DIRO/MILA, University of Montreal (Remote participation, Examiner)
- Mr. Benjamin Lecouteux, Associate Professor, LIG, Université Grenoble Alpes (Examiner)
- Mr. Xavier Bost, Research Engineer, ORKIS (Examiner)
- Mr. Georges Linarès, Professor, LIA, Avignon University (Supervisor)
- Mr. Mohamed Morchid, Associate Professor, LIA, Avignon University (co-Supervisor)
The defense will be conducted in French. You are also invited to the reception following the defense in Room 5.
Abstract: In recent years, deep learning has become the preferred approach for developing modern artificial intelligence (AI). The significant increase in computing power, along with the ever-growing amount of available data, has made deep neural networks the most efficient solution for solving complex problems. However, accurately representing the multidimensionality of real-world data remains a major challenge for artificial neural architectures.
To address this challenge, neural networks based on complex and hypercomplex number algebras have been developed. The multidimensionality of the data is thus built into the neurons themselves, which become complex- or hypercomplex-valued components of the model. In particular, quaternion neural networks (QNNs) have been proposed to process three- and four-dimensional data, building on the fact that quaternions represent rotations of three-dimensional space. Unfortunately, unlike complex-valued neural networks, which are now accepted as an alternative to real-valued ones, QNNs suffer from several limitations, partly addressed by the works detailed in this manuscript.
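As a reminder of the underlying algebra (an illustrative sketch, not code from the thesis), quaternion models replace ordinary multiplication with the non-commutative Hamilton product of quaternions q = r + xi + yj + zk:

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product of two quaternions given as (r, x, y, z) arrays."""
    r1, x1, y1, z1 = p
    r2, x2, y2, z2 = q
    return np.array([
        r1 * r2 - x1 * x2 - y1 * y2 - z1 * z2,  # real part
        r1 * x2 + x1 * r2 + y1 * z2 - z1 * y2,  # i component
        r1 * y2 - x1 * z2 + y1 * r2 + z1 * x2,  # j component
        r1 * z2 + x1 * y2 - y1 * x2 + z1 * r2,  # k component
    ])

# Non-commutativity: i * j = k, but j * i = -k.
print(hamilton_product([0, 1, 0, 0], [0, 0, 1, 0]))  # [0, 0, 0, 1]
```

This single product entangles all four components of each operand, which is what lets a quaternion neuron treat a four-dimensional input as one structured entity rather than four independent scalars.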
The thesis is composed of three parts that progressively introduce the missing concepts to make QNNs an alternative to real-valued neural networks. The first part presents and categorizes previous discoveries related to quaternions and quaternion neural networks, defining a foundation for building modern QNNs.
The second part introduces state-of-the-art quaternion neural networks so that they can be compared with modern real-valued architectures under identical conditions. Until now, QNNs were mostly limited by overly simple architectures, often a single hidden layer with few neurons. This part bridges that gap between quaternion and real-valued neural networks. First, fundamental paradigms such as autoencoders and deep neural networks are presented; then, the widely used convolutional and recurrent architectures are extended to the quaternion space. Numerous experiments on real-world applications, including computer vision, spoken language understanding, and automatic speech recognition, compare the proposed quaternion models with conventional neural networks. In these contexts, QNNs achieve better performance while significantly reducing the number of parameters required for learning.
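The parameter reduction comes from the weight sharing induced by the Hamilton product: a quaternion fully connected layer mapping n quaternion inputs to m quaternion outputs reuses the same four real weight matrices across all components, so it needs 4·n·m real weights where a real-valued layer over the same 4n→4m dimensions needs 16·n·m. A minimal NumPy sketch (illustrative only; the thesis architectures are deeper and trained, not hand-written like this):

```python
import numpy as np

def quaternion_linear(x, W):
    """
    Quaternion fully connected layer (no bias, no activation).

    x : (4, n)    n input quaternions, stacked as (r, i, j, k) rows
    W : (4, m, n) m*n quaternion weights, stacked as (r, i, j, k)
    Returns (4, m): Hamilton products of weights and inputs, summed over n.
    """
    r, i, j, k = x
    Wr, Wi, Wj, Wk = W
    return np.stack([
        Wr @ r - Wi @ i - Wj @ j - Wk @ k,  # real part
        Wr @ i + Wi @ r + Wj @ k - Wk @ j,  # i component
        Wr @ j - Wi @ k + Wj @ r + Wk @ i,  # j component
        Wr @ k + Wi @ j - Wj @ i + Wk @ r,  # k component
    ])

# 4 weight matrices of shape (m, n) -> 4*m*n real parameters,
# versus 16*m*n for an unconstrained real layer of the same size.
```

The four real matrices appear in all four output components, which is precisely the sharing that cuts the parameter count by a factor of four.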
The QNN framework is then extended so that any input representation can be processed. In a traditional QNN scenario, input features are segmented by hand into four components to match the quaternion representation. Unfortunately, it is difficult to guarantee that such a segmentation is optimal for the task at hand, and manual segmentation fundamentally restricts QNNs to tasks naturally defined in at most four dimensions. The third part of this thesis therefore introduces supervised and unsupervised models that extract disentangled and meaningful quaternion-valued features from any real one-dimensional signal, enabling the use of QNNs regardless of the dimensionality of the input vectors and of the task considered. Experiments on speech recognition and spoken document classification show that the proposed approaches outperform traditional quaternion representations.
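To make the limitation concrete, the manual segmentation criticized above can be as naive as grouping consecutive feature values into quaternion components (a hypothetical sketch; the actual groupings used in practice, such as an acoustic feature with its derivatives, are task-specific):

```python
import numpy as np

def to_quaternions(features):
    """
    Naive manual segmentation of a real feature vector into quaternions:
    each consecutive group of 4 values becomes one (r, i, j, k) quaternion.
    Only works when the feature dimension is a multiple of 4, and nothing
    guarantees this grouping is meaningful for the task.
    """
    features = np.asarray(features)
    assert features.size % 4 == 0, "feature dimension must be divisible by 4"
    return features.reshape(-1, 4)  # one quaternion per row
```

Replacing this arbitrary reshape with features learned jointly with the task is the motivation for the supervised and unsupervised quaternion feature extractors of the third part.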