SLG Meeting – St Germes Bengono Obiang – 21/12/2023

12 December 2023

The next SLG meeting will be held in room S1 on Thursday, December 21st, from 12:00 PM to 1:00 PM. We will have the pleasure of hosting St Germes BENGONO OBIANG, a PhD student in speech processing focusing on tone recognition in under-resourced languages. He is supervised by Norbert TSOPZE and Paulin MELATAGIA from the University of Yaoundé 1, as well as by Jean-François BONASTRE and Tania JIMENEZ from LIA.

Abstract: Many sub-Saharan African languages are tone languages, and most of them are classified as low-resource languages due to the limited resources and tools available to process them. Identifying the tone associated with a syllable is therefore a key challenge for speech recognition in these languages. We propose models that automate tone recognition in continuous speech and that can easily be incorporated into a speech recognition pipeline for these languages. We investigated different neural architectures as well as several speech feature extraction algorithms (filter banks, LEAF, cepstrogram, MFCC). In the context of low-resource languages, we also evaluated Wav2vec models for this task. In this work, we use a public speech recognition dataset on Yoruba. As for the results, using the …
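
As a hedged illustration of the setup the abstract describes (frame-level features such as MFCCs or filter banks feeding a neural tone classifier), here is a minimal Python sketch. The file name, label set, and model architecture are hypothetical, not those of the actual study:

```python
# Illustrative sketch only: frame-level front-ends of the kinds compared in the
# talk (MFCCs, mel filter banks), feeding a small recurrent per-frame tone tagger.
import torch
import torchaudio

NUM_TONES = 3  # e.g., High / Mid / Low (illustrative label set)

waveform, sample_rate = torchaudio.load("utterance.wav")  # placeholder file

# Two of the front-ends mentioned in the abstract.
mfcc = torchaudio.transforms.MFCC(sample_rate=sample_rate, n_mfcc=13)(waveform)
fbank = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=40)(waveform)

class ToneTagger(torch.nn.Module):
    """Minimal per-frame tone classifier over a (batch, time, feat) sequence."""
    def __init__(self, feat_dim: int, hidden: int = 128):
        super().__init__()
        self.rnn = torch.nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = torch.nn.Linear(2 * hidden, NUM_TONES)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        out, _ = self.rnn(feats)
        return self.head(out)  # (batch, time, NUM_TONES) logits

# MFCC output is (channel, n_mfcc, time); reshape to (batch, time, feat).
# The filter-bank features could be fed in the same way.
feats = mfcc.squeeze(0).transpose(0, 1).unsqueeze(0)
logits = ToneTagger(feat_dim=13)(feats)
tone_per_frame = logits.argmax(dim=-1)  # one tone label per frame
```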

ANR EVA Project (SLG)

1 January 2023

Explicit Voice Attributes

Describing a voice in a few words remains a very arbitrary task. We can speak with a "deep", "breathy", "bright" or "hoarse" voice, but fully characterizing a voice would require a closed set of rigorously defined attributes constituting an ontology. However, no such description grid exists. Machine learning applied to speech suffers from the same weakness: in most automatic processing tasks, when a speaker is modeled, abstract global representations are used without making their characteristics explicit. For instance, automatic speaker verification/identification is usually tackled with the x-vector paradigm, which describes a speaker's voice by an embedding vector designed only to distinguish speakers. Despite their very good accuracy for speaker identification, x-vectors are usually unsuitable for detecting similarities between different voices sharing common characteristics. The same observations can be made for speech generation. We propose to carry out a comprehensive set of analyses to extract salient, unaddressed voice attributes and to enrich structured representations usable for synthesis and voice conversion.

Project leader: Orange
Scientific leader for LIA: Yannick Estève
Start date: 01/01/2023
End date: 31/12/2025
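
To make the x-vector paradigm above concrete, here is a minimal sketch using SpeechBrain's publicly released x-vector model. The import follows the classic speechbrain.pretrained layout (newer releases moved it to speechbrain.inference), and the audio paths are placeholders:

```python
# Sketch of the x-vector paradigm: a pretrained speaker encoder maps each
# utterance to a fixed-size embedding, and verification reduces to a similarity
# score between embeddings.
import torch
import torchaudio
from speechbrain.pretrained import EncoderClassifier

encoder = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-xvect-voxceleb",
    savedir="pretrained_xvect",
)

def xvector(path: str) -> torch.Tensor:
    """Return the x-vector embedding of one utterance."""
    signal, _ = torchaudio.load(path)
    return encoder.encode_batch(signal).squeeze()

emb_a = xvector("speaker_a.wav")  # placeholder paths
emb_b = xvector("speaker_b.wav")

# A high cosine similarity suggests the same speaker, but the embedding's
# dimensions carry no explicit meaning ("breathy", "bright", ...), which is
# precisely the gap the EVA project targets.
score = torch.nn.functional.cosine_similarity(emb_a, emb_b, dim=0)
print(f"speaker similarity: {score.item():.3f}")
```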

ANR UMICROWD Project

1 September 2022

Understanding, Modeling and Improving the outcome of Crowdfunding campaigns

The UMICrowd project explores crowdfunding (CF) from economic and sociological perspectives, using advanced mathematical modeling tools, Artificial Intelligence (AI) and empirical analysis. It aims to propose decision-making tools that help entrepreneurs design their campaigns and help CFP managers select, classify and promote projects.

Partners: CentraleSupelec, CRAN, FPF, ESCE, LIA
Period: 2022-2026
ANR webpage: https://anr.fr/Projet-ANR-22-CE38-0013

Language weekly meetings

28 October 2020

Starting from November 6th, small informal meetings around the "language" research area of LIA will be reintroduced. These events are expected to foster interesting discussions among the members interested in this broad topic. We all know that it can become hard to follow, or even keep a global view of, what our colleagues do, which may prevent relevant collaborations; many benefits can come from such short group meetings.

Each session will be divided into two parts:

- Round table: each participant is given 5 minutes to speak about her/his latest findings, progress, questions, or research papers. In short, it is about exchanging ideas and understanding what people do. Slides are not required; this is a simple informal discussion.
- Short talk: every week, one person is given the opportunity to present something in 5-15 minutes. It could be a rehearsal of a conference talk (with feedback from the group), new results, a completely unexplored idea, or simply a paper that the speaker found relevant.

Duration: 45 minutes to 1 hour
When: every Friday
Time: 13:00-14:00 (not fixed yet)
Who: absolutely e-v-e-r-y-o-n-e. PhD students are STRONGLY encouraged to attend.
Where: physically if allowed (COVID) or virtually. The room …

Best Paper Award

25 June 2020

Prof. Abderrahim Benslimane and his co-authors received the BEST PAPER AWARD for their article "Defending Malicious Check-in Based on Access Point Selection for Indoor Positioning System", published at the IEEE International Conference on Communications (ICC 2020), 7-11 June 2020, Dublin, Ireland: https://icc2020.ieee-icc.org/

Prof. Benslimane IEEE Distinguished Lecturer

22 June 2020

Every year, the IEEE Vehicular Technology Society appoints a few Distinguished Lecturers, recognized for their research work in vehicular technology. This year, Abderrahim Benslimane, professor at the University of Avignon and member of the Laboratoire d'Informatique d'Avignon, is one of the laureates selected for a two-year term. As part of this appointment, and with financial support from IEEE, the Distinguished Lecturer may be invited around the world to give talks in his area of expertise.

Intelligence artificielle pour la compréhension du langage parlé contrôlée sémantiquement – AISSPER

2 December 2019

Every year, the Agence Nationale de la Recherche funds research projects, several of them on artificial intelligence. Focus on the AISSPER project led by Mohamed Morchid of the Laboratoire d'Informatique d'Avignon: artificial intelligence for semantically controlled spoken language understanding. Read more at: https://www.actuia.com/actualite/intelligence-artificielle-pour-la-comprehension-du-langage-parle-controlee-semantiquement-aissper/

HDR defense of Mohamed Morchid – 26 November 2019

26 November 2019

The HDR defense of Mohamed Morchid will take place on November 26th at 4 PM in the thesis room (Hannah Arendt campus). The HDR, entitled "Neural Networks for Natural Language Processing", will be presented before a jury composed of:

Reviewers:
Mrs. Dilek Z. HAKKANI-TÜR, Senior Principal Scientist, Alexa AI, USA
Mr. Patrice BELLOT, Professor, AMU Polytech', LIS, Marseille
Mr. Frédéric ALEXANDRE, Research Director, INRIA, Bordeaux

Examiners:
Mr. Yannick ESTÈVE, Professor, AU, LIA, Avignon
Mr. Frédéric BÉCHET, Professor, AMU, LIS, Marseille

SpeechBrain

18 November 2019

We are pleased to announce the launch of SpeechBrain (https://speechbrain.github.io/), an all-in-one toolkit linking PyTorch and automatic speech processing. Building on the success of its prototype, PyTorch-Kaldi, we want to extend both the functionality and the efficiency of this project. More precisely, the goal is to create a single tool, flexible and above all easy to pick up, that can be used to quickly develop state-of-the-art speech systems. We all know, in our respective sub-fields, of many scattered tools of varying complexity (often more rather than less), so there is clear value in building a single project able to bring together and meet all the needs of the community. A few examples: ASR (end-to-end and DNN-HMM), speaker identification/verification, speech separation, multi-microphone signal processing, self-supervised and unsupervised learning, GPU-based feature extraction, and more. The project will initially be led by MILA (through Dr. Mirco Ravanelli, who attended the last LIA retreat) and is currently supported by Samsung, Dolby and Nvidia. LIA has also been involved from the start, through my involvement in the creation and management of PyTorch-Kaldi.
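
The toolkit had not yet been released when this announcement was written; as an illustration of the "single, easy-to-use tool" objective, here is the kind of high-level call that released versions of SpeechBrain expose for pretrained ASR (import paths have moved across versions; this follows the classic speechbrain.pretrained layout, and the audio file is a placeholder):

```python
# Transcribe an audio file with a pretrained SpeechBrain ASR model.
from speechbrain.pretrained import EncoderDecoderASR

asr = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-crdnn-rnnlm-librispeech",  # public model card
    savedir="pretrained_asr",
)
print(asr.transcribe_file("example.wav"))  # placeholder file
```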
