PhD defense of Carlos González – 18 December 2019

Carlos González will defend his thesis, entitled ‘Multimedia and Multilingual Automatic Summarization and Information Retrieval’, on Wednesday, December 18, 2019, at 2:00 PM in the Thesis Room (Sainte-Marthe campus, city center).

Jury:

  • Ms. Violaine PRINCE, Professor, LIRMM, Montpellier 2 (Reviewer)
  • Mr. Eric GAUSSIER, Professor, LIG, Grenoble (Reviewer)
  • Ms. Fatiha SADAT, Professor, GDAC, Montréal (Examiner)
  • Mr. Laurent BESACIER, Professor, LIG, Grenoble (Examiner)
  • Mr. Kamel SMAILI, Professor, LORIA, CNRS-Lorraine-Inria (Examiner)
  • Mr. Alfonso MEDINA URREA, Researcher, CELL, COLMEX (Examiner)
  • Mr. Juan-Manuel TORRES-MORENO, Associate Professor (HDR), LIA, Avignon (Supervisor)
  • Mr. Eric SANJUAN, Associate Professor (HDR), LIA, Avignon (Co-supervisor)

Abstract: As multimedia sources have become massively available online, helping users make sense of the large amount of information they generate has become a major issue. One way to approach this is to summarize multimedia content, thus generating abridged and informative versions of the original sources. This PhD thesis addresses text- and audio-based multimedia summarization in a multilingual context. It was conducted within the framework of the Access Multilingual Information opinionS (AMIS) CHIST-ERA ANR project, whose main objective is to make information easy to understand for everybody.

Text-based multimedia summarization uses transcripts to produce summaries that may be presented either as text or in their original format. The transcription of multimedia sources can be done manually or automatically by an Automatic Speech Recognition (ASR) system. Transcripts produced by either method differ from well-formed written language because their source is mostly spoken language. In addition, ASR transcripts lack syntactic cues: capital letters and punctuation marks are unavailable, so sentence boundaries are not explicitly marked. To deal with this problem, we propose a Sentence Boundary Detection (SBD) method for ASR transcripts that uses textual features to separate the Semantic Units (SUs) within an automatic transcript in a multilingual context. Our approach, based on subword-level information vectors and Convolutional Neural Networks (CNNs), outperforms baselines by correctly identifying SU boundaries for French, English and Modern Standard Arabic (MSA). We then study the impact of cross-domain datasets on MSA, showing that fine-tuning a model originally trained on a large out-of-domain dataset with a small in-domain dataset generally improves SBD performance. Finally, we extend ARTEX, a state-of-the-art extractive text summarization method, to process documents in MSA by adapting its preprocessing modules. The resulting summaries can be presented as plain text or in their original multimedia format by aligning the selected SUs.
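For readers who want a concrete picture of the SBD step, the sketch below frames it as the general idea described above: a binary classifier over windows of subword-level vectors that decides whether a sentence boundary follows the centre token. The window size, vector dimensionality, layer sizes and training setup are illustrative assumptions, not the configuration reported in the thesis.

```python
# Hypothetical sketch of a CNN-based sentence boundary detector over subword
# windows; hyperparameters and input encoding are assumptions, not the exact
# architecture from the thesis.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 11        # subword tokens around the candidate boundary (assumed)
SUBWORD_DIM = 300  # dimensionality of the subword-level vectors (assumed)

def build_sbd_model():
    """Binary classifier: does a sentence boundary follow the centre token?"""
    inputs = layers.Input(shape=(WINDOW, SUBWORD_DIM))
    x = layers.Conv1D(128, kernel_size=3, activation="relu")(inputs)
    x = layers.GlobalMaxPooling1D()(x)
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)   # P(boundary)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Toy usage: random windows of subword vectors with binary boundary labels.
X = np.random.rand(32, WINDOW, SUBWORD_DIM).astype("float32")
y = np.random.randint(0, 2, size=(32, 1))
model = build_sbd_model()
model.fit(X, y, epochs=1, batch_size=8, verbose=0)
```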

Concerning audio-based summarization, we introduce an extractive method that models the informativeness of the source from its audio features in order to select the segments most pertinent to the summary. During the training phase, our method uses the available transcripts of the audio documents to create an informativeness model that maps a set of audio features to a divergence value. Subsequently, when summarizing new audio documents, transcripts are no longer needed. Results from a multi-evaluator scheme show that our approach produces understandable and informative summaries.
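As an illustration of how such an informativeness model could be used once trained, the sketch below learns a regression from per-segment audio feature vectors to a transcript-derived divergence score, then ranks the segments of a new document by the predicted score. The feature representation, the regressor and the random data are assumptions made for the example only.

```python
# Hypothetical sketch of the audio-based selection idea: during training, map
# per-segment audio feature vectors to a divergence-based informativeness score
# (computed from the training transcripts); at summarization time, rank new
# segments by the predicted score, with no transcript required.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Training data: one row of audio features per segment (e.g. MFCC statistics),
# paired with a divergence value derived from the segment's transcript.
train_features = rng.random((200, 40))      # 200 segments, 40 audio features
train_divergence = rng.random(200)          # transcript-based informativeness

informativeness_model = RandomForestRegressor(n_estimators=100, random_state=0)
informativeness_model.fit(train_features, train_divergence)

# Summarizing a new audio document: score its segments and keep the top ones.
new_segments = rng.random((30, 40))
scores = informativeness_model.predict(new_segments)
summary_indices = np.argsort(scores)[::-1][:5]   # 5 most informative segments
print(sorted(summary_indices.tolist()))
```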

We also address evaluation measures. We developed Window-based Sentence Boundary Evaluation (WiSeBE), a semi-supervised metric based on multi-reference (dis)agreement, which questions whether evaluating an automatic SBD system against a single reference is enough to assess how well it performs. We also explore the possibility of measuring the quality of an automatic transcript by its informativeness. In addition, we study the extent to which automatic summarization can compensate for the problems introduced during the transcription phase. Lastly, we study how text informativeness evaluation measures can be extended to passage interestingness evaluation.
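To make the multi-reference idea more tangible, here is a minimal sketch that scores predicted sentence boundaries by how many human references agree with each of them within a small window. The window size and the averaging scheme are assumptions and do not reproduce the published definition of WiSeBE.

```python
# Illustration of multi-reference agreement for boundary evaluation: instead of
# scoring predicted boundaries against a single reference, credit each predicted
# boundary by the fraction of references that place a boundary near it.

def windowed_agreement(predicted, references, window=2):
    """Average, over predicted boundaries, of the fraction of references that
    agree with each boundary up to +/- `window` token positions."""
    if not predicted:
        return 0.0
    scores = []
    for p in predicted:
        agreeing = sum(
            any(abs(p - r) <= window for r in ref) for ref in references
        )
        scores.append(agreeing / len(references))
    return sum(scores) / len(predicted)

# Toy example: three annotators segmented the same transcript differently.
references = [[10, 25, 40], [11, 26, 41], [9, 40]]
predicted = [10, 24, 40, 55]
print(round(windowed_agreement(predicted, references), 3))
```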