Making Sentence Embeddings Robust to User-Generated Content


User-generated content (UGC), such as social media posts written in Internet language, exhibits many lexical variations and deviates from standard language. As a result, NLP models, which are mostly trained on standard texts, are known to perform poorly on UGC, and sentence embedding models like LASER are no exception. In this talk, we focus on the robustness of LASER to UGC data. We evaluate this robustness via LASER's ability to represent non-standard sentences and their standard counterparts close to each other in the embedding space. Inspired by previous work extending LASER to other languages and modalities, we propose RoLASER, a robust English encoder trained with a teacher-student approach to reduce the distances between the representations of standard and UGC sentences. We also use data augmentation to generate synthetic UGC-like training data. We show that RoLASER significantly improves LASER's robustness to both natural and artificial UGC data, achieving up to 2× and 11× better alignment scores respectively. A fine-grained analysis on artificial UGC data shows that our model greatly outperforms LASER on its most challenging UGC phenomena, such as keyboard typos and social media abbreviations. Evaluation on downstream tasks shows that RoLASER performs comparably to or better than LASER on standard data, while consistently outperforming it on UGC data.
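The teacher-student objective described above can be sketched as follows. This is a minimal illustration, not RoLASER's actual training code: the `teacher` and `student` here are toy linear encoders standing in for LASER and its robust counterpart, the "sentences" are random feature vectors, and the UGC perturbation is simple additive noise rather than the paper's data augmentation.

```python
# Toy sketch of teacher-student distillation for robustness (hypothetical
# stand-ins for LASER): a frozen teacher embeds the STANDARD sentence, and a
# trainable student learns to embed the UGC variant close to that target.
import torch
import torch.nn as nn

torch.manual_seed(0)

FEAT_DIM, EMB_DIM = 32, 16  # toy sizes (LASER embeddings are much larger)

# Frozen teacher encoder.
teacher = nn.Linear(FEAT_DIM, EMB_DIM)
for p in teacher.parameters():
    p.requires_grad = False

# Trainable student encoder, initialised independently.
student = nn.Linear(FEAT_DIM, EMB_DIM)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Paired data: standard "sentences" and their synthetic UGC-like variants.
std = torch.randn(64, FEAT_DIM)
ugc = std + 0.1 * torch.randn(64, FEAT_DIM)  # stand-in for UGC noise

with torch.no_grad():
    target = teacher(std)  # teacher sees only the standard sentence

initial = loss_fn(student(ugc), target).item()
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(student(ugc), target)  # pull UGC embedding toward target
    loss.backward()
    optimizer.step()
final = loss_fn(student(ugc), target).item()
```

After training, `final` is far below `initial`: the student maps noisy inputs close to where the teacher maps their clean counterparts, which is exactly the alignment property the talk evaluates.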

May 29, 2024 2:30 PM — 3:30 PM
Nairobi, Kenya
Lydia Nishimwe
PhD Student

I am a PhD student currently working on neural machine translation of user-generated content (e.g. social media posts).