
Learning to Create and Reuse Words in Open-Vocabulary Language Modeling


If you have a question about this talk, please contact Dimitri Kartsaklis.

Fixed-vocabulary language models fail to account for one of the most characteristic statistical facts of natural language: the frequent creation and reuse of new word types. Although character-level language models offer a partial solution in that they can create word types not attested in the training corpus, they do not capture the β€œbursty” distribution of such words. In this talk, we discuss a hierarchical LSTM language model that generates sequences of word tokens character by character with a caching mechanism that learns to reuse previously generated words. To validate our model we construct a new open-vocabulary language modeling corpus (the Multilingual Wikipedia Corpus; MWC) from comparable Wikipedia articles in 7 typologically diverse languages and demonstrate the effectiveness of our model across this range of languages.
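The core idea of the abstract can be illustrated with a toy sketch: a word's probability is a mixture of a character-by-character generation model and a cache of previously emitted words. The class name, the uniform character model (standing in for the hierarchical LSTM), and the fixed mixture weight `lam` are all illustrative assumptions, not the speaker's actual model.

```python
from collections import Counter

class CachedOpenVocabLM:
    """Toy sketch of a cache-augmented open-vocabulary LM.

    A word is either spelled out character by character (here a uniform
    character model is an assumed stand-in for the hierarchical LSTM)
    or copied from a cache of previously generated words, mixed with
    weight lam. The cache lets already-seen novel words become cheap to
    reuse, mimicking the "bursty" reuse of new word types.
    """

    def __init__(self, alphabet, lam=0.5):
        self.alphabet = alphabet
        self.lam = lam            # assumed fixed; a real model would learn this
        self.cache = Counter()    # counts of previously generated words

    def p_chars(self, word):
        # Uniform character model with an end-of-word symbol:
        # each of the len(word) characters plus the terminator
        # has probability 1 / (|alphabet| + 1).
        return (1.0 / (len(self.alphabet) + 1)) ** (len(word) + 1)

    def p_word(self, word):
        total = sum(self.cache.values())
        p_cache = self.cache[word] / total if total else 0.0
        lam = self.lam if total else 0.0  # no cache mass before any word is seen
        return (1 - lam) * self.p_chars(word) + lam * p_cache

    def observe(self, word):
        # Record an emitted word so later repetitions draw on the cache.
        self.cache[word] += 1
```

Under this sketch, "cat" and "dog" are equally probable under the character model, but once "cat" has been generated its cached copy makes reusing it far more likely than generating "dog" from scratch.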

This talk is part of the Language Technology Lab Seminars series.


 

© 2006-2025 Talks.cam, University of Cambridge.