LAMA is a probe for analyzing the factual and commonsense knowledge contained in pretrained language models. The LAMA probe was built to run the experiments for our paper “Language Models as Knowledge Bases?”, led by my colleague Fabio Petroni. LAMA was open-sourced as a stand-alone probe for assessing how factual language models are.

The codebase for the LAMA probe is available here, and the dataset is available here.

LAMA contains a set of connectors to pretrained language models and exposes a single, transparent interface to all of them:

  • Transformer-XL (Dai et al., 2019)
  • BERT (Devlin et al., 2018)
  • ELMo (Peters et al., 2018)
  • GPT (Radford et al., 2018)
  • RoBERTa (Liu et al., 2019)
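The core idea of the probe can be sketched in a few lines: a fact is posed as a cloze statement (e.g. “Dante was born in [MASK].”) and the language model is asked to rank candidate fillers for the masked slot. The sketch below is illustrative only and uses a toy scoring function in place of a real pretrained model; the function and variable names are assumptions, not LAMA's actual API.

```python
# A minimal sketch of a LAMA-style cloze probe. The real probe wraps
# pretrained models (BERT, ELMo, ...) behind connectors; here a toy
# score function stands in for a language model.

def rank_candidates(cloze, candidates, score_fn):
    """Fill the [MASK] slot with each candidate and rank by model score."""
    scored = [(cand, score_fn(cloze.replace("[MASK]", cand)))
              for cand in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy "model": assigns a high score only to sentences it has memorized.
KNOWN_FACTS = {"Dante was born in Florence."}

def toy_score(sentence):
    return 1.0 if sentence in KNOWN_FACTS else 0.0

ranking = rank_candidates(
    "Dante was born in [MASK].",
    ["Florence", "Paris", "Rome"],
    toy_score,
)
# The top-ranked candidate is the model's answer to the cloze query.
print(ranking[0][0])  # Florence
```

With a real connector, `score_fn` would be the model's log-probability of the completed sentence (or of the masked token), which is what lets the same cloze interface compare very different architectures.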


Petroni et al. Language Models as Knowledge Bases?. EMNLP 2019.
