Here are the steps to calculate perplexity at the sentence level using Hugging Face language models:

  1. Load the language model from Hugging Face. For example, you can load the GPT-2 model like this:
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
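
Since scoring a sentence only needs a forward pass (no training), you can optionally switch the model to evaluation mode so dropout is disabled:

model.eval()  # disables dropout so repeated scoring runs give identical results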
  2. Tokenize the sentence using the tokenizer.
input_sentence = "The cat sat on the mat."
input_ids = tokenizer.encode(input_sentence, return_tensors="pt")
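
As a quick, purely illustrative sanity check, you can decode the IDs back to text (the exact number of tokens depends on the tokenizer):

print(input_ids)                       # tensor of token IDs with a leading batch dimension
print(tokenizer.decode(input_ids[0]))  # "The cat sat on the mat."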
  3. Run the sentence through the model to get the logits (next-token predictions) at every position.
logits = model(input_ids).logits  # shape: (1, sequence_length, vocab_size)
  4. Calculate perplexity using the cross-entropy function from PyTorch: compare each position's logits with the token that actually follows, take the mean, and exponentiate.
import torch
from torch.nn.functional import cross_entropy

# The logits at position i predict token i + 1, so shift both tensors by one
shift_logits = logits[:, :-1, :].reshape(-1, logits.size(-1))
shift_labels = input_ids[:, 1:].reshape(-1)

cross_entropy_loss = cross_entropy(shift_logits, shift_labels)  # mean negative log-likelihood per token
perplexity = torch.exp(cross_entropy_loss)

The perplexity value indicates how well the language model predicts each token of the sentence given the tokens that precede it. A lower perplexity means the sentence is more predictable (less surprising) to the model.
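
Putting the steps together, here is a minimal self-contained sketch. The helper name sentence_perplexity is just for illustration, and any causal language model checkpoint can be substituted for "gpt2":

import torch
from torch.nn.functional import cross_entropy
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_perplexity(sentence):
    # Encode the sentence and score every token against the model's predictions
    input_ids = tokenizer.encode(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position i predict token i + 1, so shift by one before comparing
    shift_logits = logits[:, :-1, :].reshape(-1, logits.size(-1))
    shift_labels = input_ids[:, 1:].reshape(-1)
    loss = cross_entropy(shift_logits, shift_labels)
    return torch.exp(loss).item()

print(sentence_perplexity("The cat sat on the mat."))   # fluent sentence: lower perplexity
print(sentence_perplexity("Mat the on sat cat the."))   # scrambled sentence: noticeably higher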