Here are the steps to calculate perplexity at the sentence level with a Hugging Face causal language model, using GPT-2 as an example:
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the GPT-2 tokenizer and model, and switch to evaluation mode
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()
# Tokenize the sentence into a tensor of token ids
input_sentence = "The cat sat on the mat."
input_ids = tokenizer.encode(input_sentence, return_tensors="pt")
# Run the sentence through the model to get logits for every position
with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

Then compute the average per-token loss with the cross_entropy function from PyTorch. The logits at each position predict the following token, so the logits and the labels have to be shifted by one position relative to each other:

from torch.nn.functional import cross_entropy

shift_logits = logits[:, :-1, :]  # predictions for the next token at each position
shift_labels = input_ids[:, 1:]   # the tokens actually observed at those positions
loss = cross_entropy(shift_logits.reshape(-1, shift_logits.size(-1)), shift_labels.reshape(-1))
perplexity = torch.exp(loss)
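The same value can be obtained more compactly, since Transformers performs the label shift internally when you pass the input ids as labels; the model then returns the average cross-entropy loss directly. A minimal sketch of that shortcut:

# Equivalent shortcut: let the model compute the shifted loss itself
with torch.no_grad():
    loss = model(input_ids, labels=input_ids).loss
perplexity = torch.exp(loss)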
The perplexity value indicates how well the model predicts each token of the sentence given the tokens that precede it. A lower perplexity means the model finds the sentence more predictable.
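To see this in practice, here is a small sketch that wraps the steps above in a helper (the name sentence_perplexity is just for illustration) and compares a natural sentence with a scrambled version of it; the scrambled sentence should typically score noticeably higher:

def sentence_perplexity(sentence):
    # Encode the sentence, score it with the model, and return exp(mean loss)
    ids = tokenizer.encode(sentence, return_tensors="pt")
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

print(sentence_perplexity("The cat sat on the mat."))  # relatively low
print(sentence_perplexity("Mat the on sat cat the."))  # typically much higher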