One way to recover word-level entities from wordpiece tokens in BERT NER is to align the wordpiece tokens with the original text by tracking each token's start and end character offsets. Once the alignment is done, the wordpiece-level labels can be mapped back to whole words by taking the label assigned to the first wordpiece of each word as that word's label (the labels on the remaining wordpieces are either ignored or overwritten with the first label). This token-to-word alignment step can be performed with simple heuristics or with a learned model.
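For illustration, here is a minimal sketch of the first-wordpiece heuristic using the Hugging Face transformers library. It relies on the `word_ids()` method of a "fast" tokenizer to track which wordpieces belong to which word; the checkpoint name `dslim/bert-base-NER` and the example sentence are assumptions, and any BERT token-classification model would work the same way:

```python
# Sketch: map wordpiece-level NER predictions back to whole words by
# keeping only the label of each word's first wordpiece.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "dslim/bert-base-NER"  # illustrative checkpoint, not prescribed
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

words = ["Angela", "Merkel", "visited", "Washington", "yesterday"]
# is_split_into_words=True lets the tokenizer record the word index
# behind every wordpiece it produces.
encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoding).logits
pred_ids = logits.argmax(dim=-1)[0].tolist()

word_labels = {}
for token_idx, word_idx in enumerate(encoding.word_ids(batch_index=0)):
    if word_idx is None:
        continue  # special tokens like [CLS]/[SEP] map to no word
    if word_idx not in word_labels:
        # first wordpiece of this word: its label becomes the word label
        word_labels[word_idx] = model.config.id2label[pred_ids[token_idx]]

for i, word in enumerate(words):
    print(word, word_labels[i])
```

The same loop can be adapted to propagate the first label to the remaining wordpieces instead of discarding them, if downstream code expects a label per token.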