
Getting full names from NER

夢(mèng)里花落0921 2023-06-14 16:22:03
From reading the documentation and using the API, it looks like CoreNLP will tell me the NER tag for each token, but it won't help me extract full names out of a sentence. For example:

Input: John Wayne and Mary have coffee
CoreNLP Output: (John,PERSON) (Wayne,PERSON) (and,O) (Mary,PERSON) (have,O) (coffee,O)
Desired Result: list of PERSON ==> [John Wayne, Mary]

Unless I'm missing a flag, I believe that to do this I will need to walk the tokens and glue together consecutive tokens tagged PERSON. Can someone confirm that this is indeed what I need to do? Mostly I want to know whether CoreNLP has some flag or utility that can do this for me. Bonus points if someone has a utility (preferably in Java, since I'm using the Java API) that does this and is willing to share :) Thanks!
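For concreteness, here is a rough sketch of the gluing I have in mind (assuming the standard tokenize,ssplit,pos,lemma,ner annotators; the class name and merging loop are just illustrative, not something CoreNLP ships):

import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.CoreDocument;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

public class GluePersonTokens {
  public static void main(String[] args) {
    // run just enough annotators to get an NER tag on each token
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

    CoreDocument document = new CoreDocument("John Wayne and Mary have coffee");
    pipeline.annotate(document);

    // walk the tokens and merge consecutive PERSON-tagged tokens into one name
    List<String> people = new ArrayList<>();
    StringBuilder current = new StringBuilder();
    for (CoreLabel token : document.tokens()) {
      if ("PERSON".equals(token.ner())) {
        if (current.length() > 0) current.append(' ');
        current.append(token.word());
      } else if (current.length() > 0) {
        people.add(current.toString());
        current.setLength(0);
      }
    }
    if (current.length() > 0) people.add(current.toString());

    System.out.println(people);  // expected: [John Wayne, Mary]
  }
}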

2 Answers

白板的微信


You are probably looking for entity mentions rather than NER tags. For example, using the Simple API:

new Sentence("Jimi Hendrix was the greatest").nerTags()

[PERSON, PERSON, O, O, O]


new Sentence("Jimi Hendrix was the greatest").mentions()

[Jimi Hendrix]

The StanfordCoreNLP link above has an example of doing this with the traditional (non-simple) API and the older pipeline.
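For completeness, a self-contained sketch of the Simple API route (assuming the edu.stanford.nlp.simple package; the filtered mentions(String) overload mentioned in the comment is something to verify against your CoreNLP version):

import edu.stanford.nlp.simple.Sentence;
import java.util.List;

public class SimpleMentionsExample {
  public static void main(String[] args) {
    Sentence sentence = new Sentence("John Wayne and Mary have coffee");

    // token-level NER tags, e.g. [PERSON, PERSON, O, PERSON, O, O]
    System.out.println(sentence.nerTags());

    // entity mentions already glue consecutive tokens of the same entity,
    // so this should print something like [John Wayne, Mary]
    List<String> mentions = sentence.mentions();
    System.out.println(mentions);

    // some versions also offer a filtered overload such as sentence.mentions("PERSON")
  }
}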


qq_笑_17


Here is the full Java API example, which includes a section on entity mentions:

import edu.stanford.nlp.coref.data.CorefChain;
import edu.stanford.nlp.ling.*;
import edu.stanford.nlp.ie.util.*;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.semgraph.*;
import edu.stanford.nlp.trees.*;
import java.util.*;

public class BasicPipelineExample {

  public static String text = "Joe Smith was born in California. " +
      "In 2017, he went to Paris, France in the summer. " +
      "His flight left at 3:00pm on July 10th, 2017. " +
      "After eating some escargot for the first time, Joe said, \"That was delicious!\" " +
      "He sent a postcard to his sister Jane Smith. " +
      "After hearing about Joe's trip, Jane decided she might go to France one day.";

  public static void main(String[] args) {
    // set up pipeline properties
    Properties props = new Properties();
    // set the list of annotators to run
    props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner,parse,depparse,coref,kbp,quote");
    // set a property for an annotator, in this case the coref annotator is being set to use the neural algorithm
    props.setProperty("coref.algorithm", "neural");
    // build pipeline
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    // create a document object
    CoreDocument document = new CoreDocument(text);
    // annotate the document
    pipeline.annotate(document);
    // examples

    // 10th token of the document
    CoreLabel token = document.tokens().get(10);
    System.out.println("Example: token");
    System.out.println(token);
    System.out.println();

    // text of the first sentence
    String sentenceText = document.sentences().get(0).text();
    System.out.println("Example: sentence");
    System.out.println(sentenceText);
    System.out.println();

    // second sentence
    CoreSentence sentence = document.sentences().get(1);

    // list of the part-of-speech tags for the second sentence
    List<String> posTags = sentence.posTags();
    System.out.println("Example: pos tags");
    System.out.println(posTags);
    System.out.println();

    // list of the ner tags for the second sentence
    List<String> nerTags = sentence.nerTags();
    System.out.println("Example: ner tags");
    System.out.println(nerTags);
    System.out.println();

    // constituency parse for the second sentence
    Tree constituencyParse = sentence.constituencyParse();
    System.out.println("Example: constituency parse");
    System.out.println(constituencyParse);
    System.out.println();

    // dependency parse for the second sentence
    SemanticGraph dependencyParse = sentence.dependencyParse();
    System.out.println("Example: dependency parse");
    System.out.println(dependencyParse);
    System.out.println();

    // kbp relations found in fifth sentence
    List<RelationTriple> relations =
        document.sentences().get(4).relations();
    System.out.println("Example: relation");
    System.out.println(relations.get(0));
    System.out.println();

    // entity mentions in the second sentence
    List<CoreEntityMention> entityMentions = sentence.entityMentions();
    System.out.println("Example: entity mentions");
    System.out.println(entityMentions);
    System.out.println();

    // coreference between entity mentions
    CoreEntityMention originalEntityMention = document.sentences().get(3).entityMentions().get(1);
    System.out.println("Example: original entity mention");
    System.out.println(originalEntityMention);
    System.out.println("Example: canonical entity mention");
    System.out.println(originalEntityMention.canonicalEntityMention().get());
    System.out.println();

    // get document wide coref info
    Map<Integer, CorefChain> corefChains = document.corefChains();
    System.out.println("Example: coref chains for document");
    System.out.println(corefChains);
    System.out.println();

    // get quotes in document
    List<CoreQuote> quotes = document.quotes();
    CoreQuote quote = quotes.get(0);
    System.out.println("Example: quote");
    System.out.println(quote);
    System.out.println();

    // original speaker of quote
    // note that quote.speaker() returns an Optional
    System.out.println("Example: original speaker of quote");
    System.out.println(quote.speaker().get());
    System.out.println();

    // canonical speaker of quote
    System.out.println("Example: canonical speaker of quote");
    System.out.println(quote.canonicalSpeaker().get());
    System.out.println();

  }

}
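Coming back to the original question: once a document has been annotated, the entity mentions can be filtered down to just the PERSON names. A minimal sketch (assuming CoreEntityMention.entityType() and text(), and that the ner annotator builds entity mentions by default; worth verifying against your CoreNLP version):

import edu.stanford.nlp.pipeline.CoreDocument;
import edu.stanford.nlp.pipeline.CoreEntityMention;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;

public class PersonMentionsExample {
  public static void main(String[] args) {
    // ner and its prerequisites are enough for entity mentions; no parse/coref needed here
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

    CoreDocument document = new CoreDocument("John Wayne and Mary have coffee");
    pipeline.annotate(document);

    // keep only mentions whose entity type is PERSON and take their surface text
    List<String> people = document.entityMentions().stream()
        .filter(m -> "PERSON".equals(m.entityType()))
        .map(CoreEntityMention::text)
        .collect(Collectors.toList());

    System.out.println(people);  // expected: [John Wayne, Mary]
  }
}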

