I call the NLP function in Python like so:
from pprint import pprint
from pycorenlp import StanfordCoreNLP  # client for a running CoreNLP server

nlp = StanfordCoreNLP(host + ":" + port)  # e.g. host = "http://localhost", port = "9000"
text = "Joshua Brown, 40, was killed in Florida in May when his Tesla failed to " \
       "differentiate between the side of a turning truck and the sky while " \
       "operating in autopilot mode."
output = nlp.annotate(
    text,
    properties={
        "outputFormat": "json",
        "annotators": "depparse,ner,entitymentions,sentiment"
    }
)
pprint(output)
and I get output like so:
{
  'sentences': [
    {
      'basicDependencies': [
        {
          'dep': 'ROOT',
          'dependent': 7,
          'dependentGloss': 'killed',
          'governor': 0,
          'governorGloss': 'ROOT'
        },
        {
          'dep': 'compound',
          'dependent': 1,
          'dependentGloss': 'Joshua',
          'governor': 2,
          'governorGloss': 'Brown'
        },
        …
      ],
      'enhancedDependencies': [
        {
          'dep': 'ROOT',
          'dependent': 7,
          'dependentGloss': 'killed',
          'governor': 0,
          'governorGloss': 'ROOT'
        },
        {
          'dep': 'compound',
          'dependent': 1,
          'dependentGloss': 'Joshua',
          'governor': 2,
          'governorGloss': 'Brown'
        },
        …
But I want to do the equivalent in Java, so I wrote a Java method like:
public class NlpTest implements RequestHandler<Input, Output> {

    public static String text = "Multiple locations in Maharashtra linked to state transport minister Anil Parab were raided this morning by the Enforcement Directorate (ED) as part of a money-laundering probe relating to alleged irregularities in a land deal.";

    @Override
    public Output handleRequest(Input input, Context context) {
        String output = "";
        try {
            System.out.println("entering method handleRequest");
            // load input from environment variable
            // String str = System.getenv("INPUT");
            // String str = input.getText();
            String str = "Multiple locations in Maharashtra linked to state transport minister Anil Parab were raided this morning by the Enforcement Directorate (ED) as part of a money-laundering probe relating to alleged irregularities in a land deal.";
            if (str.length() > 0)
                text = str;
            System.out.println("str = " + str);
            System.out.println("text = " + text);

            // Setup CoreNLP
            Properties props = new Properties();
            // props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref, tokensregex, sentiment"); // ------> here
            props.setProperty("annotators", "depparse,ner,entitymentions,sentiment");
            props.setProperty("ner.additional.tokensregex.rules", "src/main/java/custom.rules");
            // props.setProperty("coref.algorithm", "neural");
            props.setProperty("ner.useSUTime", "false");
            props.setProperty("outputFormat", "json");
            StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
            CoreDocument doc = new CoreDocument(text);

            // annotate
            pipeline.annotate(doc);
            System.out.println("pipeline = " + pipeline.toString());
            System.out.println("doc.entityMentions() = " + doc.entityMentions());

            // display tokens
            for (CoreLabel tok : doc.tokens()) {
                System.out.println(String.format("%s\t%d\t%d", tok.word(), tok.beginPosition(), tok.endPosition()));
            }
No matter what permutations/combinations I try in the line props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner…") (marked "------> here" in the code), I get an error saying:

annotator "depparse" requires annotation "TextAnnotation". The usual requirements for this annotator are: tokenize,ssplit,pos
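From the error message, my understanding (which may be incomplete) is that the local Java pipeline does not pull in prerequisite annotators automatically the way the server used from Python seems to, so the annotator list would have to spell out the full chain. A minimal sketch of what I believe the properties should look like, with the prerequisites listed explicitly:

```java
import java.util.Properties;

public class AnnotatorChain {
    // Builds the annotator list with (what I believe are) the prerequisites
    // spelled out: depparse needs tokenize,ssplit,pos; ner additionally needs
    // lemma; sentiment needs parse; entitymentions needs ner.
    public static Properties pipelineProps() {
        Properties props = new Properties();
        props.setProperty("annotators",
                "tokenize,ssplit,pos,lemma,parse,depparse,ner,entitymentions,sentiment");
        props.setProperty("outputFormat", "json");
        return props;
    }

    public static void main(String[] args) {
        // Print the annotator chain that would be handed to StanfordCoreNLP
        System.out.println(pipelineProps().getProperty("annotators"));
    }
}
```

I have not verified this is the minimal set; the server presumably resolves these dependencies behind the scenes, which would explain why the Python call works with the shorter list.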
I just want to get the equivalent of the Python output. Can anyone please tell me where I am going wrong?
I am new to Stanford CoreNLP and any help is appreciated. Thanks!