Getting started with Stanford CoreNLP | A Stanford CoreNLP Tutorial

PROJECT INFO

This is a two-part series: in the first part we discuss the theory, and in this second part we create a CoreNLP project.

  1. Getting started with Stanford CoreNLP | Theory (must read)

Introduction

Many open-source NLP tools are available, but in this tutorial we will focus on Stanford CoreNLP.

The backbone of the CoreNLP package is formed by two classes: Annotation and Annotator. 

Annotations are data structures that hold the results of the annotators. Annotations are generally maps. Annotators are more like functions, but they operate on Annotations rather than Objects.

Annotators can perform tasks such as tokenization, parsing, named entity recognition (NER), and part-of-speech (POS) tagging. Annotators and Annotations are integrated in AnnotationPipelines.

An AnnotationPipeline is essentially a List of Annotators, each of which is run in turn. Stanford CoreNLP inherits from the AnnotationPipeline class and customizes it with NLP Annotators.
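To make the idea concrete, here is a toy, JDK-only sketch of the pipeline pattern described above: an "annotation" as a shared map of results and a "pipeline" as a list of annotators run in turn. This is purely illustrative and is not the real CoreNLP API (the actual Annotation and Annotator classes are much richer).

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy model of the Annotation/Annotator idea: an "annotation" is a map of
// results, and a "pipeline" is a list of annotators applied in order.
public class ToyPipeline {
    public static void main(String[] args) {
        Map<String, Object> annotation = new HashMap<>();
        annotation.put("text", "She went to America");

        List<Consumer<Map<String, Object>>> pipeline = new ArrayList<>();
        // "tokenize" annotator: splits the text and stores the tokens
        pipeline.add(a -> a.put("tokens",
                Arrays.asList(((String) a.get("text")).split(" "))));
        // "count" annotator: builds on the previous annotator's output
        pipeline.add(a -> a.put("numTokens",
                ((List<?>) a.get("tokens")).size()));

        // run each annotator in turn, as AnnotationPipeline does
        for (Consumer<Map<String, Object>> annotator : pipeline) {
            annotator.accept(annotation);
        }
        System.out.println(annotation.get("tokens"));    // [She, went, to, America]
        System.out.println(annotation.get("numTokens")); // 4
    }
}
```

Later annotators can read what earlier annotators wrote, which is why annotator order in the "annotators" property matters.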

Prerequisites

  • Java 8
  • Maven
  • An IDE (such as Eclipse or IntelliJ IDEA)

Maven Dependency

You can find Stanford CoreNLP on Maven Central. The crucial thing to know is that CoreNLP needs its models to run (for most annotators beyond the tokenizer and sentence splitter), so you need to specify both the code jar and the models jar in your pom.xml:

<dependencies>
  <dependency>
    <groupId>edu.stanford.nlp</groupId>
    <artifactId>stanford-corenlp</artifactId>
    <version>3.9.1</version>
  </dependency>
  <dependency>
    <groupId>edu.stanford.nlp</groupId>
    <artifactId>stanford-corenlp</artifactId>
    <version>3.9.1</version>
    <classifier>models</classifier>
  </dependency>
</dependencies>

If you are new to Maven, you can learn Maven from here.

For Errors

Solved Errors:

  1. Exception in thread “main” java.lang.RuntimeException: edu.stanford.nlp.io.RuntimeIOException: Unrecoverable error while loading a tagger model
  2. Exception in thread “main” java.lang.RuntimeException: edu.stanford.nlp.io.RuntimeIOException: Error while loading a tagger model (probably missing model file)

Running all Annotators on the text

To construct a Stanford CoreNLP object from a given set of properties, use StanfordCoreNLP(Properties props). This method creates the pipeline using the annotators given in the “annotators” property. To run all Annotators on this text, use the annotate(Annotation document) method.

Steps to follow:

1. Create a StanfordCoreNLP object with StanfordCoreNLP(Properties props).

2. Analyze arbitrary text with annotate(Annotation document).

package com.interviewbubble.StandfordSimpleNLP;

import java.util.Properties;

import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class SandfordSimpleNLPExample {
    public static void main(String[] args) {
        // creates a StanfordCoreNLP object, with POS tagging, lemmatization,
        // NER, parsing, and coreference resolution
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        // read some text in the text variable
        String text = "She went to America last week.";

        // create an empty Annotation just with the given text
        Annotation document = new Annotation(text);

        // run all Annotators on this text
        pipeline.annotate(document);

        System.out.println("End of Processing");
    }
}

OUTPUT:

End of Processing

For Errors

1. StaticLoggerBinder Warning

If you are getting these Warning in console:

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

Follow this guide:

SLF4J: Failed to load class “org.slf4j.impl.StaticLoggerBinder”.


2. No Appender Warning

If you are getting these Warning in console:

log4j:WARN No appenders could be found for logger (dao.hsqlmanager).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Follow this Guide:

log4j:WARN No appenders could be found for logger (dao.hsqlmanager).

CoreMap and CoreLabel

The output of the Annotators is accessed using the data structures CoreMap and CoreLabel.

A CoreMap is essentially a Map that uses class objects as keys and has values with custom types:

CoreMap<class object,custom types>
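To see why class-object keys are convenient, here is a minimal JDK-only sketch of such a typesafe map. This is a simplified stand-in to illustrate the pattern, not CoreNLP's actual CoreMap implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal typesafe class-keyed map, in the spirit of CoreMap:
// the Class object is both the lookup key and the value's type witness,
// so get() can return the right type without casts at the call site.
public class ToyCoreMap {
    private final Map<Class<?>, Object> map = new HashMap<>();

    public <T> void set(Class<T> key, T value) {
        map.put(key, value);
    }

    public <T> T get(Class<T> key) {
        return key.cast(map.get(key)); // safe: set() enforced the type
    }

    public static void main(String[] args) {
        ToyCoreMap cm = new ToyCoreMap();
        cm.set(String.class, "Karma of humans is AI");
        cm.set(Integer.class, 5); // e.g. a token count
        String text = cm.get(String.class); // no cast needed here
        System.out.println(text + " / tokens: " + cm.get(Integer.class));
    }
}
```

CoreNLP uses dedicated annotation classes (like SentencesAnnotation or TokensAnnotation) as the keys, which is why `document.get(SentencesAnnotation.class)` below returns a correctly typed result.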

You can get all the sentences in the text using CoreMap:

List <CoreMap> sentences = document.get(SentencesAnnotation.class);  

CoreLabel is a CoreMap with additional token-specific methods:

 

for (CoreLabel token : sentence.get(TokensAnnotation.class)) {}

Interpreting the output

The output of the Annotators is accessed using CoreMap and CoreLabel. We can get the token text, POS tag, and NER label using CoreLabel.

// these are all the sentences in this document
List<CoreMap> sentences = document.get(SentencesAnnotation.class);
List<String> words = new ArrayList<>();
List<String> posTags = new ArrayList<>();
List<String> nerTags = new ArrayList<>();
for (CoreMap sentence : sentences) {
    // traversing the words in the current sentence
    for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
        // this is the text of the token
        String word = token.get(TextAnnotation.class);
        words.add(word);
        // this is the POS tag of the token
        String pos = token.get(PartOfSpeechAnnotation.class);
        posTags.add(pos);
        // this is the NER label of the token
        String ne = token.get(NamedEntityTagAnnotation.class);
        nerTags.add(ne);
    }
}

Getting token, POS tag and NER label

package com.interviewbubble.StandfordSimpleNLP;

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import edu.stanford.nlp.ling.CoreAnnotations.NamedEntityTagAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.PartOfSpeechAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TextAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;

public class SandfordSimpleNLPExample {
    public static void main(String[] args) {
        // creates a StanfordCoreNLP object, with POS tagging, lemmatization,
        // NER, parsing, and coreference resolution
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        // read some text in the text variable
        String text = "Karma of humans is AI";

        // create an empty Annotation just with the given text
        Annotation document = new Annotation(text);

        // run all Annotators on this text
        pipeline.annotate(document);

        // these are all the sentences in this document
        List<CoreMap> sentences = document.get(SentencesAnnotation.class);
        List<String> words = new ArrayList<>();
        List<String> posTags = new ArrayList<>();
        List<String> nerTags = new ArrayList<>();
        for (CoreMap sentence : sentences) {
            // traversing the words in the current sentence
            for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
                // this is the text of the token
                String word = token.get(TextAnnotation.class);
                words.add(word);
                // this is the POS tag of the token
                String pos = token.get(PartOfSpeechAnnotation.class);
                posTags.add(pos);
                // this is the NER label of the token
                String ne = token.get(NamedEntityTagAnnotation.class);
                nerTags.add(ne);
            }
        }
        System.out.println(words.toString());
        System.out.println(posTags.toString());
        System.out.println(nerTags.toString());
        System.out.println("End of Processing");
    }
}

Console Output

0    [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator tokenize
7    [main] INFO  edu.stanford.nlp.pipeline.TokenizerAnnotator  - No tokenizer type provided. Defaulting to PTBTokenizer.
12   [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator ssplit
17   [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator pos
789  [main] INFO  edu.stanford.nlp.tagger.maxent.MaxentTagger  - Loading POS tagger from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [0.8 sec].
789  [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator lemma
791  [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator ner
2663 [main] INFO  edu.stanford.nlp.ie.AbstractSequenceClassifier  - Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [1.8 sec].
3835 [main] INFO  edu.stanford.nlp.ie.AbstractSequenceClassifier  - Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [1.2 sec].
4403 [main] INFO  edu.stanford.nlp.ie.AbstractSequenceClassifier  - Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [0.6 sec].
4405 [main] INFO  edu.stanford.nlp.time.JollyDayHolidays  - Initializing JollyDayHoliday for SUTime from classpath edu/stanford/nlp/models/sutime/jollyday/Holidays_sutime.xml as sutime.binder.1.
4647 [main] INFO  edu.stanford.nlp.time.TimeExpressionExtractorImpl  - Using following SUTime rules: edu/stanford/nlp/models/sutime/defs.sutime.txt,edu/stanford/nlp/models/sutime/english.sutime.txt,edu/stanford/nlp/models/sutime/english.holidays.sutime.txt
5093 [main] DEBUG edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor  - Ignoring inactive rule: null
5094 [main] DEBUG edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor  - Ignoring inactive rule: temporal-composite-8:ranges
9181 [main] INFO  edu.stanford.nlp.pipeline.TokensRegexNERAnnotator  - TokensRegexNERAnnotator ner.fine.regexner: Read 580641 unique entries out of 581790 from edu/stanford/nlp/models/kbp/regexner_caseless.tab, 0 TokensRegex patterns.
9203 [main] INFO  edu.stanford.nlp.pipeline.TokensRegexNERAnnotator  - TokensRegexNERAnnotator ner.fine.regexner: Read 4857 unique entries out of 4868 from edu/stanford/nlp/models/kbp/regexner_cased.tab, 0 TokensRegex patterns.
9204 [main] INFO  edu.stanford.nlp.pipeline.TokensRegexNERAnnotator  - TokensRegexNERAnnotator ner.fine.regexner: Read 585498 unique entries from 2 files
17448 [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator parse
17787 [main] INFO  edu.stanford.nlp.parser.common.ParserGrammar  - Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ... done [0.3 sec].
17790 [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator dcoref
30670 [main] INFO  edu.stanford.nlp.pipeline.CorefMentionAnnotator  - Using mention detector type: dependency
[Karma, of, humans, is, AI]
[NN, IN, NNS, VBZ, NNP]
[RELIGION, O, O, O, O]
End of Processing

Syntactic tree, Dependency graph & Others

Here we will see what other things we can get:

1. We can get the syntactic tree using TreeAnnotation

Tree tree = sentence.get(TreeAnnotation.class);  

2. We can get dependency graph using CollapsedDependenciesAnnotation

SemanticGraph dependencies = sentence.get(CollapsedDependenciesAnnotation.class);

3. We can get a map of coreference chains using CorefChainAnnotation

Map<Integer, CorefChain> graph = document.get(CorefChainAnnotation.class);

Putting it together:

// This is all the sentences in the text
List<CoreMap> sentences = document.get(SentencesAnnotation.class);
for (CoreMap sentence : sentences) {
    System.out.println("sentence: " + sentence);
    // This is the syntactic parse tree of the sentence
    Tree tree = sentence.get(TreeAnnotation.class);
    System.out.println("Tree:\n" + tree);
    // This is the dependency graph of the sentence
    SemanticGraph dependencies = sentence.get(CollapsedDependenciesAnnotation.class);
    System.out.println("Dependencies:\n" + dependencies);
}
// This is a map of the coreference chains
Map<Integer, CorefChain> graph = document.get(CorefChainAnnotation.class);
System.out.println("Map of the chain:\n" + graph);

OUTPUT

0    [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator tokenize
7    [main] INFO  edu.stanford.nlp.pipeline.TokenizerAnnotator  - No tokenizer type provided. Defaulting to PTBTokenizer.
12   [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator ssplit
16   [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator pos
762  [main] INFO  edu.stanford.nlp.tagger.maxent.MaxentTagger  - Loading POS tagger from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [0.7 sec].
763  [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator lemma
764  [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator ner
2632 [main] INFO  edu.stanford.nlp.ie.AbstractSequenceClassifier  - Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [1.8 sec].
3760 [main] INFO  edu.stanford.nlp.ie.AbstractSequenceClassifier  - Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [1.1 sec].
4340 [main] INFO  edu.stanford.nlp.ie.AbstractSequenceClassifier  - Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [0.6 sec].
4342 [main] INFO  edu.stanford.nlp.time.JollyDayHolidays  - Initializing JollyDayHoliday for SUTime from classpath edu/stanford/nlp/models/sutime/jollyday/Holidays_sutime.xml as sutime.binder.1.
4622 [main] INFO  edu.stanford.nlp.time.TimeExpressionExtractorImpl  - Using following SUTime rules: edu/stanford/nlp/models/sutime/defs.sutime.txt,edu/stanford/nlp/models/sutime/english.sutime.txt,edu/stanford/nlp/models/sutime/english.holidays.sutime.txt
5105 [main] DEBUG edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor  - Ignoring inactive rule: null
5106 [main] DEBUG edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor  - Ignoring inactive rule: temporal-composite-8:ranges
9495 [main] INFO  edu.stanford.nlp.pipeline.TokensRegexNERAnnotator  - TokensRegexNERAnnotator ner.fine.regexner: Read 580641 unique entries out of 581790 from edu/stanford/nlp/models/kbp/regexner_caseless.tab, 0 TokensRegex patterns.
9514 [main] INFO  edu.stanford.nlp.pipeline.TokensRegexNERAnnotator  - TokensRegexNERAnnotator ner.fine.regexner: Read 4857 unique entries out of 4868 from edu/stanford/nlp/models/kbp/regexner_cased.tab, 0 TokensRegex patterns.
9514 [main] INFO  edu.stanford.nlp.pipeline.TokensRegexNERAnnotator  - TokensRegexNERAnnotator ner.fine.regexner: Read 585498 unique entries from 2 files
18393 [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator parse
18720 [main] INFO  edu.stanford.nlp.parser.common.ParserGrammar  - Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ... done [0.3 sec].
18724 [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator dcoref
30395 [main] INFO  edu.stanford.nlp.pipeline.CorefMentionAnnotator  - Using mention detector type: dependency
Tree:

(ROOT (S (NP (NP (NN Karma)) (PP (IN of) (NP (NNS humans)))) (VP (VBZ is) (NP (NNP AI)))))

Dependencies:

-> AI/NNP (root)

  -> Karma/NN (nsubj)

    -> humans/NNS (nmod:of)

      -> of/IN (case)

  -> is/VBZ (cop)

Map of the chain:

{1=CHAIN1-[“Karma of humans” in sentence 1, “AI” in sentence 1], 2=CHAIN2-[“humans” in sentence 1]}

Complete Project Files:

Complete Source Code

package com.interviewbubble.StandfordSimpleNLP;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import edu.stanford.nlp.coref.CorefCoreAnnotations.CorefChainAnnotation;
import edu.stanford.nlp.coref.data.CorefChain;
import edu.stanford.nlp.ling.CoreAnnotations.NamedEntityTagAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.PartOfSpeechAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TextAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations.CollapsedDependenciesAnnotation;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.trees.TreeCoreAnnotations.TreeAnnotation;
import edu.stanford.nlp.util.CoreMap;

public class SandfordSimpleNLPExample {

    public static void main(String[] args) {
        // creates a StanfordCoreNLP object, with POS tagging, lemmatization,
        // NER, parsing, and coreference resolution
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        // read some text in the text variable
        String text = "Karma of humans is AI";

        // create an empty Annotation just with the given text
        Annotation document = new Annotation(text);

        // run all Annotators on this text
        pipeline.annotate(document);

        // these are all the sentences in this document
        List<CoreMap> sentences = document.get(SentencesAnnotation.class);
        List<String> words = new ArrayList<>();
        List<String> posTags = new ArrayList<>();
        List<String> nerTags = new ArrayList<>();
        for (CoreMap sentence : sentences) {
            // traversing the words in the current sentence
            for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
                // this is the text of the token
                String word = token.get(TextAnnotation.class);
                words.add(word);
                // this is the POS tag of the token
                String pos = token.get(PartOfSpeechAnnotation.class);
                posTags.add(pos);
                // this is the NER label of the token
                String ne = token.get(NamedEntityTagAnnotation.class);
                nerTags.add(ne);
            }

            // This is the syntactic parse tree of the sentence
            Tree tree = sentence.get(TreeAnnotation.class);
            System.out.println("Tree:\n" + tree);

            // This is the dependency graph of the sentence
            SemanticGraph dependencies = sentence.get(CollapsedDependenciesAnnotation.class);
            System.out.println("Dependencies:\n" + dependencies);
        }

        System.out.println("Words: " + words);
        System.out.println("posTags: " + posTags);
        System.out.println("nerTags: " + nerTags);

        // This is a map of the coreference chains
        Map<Integer, CorefChain> graph = document.get(CorefChainAnnotation.class);
        System.out.println("Map of the chain:\n" + graph);

        System.out.println("End of Processing");
    }
}

Console OUTPUT

0    [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator tokenize
6    [main] INFO  edu.stanford.nlp.pipeline.TokenizerAnnotator  - No tokenizer type provided. Defaulting to PTBTokenizer.
11   [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator ssplit
15   [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator pos
740  [main] INFO  edu.stanford.nlp.tagger.maxent.MaxentTagger  - Loading POS tagger from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [0.7 sec].
741  [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator lemma
742  [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator ner
2665 [main] INFO  edu.stanford.nlp.ie.AbstractSequenceClassifier  - Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [1.8 sec].
3840 [main] INFO  edu.stanford.nlp.ie.AbstractSequenceClassifier  - Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [1.2 sec].
4424 [main] INFO  edu.stanford.nlp.ie.AbstractSequenceClassifier  - Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [0.6 sec].
4427 [main] INFO  edu.stanford.nlp.time.JollyDayHolidays  - Initializing JollyDayHoliday for SUTime from classpath edu/stanford/nlp/models/sutime/jollyday/Holidays_sutime.xml as sutime.binder.1.
4664 [main] INFO  edu.stanford.nlp.time.TimeExpressionExtractorImpl  - Using following SUTime rules: edu/stanford/nlp/models/sutime/defs.sutime.txt,edu/stanford/nlp/models/sutime/english.sutime.txt,edu/stanford/nlp/models/sutime/english.holidays.sutime.txt
5151 [main] DEBUG edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor  - Ignoring inactive rule: null
5152 [main] DEBUG edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor  - Ignoring inactive rule: temporal-composite-8:ranges
9473 [main] INFO  edu.stanford.nlp.pipeline.TokensRegexNERAnnotator  - TokensRegexNERAnnotator ner.fine.regexner: Read 580641 unique entries out of 581790 from edu/stanford/nlp/models/kbp/regexner_caseless.tab, 0 TokensRegex patterns.
9492 [main] INFO  edu.stanford.nlp.pipeline.TokensRegexNERAnnotator  - TokensRegexNERAnnotator ner.fine.regexner: Read 4857 unique entries out of 4868 from edu/stanford/nlp/models/kbp/regexner_cased.tab, 0 TokensRegex patterns.
9492 [main] INFO  edu.stanford.nlp.pipeline.TokensRegexNERAnnotator  - TokensRegexNERAnnotator ner.fine.regexner: Read 585498 unique entries from 2 files
18262 [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator parse
18598 [main] INFO  edu.stanford.nlp.parser.common.ParserGrammar  - Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ... done [0.3 sec].
18601 [main] INFO  edu.stanford.nlp.pipeline.StanfordCoreNLP  - Adding annotator dcoref
32679 [main] INFO  edu.stanford.nlp.pipeline.CorefMentionAnnotator  - Using mention detector type: dependency
Tree:
(ROOT (S (NP (NP (NN Karma)) (PP (IN of) (NP (NNS humans)))) (VP (VBZ is) (NP (NNP AI)))))
Dependencies:
-> AI/NNP (root)
  -> Karma/NN (nsubj)
    -> humans/NNS (nmod:of)
      -> of/IN (case)
  -> is/VBZ (cop)
Words: [Karma, of, humans, is, AI]
posTags: [NN, IN, NNS, VBZ, NNP]
nerTags: [RELIGION, O, O, O, O]
Map of the chain:
{1=CHAIN1-["Karma of humans" in sentence 1, "AI" in sentence 1], 2=CHAIN2-["humans" in sentence 1]}
End of Processing

pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.interviewbubble</groupId>
  <artifactId>StandfordSimpleNLP</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>StandfordSimpleNLP</name>
  <url>http://maven.apache.org</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
  </properties>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>edu.stanford.nlp</groupId>
      <artifactId>stanford-corenlp</artifactId>
      <version>3.9.1</version>
    </dependency>
    <dependency>
      <groupId>edu.stanford.nlp</groupId>
      <artifactId>stanford-corenlp</artifactId>
      <version>3.9.1</version>
      <classifier>models</classifier>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>1.7.25</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
      <version>1.7.25</version>
    </dependency>
  </dependencies>
</project>

log4j.properties

# Set root logger level to DEBUG and its only appender to A1.
log4j.rootLogger=DEBUG, A1

# A1 is set to be a ConsoleAppender.
log4j.appender.A1=org.apache.log4j.ConsoleAppender

# A1 uses PatternLayout.
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n

GitHub Link

You can find the complete source code here:

GitHub Link