What is a Recursive Neural Tensor Network (RNTN)?



The Recursive Neural Tensor Network (RNTN)

An RNTN is a neural network useful for natural language processing. It has a tree structure, and each node of the tree applies a neural network to its children. You can use a recursive neural tensor network for boundary segmentation, to determine which word groups are positive and which are negative. The same applies to the entire sentence.
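How a single tree node combines its two children can be illustrated with a minimal sketch. The dimensionality, the parameter values, and the function name `compose` below are illustrative assumptions, not values from any published model:

```python
import math

# Toy dimensionality: each word/phrase vector has d components.
d = 2

def compose(a, b, V, W):
    """RNTN composition: for each output component k,
    p[k] = tanh(c^T V[k] c + W[k] . c), where c = [a; b]
    is the concatenation of the two child vectors."""
    c = a + b  # concatenation, length 2d
    p = []
    for k in range(d):
        tensor_term = sum(c[i] * V[k][i][j] * c[j]
                          for i in range(2 * d) for j in range(2 * d))
        linear_term = sum(W[k][j] * c[j] for j in range(2 * d))
        p.append(math.tanh(tensor_term + linear_term))
    return p

# Hypothetical parameters: a zero tensor and a simple averaging
# matrix, so the parent is roughly the mean of its children.
V = [[[0.0] * (2 * d) for _ in range(2 * d)] for _ in range(d)]
W = [[0.5, 0.0, 0.5, 0.0],
     [0.0, 0.5, 0.0, 0.5]]

parent = compose([0.8, -0.2], [0.4, 0.6], V, W)
```

In a trained model, `V` and `W` are learned, and the tensor term lets the two children interact multiplicatively rather than just additively.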

Word vectors are used as the features and as the basis for sequential classification. They are then grouped into sub-phrases, and the sub-phrases are combined into a sentence that can be classified by sentiment and other metrics.
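Classifying a node (a word, a sub-phrase, or the whole sentence) can be sketched as a softmax over sentiment classes applied to the node's vector. The class count and the weight matrix `Ws` here are hypothetical:

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    exps = [math.exp(x - m) for x in z]
    total = sum(exps)
    return [e / total for e in exps]

def classify(node_vec, Ws):
    """Score a node vector against sentiment classes: softmax(Ws . v)."""
    logits = [sum(w * x for w, x in zip(row, node_vec)) for row in Ws]
    return softmax(logits)

# Hypothetical weights for 3 classes (negative, neutral, positive)
# over a 2-dimensional node vector.
Ws = [[-1.0, -1.0],
      [ 0.0,  0.0],
      [ 1.0,  1.0]]

probs = classify([0.5, 0.7], Ws)
```

Because the same classifier is applied at every node, the model produces a sentiment label for each sub-phrase as well as for the sentence at the root.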

Recursive neural tensor networks require external components such as Word2vec, described below. To analyze text with a neural network, words are represented as continuous vectors of parameters. These word vectors contain not only information about the word itself, but also information about the surrounding words; that is, the word's context, usage, and other semantic information.


Word2Vec

The first step in building a working RNTN is word vectorization, which can be done with an algorithm such as Word2vec. Word2vec converts a corpus of words into vectors, which can then be placed in a vector space to measure the cosine distance between them; that is, their similarity or lack thereof.
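Cosine similarity between two word vectors can be computed directly. The 3-dimensional vectors below are made-up stand-ins for real Word2vec output:

```python
import math

def cosine_similarity(u, v):
    """cos(theta) = (u . v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical word vectors (real Word2vec vectors typically
# have 100-300 dimensions).
king  = [0.9, 0.8, 0.1]
queen = [0.8, 0.9, 0.1]
apple = [0.1, 0.2, 0.9]

sim_related   = cosine_similarity(king, queen)
sim_unrelated = cosine_similarity(king, apple)
```

Words that appear in similar contexts end up with nearby vectors, so related words score a higher cosine similarity than unrelated ones.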

Word2vec is a separate pipeline, independent of the NLP pipeline. It creates a lookup table that supplies a word vector for each word once the sentence has been processed.
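The lookup table itself can be as simple as a dictionary mapping each word to its vector, with a fallback for out-of-vocabulary words. The entries below are invented for illustration; in practice they come from a trained Word2vec model:

```python
# Hypothetical embedding table (in practice, produced by Word2vec).
embeddings = {
    "good":  [0.7, 0.3],
    "bad":   [-0.6, 0.2],
    "movie": [0.1, 0.9],
}
UNK = [0.0, 0.0]  # fallback vector for words not in the table

def lookup(word):
    """Return the stored vector for a word, or UNK if unseen."""
    return embeddings.get(word.lower(), UNK)

vectors = [lookup(w) for w in ["Good", "movie", "plot"]]
```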


Summary

  1. [Word2vec pipeline] Vectorize a corpus of words
  2. [NLP pipeline] Tokenize the sentences
  3. [NLP pipeline] Tag tokens as parts of speech
  4. [NLP pipeline] Parse sentences into their constituent sub-phrases
  5. [NLP pipeline] Binarize the tree
  6. [NLP pipeline + Word2Vec pipeline] Combine word vectors with the neural network.
  7. [NLP pipeline + Word2Vec pipeline] Perform the task (for example, classify the sentence’s sentiment)
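The steps above can be sketched end to end: look up word vectors at the leaves of a binarized parse tree and recursively combine children bottom-up. The composition below is a simplified placeholder (element-wise tanh of an average) rather than the full tensor layer, and the tree and vectors are invented:

```python
import math

# Hypothetical word vectors (step 1: the Word2vec lookup table).
embeddings = {"not": [-0.9, 0.1], "very": [0.2, 0.8], "good": [0.7, 0.3]}

def compose(a, b):
    """Placeholder composition: tanh of the element-wise average.
    A real RNTN uses a learned tensor and matrix here (step 6)."""
    return [math.tanh((x + y) / 2.0) for x, y in zip(a, b)]

def encode(tree):
    """Recursively encode a binarized parse tree.
    Leaves are words; internal nodes are (left, right) pairs."""
    if isinstance(tree, str):
        return embeddings[tree]
    left, right = tree
    return compose(encode(left), encode(right))

# Binarized tree for "not (very good)" -- tokenization, tagging,
# parsing, and binarization (steps 2-5) are assumed already done.
sentence_vec = encode(("not", ("very", "good")))
```

The root vector `sentence_vec` is what step 7 would feed into a sentiment classifier.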

For Further Reading:

Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. Stanford University, Stanford, CA 94305, USA.