--- /dev/null
+++ b/QueryExtraction/output.txt
@@ -0,0 +1,216 @@
+Extraction:
+SpaCy TextRank:
+Captured  4
+longer sequences
+long input sentences
+high bleu score
+the complete input sequence
+vector
+the input sequence
+information
+long dependencies
+a candidate translation
+simpler words
+the entire sequence
+fixed length
+valuable parts
+parallel processing
+one or more reference translations
+Seq2Seq Models
+some input words
+encodes
+the input sequence
+hidden state
+Vaswani et al
+a context vector
+this context vector
+this fixed-length context vector design
+the complete sequence
+an output word
+image captioning
+various problems
+Attention
+attention
+------------------------------
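For reference, output like the SpaCy TextRank section above can be reproduced with a sketch along these lines, assuming pytextrank 3.x on spaCy 3.x (older pytextrank releases register the pipeline component differently); the text placeholder stands in for the source article:

import spacy
import pytextrank  # importing registers the "textrank" pipeline factory

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("textrank")

text = "..."  # the attention/seq2seq article behind this run

doc = nlp(text)
for phrase in doc._.phrases:  # noun phrases ranked by TextRank score
    print(phrase.text)
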
+Gensim TextRank:
+Captured  4
+learning
+learn
+attention
+words
+word
+encoder
+encodes
+translation
+translated
+translations
+translate
+decoder
+paper
+input sequence
+sequences
+vector
+bleu
+sentence
+sentences
+unit
+ideas
+idea
+parts
+models
+model
+like image
+length
+designed
+design
+long
+context
+works
+work
+processes
+processed
+processing
+hidden
+evaluation
+information
+------------------------------
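The Gensim TextRank list above (note the surface variants like learn/learning) matches what gensim's summarization module emits; a minimal sketch, assuming gensim 3.8.x, since gensim.summarization was removed in gensim 4.0:

from gensim.summarization import keywords

text = "..."  # same source article

print(keywords(text))  # one keyword per line, ratio-based cutoff by default
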
+Rake:
+Captured  1
+2014 paper “ neural machine translation
+neural machine translation using seq2seq models
+various problems like image captioning
+“ thought vector ”)
+every encoder hidden state
+vaswani et al .,
+decoder unit works well
+high bleu score ).
+length context vector design
+retain longer sequences
+bilingual evaluation understudy
+decoder unit fails
+whole long sentence
+famous paper attention
+deep learning community
+deep learning arena
+long input sentences
+complete input sequence
+candidate translation
+et al
+seq2seq model
+context vector
+paper laid
+complete sequence
+long dependencies
+jointly learning
+shorter sentences
+fixed length
+input sequence
+decoder architecture
+------------------------------
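The Rake phrases above are consistent with the rake-nltk package; a minimal sketch (NLTK's stopword and tokenizer data must be downloaded first):

import nltk
from rake_nltk import Rake

nltk.download("stopwords")  # Rake() defaults to NLTK's English stopword list
nltk.download("punkt")      # and NLTK's sentence/word tokenizers

text = "..."  # same source article

r = Rake()
r.extract_keywords_from_text(text)
for phrase in r.get_ranked_phrases():  # highest-scoring phrases first
    print(phrase)
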
+Rakun:
+01-Mar-21 20:38:03 - Initiated a keyword detector instance.
+01-Mar-21 20:38:03 - Number of nodes reduced from 128 to 119
+Captured  5
+attention
+sequence
+input sequence attention
+model
+paper
+using
+encoder
+seq2seq
+encoder-decoder
+vector
+information
+architecture
+learning
+decoder
+ideas
+prominent
+translation
+hence
+processes
+previous
+extension
+natural
+reads
+translate
+align
+candidate
+comparing
+score
+instead
+concentrated
+------------------------------
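The two log lines opening the Rakun section are what mrakun's RakunDetector prints on startup; a minimal sketch whose hyperparameter names follow the mrakun README, with illustrative values that are assumptions, not the settings behind this run:

from mrakun import RakunDetector
from nltk.corpus import stopwords

hyperparameters = {
    "distance_threshold": 2,           # edit-distance cutoff for merging similar tokens
    "distance_method": "editdistance",
    "num_keywords": 30,
    "pair_diff_length": 2,
    "stopwords": stopwords.words("english"),
    "bigram_count_threshold": 2,
    "num_tokens": [1, 2, 3],           # allow uni-, bi-, and trigram keywords
    "max_similar": 3,
    "max_occurrence": 3,
}

text = "..."  # same source article

detector = RakunDetector(hyperparameters)  # logs "Initiated a keyword detector instance."
print(detector.find_keywords(text, input_type="text"))
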
+Yake:
+Captured  5
+neural machine translation
+input sequence
+deep learning community
+context vector
+sequence
+input
+neural machine
+machine translation
+attention
+model
+complete input sequence
+learning community
+deep learning
+context
+vector
+bilingual evaluation understudy
+encoder-decoder
+translation
+learning
+prominent ideas
+fixed-length context vector
+context vector design
+long input sentences
+machine
+encoder-decoder unit
+neural
+encoder-decoder model
+words
+encoder
+complete sequence
+------------------------------
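The Yake section above lines up with the yake package's defaults; a minimal sketch, assuming a recent release where extract_keywords returns (keyword, score) pairs and lower scores mean more relevant keywords:

import yake

text = "..."  # same source article

extractor = yake.KeywordExtractor(lan="en", n=3, top=30)  # phrases up to trigrams
for keyword, score in extractor.extract_keywords(text):
    print(keyword)
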
+KeyBERT:
+01-Mar-21 20:38:03 - Load pretrained SentenceTransformer: distilbert-base-nli-mean-tokens
+01-Mar-21 20:38:03 - Did not find folder distilbert-base-nli-mean-tokens
+01-Mar-21 20:38:03 - Try to download model from server: https://sbert.net/models/distilbert-base-nli-mean-tokens.zip
+01-Mar-21 20:38:03 - Load SentenceTransformer from folder: /Users/irene/.cache/torch/sentence_transformers/sbert.net_models_distilbert-base-nli-mean-tokens
+01-Mar-21 20:38:04 - Use pytorch device: cpu
+Batches: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00,  9.71it/s]
+Batches: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11/11 [00:01<00:00,  6.87it/s]
+Captured  1
+deep learning
+neural machine
+encoder decoder
+learning arena
+ideas deep
+context neural
+decoder architecture
+machine translation
+transformers revolutionized
+graph encoder
+memorize long
+architecture encoder
+learning community
+decoder initialized
+prominent ideas
+reflected graph
+revolutionized deep
+translations graph
+valuable parts
+connecting encoder
+famous paper
+focus valuable
+candidate translation
+paper neural
+composed encoder
+arena concept
+helps model
+output critical
+memorize
+foundation famous
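The KeyBERT log above shows the distilbert-base-nli-mean-tokens SentenceTransformer being fetched and cached; a minimal sketch, assuming a 2021-era KeyBERT release where the model name is passed to the constructor and extract_keywords returns (keyword, score) pairs:

from keybert import KeyBERT

text = "..."  # same source article

model = KeyBERT("distilbert-base-nli-mean-tokens")  # triggers the SBERT download logged above
keywords = model.extract_keywords(
    text, keyphrase_ngram_range=(1, 2), stop_words="english", top_n=30)
for keyword, score in keywords:
    print(keyword)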