QueryExtraction/output.txt

Extraction:
SpaCy TextRank:
Captured 4
longer sequences
long input sentences
high bleu score
the complete input sequence
vector
the input sequence
information
long dependencies
a candidate translation
simpler words
the entire sequence
fixed length
valuable parts
parallel processing
one or more
reference translations
Seq2Seq Models
some input words
encodes
the input
sequence
hidden state
Vaswani et al
a context vector
this context vector
this fixed-length context vector design
the
complete sequence
an output word
image captioning
various problems
Attention
attention
------------------------------
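
A phrase list like the SpaCy TextRank output above can be produced with the pytextrank pipeline component. This is a minimal sketch assuming spaCy 3.x and the en_core_web_sm model; the input file name and any ranking settings are assumptions, since the run's configuration is not recorded in this log.

    import spacy
    import pytextrank  # registers the "textrank" pipeline factory on import

    text = open("article.txt").read()  # hypothetical path to the analyzed article

    nlp = spacy.load("en_core_web_sm")
    nlp.add_pipe("textrank")  # append TextRank phrase ranking to the pipeline

    doc = nlp(text)
    for phrase in doc._.phrases:  # noun phrases, highest TextRank score first
        print(phrase.text)

------------------------------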
Gensim TextRank:
Captured 4
learning
learn
attention
words
word
encoder
encodes
translation
translated
translations
translate
decoder
paper
input sequence
sequences
vector
bleu
sentence
sentences
unit
ideas
idea
parts
models
model
like image
length
designed
design
long
context
works
work
processes
processed
processing
hidden
evaluation
information
------------------------------
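
The inflection-grouped entries above ("learning"/"learn", "words"/"word") are characteristic of gensim's TextRank keyword module. A minimal sketch, assuming a pre-4.0 gensim (gensim.summarization was removed in 4.0); the ratio value is an assumption, not the run's setting.

    from gensim.summarization import keywords  # removed in gensim >= 4.0

    text = open("article.txt").read()  # hypothetical path to the analyzed article

    # split=True returns a list instead of a newline-joined string; with the
    # default lemmatize=False, inflected variants stay as separate entries.
    for kw in keywords(text, ratio=0.5, split=True):
        print(kw)

------------------------------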
Rake:
Captured 1
2014 paper “ neural machine translation
neural machine translation using seq2seq models
various problems like image captioning
“ thought vector ”)
every encoder hidden state
vaswani et al .,
decoder unit works well
high bleu score ).
length context vector design
retain longer sequences
bilingual evaluation understudy
decoder unit fails
whole long sentence
famous paper attention
deep learning community
deep learning arena
long input sentences
complete input sequence
candidate translation
et al
seq2seq model
context vector
paper laid
complete sequence
long dependencies
jointly learning
shorter sentences
fixed length
input sequence
decoder architecture
------------------------------
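
RAKE splits candidate phrases on stopwords and punctuation, which is why stray quote marks and ".," tokens survive in the phrases above. A minimal sketch using the rake_nltk package; that this exact implementation produced the output is an assumption.

    from rake_nltk import Rake

    text = open("article.txt").read()  # hypothetical path to the analyzed article

    r = Rake()  # defaults: NLTK English stopwords, punctuation as phrase breaks
    r.extract_keywords_from_text(text)
    for phrase in r.get_ranked_phrases():  # highest degree/frequency score first
        print(phrase)

------------------------------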
Rakun:
01-Mar-21 20:38:03 - Initiated a keyword detector instance.
01-Mar-21 20:38:03 - Number of nodes reduced from 128 to 119
Captured 5
attention
sequence
input sequence attention
model
paper
using
encoder
seq2seq
encoder-decoder
vector
information
architecture
learning
decoder
ideas
prominent
translation
hence
processes
previous
extension
natural
reads
translate
align
candidate
comparing
score
instead
concentrated
------------------------------
Yake:
Captured 5
neural machine translation
input sequence
deep learning community
context vector
sequence
input
neural machine
machine translation
attention
model
complete input sequence
learning community
deep learning
context
vector
bilingual evaluation understudy
encoder-decoder
translation
learning
prominent ideas
fixed-length context vector
context vector design
long input sentences
machine
encoder-decoder unit
neural
encoder-decoder model
words
encoder
complete sequence
------------------------------
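
A minimal sketch using the yake package. The n (maximum n-gram length) and top values are assumptions inferred from the up-to-three-word keywords and the length of the list above.

    import yake

    text = open("article.txt").read()  # hypothetical path to the analyzed article

    extractor = yake.KeywordExtractor(lan="en", n=3, top=30)
    for keyword, score in extractor.extract_keywords(text):
        print(keyword)  # lower YAKE scores mean more relevant keywords

------------------------------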
KeyBERT:
01-Mar-21 20:38:03 - Load pretrained SentenceTransformer: distilbert-base-nli-mean-tokens
01-Mar-21 20:38:03 - Did not find folder distilbert-base-nli-mean-tokens
01-Mar-21 20:38:03 - Try to download model from server: https://sbert.net/models/distilbert-base-nli-mean-tokens.zip
01-Mar-21 20:38:03 - Load SentenceTransformer from folder: /Users/irene/.cache/torch/sentence_transformers/sbert.net_models_distilbert-base-nli-mean-tokens
01-Mar-21 20:38:04 - Use pytorch device: cpu
Batches: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00,  9.71it/s]
Batches: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11/11 [00:01<00:00,  6.87it/s]
Captured 1
deep learning
neural machine
encoder decoder
learning arena
ideas deep
context neural
decoder architecture
machine translation
transformers revolutionized
graph encoder
memorize long
architecture encoder
learning community
decoder initialized
prominent ideas
reflected graph
revolutionized deep
translations graph
valuable parts
connecting encoder
famous paper
focus valuable
candidate translation
paper neural
composed encoder
arena concept
helps model
output critical
memorize
foundation famous
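
The log above shows KeyBERT downloading and loading the distilbert-base-nli-mean-tokens SentenceTransformer, so only the model name is known from this run; the ngram range, stopword list, and top_n below are assumptions suggested by the one- and two-word candidates in the output.

    from keybert import KeyBERT

    text = open("article.txt").read()  # hypothetical path to the analyzed article

    kw_model = KeyBERT("distilbert-base-nli-mean-tokens")  # model name from the log
    keywords = kw_model.extract_keywords(
        text,
        keyphrase_ngram_range=(1, 2),  # output mixes single words and bigrams
        stop_words="english",
        top_n=30,
    )
    for keyword, score in keywords:  # score: similarity to the document embedding
        print(keyword)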