QueryExtraction / output.txt

Extraction:
SpaCy TextRank:
Captured 4
longer sequences
long input sentences
high bleu score
the complete input sequence
vector
the input sequence
information
long dependencies
a candidate translation
simpler words
the entire sequence
fixed length
valuable parts
parallel processing
one or more
reference translations
Seq2Seq Models
some input words
encodes
the input
sequence
hidden state
Vaswani et al
a context vector
this context vector
this fixed-length context vector design
the complete sequence
an output word
image captioning
various problems
Attention
attention
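
The phrases above are noun chunks ranked by pytextrank's TextRank implementation. A minimal sketch of the kind of call that produces such output, assuming spaCy 3.x with pytextrank >= 3.0; `text` and the "input.txt" path are placeholders, since the repo's actual input file is not shown here:

    import spacy
    import pytextrank  # noqa: F401 -- registers the "textrank" pipeline factory

    text = open("input.txt").read()   # hypothetical path to the source passage
    nlp = spacy.load("en_core_web_sm")
    nlp.add_pipe("textrank")          # spaCy 3.x string-based add_pipe

    doc = nlp(text)
    for phrase in doc._.phrases:      # phrases sorted by descending TextRank score
        print(phrase.text)

The later sketches reuse the same `text` variable.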
------------------------------
Gensim TextRank:
Captured 4
learning
learn
attention
words
word
encoder
encodes
translation
translated
translations
translate
decoder
paper
input sequence
sequences
vector
bleu
sentence
sentences
unit
ideas
idea
parts
models
model
like image
length
designed
design
long
context
works
work
processes
processed
processing
hidden
evaluation
information
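
The mixed inflected forms above ("translation", "translated", "translations") indicate gensim's TextRank keyword extractor run without lemmatization. A sketch, assuming gensim < 4.0 (the summarization module was removed in 4.0); words=40 is an assumed limit:

    from gensim.summarization import keywords

    # split=True returns a list rather than one newline-joined string
    for kw in keywords(text, words=40, lemmatize=False, split=True):
        print(kw)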
------------------------------
Rake:
Captured 1
2014 paper “ neural machine translation
neural machine translation using seq2seq models
various problems like image captioning
“ thought vector ”)
every encoder hidden state
vaswani et al .,
decoder unit works well
high bleu score ).
length context vector design
retain longer sequences
bilingual evaluation understudy
decoder unit fails
whole long sentence
famous paper attention
deep learning community
deep learning arena
long input sentences
complete input sequence
candidate translation
et al
seq2seq model
context vector
paper laid
complete sequence
long dependencies
jointly learning
shorter sentences
fixed length
input sequence
decoder architecture
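
RAKE splits the document at stopwords and punctuation and scores the remaining word runs, which is why stray quote and punctuation tokens survive in the phrases above. A sketch using the rake-nltk package with its default NLTK English stopword list (assumed; the repo may use a different RAKE implementation):

    from rake_nltk import Rake

    rake = Rake()                     # defaults: NLTK stopwords, all punctuation
    rake.extract_keywords_from_text(text)
    for phrase in rake.get_ranked_phrases():   # highest-scoring phrases first
        print(phrase)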
------------------------------
Rakun:
01-Mar-21 20:38:03 - Initiated a keyword detector instance.
01-Mar-21 20:38:03 - Number of nodes reduced from 128 to 119
Captured 5
attention
sequence
input sequence attention
model
paper
using
encoder
seq2seq
encoder-decoder
vector
information
architecture
learning
decoder
ideas
prominent
translation
hence
processes
previous
extension
natural
reads
translate
align
candidate
comparing
score
instead
concentrated
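
The two timestamped lines above are mrakun's own logging: it builds a token graph (128 nodes here) and merges similar nodes into meta-vertices (down to 119) before ranking. A sketch, assuming the mrakun package; every hyperparameter value below is illustrative, not recovered from this run:

    from nltk.corpus import stopwords
    from mrakun import RakunDetector

    hyperparameters = {                       # assumed values, for illustration only
        "distance_threshold": 2,
        "distance_method": "editdistance",
        "num_keywords": 30,
        "num_tokens": [1, 2, 3],              # uni-, bi-, and trigram candidates
        "bigram_count_threshold": 2,
        "stopwords": stopwords.words("english"),
    }

    detector = RakunDetector(hyperparameters)  # logs "Initiated a keyword detector instance."
    for keyword, score in detector.find_keywords(text, input_type="text"):
        print(keyword)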
------------------------------
Yake:
Captured 5
neural machine translation
input sequence
deep learning community
context vector
sequence
input
neural machine
machine translation
attention
model
complete input sequence
learning community
deep learning
context
vector
bilingual evaluation understudy
encoder-decoder
translation
learning
prominent ideas
fixed-length context vector
context vector design
long input sentences
machine
encoder-decoder unit
neural
encoder-decoder model
words
encoder
complete sequence
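
YAKE is a purely statistical extractor, so it happily returns overlapping n-grams ("neural machine translation", "neural machine", "machine translation"). A sketch, assuming the yake package; n=3 and top=30 are assumed settings chosen to match the up-to-trigram phrases above:

    import yake

    extractor = yake.KeywordExtractor(lan="en", n=3, top=30)  # up to trigrams
    for keyword, score in extractor.extract_keywords(text):   # lower score = more relevant
        print(keyword)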
------------------------------
KeyBERT:
01-Mar-21 20:38:03 - Load pretrained SentenceTransformer: distilbert-base-nli-mean-tokens
01-Mar-21 20:38:03 - Did not find folder distilbert-base-nli-mean-tokens
01-Mar-21 20:38:03 - Try to download model from server: https://sbert.net/models/distilbert-base-nli-mean-tokens.zip
01-Mar-21 20:38:03 - Load SentenceTransformer from folder: /Users/irene/.cache/torch/sentence_transformers/sbert.net_models_distilbert-base-nli-mean-tokens
01-Mar-21 20:38:04 - Use pytorch device: cpu
Batches: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 9.71it/s]
Batches: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11/11 [00:01<00:00, 6.87it/s]
Captured 1
deep learning
neural machine
encoder decoder
learning arena
ideas deep
context neural
decoder architecture
machine translation
transformers revolutionized
graph encoder
memorize long
architecture encoder
learning community
decoder initialized
prominent ideas
reflected graph
revolutionized deep
translations graph
valuable parts
connecting encoder
famous paper
focus valuable
candidate translation
paper neural
composed encoder
arena concept
helps model
output critical
memorize
foundation famous
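
The KeyBERT log above shows it downloading the distilbert-base-nli-mean-tokens SentenceTransformer, then embedding the document and the candidate phrases in batches on CPU; the phrases closest to the document embedding win. A sketch, assuming an early (circa 2021) KeyBERT release; keyphrase_ngram_range=(1, 2) is inferred from the mostly two-word output, not a confirmed setting:

    from keybert import KeyBERT

    kw_model = KeyBERT("distilbert-base-nli-mean-tokens")  # the model named in the log
    keywords = kw_model.extract_keywords(
        text,
        keyphrase_ngram_range=(1, 2),   # inferred from the uni-/bigram mix above
        stop_words="english",
        top_n=30,
    )
    for keyword, score in keywords:     # score = cosine similarity to the document
        print(keyword)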