Keywords: neural network, word embedding, deep learning, representation, NLP, word vector
https://colah.github.io/posts/2014-07-NLP-RNNs-Representations/
This tutorial explains in simple terms why deep neural networks work so well for natural language processing, using word embeddings and learned representations as its central examples.
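The post's running example is word embeddings, where semantic relationships between words show up as vector offsets. As a minimal illustrative sketch (the toy vectors and the nearest helper below are invented for this example, not taken from the post), the classic analogy king - man + woman ≈ queen can be checked with cosine similarity:

    # Toy sketch (not from the post): word embeddings map words to vectors,
    # and relationships appear as vector offsets, e.g. king - man + woman ~ queen.
    # The vectors below are made up purely for illustration.
    import numpy as np

    embeddings = {
        "king":  np.array([0.8, 0.9, 0.1]),
        "queen": np.array([0.8, 0.9, 0.9]),
        "man":   np.array([0.2, 0.1, 0.1]),
        "woman": np.array([0.2, 0.1, 0.9]),
    }

    def nearest(vec, vocab):
        """Return the word whose embedding is most cosine-similar to vec."""
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return max(vocab, key=lambda w: cos(vec, vocab[w]))

    target = embeddings["king"] - embeddings["man"] + embeddings["woman"]
    print(nearest(target, embeddings))  # -> "queen" with these toy vectors

In a real embedding space the analogy holds only approximately, but the same offset-and-nearest-neighbour check applies.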
Institution: colah's blog
Year of publication: 2014
Language: English
Type: tutorial
Audience: linguists, philologists, corpus linguists, psycholinguists, humanists
Level: basic
Prerequisites: None
Media: text/html
Objective:
Licence:
Access: open
Creation date: Wednesday, 13 January 2016 10:33:04
Last modified: Thursday, 18 April 2024 14:43:40
BibTeX type: @misc
@misc{TeLeMaCo:351,
  author = "Olah, Christopher",
  title  = "{D}eep {L}earning, {N}{L}{P}, and {R}epresentations",
  year   = "2014",
  url    = "https://colah.github.io/posts/2014-07-NLP-RNNs-Representations/"
}