Ashish Kumar
Understanding language is a trivial task for humans, but mimicking it with machines is far from trivial. For humans, everything (images, text, speech, etc.) ultimately arrives as electrical impulses; for machines, everything is numbers, either in vector form (for text or speech) or matrix form (for images or videos). Deep learning has recently shown a lot of promise for Natural Language Processing (NLP) applications. Traditionally, most NLP approaches represent a document or sentence with a sparse bag-of-words vector.
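To make the bag-of-words idea concrete, here is a minimal sketch (not from the talk) using scikit-learn's CountVectorizer; the toy documents are made up for illustration.

```python
# Minimal sketch of a sparse bag-of-words representation (illustrative example,
# not the speaker's code). Requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "deep learning helps natural language processing",
    "machines see language as numbers",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)   # sparse matrix of shape (n_docs, vocabulary_size)

print(vectorizer.vocabulary_)        # term -> column index mapping
print(X.toarray())                   # mostly-zero count vectors, one row per document
```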
A lot of work goes beyond this by adopting distributed representations: constructing a so-called “neural embedding”, i.e. a vector space representation, for each word (word2vec), sentence (thought vectors), or document (doc2vec).
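The following is a hedged sketch of the dense-embedding side using gensim's Word2Vec and Doc2Vec; the corpus and hyperparameters are illustrative assumptions, and parameter names follow gensim 4.x.

```python
# Minimal sketch of dense neural embeddings with gensim (assumed example;
# parameter names follow gensim 4.x, not the speaker's setup).
from gensim.models import Word2Vec
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = [
    "deep learning helps natural language processing",
    "word embeddings capture meaning in dense vectors",
    "documents can also be embedded as vectors",
]
tokenized = [line.split() for line in corpus]

# word2vec: one dense vector per word.
w2v = Word2Vec(tokenized, vector_size=50, window=2, min_count=1, epochs=50)
print(w2v.wv["vectors"].shape)   # (50,)

# doc2vec: one dense vector per document.
tagged = [TaggedDocument(words, [i]) for i, words in enumerate(tokenized)]
d2v = Doc2Vec(tagged, vector_size=50, min_count=1, epochs=50)
print(d2v.dv[0].shape)           # (50,)
```

Unlike the bag-of-words vectors above, these embeddings are low-dimensional and dense, so nearby vectors tend to correspond to semantically related words or documents.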
I’m a software engineer at Snapshopr. You can also view my profile at https://in.linkedin.com/in/ashish30