Imagine an algorithm capable of generating a Wikipedia-like encyclopedia from raw sources (e.g. the Internet minus Wikipedia) at the same level of factual accuracy, editorial quality, scope, and resolution. When will this be feasible?
-
This poll is about how people perceive NLP progress at this time -- don't see it as a way to actually predict the future. People's perceptions are always wrong.
-
This Tweet is unavailable.
-
How can you say that? NLP datasets require ridiculous amounts of cleaning & screening, to say nothing of editing, formatting, and text representation, depending on the type of data you're working with.
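As a rough illustration of the kind of cleaning pass this reply alludes to, here is a minimal sketch in Python; the rules and the normalize_text name are illustrative assumptions, not anyone's actual pipeline:

    import html
    import re
    import unicodedata

    def normalize_text(raw: str) -> str:
        """Toy cleaning pass: strip markup, unescape entities, normalize."""
        text = re.sub(r"<[^>]+>", " ", raw)         # drop leftover HTML tags
        text = html.unescape(text)                  # &amp; -> &
        text = unicodedata.normalize("NFKC", text)  # fold Unicode compatibility forms
        text = re.sub(r"\s+", " ", text).strip()    # collapse runs of whitespace
        return text

    print(normalize_text("Caf\u00e9   &amp; <b>bar</b>\n"))  # -> "Café & bar"

Real corpora need far more than this (deduplication, boilerplate detection, language ID, quality filtering), which is exactly the reply's point.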
-
Distributional semantics still has problems with sensory knowledge (which is especially good for checking facts that are obvious to humans, and is therefore underrepresented in written language). IMO this problem should be addressed before an internetpedia appears.
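"Distributional semantics" here means representing a word by the contexts it co-occurs with in text. A toy sketch (vocabulary and counts invented purely for illustration) of the basic idea, and of why it inherits the gaps of written language:

    import math

    # Toy co-occurrence vectors; dimensions are counts alongside the context
    # words "eat", "red", "sweet", "drive". Words used in similar contexts get
    # similar vectors. Sensory facts too obvious for anyone to write down
    # (e.g. what butter feels like) leave little co-occurrence signal at all.
    vectors = {
        "apple":  [8, 5, 6, 0],
        "cherry": [6, 7, 7, 0],
        "car":    [0, 3, 0, 9],
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    print(cosine(vectors["apple"], vectors["cherry"]))  # high: similar contexts
    print(cosine(vectors["apple"], vectors["car"]))     # low: different contexts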
-
Are there any known algorithms you anticipate could be refined to achieve this task? I imagine autoencoders based on word and sentence embeddings, but I doubt that will lead to good results.
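A minimal sketch of the kind of model this question imagines, assuming PyTorch; the class name, layer sizes, and dummy data are illustrative assumptions, not a reference design:

    import torch
    import torch.nn as nn

    class SentenceAutoencoder(nn.Module):
        """Compress a token sequence into one vector, then reconstruct it."""
        def __init__(self, vocab_size=1000, emb_dim=64, hidden_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
            self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, tokens):               # tokens: (batch, seq_len)
            emb = self.embed(tokens)
            _, code = self.encoder(emb)          # code: the sentence embedding
            dec, _ = self.decoder(emb, code)     # teacher-forced reconstruction
            return self.out(dec)                 # (batch, seq_len, vocab) logits

    model = SentenceAutoencoder()
    tokens = torch.randint(0, 1000, (2, 7))      # two dummy "sentences"
    logits = model(tokens)
    loss = nn.functional.cross_entropy(logits.reshape(-1, 1000), tokens.reshape(-1))
    loss.backward()                              # reconstruction objective

Everything the model knows about the input has to pass through the single code vector, which hints at why such bottleneck models struggle with long factual text and why the questioner's doubt is reasonable.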