Deep learning excels at unlocking impressive early demos of new applications with very few development resources. Where it struggles is in reaching the consistent usefulness and reliability that production use demands.
-
Could there be an inherent language problem? As in: current machine architectures don't give us the words to describe the problem accurately. Problems get translated down to a set of exact instructions, this or that, with no fuzziness to allow for something almost right but different.
-
Or do we have a broad enough language, but need machines that can write, not just read, in order to adapt? Adaptation means modification: maybe not just a better training set, but additional models. If a model can identify something that surprises it, can it construct a better solution?
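One way to picture that question: a model that notices when its own confidence collapses and sets those inputs aside as material for an additional, specialised model. A minimal toy sketch, assuming nothing beyond the standard library; the names (`SurpriseRouter`, the confidence formula) are illustrative, not from any real system.

```python
from dataclasses import dataclass, field

# Toy sketch of "surprise" detection: the model flags inputs where its
# confidence is low and buffers them for a future supplementary model.
@dataclass
class SurpriseRouter:
    threshold: float = 0.7               # below this confidence, the input "surprises" the model
    surprises: list = field(default_factory=list)

    def predict(self, x):
        # Stand-in for a real model: confidence is high inside the range
        # it was "trained" on (0..10) and decays outside it.
        confidence = 1.0 if 0 <= x <= 10 else 1.0 / (1.0 + abs(x - 5))
        label = "in-distribution" if confidence >= self.threshold else "surprise"
        if confidence < self.threshold:
            # Instead of failing silently, stash the input so an
            # additional model could later be fit to these cases.
            self.surprises.append(x)
        return label, confidence

router = SurpriseRouter()
print(router.predict(3))      # confident: inside the familiar range
print(router.predict(42))     # surprising: buffered for adaptation
print(router.surprises)       # inputs waiting for an additional model
```

The interesting step, constructing the better solution from that buffer, is exactly the open question; the sketch only shows the "identify something that surprises it" half.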