Three cases of supervised AI assistants
-
I think the question remains tricky: if the model works well, it might be OK; but if the model does not work well, how can we understand where the problem is and fix it accordingly? In that sense, a black box is a dangerous scenario.
-
This is a valid question, and there is a lot of ongoing work on the interpretability and accountability of AI.
-
"self-driving vehicles eventually will be safer than those piloted by humans" - everyone keeps repeating this and yet there is exactly zero evidence for it. Surely when we solve AI, they will, but not with the current black boxes.
-
I actually bothered to read the article, and it cited a paper whose ABSTRACT clearly said it’s not possible from a statistical point of view: https://www.sciencedirect.com/science/article/pii/S0965856416302129
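The statistical objection can be made concrete with a back-of-the-envelope sketch (this is not taken from the cited paper; the failure rate below is a hypothetical assumption): driving n failure-free miles bounds the per-mile failure rate below r with confidence 1 − α only when n ≈ −ln(α)/r, which becomes enormous for the tiny rates that matter.

```python
# Back-of-the-envelope sketch: miles of failure-free driving needed to
# bound a per-mile failure rate below `r` at confidence 1 - alpha.
# With zero observed failures, (1 - r)^n <= alpha  =>  n ~ -ln(alpha) / r.
import math

r = 1e-8        # hypothetical failure rate per mile (assumption, for illustration)
alpha = 0.05    # i.e. 95% confidence

miles_needed = -math.log(alpha) / r
print(f"{miles_needed:.2e} failure-free miles needed")  # ~3e8 miles
```

Even this toy bound lands near hundreds of millions of miles, which is the flavor of argument behind the "not possible from a statistical point of view" claim.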
-
One should not forget the possibility of a root-cause analysis in case of unexpected behavior with interpretable learners, though.
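What a root-cause analysis looks like with an interpretable learner can be sketched in a few lines (a minimal illustration, not anyone's production method): fit a plain linear model and read its coefficients directly when a prediction surprises you.

```python
# Minimal sketch of root-cause analysis with an interpretable learner:
# the whole "model" is three weights, so unexpected behavior can be
# traced by inspecting them directly.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # three synthetic input features
y = 2.0 * X[:, 1] + 0.1 * rng.normal(size=200) # only feature 1 truly matters

# Fit by ordinary least squares.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# If predictions misbehave, the weights point at the cause: a near-zero
# weight means that feature is ignored, a large one dominates.
for i, c in enumerate(coef):
    print(f"feature {i}: weight {c:+.2f}")
```

A deep network offers no such short path from a bad output back to a specific cause, which is the asymmetry the comment is pointing at.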
-
This whole "black box" thing feels like another marketing buzzword, just like "neural network" and "deep learning". It wants to imply "mystery" to make it seem even more magical, even though it's just brute forcing in nice packaging.
-
It’s been a term of art for decades, describing systems whose internals, behavior, or implementation details are not known (at least to the person trying to test, analyze, or understand the system).