On that note, Rebooting AI should be required reading for any doctor or medical student interested in applying deep learning and AI to medicine. There are many troubling issues with deep learning alone, especially for medical applications where reliability is paramount. https://twitter.com/GaryMarcus/status/1183077024207818753
I haven't read your book (yet) but I'm looking forward to some realism! In my experience deep learning is great for computer vision tasks where the output is binary with high accuracy, but many medical tasks are not binary at all, and accuracy can be hindered by many confounders.
The issue with any medical vision application is that the number of possible medical diagnoses in a given situation is vast, yet not all are well represented in datasets, and rare entities can be critical. For example, if I get a biopsy from a chronic non-healing wound in a...
8 more replies
New conversation
I've actually got what I think is a very relevant blog post coming out in the next few days. Preview of the core idea: we don't need to reboot AI to make medical applications much safer than they are today; we need to use human smarts to validate these models much more effectively.
8 more replies
New conversation
Deep learning is great in medicine for image segmentation. Clinically useful for accurately quantifying tumour/ventricular volumes, 3D models, etc. CNN output can be easily checked and quickly edited if needed.
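The volume quantification mentioned here reduces to counting labelled voxels and multiplying by the voxel volume. A minimal sketch (the mask and voxel spacing below are made up for illustration, not taken from any specific clinical pipeline):

```python
import numpy as np

# Hypothetical binary segmentation mask (1 = structure, 0 = background),
# as a CNN might output after thresholding.
mask = np.zeros((4, 4, 4), dtype=np.uint8)
mask[1:3, 1:3, 1:3] = 1  # a 2x2x2 block of labelled voxels

# Assumed voxel spacing in millimetres (normally read from the scan header).
spacing_mm = (1.0, 1.0, 2.5)

# Volume = number of labelled voxels * volume of one voxel.
voxel_volume_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
volume_mm3 = mask.sum() * voxel_volume_mm3

print(volume_mm3)  # 8 voxels * 2.5 mm^3 = 20.0
```

This is also why the output is easy to check: a clinician can eyeball the mask overlaid on the scan, edit mislabelled voxels, and the volume recomputes directly.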
Exactly. The added benefit is that what the AI is doing can already be easily explained and rationalized, which takes care of the trust issue.
End of conversation
New conversation
This Tweet is unavailable.
I'd have to take time to read the link; thanks for sharing. In medicine at least, safety without the ability to explain and rationalize would be an oxymoron: if I can't explain why a lab test would produce unreliable results, and the circumstances where it does, then it's not safe.
1 more reply
My current understanding on this: deep learning for critical medical decisions is inherently unsafe. Why? DL appears to interpolate, and it will always be interpolating from an insignificant subset of the space of all possible inputs, so it is not a reliable way to fully specify the problem.
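The interpolation worry can be made concrete with a toy sketch (entirely illustrative, not a medical model): a flexible model fit only on a narrow input range can look accurate there while being wildly wrong just outside it.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training data": inputs drawn only from a narrow slice of input space.
x_train = rng.uniform(0.0, 1.0, size=200)
y_train = np.sin(2 * np.pi * x_train)

# Fit a cubic polynomial -- a stand-in for any flexible learned model.
coeffs = np.polyfit(x_train, y_train, deg=3)

# Inside the training range, the fit looks reasonable...
x_in = np.linspace(0.0, 1.0, 50)
err_in = np.max(np.abs(np.polyval(coeffs, x_in) - np.sin(2 * np.pi * x_in)))

# ...but just outside it, the same model diverges badly.
x_out = 2.0
err_out = abs(np.polyval(coeffs, x_out) - np.sin(2 * np.pi * x_out))

print(err_in, err_out)  # out-of-range error is orders of magnitude larger
```

Nothing in the training loss penalizes the model's behaviour at inputs it never saw, which is the crux of the safety argument for rare presentations.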
This Tweet is unavailable.
New conversation