The tech world right now thinks NNs can solve many automation problems. Self-driving cars! Automated diagnosis! And maybe they're effective.
In aerospace, we tried this, too. Almost 30 years ago. That thread was abandoned.
-
Why did aerospace give up on NN control of aircraft? Because their behavior cannot be validated. We cannot guarantee their performance.
-
This hasn't changed. Not even with 30 years of throwing formal methods at the problem. Not even with new tech, with GPUs and convolutions, etc.
-
If anything, it's *harder* now. We simply cannot prove that they are safe enough to be trusted with complex systems with lives on the line.
-
Neural networks are great at processing complex data for presentation in a human context, but that isn't the same as objective truth.
-
Adaptation is a remarkable thing, but it's not to be confused with intelligence, truth seeking, or harm reduction. Look to history.