Machine learning models take the shortest path from input to label, based on previous situations they've encountered, just like human intuition: straight input-to-output mapping. And, of course, they tend to be highly biased.
Replies indicate that people are very confused about what "bias" means here. It means doing pattern recognition based on spurious correlations, as opposed to causal reasoning. An ML model will use all correlations found in the training data, and typically many of them will be spurious.
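A toy sketch of that failure mode (all feature names, rates, and numbers below are made up for illustration): train a minimal logistic regression on two binary features, one genuinely causal and one that correlates with the label only in the training set. The model leans on the stronger but spurious cue, and its accuracy collapses when that correlation disappears at test time.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

def make_data(spurious_reliability):
    """Binary classification data with one causal and one spurious feature."""
    y = rng.integers(0, 2, n)
    # Causal feature: agrees with the label 80% of the time (real signal).
    x_causal = np.where(rng.random(n) < 0.8, y, 1 - y)
    # Spurious feature: agrees with the label at the given rate.
    x_spurious = np.where(rng.random(n) < spurious_reliability, y, 1 - y)
    return np.column_stack([x_causal, x_spurious]).astype(float), y

# In training data the spurious feature is *more* predictive than the causal
# one; in test data the spurious correlation vanishes entirely.
X_tr, y_tr = make_data(0.95)
X_te, y_te = make_data(0.50)

# Minimal full-batch logistic regression (no external ML library).
w, b = np.zeros(2), 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(X_tr @ w + b)))
    grad = p - y_tr
    w -= 1.0 * (X_tr.T @ grad) / n
    b -= 1.0 * grad.mean()

def accuracy(X, y):
    return float(((X @ w + b > 0) == y).mean())

print("weights (causal, spurious):", w.round(2))          # spurious weight dominates
print("train accuracy:", round(accuracy(X_tr, y_tr), 2))  # high
print("test accuracy: ", round(accuracy(X_te, y_te), 2))  # collapses toward chance
```

The model is not "wrong" by its own lights: the spurious feature really was the best predictor in the data it saw. Only a causal account of *why* the features relate to the label would tell it which correlation to trust.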
Training them is System 2, evaluation is System 1?
Can someone explain this Systems 1 & 2 reference?
By the way, have you heard about causal inference and @yudapearl's work, @fchollet?
That logical, unbiased System 2 reasoning might be where the non-ML, symbol-manipulation style of AI comes into play, no?
Correct, and this is how we QA an ML algorithm: we "trick" it. I guess that's valid for System 1 too. Could you name three QA 'tools' older than 500 years? :)
Insightful thread
With rule-based systems (remember them?), in forward chaining we test our inputs against a predetermined rule set, while in backward chaining we move from a set of known conditions back to a proposition. In a 'learning' system, increased data leads to better outcomes.
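Those two control strategies can be sketched in a few lines (a toy engine with made-up animal-identification rules, not any particular expert-system shell): forward chaining fires rules from the facts until no new fact can be derived; backward chaining recurses from a goal proposition back to known conditions.

```python
# Toy rule base: each rule is (set of antecedent facts, derived fact).
RULES = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
    ({"carnivore", "tawny", "dark_spots"}, "cheetah"),
]

def forward_chain(facts, rules):
    """Data-driven: fire every rule whose antecedents hold, until fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Goal-driven: recurse from the proposition back to known conditions."""
    if goal in facts:
        return True
    return any(all(backward_chain(a, facts, rules) for a in antecedents)
               for antecedents, consequent in rules if consequent == goal)

observed = {"has_fur", "gives_milk", "eats_meat", "tawny", "dark_spots"}
print(forward_chain(observed, RULES) - observed)   # derives mammal, carnivore, cheetah
print(backward_chain("cheetah", observed, RULES))  # True
```

Note the contrast with a learned model: every derived fact here traces back through an explicit, inspectable chain of rules, so nothing spurious can sneak in, but nothing outside the hand-written rule set can be concluded either.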
These are not random choices.