Apparently you need to tell ChatGPT both to use probability theory and to "think about this step by step" for it not to fall for the conjunction fallacy.
And even then it gets to the right answer via wrong reasoning. Do we need to assume a probability (0.1) of him being both a librarian and a fisher, after we established that these are probably not correlated? Can't we just calculate it as 0.8 * 0.2? Or am I bad at probabilities?
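The commenter's point can be sketched in a few lines: if the two traits are independent (as the thread established), the joint probability is just the product of the marginals, and no separate 0.1 assumption is needed. The 0.8 and 0.2 values are the ones quoted in the comment above.

```python
# If "librarian" and "fisher" are independent events,
# P(librarian AND fisher) = P(librarian) * P(fisher).
p_librarian = 0.8  # marginal probability quoted in the thread
p_fisher = 0.2     # marginal probability quoted in the thread

p_both = p_librarian * p_fisher
print(p_both)  # 0.16 -- lower than either marginal, as the
               # conjunction rule requires (no 0.1 assumption needed)
```

The conjunction fallacy is exactly the mistake of rating `p_both` higher than one of its conjuncts; with independence the product form makes it obvious that this can never happen.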
It's definitely cool, but it's not "using" anything. It's a language model. The prompt simply produces that output. There is no inherent "understanding" of the underlying process.