Forgot to address the "desirable" part. A better example would be something genuinely contested, like nuclear vs. solar/wind power. How would Google decide what kinds of pages are more desirable to show people? I personally don't want Google to become the arbiter of "truth".
-
But they aren't "deciding". They use signals. Sometimes those signals align with an individual's views, sometimes they don't. What we definitely don't want is some kind of dictated "equal representation" nonsense criteria to make results "fair".
-
For example, it's logical and desirable that Google results should favor true statements. Fact checkers have established that, statistically, Republicans lie more than Democrats. Therefore, if Google favors truth, it would *implicitly* favor Democrats, not due to any bias per se.
-
To get a little pedantic, those statistics come only from the Republican and Democratic politicians who have actually been reviewed. It would be challenging and interesting to determine the results for the general population on a large scale using fMRI.
-
It would, though I think that'd be less relevant for search results. I posit that the people writing influential articles and pages are largely either partisan media closely aligned with politicians, or similarly aligned influencers, so the statistics probably still apply.
-
For that purpose, yes, fact checkers & what politicians say are important. I was just being pedantic about the generalized part of your statement. :) Many people repeat untrue things. That doesn't make them liars if they believe it's true, but it's still spreading lies. Hence the fMRI.
-
A liar that managed to convince themselves that what they're saying is the truth is still a liar. Lying is probably more complicated than what would show up in a point-in-time fMRI :-)
-
That's not what I'm talking about. If you take a position based on information that you trust, not knowing that the information is wrong, does that make you a liar?
-
No, I know, I'm just saying that someone who convinces themselves of a lie will probably end up with a similar-looking fMRI response. Your example = no liar, my example = liar, but same response. Pathological liars don't work like normal people when telling lies.
-
I'm not sure about that. Pathological liars can sometimes get past traditional lie detectors, and while they might not give a conventional lie response on an fMRI, they might just look completely different. But self-convinced normal people ("My child is good!") might fail in the normal way.
-
It would be interesting to see some studies about this, certainly.