about AI hype, I argue that the most worrisome uses of ChatGPT aren't malicious ones but rather everyday people and organizations using it to cut corners due to the pressures they face, like CNET's misguided experiment.
says. It's messy, it needs input from civil society, it's constantly shifting and culture-specific.
Because of that, he says, "AI has no role here." /end
says, particularly ones that aim to predict social outcomes -- such as in hiring and criminal risk predictions.
These types of AI should be considered "illegitimate and morally impermissible." /4
doesn't believe that ChatGPT creates a doomsday scenario that destroys jobs and harms the economy - as some have warned.
Even larger innovations such as the internet and smartphones haven't done that. This is not a "sky is falling" scenario, he says. /3
ChatGPT's tenuous relationship with the truth means that it can be a dangerous creator of misinformation.
Add economic pressures that force institutions to cut costs and rely on automated content creation and you have "a recipe for disaster," says
ChatGPT is great at its job. But its job is to be a bullshit generator -- an automated creator of plausible lies.
"It is very good at being persuasive, but it’s not trained to produce true statements"
There is a lot of work to be done to counteract rising political violence and the speech that incites it.
But first we have to be clear about the fact that incitement is often based on fear and does not always require hate. My newsletter here:
Tech platforms could also do more by directing less attention to controversial but not dangerous hate speech and focusing instead on truly dangerous speech,
What is the solution to dangerous speech? Counterspeech. "Influential people of various spheres need to refrain from dangerous speech themselves and denounce it when other influential people use it,"
"In the United States at the moment, there is, at minimum, a striking and alarming shift in the extent to which dangerous speech is used and condoned by political leaders and other influential people,"
This kind of dangerous fear-mongering is on the rise in the United States. Think of the myth of the "great replacement," which claims there is a conspiracy to replace White people -- a myth that has prompted violence.
Prior to the Rwandan genocide in 1994, Hutu politicians warned the Hutus that they were about to be exterminated by Tutsi cockroaches.
Prior to the Holocaust, Nazi propagandists declared that Jews were planning to annihilate the German people.
It is fear, more than hate speech, that often leads to mass violence. Leaders who seek to incite violence often create fake threats so that people will feel they must defend themselves.
This week, new details emerged about how social media platforms allowed violent rhetoric to circulate freely in the weeks leading up to the Jan. 6 insurrection.
Their working conditions reveal a darker side to the AI boom: that AI often relies on hidden, low-paid human workers who remain on the margins even as their work contributes to a multibillion-dollar industry. (6/8)
She hopes FB users will eventually be offered a yes/no option for ad profiling. Of course, even then, we could just end up with more annoying take-it-or-leave-it pop-up boxes that are not a true choice.
As is always true with consent, it must be freely given to count. /end
filed a lawsuit against Meta in the UK demanding that it comply with her denial of consent to profiling.
My interview with her in today’s newsletter /7
But Ireland’s economy is heavily dependent on Big Tech. And its regulator has a backlog of GDPR cases that have waited years for resolution.
As activist
fined Meta €390 million for not getting proper consent before profiling FB & IG users. It was hailed as a huge victory for the EU's landmark privacy law, GDPR, but sadly it may not change how you are profiled. /2
Let’s talk about consent. Do you feel like you ever properly consented to being surveilled online constantly, having a profile built of your interests and having that profile made available to anyone who could pay for it?
EU regulators don’t think so either. /1
All the more reason to keep reading Ryan Mac, Donie O'Sullivan, Drew Harwell, Matt Binder, Micah Lee, Aaron Rupar, and Keith Olbermann on their websites.
Subscribe & Support journalism.
Keep accountability reporting alive
SCOOP w/ @CaseyNewton: Twitter is working on a plan to force users to opt in to personalized ads & share their location data. It's considering letting those who pay for Twitter Blue opt out of data sharing — a decision that would likely anger Apple: https://platformer.news/p/twitters-risky-plan-to-save-its-ads…
Thank you @JuliaAngwin and @RinaPalta for taking a chance on me. And to my crew for holding it down: @colinlecher @ToddFeathers @alfredwkng @tenuous @jonkeegan @ghongsdusit
- an incredible reporter with unmatched commitment to exposing harms against the most vulnerable.
Her series Working for an Algorithm sadly gets more relevant by the day. https://themarkup.org/series/working-for-an-algorithm…
Today is my last day at @themarkup, where I can’t believe I was lucky enough to work for two years.
I got to learn from a massively talented crew and was encouraged to write stories that highlighted worker voices. I’m so grateful.
(I’ll say what’s coming next in the new year)
I tried the viral Lensa AI portrait app, and got lots and lots of nudes. I know AI image generation models are full of sexist and racist biases, but this one really hit home. My latest story for @techreview: https://technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/…
"China’s surveillance state is like a panopticon, where the idea that you’re being tracked is more effective than the actual tracking mechanisms," @lizalinwsj tells me in this week's newsletter with @joshchin:
https://themarkup.org/newsletter/hello-world/inside-chinas-surveillance-panopticon…
Social control is the entire point of surveillance. So it doesn't matter *too much* how well it works. It matters more how afraid people are of being caught.
That's why AI is a great surveillance tool. It's not always accurate but it can be good enough to scare people.