I see this a lot: well-meaning people who want to support individual freedom online but also don't like the impacts. "It's the advertising" or its close cousin "it's the algorithm" are easy outs. There is no easy out: giving people the freedom to express and communicate creates risk.
That doesn't mean you can't reduce the risks with smart product design or operational processes, but there are irreducible downsides to low-friction mass communication, and the sooner we accept that, the sooner we can discuss the tradeoffs.
I pinned this for a reason: https://twitter.com/alexstamos/status/1070089951532830720
End of conversation
New conversation
I think both arguments are straw men. Some communication models can amplify harm by orders of magnitude, especially models that benefit from maximum engagement and strive toward it. WhatsApp doesn't; Facebook, Twitter, etc. do.
Besides, WhatsApp has been making money by sharing metadata with Facebook for a while now. Claiming that WhatsApp's design isn't affected by the ad revenue model isn't true.
Incorrect.
The sharing-metadata part or the making-money part?
There was a change in the ToS to allow for data exchange (which would allow for WA ads powered by FB data), but thanks to all the legal issues around that, the technical side froze. To my knowledge, there has been no progress toward revenue from the FB/WA data exchange.
Plus, pretty much every product design decision made in WhatsApp makes revenue generation harder, so this point is moot, irrelevant, and once again a way to avoid the difficult trade-offs.
Fair enough. Even if the FB/WA exchange doesn't go forward, the proposal itself shows a clear intent on FB's part to make money off of it. Intent drives design, so I don't agree with the "every decision made it harder" part. My original point stands too: the model makes the whole difference.
E2E encryption (the only decision I think WhatsApp made against the ad model) made making money off of WhatsApp harder, but not that hard. WhatsApp still has significant metadata (conversation hours, contact lists, locations, group names) and no promise on their part not to monetize it.
New conversation
WhatsApp might be a lot more important to Facebook's revenue than you might think. Location/behavioral data is invaluable to the rest of what they do, so I don't think it's fair to characterize it like this.
When I left, Facebook was getting no value from WhatsApp's limited data. Finding a revenue model compatible with E2E encryption is a big focus there, and I hope they figure it out as I discussed elsewhere: https://twitter.com/alexstamos/status/1045045964245950465
New conversation
This point is essential and should have been understood ages ago from unmoderated Usenet groups. Advertising models and algorithms create some troubles, but dealing with them is seriously insufficient for creating democracy-enhancing public discussion.
I often feel that everything is derivative of Usenet, only now it's a monetized, walled garden. Investment priorities are in "growth," not in the quality or health of the community. Asking "how have we learned to do better?" is fair, and I think the answers should disappoint us all.
End of conversation
New conversation
You mean the deception involved in the 2016 global privacy policy change isn't paying off?
pic.twitter.com/hbsYiiFqfb
Advertising has downsides outside this topic, and it may also influence product design in ways adverse to it. But primarily, this is a social issue. Tech can help or hurt, but we cannot ignore that it is, at its core, about social norms, incentives, and individual decision-making within them.
We're going to have to learn to deal with a host of issues: cyberstalking, bullying, SWATing, targeted propaganda, invasion of privacy, general loss of privacy, and the deep memory of online history (mine goes back to the 1970s). We need to take a more active role in learning the lessons.
We need to use tech where it helps guide behavior in positive directions, and beware tech that rewards bad behavior. Yet we need to beware of social manipulation itself, algorithmic or human.
I consciously engage in social manipulation every day. What I choose to Like, what I forward, whom I block for bad behavior, whom I ignore. I consider what example I set, what norms I contribute, in how I word my messages. I try to model the civil dialog I want to see.
Of course, as one person my impact is small, but positive feedback amplifies. We get cesspools because of positive feedback for bad behavior, but it works for good as well. But it can't be just about "one person". We need communities. And we have them, to a degree.
Through my follows and interactions I have aligned myself with communities engaged with issues in energy, nuclear disarmament, government ethics, civil rights, and more. I have disengaged from groups with narrow ideologies that reward toxic behavior.
I moderate a large discussion group on FB with similar views. But I think our conception of communities, and our support for them, is still nascent. Identifying a community is a slow process. Joining a community involves establishing a role, building trust, and building a history.
It takes time to determine that there IS a community around a certain issue. The undifferentiated feed makes it harder to identify who is a participant and, likewise, who is a troll. I don't know what to build, but I know a social change we need.