Safely managing rapidly progressing dangerous technologies requires conceptual, theoretical thinking; the common scientific ethos that we should wait for empirical results risks leaving too little time to respond. Nicely put by
Very happy with this episode I recorded with @rohinmshah of DeepMind's safety team which we just dropped.
I ask for his personal opinions on all kinds of issues:
• Case for and against slowing down
• Where he disagrees with ML folks and LWers
• More!
https://80000hours.org/podcast/episodes/rohin-shah-deepmind-doomers-and-doubters/…
This sort of reusable medium-tech personal protective equipment seems like a promising way of allowing life to mostly go on, while still managing to block the spread of a very deadly pandemic:
A PLEA FOR SOLUTIONISM ON AI SAFETY
If you’re pro-technology, it is natural to react to fears of AI doom with anger or disgust. It smacks of techno-pessimism, and could lead to regulations that kill this technology or drastically slow it down.
Safety is an achievement. It is an accomplishment of progress—a triumph of reason, science, and institutions.
We should be proud of it—and we should be unsatisfied if we stall out at our current level.
I sometimes hear claims like "of course GPT-4 couldn't be conscious" in mainstream AI coverage.
I think GPT-4 is very unlikely to have subjective experience, but when I hear that, I feel I learn only that the speaker has no idea how puzzling philosophy of mind really is.
For some people the question of whether machine intelligence will cause persistent human unemployment is like asking whether e.g. new and better cars will cause unemployment.
But to me, it's more like asking whether we'll still be driving the same models of car in 2100.
If you think of human beings as a technical artifact that the economy uses — one that happens to have been miles ahead of the alternatives so far — then they could be superseded and made irrelevant by alternatives, just like any other piece of capital equipment.
(Of course I'm not saying that's how we should think of people morally; people suffer and cars don't.
But it's one framing for thinking about the unemployment question.)
Great indoor air quality is good for health.
This post shows how even a low-energy purifier that takes air from outside, filters it, and pumps it indoors can keep indoor air incredibly clean, because the resulting positive pressure gradient means unfiltered outdoor air can't get in.
Peaceful trade is a huge deal, often in people's interests.
But much of history is also "local warlord taxes peasants to fund the conquest of new regions, in order to enslave some and forcibly tax the others, in order to fund a bigger army to conquer more regions, etc."
I don't think those warlords were unaware of trade.
It's that their ability to do violence to others exceeded the rents they were currently extracting from them, so it was in their selfish interest to use violence (or threaten to do so) in order to steal more stuff.
Fascinating by @KatjaGrace:
“the AI-human relationship is importantly disanalogous to the human-ant relationship, because the big reason we don’t trade with ants will not apply to AI systems potentially trading with us: we can’t communicate with ants, AI can communicate with us”
"Wastewater surveillance is one of the few tools that we can use to prepare for a pandemic and I am pleased that it is expanding rapidly in the US and around the world.
Every major sewage plant in the world should be doing wastewater surveillance..."
We also discuss 6 new orgs they've supported recently:
1. Dispensers for Safe Water
2. Syphilis screening and treatment in pregnancy by Evidence Action
3. Kangaroo Mother Care
4. MiracleFeet
5. Alliance for International Medical Action
6. HKI's vitamin A supplementation work
It's always lovely to check in with the folks who, like GiveWell, can confidently point to the concrete benefits of their work. 😁
GiveWell has changed a lot more in the last few years than I'd realised (much bigger, more research and recommendations):
My new book BEING HUMAN explores unexpected ways our biology has shaped world history. In this @Waterstones blog I reveal one such chain of effects: how a particular defunct gene affected the outcome of the Battle of Trafalgar and gave birth to the Mafia.
https://waterstones.com/blog/lewis-dartnell-on-the-surprising-intersections-of-human-biology-and-history/…
The Cognitive Revolution on 'moats' for AI companies:
"Nathan and Erik analyze the moats of the most powerful companies in AI. ... the big players have key competitive advantages that can be examined from many different angles."
This is a massive cliché, but on Twitter people talk to one another with a degree of contempt they would never contemplate using in an in-person conversation, and it's a bad thing.
In general I'm a huge fan of the Ezra Klein Show since he moved to the NYT, and today's episode is one of my all-time favs.
Includes incredible stories of good intentions failing due to implementation details:
Good:
"Octopuses, crabs and lobsters will receive greater welfare protection in UK law following an LSE report which demonstrates that there is strong scientific evidence that these animals have the capacity to experience pain, distress or harm."
IMO there's a huge amount of work to be done re AI safety and an unclear amount of time to do it.
1. Figure out testing, evaluations, liability, etc.
2. Harden infosec.
3. Prevent access and misuse by terrorists or lunatics.
4. Alignment research.
5. Figure out the moral status of AI minds & how to avoid harming them.
(6. And various other things.)
Collectively this requires a heroic effort by many parties.
There are ways to contribute on here, but most of this is not going to be done on Twitter or by the 'Extremely Online'.
The interview with GiveWell's CEO on the 80,000 Hours podcast is well worth your time if you're wondering what they've been doing lately! They've been making grants beyond malaria bednets and are adding staff in all areas.
It's been disappointing to see how many of the media articles have focused almost exclusively on the tech CEOs that signed the CAIS statement on AI risk.
To my mind the headline is that >100 AI professors signed it, and it's NOT just "the usual suspects".