(108) My wife, by the way, looking over my shoulder, tells me "you should really have your offer to write takes grow with the logarithm of the likes you get". She and my girlfriend are now debating the best base for this logarithm. imo this counts as a take on effective altruism.
(109) If 'slightly weaker AI' isn't really a thing and there are large unpredictable discontinuities in AI capabilities for any reason, then I think we are probably going to fail AI alignment and all die; all the plans that I have heard that might work assume not that.
(110) If 'slightly weaker AI' is a thing, then I think some of the plans I've heard in the broad category 'use slightly weaker AIs and idk some formalization of heuristic arguments Paul came up with to align slightly stronger AIs' seem kind of promising and might work fine.
(111) I find it kind of tempting, given this state of affairs, to go "okay, assuming no discontinuities..." but Eliezer Yudkowsky will be so disappointed in me, so I don't do that.
(112) The arguments for 'definitely discontinuities' seem pretty tenuous to me, though.
(113) I really don't think anyone's going to solve interpretability enough that this just itself solves alignment though I tentatively think if you're not pushing the state of the art in capabilities it's worth someone spending five years trying.
(114) imo working on AI capabilities right now is an understandable thing to do, but a very bad one. I could imagine someone working on capabilities having a justification that felt to me like it was persuasive but the existing people seem to have much worse justifications.
(115) I love some things about silicon valley tech culture, but I think it's pretty destructive as the default for AI companies to be operating from.
(116) It kind of seems like there's a weird and sort of stupid degree of important people making AI-related strategic decisions not even understanding what other important people think about AI strategy.
(117) This is never trivial to resolve because it, again, tends to bottom out in some incredibly detailed technical debate, but it's definitely a very obvious way we're doing much worse than it feels like we intuitively could be.
(119) When I last said this, someone told me that I was trying to convey 'everything's under control, we're doing fine'. The AI situation seems very not at all under control to me and I think we're doing very badly. I just think people should know why other people do things.
(120) Young EAs who become convinced we have an AI disaster on our hands often go looking for AI safety orgs that are hiring. The ones that think we have an incredibly hard problem on our hands generally aren't hiring much. The ones that think the problem is easy are hiring.
(121) This means that lots of people who want to work on safety go work at whichever organization thinks safety is easiest. This seems bad.
(122) I expect the future to get extremely weird, and in some ways good (rich, productive, inventive) and in some ways very bad (turbulent, confusing, lacking good reasoning that makes sense of everything, awash in AI-generated reasoning), before we reach a critical point for AI.
(123) If that happens, it seems good for AI safety people who have been predicting it to try to explain what's going on to people and earn a reputation for being right.
(124) Okay enough AI takes, global health and development for the next hundred or so. The global health research field has its issues and plenty has been written about them, but I deeply admire many of the researchers I have met through their global health work.
(125) I think for the most part people are thinking hard about real, deeply important problems, trying things, talking pretty openly about which solutions they think work or don't and what's going on there, etc.
(126) The big places where incentives seem deeply unhealthy are the incentives to invent something new instead of deploying something slightly better, and the (related) thing where complicated things are more fun but don't tend to scale.
(127) The rule of thumb here is pretty famous and pretty simple: it scales if it's hard to get wrong, not hard to get right. If it takes unusual skill or discretion to implement, it won't scale.
(128) We haven't even exhausted the gains from doing things that definitely scale fine, but it still seems like a huge problem if we can't scale complicated things because many important ingredients of a better life - especially education - are pretty complicated.
(129) I would love to see more research focused on instructions to public servants implementing programs, trainings for those public servants, incentives and payment systems -- what works to get hard things done at scale?
(130) Mass deworming programs seem great in high worm prevalence areas, meh in low worm prevalence areas. I think a lot of people just love "debunking" global aid and that means their readers/listeners end up with really misleading impressions of what's going on.
(131) At the same time, mass deworming in low prevalence areas really does seem pretty meh, I don't personally donate to it, and even though I think GiveWell was pretty clear about their rationale for the rec it clearly took lots of people by surprise so be clearer I guess.
(132) I think that a lot of Western criticism of aid is focused not on the beneficiaries but on stupid local political point-scoring. I get angry when I read criticisms of aid that do not focus on the aid failing to improve the lives of the recipients.
(133) I get especially annoyed by criticisms that seem to use what Jai delightfully termed the Copenhagen Interpretation of Ethics, the idea that interacting with a problem makes you culpable if you don't fully solve it.
(134) Some people think that Good Ventures doesn't give more to global health and development in anticipation of other donors/that if there credibly weren't other donors they'd give more. I basically think this is incorrect?
(135) The calculations about the last dollar are super complicated but my sense is that mostly people expect that they'd get less good done with all the money in total by spending more on GiveWell top charities, even when those have a real funding gap.
(136) That said, this probably changes as the amount of money EA has to give away swings wildly with the fortunes of a small number of specific people. (I always want to ask why they aren't more hedged but it seems like a rude Q for people who have def thought about this.)
(137) Personally I give to global health stuff. I would probably give to x-risk stuff if I knew of a good x-risk funding thing that for some reason other people couldn't touch, but I don't.
(138) I feel ludicrously lucky I was born here, in the richest country in the world, to an upper middle class family, at the richest time in history, at a moment when my choices really matter. I want everyone to have that.
(139) Except that I don't want anyone's choices to potentially be very important to whether there is any human future at all, that seems like an unhealthy amount of pressure really.
(140) I think I donate to global health and dev for approximately the same reasons I try to be a good mother and a good wife and a journalist with integrity and a good tipper at my favorite coffee shop. I think you can't get hard stuff right if you don't get easy stuff right.
(141) Saving people's lives is important, and it is good, and it is part of living the life in which I am my fully realized self, think clearly, act clearly, and hopefully do a lot of good through my choices.
(142) I also think most human lives are really good! I can't justify this at all; at some point it all comes down to intuition, but I think we should mostly trust people that their lives, which they are living, are worth it to them.
(143) Sometimes people express confusion that we had kids while thinking there's a high chance of something going catastrophically wrong this century. But better to have lived and died than never have lived at all imo.
(144) That said I definitely don't think we should be trying to deliberately increase the population or anything. The population should be whatever people authentically want when they have a choice, and then in the future our options will change radically.
(145) In general a lot of people say really awful things when thinking about population. I think it's important people can make mistakes out loud but I have a hard time not snapping at people when they assert things about 'overpopulation' and how there should be fewer of us.
(146) We don't have overpopulation, the things I care about would mostly be worse if there were fewer of us, and I think that 'overpopulation' worries come from a pretty deeply unhealthy mindset in which people are competition for resources.
(147) Today I saw someone criticizing an EA because one zany hypothetical in some paper the EA wrote is about raising a lot of clones of von Neumann or something. Their critique of this? That given overpopulation, it's an appalling thing to suggest we make more people.
(148) I think people who say and fervently affirm criticisms like that don't think through the implications of a "people shouldn't have the right to make more people" stance for reproductive freedom, but uh - okay, personal note here -
