I didn’t have an inside look at yours, but from the outside, everything happening seemed pretty textbook. Vague goals, unachievable constraints, poor management, no deadlines, distraction, senior leadership working on other projects.
No incremental success metrics, no way to know whether you were getting anywhere or not, no pressure to win. I am reminded of friends of mine whose lives have been ruined by trust funds.
Every organization that remains dysfunctional has a good set of excuses for why that happened by the way. It’s the easiest thing on earth, and the most human. It’s a lot harder to say “well, we screwed up. What do we do differently now?” and then do it.
Yep, world-class existing CEOs didn't line up to produce a world-class functional organization; we ran with the people we had, we bit bullets and took consequences. You're still not naming a single thing we could have done differently.
Perry’s taking the very standard heuristic “if you don’t know what to do, at least pick a tractable sub-problem and do incremental work that builds skills/techniques and demonstrates progress in public.” This is indeed how almost all science and engineering works.
The very natural thing to do by this heuristic is, as soon as you think existing ML models are at all relevantly analogous to the kind of AI you’re worried about, try to do basic interpretability on them. Like Chris Olah’s “circuits” stuff or the SolidGoldMagikarp thing.
And that predictably wouldn't get far enough in time, and furthermore other people are trying it as I predicted they would. Earth without Eliezer does this too; it didn't need an Eliezer on that particular failed effort.
Right, I understand that. Valid to swing for the fences, I guess. (Though the historical examples I know of pre-paradigmatic science/theory with no incremental progress metrics are like…the pre-Socratics and Pythagoreans, who did real stuff, but not *fast*.)
Back in 2012 I made my peace with “MIRI doesn’t look like normal science, but that’s OK because it’s early/pre-paradigmatic, and this is what it looks like when fields are being born.”
I still don’t get why, when AlphaGo came out in 2016 and you guys decided AGI looked like incremental progress on *that*, you also thought the agent-foundations approach had a shot at working fast. Not my business, of course, but it was weird.
I think I knew that? I knew about the “ontological crisis” thing. I would have expected someone to approach that experimentally, though! Like, capsule networks seemed to be a super-primitive start…