Jesse Hoogland
@jesse_hoogland
Just another guy failing to align the AI. Neural network physicist & research assistant @ Cambridge.
Amsterdam, The Netherlands · jessehoogland.com · Joined June 2022

Jesse Hoogland’s Tweets

Kittens raised in cylinders of vertical stripes only learn to see vertical stripes. What happens if we raise our children in VR with non-Euclidean geometries?
Image
6
23
Just heard the most horrifying story of a physics professor who thinks imaginary numbers are not actually physical but purely descriptive. How can we entrust these dangerous ideologues to educate the young and impressionable?
1
12
10/ You could have had all that and more, Europe! The ball was in your court! Unfortunately, my grandfather didn’t get the position. Still, today we’ll share a virtual toast to everyone who made the web what it is.
10
9/ P.S. My favorite part of the story takes place years later when my grandfather was campaigning to become director of CERN. During a tour of the member states, he got chewed out by the Spanish contingent for having wasted the opportunity of the web.
1
4
5/ But CERN also realized that they weren’t well-positioned to bring the web to the masses. So Michael Sendall, TBL’s boss, went around Europe pitching the web to various actors from private industry.
Image
1
3
2/ You see, the web was never meant to be open-sourced. Sure, Tim Berners-Lee and the other founders were idealists who believed the web could bridge our divides and that everyone had a right to access.
Image
1
5
The confirmed oldest human in history reached 122 years old. The unconfirmed oldest human who ever lived reached…
  • 120-125
    54%
  • 126-130
    13.8%
  • 131-140
    11.5%
  • 141+
    20.7%
87 votes · Final results
3
2
It’s amazing that there is still no consensus on whether the brain stores information in specific spiking times or in average spiking rates.
Image
1
15
I turned down an AI slowdown this week. Sam Altman was shocked when I told him no & that I want him to accelerate 20% harder instead. What he doesn’t understand is that someone like me just wants the thrill of the race instead of easy AI safety research. I’m just built different
8
EAs: don’t break Chesterton’s fence. Also EAs: I’m going to replace 2/3rds of my diet with “nutritionally complete” liquid meals.
7
78
Evidence for the LessWrong crowd being right about AI: LessWrong currently has the only functional rich text editor in the world.
Quote Tweet
I wonder why no app has figured out rich text editing yet. It’s really simple: you allow two positions on block boundaries, one inside and one outside.
Embedded video (0:19, 94K views)
2
So… late last night, a friend called me in tears. He just lost his job at a 1000+ person paperclip factory. The culprit? ChatGPT. This is a wakeup call to all people made of atoms everywhere (read this to stay alive):
11
It is a small comfort to know that middle management may go before the rest of us
Quote Tweet
ai is going to solve organizational problems far before it’s at the level of top humans. someone’s going to call the “summarize meeting notes” function and gpt will settle a debate with the (undeserved) weight of scientific authority. an “objective” arbiter, automated McKinsey
4
Memorizing facts, quotes, and definitions gets unfair flak. There’s no creativity without synthesis. No generalization without interpolation. So in humans, so in AI.
2
12
The next time you feel like dunking on interpolation, remember that you just don't have the imagination to deal with high-dimensional interpolation. Maybe keep it to yourself and go interpolate somewhere else.
2
7/ Here I've presented the visuals in terms of regression, but the story is pretty similar for classification, where the function being fit is a classification boundary. In this case, there's extra pressure to maximize margins, which further encourages generalization.
Image
1
1
6/ Meanwhile, despite the “interpolation, not extrapolation” framing, NNs can and do extrapolate outside the convex hull of training samples. Again, the bias towards simple linear extrapolation is locally the least biased option. There’s no beating the polytopes.
Image
1
5/ Bonus: this explains double descent. Test loss goes down, then up as you approach the interpolation threshold, where there’s only one (bad) interpolating solution. But as you increase model capacity further, you end up with many interpolating solutions, and some generalize better.
Image
1