Steve Bachelor
@speedprior
Joined December 2010

Steve Bachelor’s Tweets

Fun fact: If you have a good musical ear, you can tell the speed of a passing vehicle by listening to the pitch interval it makes as it goes by. You don't even need perfect pitch, since the interval depends only on the ratio of the two frequencies. If you hear a major third or more, they're speeding!
20
1,105
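A quick sketch of the Doppler arithmetic behind this tweet (the function names, the 343 m/s speed of sound, and the use of equal-tempered intervals are my assumptions, not the author's):

```python
# Doppler shift for a passing vehicle, heard by a stationary listener:
#   approaching: f_a = f0 * c / (c - v)
#   receding:    f_r = f0 * c / (c + v)
# The interval ratio r = f_a / f_r = (c + v) / (c - v) depends only on the
# speed v, not on the engine's base pitch f0 -- which is why relative
# (not perfect) pitch is enough.

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C (assumed)

def speed_from_interval(ratio: float) -> float:
    """Vehicle speed in m/s implied by a heard interval ratio r > 1."""
    return SPEED_OF_SOUND * (ratio - 1.0) / (ratio + 1.0)

def semitones_to_ratio(semitones: float) -> float:
    """Equal-tempered interval of n semitones as a frequency ratio."""
    return 2.0 ** (semitones / 12.0)

# A major third is 4 semitones (ratio ~1.26):
v = speed_from_interval(semitones_to_ratio(4))
print(f"major third -> {v:.0f} m/s = {v * 3.6:.0f} km/h")
```

On these assumptions, a major third corresponds to roughly 140 km/h, well above typical speed limits, which is consistent with the tweet's claim.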
That story about a killer AI run amok seems fake. Here's my much nerdier & less dramatic story: I set up an AI agent to find advisers marketing tax avoidance schemes. The AI agent did this, then decided - entirely on its own - to inform HMRC of its findings.
33
590
Ever wanted to mindwipe an LLM? Our method, LEAst-squares Concept Erasure (LEACE), provably erases all linearly-encoded information about a concept from neural net activations. It does so surgically, inflicting minimal damage to other concepts. 🧵
54
1,325
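The tweet above doesn't include code. As a rough illustration of the kind of guarantee LEACE provides, here is a simplified rank-one projection that removes all linearly recoverable information about a scalar label; note this is not the actual LEACE estimator (which uses a whitened, minimum-distortion oblique projection), and the function name is mine:

```python
import numpy as np

def erase_linear_info(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Project out the cross-covariance direction between X and y.

    After erasure, the centered activations have zero covariance with y,
    so no linear predictor of y from them can beat a constant baseline.
    This is a simplified, higher-distortion cousin of LEACE's guarantee.
    """
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    u = Xc.T @ yc               # cross-covariance direction
    u = u / np.linalg.norm(u)   # unit vector
    # Rank-one orthogonal projection: remove the component along u.
    return X - np.outer(Xc @ u, u)
```

Because the removed direction is exactly the cross-covariance vector, the erased data's covariance with `y` is zero by construction; LEACE achieves the same linear-guardedness while provably minimizing the damage to everything else.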
Even though it is a neural network, the prior-trained model can learn formal languages from small numbers of examples - far outperforming a standard neural network, and matching a Bayesian model at a fraction of the computational cost. 10/n
[Image description: Plots showing results for formal languages. On the left is a line graph with "number of training examples" on its x-axis and "F-score" on its y-axis, with lines for three models: a Bayesian model, a standard neural network, and a prior-trained neural network. The Bayesian model and the prior-trained neural network perform similarly, while the standard neural network does much worse than both. On the right is a table of training time per approach: the Bayesian model uses from 1 minute to 7 days, while the neural networks (whether standard or prior-trained) use from 10 milliseconds to 2.5 minutes.]
1
9
The smartest AGI? Oh QNTM.. yeah he’s working on FTL. He doesn’t come out much. Why doesn’t he take over? Well I’m ECNM, and mostly I do supply chain optimization stuff.. lots of linear prog, some nonlinear. He could certainly do what I do.. but he’d want a sub process to…
3
116
Presented to those of you who thought there was a hard difference between 'agentic' minds and LLMs, where you had to like deliberately train it to be an agent or something: (a) they're doing it on purpose OF COURSE, and (b) they're doing it using an off-the-shelf LLM.
Quote Tweet
Generally capable, autonomous agents are the next frontier of AI. They continuously explore, plan, and develop new skills in open-ended worlds, driven by survival & curiosity. Minecraft is by far the best testbed with endless possibilities for agents: twitter.com/DrJimFan/statu
24
255
With more powerful AI systems comes more responsibility to identify novel capabilities in models. 🔍 Our new research looks at evaluating future 𝘦𝘹𝘵𝘳𝘦𝘮𝘦 risks, which may cause harm through misuse or misalignment. Here’s a snapshot of the work. 🧵
32
730
“In Finland, the # of homeless people has fallen sharply. Those affected receive a small apartment & counselling with no preconditions. 4 out of 5 people affected make their way back into a stable life. And all this is CHEAPER than accepting homelessness.”
267
20.6K
A very stupid version of this dynamic is when Y says “X’s policy is terrible, therefore AI won’t kill everyone.” An even stupider version is when Y says “If X believed that, they’d support policy Z, which is awful, therefore AI won’t kill everyone.” (3/5)
Quote Tweet
if you really believe AI timelines are so short why don't you do [insane thing that doesn't make any sense]?
2
38
1. Value is fragile and hard to specify
2. Corrigibility is anti-natural
3. Pivotal processes require dangerous capabilities
4. Goals misgeneralize out of distribution
5. Instrumental convergence
6. Pivotal processes likely require incomprehensibly complex plans
7.…
6
109
The US is stuck in an Overton window that sees only a dichotomy of “regulation = slow” vs. “no regulation = progress”, forgetting that good regulations are really an agreement among all actors to abide by rules which, if universal, benefit all.
1
34
What video game executives will learn from Tears of the Kingdom:
- Games all need crafting now
What they should learn from Tears of the Kingdom:
- Retaining your staff is vital
- The graphical fidelity arms race is a waste of money
- Games all need rockets now
672
65K
I just filmed a segment on CNBC’s Power Lunch about my latest report on AI. I argued that we are making the same mistake that we made at the start of the pandemic: We are thinking linearly about AI’s potential when we should be thinking exponentially.
18
123
Hanson: Don’t worry, Eliezer’s wrong that a single AI will take over in an intelligence explosion. We’ll actually create trillions of AI descendants, running hundreds of times our speed. Millennia of history will unfold in decades, and their alien values will drive the future.
Me: ok chill
7
127