CloudFront and ELB are easily two of the biggest TLS/SSL things at Amazon, and I'd previously worked on OpenSSL-related things like Apache's mod_ssl, so when the issue went public ... I was one of the first people paged. I was on the 14th floor of our Blackfoot building.
-
It was very, very quickly evident that Heartbleed wasn't like other vulnerabilities. Normally there's a window between a vulnerability going public and exploits being crafted, but Heartbleed was so easy to exploit that it took just minutes of poking around.
-
Heartbleed was a memory disclosure vulnerability, which in theory is supposed to be less significant than a remote execution vulnerability, but this was scarier than any bug I'd ever seen. XKCD has an explainer ... https://xkcd.com/1354/
-
The TLS protocol had been extended to include a "Heartbeat" extension. It was intended for keep-alives and MTU discovery for DTLS, which uses UDP, but OpenSSL had included it in regular TLS too (which uses TCP). pic.twitter.com/IjhygwcXJV
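Roughly, the heartbeat message on the wire looks like this (a C-style sketch based on RFC 6520, not code from the thread):

    /* Sketch of the RFC 6520 heartbeat message layout (illustrative only). */
    #include <stdint.h>

    struct heartbeat_message {
        uint8_t  type;            /* heartbeat_request(1) or heartbeat_response(2) */
        uint16_t payload_length;  /* big-endian on the wire, chosen by the sender */
        /* ... followed by payload_length bytes of payload,
           then at least 16 bytes of random padding ... */
    };

The peer is supposed to echo the payload back in a heartbeat_response.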
-
And at bottom, the bug was simple: you send a small amount of data and ask the server to send you back up to 16K of data, and it would send back 16K of decrypted plaintext from memory. URLs, passwords, cookies, credit cards, just about anything could be in there. Ouch ouch.
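Boiled down to a simplified sketch (not the literal OpenSSL code; the function and variable names here are made up for illustration), the vulnerable pattern was: trust the length field in the request when building the response.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Simplified sketch of the bug: payload_len comes from the client and
     * is never checked against req_len, the number of bytes that actually
     * arrived in the record. */
    unsigned char *build_heartbeat_response(const unsigned char *req, size_t req_len)
    {
        const unsigned char *p = req;
        uint8_t  type = *p++;                                    /* heartbeat_request */
        uint16_t payload_len = (uint16_t)((p[0] << 8) | p[1]);   /* attacker-supplied */
        p += 2;
        (void)type;
        (void)req_len;   /* the bug: req_len is never consulted */

        unsigned char *resp = malloc(1 + 2 + (size_t)payload_len + 16);
        if (resp == NULL)
            return NULL;
        resp[0] = 2;                                  /* heartbeat_response */
        resp[1] = (unsigned char)(payload_len >> 8);
        resp[2] = (unsigned char)(payload_len & 0xff);
        /* Copies payload_len bytes starting at the request payload; if the
         * client lied about the length, whatever sits in adjacent heap
         * memory goes straight into the response. */
        memcpy(resp + 3, p, payload_len);
        return resp;
    }

A malicious client just sends a tiny record with a large payload_length and gets back a slab of whatever happened to be in the process heap.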
-
OpenSSL was and is very, very widely used; just about everyone was impacted in some way. AWS services, our competitors' services, basically all of our customers in their own stacks. It felt like the internet was on fire.
-
At Amazon we use conference calls for high severity events, usually operational ones; this was declared a security sev-1 (I've never seen another like this). The call leader that day was Kevin Miller. He just happened to be on call, but it worked out well because he had crypto experience.
-
We quickly figured out that we'd be patching everything that day, so an emergency was declared and all AWS software deployments were paused. This is incredibly disruptive, but the call leader has the authority to do it on their own. Our CEO and SVP agreed with the call.
-
Within Amazon, we have our own package system called Brazil. At the time a part of Amazon.com (retail) owned our internal OpenSSL package, but over on ELB we took it over that day and came up with a minimal 2-line hot-patch. Didn't want new risks.
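The thread doesn't show the two lines themselves, but the fix that later went upstream in OpenSSL was essentially a bounds check of this shape; applied to the sketch above, it would read something like:

    /* Discard any heartbeat whose claimed payload length doesn't fit inside
     * the bytes that actually arrived (1 type byte + 2 length bytes +
     * payload + 16 bytes minimum padding), per RFC 6520. */
    if (1 + 2 + (size_t)payload_len + 16 > req_len)
        return NULL;   /* silently discard */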