labml.ai
@labmlai
Joined December 2020

labml.ai's Tweets

Found a bug in the latest version (1.5.6) of the Prompt Art iPhone app that crashes the app whenever you tap the "create similar" button. This happens on iOS 16.x. We submitted a build with a fix to the App Store and will update once it is approved.
[Translated from Japanese] A roundup of Deep Learning paper implementations: github.com/labmlai/annota — a collection of simple PyTorch implementations of neural networks and related algorithms. New implementations are added every week.
We reverted to Stable Diffusion 1.5. It is still possible to pick v2.0 on our web playground (promptart.labml.ai/playground); the iPhone app uses v1.5 (apps.apple.com/ca/app/prompt-), and we will add the option to choose in the next release 👇🧵
Quote Tweet
Just updated apps.apple.com/ca/app/prompt- and promptart.labml.ai to use Stable Diffusion 2.0. It seems much better at understanding the prompt. Follow @PromptArtApp for updates
Just updated apps.apple.com/ca/app/prompt- and promptart.labml.ai to use Stable Diffusion 2.0. It seems much better at understanding the prompt. Follow for updates
Quote Tweet
Excited to announce the release of Stable Diffusion 2.0! Many new features in v2: • Base 512x512 and 768x768 models trained from scratch with new OpenCLIP text encoder • X4 upscaling text-guided diffusion model • New "Depth2Image" functionality Blog: stability.ai/blog/stable-di
So exciting to see our work on FlashAttention having an impact beyond NLP models 🚀🚀🚀! Thanks for the integration effort
Quote Tweet
Speed up Stable Diffusion by ~50% using flash attention 📝 Annotated implementation: nn.labml.ai/diffusion/stab 🖥 GitHub: github.com/labmlai/annota We got close to a 50% speedup on an A6000 by replacing most of the cross-attention operations in the U-Net with flash attention 🧶👇
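The flash-attention tweet above describes swapping the U-Net's cross-attention for a fused kernel. A minimal sketch of what such a swap can look like — this is not the labml implementation; it assumes PyTorch >= 2.0 and uses the built-in `F.scaled_dot_product_attention` (which dispatches to a FlashAttention-style fused kernel on supported GPUs) as a stand-in, with a naive fallback for comparison:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossAttention(nn.Module):
    """Simplified single-head cross-attention block (illustrative only)."""

    def __init__(self, query_dim: int, context_dim: int, inner_dim: int = 64):
        super().__init__()
        self.scale = inner_dim ** -0.5
        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
        self.to_out = nn.Linear(inner_dim, query_dim)
        # Fused attention is available from PyTorch 2.0 onward.
        self.use_fused = hasattr(F, "scaled_dot_product_attention")

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
        if self.use_fused:
            # Fused path: never materializes the full (L_q x L_k)
            # attention matrix, which is where the speed/memory win comes from.
            out = F.scaled_dot_product_attention(q, k, v)
        else:
            # Naive path: O(L_q * L_k) memory for the score matrix.
            attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
            out = attn @ v
        return self.to_out(out)

# Shapes roughly matching a Stable Diffusion U-Net cross-attention layer:
# latent tokens attend to CLIP text embeddings (77 tokens, dim 768).
x = torch.randn(1, 4096, 320)
context = torch.randn(1, 77, 768)
out = CrossAttention(query_dim=320, context_dim=768)(x, context)
print(out.shape)  # torch.Size([1, 4096, 320])
```

The layer names (`to_q`, `to_k`, `to_v`, `to_out`) and dimensions are illustrative assumptions, not the repository's actual code; the real implementation is multi-head and linked in the tweet.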
[Translated from Japanese] A collection of PyTorch implementations of recent and major deep learning research — this one is really good: nn.labml.ai. Rather than just posting code on GitHub with a few comments, it explains on a dedicated site how the equations map to the parts of the code that implement them, which makes it easy to follow even in unfamiliar fields.