1/ I had a question about successfully handling high traffic to computationally expensive deep learning models (like DeOldify). I'm pretty new to this myself and therefore may be very wrong, but I do have a take and I'm wondering what others think. Here's what I said:
3/ "Because we knew that would be hard and quite frankly not something we’re comfortable with taking on as a two-person team. Before we decided to go down this route, we were originally going to go with an app that strictly ran on iPhone hardware (6s+)."
4/ "Yes, this actually worked, and quite well (noticeably better than open source). As you can imagine- getting rid of servers like this solves the scaling problem :) We’ve also considered doing a desktop app (haven’t ruled it out, just not right now)."
5/ "Again- deferring huge, expensive computation to the users, hence avoiding a lot of complication on our end. And we’d argue it’s better for the users."
6/ "My hot take on “the cloud” approaches is that they tend to be the default approach to a fault and traditional desktop/local deployments tend to get overlooked even if they make the most sense."
End of conversation
New conversation
Honestly - you saved yourself a huge headache. Deploying GAN models at scale is a big challenge. I found that out the hard way. There is next to nothing written about this since it's so new. Most people writing guides assume your servers will deal with millisecond requests.
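The usual workaround when requests take seconds or minutes instead of milliseconds is to decouple the web request from the inference itself with a job queue: the handler enqueues the work, returns a ticket immediately, and the client polls for the result. Here is a minimal, hypothetical sketch of that pattern (the names `submit`, `fake_colorize`, and the in-memory stores are illustrative, not anything DeOldify actually uses; a real deployment would use something like a task queue backed by Redis):

```python
import queue
import threading
import time
import uuid

jobs = {}            # job_id -> {"status": ..., "result": ...}
work = queue.Queue()  # pending inference jobs

def fake_colorize(image):
    """Stand-in for an expensive GAN forward pass."""
    time.sleep(0.1)  # real inference could take many seconds
    return f"colorized({image})"

def worker():
    # Background worker: pull jobs off the queue and run them one at a time,
    # so slow inference never blocks the request-handling path.
    while True:
        job_id, image = work.get()
        jobs[job_id]["status"] = "running"
        jobs[job_id]["result"] = fake_colorize(image)
        jobs[job_id]["status"] = "done"
        work.task_done()

def submit(image):
    """What a request handler would do: enqueue and return a ticket at once."""
    job_id = uuid.uuid4().hex
    jobs[job_id] = {"status": "queued", "result": None}
    work.put((job_id, image))
    return job_id

threading.Thread(target=worker, daemon=True).start()

job_id = submit("old_photo.jpg")
work.join()  # in practice the client polls jobs[job_id]["status"] instead
print(jobs[job_id]["result"])
```

The point of the pattern is that capacity problems become queue-length problems, which you can see and shed, rather than piles of timed-out HTTP connections.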
Yeah, you're right about that- there's not much to go by in terms of "how to" literature out there. That's part of what's motivating me to write about this on Twitter. Thanks for confirming what I suspected!