A tremendously useful explainer about symbolic APIs (the Sequential and Functional APIs) and Model subclassing in TF 2.0, by @random_forests: https://medium.com/tensorflow/what-are-symbolic-and-imperative-apis-in-tensorflow-2-0-dfccecb01021
Symbolic APIs are APIs to build graphs of layers. Their strong points are that:
- They match how we think about our networks (NNs are always visualized as graphs of layers in textbooks and papers)
- They run extensive static checks during model construction, like a compiler would
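For illustration, a minimal sketch of the Functional (symbolic) style in TF 2.0 Keras (layer sizes here are arbitrary): the model is literally a graph of layers, built by wiring layer outputs together.

```python
import tensorflow as tf

# Build the graph of layers node by node.
inputs = tf.keras.Input(shape=(784,))
x = tf.keras.layers.Dense(64, activation="relu")(inputs)
x = tf.keras.layers.Dense(64, activation="relu")(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)

model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.summary()  # the graph structure is known and inspectable up front
```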
This gives you the guarantee that any model you can build will run. The only form of debugging you'd have to do at runtime would be convergence-related. The UX of these APIs is highly intuitive and productive.
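As a sketch of what those static checks look like in practice (the shapes here are arbitrary), an incompatible wiring of layers fails at model definition time, before any data is ever fed through:

```python
import tensorflow as tf

a = tf.keras.Input(shape=(32,))     # rank-2 batch tensor
b = tf.keras.Input(shape=(16, 4))   # rank-3 batch tensor

try:
    # Concatenate requires inputs of matching rank, so this raises
    # immediately during construction, like a compile-time error.
    merged = tf.keras.layers.Concatenate()([a, b])
except ValueError as err:
    print("Caught during model construction:", err)
```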
Meanwhile, the subclassing API has the look and feel of object-oriented NumPy development. It's ideal if you're doing anything that cannot easily be expressed as a graph of layers, and you're comfortable with software engineering best practices and large Python projects.
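A minimal sketch of the subclassing style (the class name and layer sizes are illustrative): layers are created in the constructor, and the forward pass is ordinary imperative Python in `call()`.

```python
import tensorflow as tf

class MLP(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(64, activation="relu")
        self.dense2 = tf.keras.layers.Dense(10, activation="softmax")

    def call(self, inputs):
        # Arbitrary Python control flow is allowed here, which is
        # what makes this style suited to unconventional architectures.
        x = self.dense1(inputs)
        return self.dense2(x)

model = MLP()
outputs = model(tf.zeros((1, 784)))  # shapes are only known at call time
```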
It involves execution-time debugging and more code, and it exposes a greater error surface, but in exchange it gives you greater flexibility to express unconventional architectures.
Importantly, in TF 2.0, both of these styles are available and are fully interoperable. You can mix and match models defined with either style. At the end of the day, everything is a Model! That way, you are free to pick the most appropriate API for the task at hand.
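As a sketch of this interoperability (`ResidualBlock` is a hypothetical name for illustration), a subclassed model can be used as a single node inside a Functional graph:

```python
import tensorflow as tf

class ResidualBlock(tf.keras.Model):
    def __init__(self, units):
        super().__init__()
        self.dense = tf.keras.layers.Dense(units, activation="relu")

    def call(self, inputs):
        return inputs + self.dense(inputs)

inputs = tf.keras.Input(shape=(64,))
x = ResidualBlock(64)(inputs)            # subclassed model inside a symbolic graph
outputs = tf.keras.layers.Dense(10)(x)
model = tf.keras.Model(inputs, outputs)  # everything is a Model
```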
In general I expect ~90-95% of use cases to be covered by the Functional API. The Model subclassing API targets deep learning researchers specifically (about 5% of use cases).
@kelvindotchan, replying to @fchollet:
Is there a way to relax the requirement that a loss function take in two tensors of the same shape? I've had to resort to a hack that hurts performance in exchange for some external elegance. If I'm wrong, I'll be glad to be enlightened.
@fchollet, replying to @kelvindotchan:
Just call `layer.add_loss(loss_tensor)` or `model.add_loss(loss_tensor)` with a tensor you've computed yourself.
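A minimal sketch of that pattern, assuming TF 2.x Keras (the regularization term and sizes are illustrative): `add_loss` registers an arbitrary scalar tensor, with no `(y_true, y_pred)` shape constraint.

```python
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(32,))
encoded = tf.keras.layers.Dense(8, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(32)(encoded)
model = tf.keras.Model(inputs, outputs)

# Register a loss term computed from an intermediate tensor; it has
# no required relationship to the targets' shape.
model.add_loss(1e-3 * tf.reduce_sum(tf.square(encoded)))

# The add_loss term is added to the compiled loss during training.
model.compile(optimizer="adam", loss="mse")
x = np.random.random((4, 32)).astype("float32")
model.fit(x, x, epochs=1, verbose=0)
```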