Symbolic APIs are APIs to build graphs of layers. Their strong points:
- They match how we think about our networks (NNs are always visualized as graphs of layers in textbooks & papers)
- They run extensive static checks during model construction, like a compiler would
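A minimal sketch of that style, assuming TF 2.x with its built-in Keras API (the layers and sizes here are illustrative, not from the thread):

```python
# Symbolic (Functional) style, assuming TF 2.x; layers/sizes are illustrative.
import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))                       # declare the input node
x = tf.keras.layers.Dense(64, activation="relu")(inputs)    # connect layers into a graph
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)                     # the graph is validated as it's built
```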
This gives you the guarantee that any model you can build will run. The only form of debugging you'd have to do at runtime would be convergence-related. The UX of these APIs is highly intuitive and productive.
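To illustrate what those static checks buy you, here is a sketch (assuming TF 2.x; the shapes are deliberately incompatible) where the error surfaces while the graph is being built, not mid-training:

```python
# Construction-time static check, assuming TF 2.x; shapes are contrived
# to be incompatible so the check fires immediately.
import tensorflow as tf

a = tf.keras.Input(shape=(32,))
b = tf.keras.Input(shape=(33,))
try:
    merged = tf.keras.layers.Add()([a, b])  # (32,) and (33,) cannot be added
except ValueError as err:
    print("Caught while building the graph:", err)
```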
Meanwhile, the subclassing API has the look and feel of object-oriented NumPy development. It's ideal if you're doing anything that cannot easily be expressed as a graph of layers, and you feel comfortable with software engineering best practices and large Python projects.
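A minimal sketch of the subclassing style under the same TF 2.x assumption (again, the specific layers are illustrative):

```python
# Subclassing style, assuming TF 2.x: it reads like object-oriented NumPy,
# and call() can contain arbitrary Python logic.
import tensorflow as tf

class MLP(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(64, activation="relu")
        self.out = tf.keras.layers.Dense(10, activation="softmax")

    def call(self, inputs):
        x = self.hidden(inputs)
        return self.out(x)

model = MLP()  # no graph is built up front; shapes are only known when it's called
```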
It will involve execution-time debugging and more code, and it will expose a greater error surface, but at the same time it will give you greater flexibility to express unconventional architectures.
Importantly, in TF 2.0, both of these styles are available and are fully interoperable. You can mix and match models defined with either style. At the end of the day, everything is a Model! That way, you are free to pick the most appropriate API for the task at hand.
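A sketch of that interoperability, assuming TF 2.x (the residual block here is a made-up example): a subclassed Model can be called inside a Functional graph like any other layer.

```python
# Mixing the two styles, assuming TF 2.x: a subclassed Model used as one
# node inside a symbolically defined graph.
import tensorflow as tf

class Residual(tf.keras.Model):          # subclassing style
    def __init__(self, units):
        super().__init__()
        self.dense = tf.keras.layers.Dense(units, activation="relu")

    def call(self, x):
        return x + self.dense(x)

inputs = tf.keras.Input(shape=(64,))     # Functional style
x = Residual(64)(inputs)                 # subclassed model used as a layer
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)  # at the end of the day, it's a Model
```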
In general I expect ~90-95% of use cases to be covered by the Functional API. The Model subclassing API targets deep learning researchers specifically (about 5% of use cases).
I think it's great that we don't silo researchers and everyone else into completely separate frameworks. It's all one API that enables a spectrum of workflows, from really easy (Sequential) to advanced (Functional) to fully flexible and hackable (Model subclassing).
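For completeness, the easiest end of that spectrum, sketched under the same TF 2.x assumption:

```python
# Sequential end of the spectrum, assuming TF 2.x: the same two-layer
# classifier as a plain stack of layers.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
```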
Do you have any idea about the timing of the #TensorFlow 2 release?
There will be an alpha release in spring. Meanwhile, you can try the TF 2.0 preview nightly build.
I've been a fan of TF since 0.7, and now I like its probability package. But after trying PyTorch 1.0, I felt that it looks more consistent and intuitive. It seems to me that the PyTorch team has a clearer vision of what kind of tools ML developers need.
I use TF and PyTorch, but now when I'm going to implement something new from arXiv papers, I subconsciously type "import torch" rather than reaching for TensorFlow, with its mess of eager modes and TF 2.0 not yet ready.