It feels a bit more like the "Estimator" training style. Full Colab notebook: https://colab.research.google.com/drive/1zzLcJ2A2qofIvv94YJ3axRknlA6cBSIw
This usage pattern lets you use `fit` (and friends) with losses or metrics that have completely arbitrary signatures. Such an endpoint layer may also behave differently during training and inference.
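A minimal sketch of what that endpoint-layer pattern can look like, assuming TensorFlow 2.x Keras (the layer and tensor names here are illustrative, not from the thread):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras


class LogisticEndpoint(keras.layers.Layer):
    """Computes its own loss via add_loss(); returns probabilities."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)

    def call(self, logits, targets=None):
        if targets is not None:
            # Register the loss on the model; no `loss=` argument is
            # needed in compile(), and the signature is arbitrary.
            self.add_loss(self.loss_fn(targets, logits))
        # Inference-time output: probabilities rather than logits.
        return tf.nn.sigmoid(logits)


inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="endpoint")(logits, targets)

# The targets are fed as a model *input*, not as `y` in fit().
training_model = keras.Model([inputs, targets], predictions)
training_model.compile(optimizer="adam")  # loss comes from the layer

x = np.random.rand(8, 3).astype("float32")
y = (np.random.rand(8, 10) > 0.5).astype("float32")
training_model.fit([x, y], epochs=1, verbose=0)
```

Because the loss lives inside the layer's `call`, it can take any number of tensors in any shape, and the `targets is not None` branch is what gives the layer different training- and inference-time behavior.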
To use it at inference time, do you do model surgery and just get the logits output or is there a more idiomatic approach?
You'd do: `inference_model = Model(inputs, logits)`, then `inference_model.predict(...)`
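A short self-contained sketch of that answer, assuming a functional model where `logits` is an intermediate tensor (names are hypothetical):

```python
import numpy as np
from tensorflow import keras

# Hypothetical training graph: in the full setup, an endpoint layer
# would wrap `logits` to compute the training loss.
inputs = keras.Input(shape=(3,))
logits = keras.layers.Dense(10)(inputs)

# No surgery needed: build a second Model over the same layers,
# ending at the logits tensor. Weights are shared automatically.
inference_model = keras.Model(inputs, logits)
preds = inference_model.predict(np.zeros((2, 3), dtype="float32"), verbose=0)
```

Since both models reference the same layer objects, anything learned by the training model is immediately visible through `inference_model`.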
It seems utter gibberish to me right now, but from here on I take the pledge to understand this tongue!!
What about CTC loss?
Is it possible to pass in/return keyword arguments and use the Keras training loop? That would avoid having to turn datasets with named outputs into tuples.
How about using a standard loss but having the model return something other than the input to that loss? For example: `predictions = argmax(probabilities)`.
I thought I saw someone computing a custom loss using a custom layer, although in a much more hackish way... I think it may have been one of the first Keras implementations of YOLOv2.