|
| kylebgorman wrote:
| If I may, there's no real reason to break out ACL vs. NAACL vs.
| EMNLP, since they're all run by the ACL and one would be hard-
| pressed to say how the EMNLP community might differ from the ACL
| community at this point. And if you're doing NAACL you might want
| to do EACL and IJCNLP too.
| jitl wrote:
| What about JAX?
| kavalg wrote:
| JAX is really cool, but still somewhat immature. I would love
| to see it gaining more ground and improving with respect to,
| e.g., TensorBoard integration, and getting all the goodies we
| have in TensorFlow. If you are looking for a higher-level
| framework, I would recommend elegy [0], which is very close to
| the Keras API.
|
| [0] https://github.com/poets-ai/elegy
| PartiallyTyped wrote:
| Jax is great, but there are some rough edges.
|
| I am using Jax for differentiable programming, and in many
| cases I saw enormous speedups after jit, sometimes in the
| ballpark of 1e4x.
|
| For Neural Networks, I use Equinox, and/or Elegy.
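| To illustrate the kind of win jit can give (a minimal sketch
| with a toy loss, not the commenter's actual workload):

```python
import jax
import jax.numpy as jnp

# Gradient of a toy scalar loss. jit traces the whole function and
# compiles it into a single fused XLA program; that fusion is where
# the large speedups over eager execution come from.
def step(x):
    return jax.grad(lambda v: jnp.sum(jnp.sin(v) ** 2))(x)

step_jit = jax.jit(step)

x = jnp.linspace(0.0, 1.0, 1000)
g = step_jit(x)  # first call traces and compiles; later calls are fast
```

| The first call pays the compilation cost; repeated calls with the
| same shapes reuse the cached program.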
| cweill wrote:
| Can someone please share the current state of deploying PyTorch
| models to production? TensorFlow has TF Serving, which is
| excellent and scalable. Last I checked there wasn't a PyTorch
| equivalent.
|
| I'm curious how these charts look for companies that are serving
| ML in production, not just research. Research is biased towards
| flexibility and ease of use, not necessarily scalability or
| having a production ecosystem.
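| (For context, the usual PyTorch route is exporting a TorchScript
| artifact, which TorchServe or a libtorch C++ runtime can then
| load; a minimal sketch with a hypothetical tiny model:)

```python
import torch

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

model = TinyModel().eval()
# Scripting produces a serialized, Python-free graph that a serving
# runtime (TorchServe, libtorch) can load without the source code.
scripted = torch.jit.script(model)
scripted.save("tiny_model.pt")
```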
| brutus1213 wrote:
| I'm a professional scientist, so let me give my two cents on this
| matter. Being able to compare your work against SOTA (state of
| the art) is pretty critical in academic publications. If everyone
| else in your area uses framework X, it makes a lot of sense for
| you to do it too. For the last few years, Pytorch has been king
| for the topics I care about.
|
| However, one area where TensorFlow shone was the static graph.
| As our models get even more intensive and need different parts
| to execute in parallel, we are seeing some challenges in
| PyTorch's execution model. For example:
|
| https://pytorch.org/docs/stable/notes/cuda.html#use-nn-paral...
|
| It appears to me that high-performance model execution is a bit
| tricky if you want to do lots of things in parallel. TorchServe
| also seems quite simple compared to offerings from Tensorflow. So
| in summary, I think Tensorflow still has some features unmatched
| by others. It really depends on what you are doing.
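| A sketch of the pattern that docs page discusses: launching
| independent branches on separate CUDA streams so the GPU can
| overlap them (falling back to sequential execution on CPU):

```python
import torch

def run_branches(x, branch_a, branch_b):
    # Without a GPU there are no streams to overlap; run sequentially.
    if not torch.cuda.is_available():
        return branch_a(x), branch_b(x)
    s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()
    torch.cuda.synchronize()  # ensure x is ready on the default stream
    with torch.cuda.stream(s1):
        out_a = branch_a(x)
    with torch.cuda.stream(s2):
        out_b = branch_b(x)
    torch.cuda.synchronize()  # join both streams before using results
    return out_a, out_b
```

| Getting the synchronization right is exactly the tricky part the
| comment alludes to; a static graph lets the runtime schedule this
| for you.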
| erwincoumans wrote:
| Indeed, Google/Alphabet is gradually making the shift to JAX,
| but also to ML Pathways: models that support multiple tasks
| and multiple sensory inputs, and that are sparse instead of
| dense:
|
| See https://blog.google/technology/ai/introducing-pathways-
| next-...
|
| and Jeff Dean's TED talk:
| https://www.ted.com/talks/jeff_dean_ai_isn_t_as_smart_as_you...
| The_rationalist wrote:
| what are your thoughts on
| https://github.com/tensorflow/runtime ?
| erwincoumans wrote:
| For academic papers (the context of this HN topic), JAX and
| PyTorch make more sense. A new runtime could be useful in
| production.
| probably_wrong wrote:
| I think TensorFlow made a bad move in academia by being so damn
| difficult to use in its earlier versions. Sure, its performance
| was always better than PyTorch's, but when you are an overworked
| PhD student you care less about your code being efficient and
| more about your code working at all.
|
| Word got around that debugging PyTorch was relatively painless,
| those earlier models made it into publications, and now here we
| are.
| p1esk wrote:
| But the funny thing is, TF has never been faster than
| PyTorch. Even when PyTorch first came out, they were roughly
| the same in terms of speed.
___________________________________________________________________
(page generated 2022-03-13 23:00 UTC) |