Renato Negrinho

@rmpnegrinho

Machine Learning PhD Student at Carnegie Mellon University

Joined January 2015

Tweets


  1. Pinned Tweet
    1 Oct 2019

    I'm thrilled to finally announce the release of DeepArchitect, a modular and programmable architecture search framework that you can actually use for your use cases. This work has been accepted at . Preprint: (see thread)

  2. 19 Dec 2019
  3. 14 Dec 2019
  4. 14 Dec 2019

    Hate how you constantly misrepresent our community's understanding of the limitations of current methods. Please submit your papers with concrete solutions to next year's NeurIPS. 2/2

  5. 10 Dec 2019

    I'm attending at NeurIPS this week. Reach out if you want to chat. Our poster presentation is on Thursday 10:45-12:45, East Exhibition Hall B + C #36, so definitely stop by then.

  6. 2 Dec 2019

    Wrote a blog post with a collection of tips and references for writing academic papers. These are things that I learned over the years. Still trying to improve.

  7. Retweeted
    26 Nov 2019

    Here's Richard Feynman, in 1985, describing Douglas Lenat's heuristic-based system and its winning solutions. Reminds me of reinforcement learning efforts of today. Given objectives & constraints, algorithms may lead to unexpected consequences. Full clip:

  8. 29 Oct 2019

    Created a Google Colab to easily play with DeepArchitect and run the examples in the paper, blog post, and repo readme. Check it out: .

  9. 22 Oct 2019

    Just found . The graduate section should be required reading for PhD students and advisors alike.

  10. 1 Oct 2019

    CC'ing some relevant handles:

  11. 1 Oct 2019

    Reach out if you want to get involved (e.g., by implementing existing search spaces or search algorithms; or extending the framework somehow). I'll be maintaining the framework. Plenty of references to resources can be found here: . Feedback appreciated.

  12. 1 Oct 2019

    This implementation addresses many of the limitations of the initial prototype here: it supports multi-input multi-output modules, hyperparameter sharing, and multiple frameworks.

  13. 1 Oct 2019

    The language relies on notions of lazy evaluation: e.g., we may know that we want an encoder, but only after choosing values for all its hyperparameters do we have a concrete encoder. Until then, we still have a sub-graph that we can refer to.

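The lazy-evaluation idea above can be sketched in a few lines of plain Python. This is a minimal conceptual sketch, not the actual DeepArchitect API; the names `Hyperparameter` and `LazyModule` are hypothetical illustrations of a sub-graph that can be referenced before its hyperparameters are chosen and only becomes a concrete module once all of them have values.

```python
# Hypothetical sketch of lazy search-space evaluation (not the DeepArchitect API).

class Hyperparameter:
    def __init__(self, name, values):
        self.name = name
        self.values = values      # candidate values to search over
        self.value = None         # unset until a searcher chooses

    def assign(self, value):
        assert value in self.values
        self.value = value

class LazyModule:
    """A sub-graph placeholder: referable now, concrete only later."""
    def __init__(self, name, build_fn, hyperparameters):
        self.name = name
        self.build_fn = build_fn              # called once all hyperps are set
        self.hyperparameters = hyperparameters

    def is_concrete(self):
        return all(h.value is not None for h in self.hyperparameters)

    def materialize(self):
        assert self.is_concrete(), "choose all hyperparameter values first"
        values = {h.name: h.value for h in self.hyperparameters}
        return self.build_fn(values)

# An "encoder" we can wire into a larger graph before deciding its details.
num_layers = Hyperparameter("num_layers", [1, 2, 4])
hidden_size = Hyperparameter("hidden_size", [64, 128])
encoder = LazyModule(
    "encoder",
    lambda v: f"Encoder(layers={v['num_layers']}, hidden={v['hidden_size']})",
    [num_layers, hidden_size],
)

print(encoder.is_concrete())   # False: still a search space, not a module
num_layers.assign(2)
hidden_size.assign(128)
print(encoder.materialize())   # Encoder(layers=2, hidden=128)
```

The point of the sketch is only the lifecycle: the `encoder` object exists and can be composed with other sub-graphs before any hyperparameter value is fixed.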
  14. 1 Oct 2019

    Crucially, compared to hyperparameter optimization, you don't have to separately write the encoding of the hyperparameter space and the mapping from instances in the space to their implementations. In our language, the mapping is created automatically from the encoding.

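A toy illustration of deriving the mapping from the encoding, in plain Python rather than the actual framework: a single declarative `encoding` (hypothetical structure, chosen for this sketch) serves both as the description of the space and as the source from which each concrete instance is built, so no separate instance-to-implementation code is written.

```python
# Hypothetical sketch: one encoding yields both the space and its instances.
from itertools import product

# Each hyperparameter maps candidate values to builder fragments.
encoding = {
    "activation": {"relu": "ReLU()", "tanh": "Tanh()"},
    "num_units": {64: "Linear(64)", 128: "Linear(128)"},
}

def instances(encoding):
    """Enumerate every configuration AND build its model from the same encoding."""
    names = list(encoding)
    for values in product(*(encoding[n] for n in names)):
        config = dict(zip(names, values))
        model = " -> ".join(encoding[n][config[n]] for n in names)
        yield config, model

for config, model in instances(encoding):
    print(config, "=>", model)
```

In classic hyperparameter optimization you would write `instances`-like glue by hand for every space; here it falls out of the encoding once, for all spaces of this form.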
  15. 1 Oct 2019

    In our implementation, see these search space transitions () for an example search space (search_space_5; ). A more complex example is here () for the search space here ().

  16. 1 Oct 2019

    If you just want the gist, the search space example section in our paper is a good reference (see images).

  17. 1 Oct 2019

    This language allows us to decouple the implementations of search spaces and search algorithms to a large extent. This allows architecture search code to be more widely used and reused across the literature. See the documentation for more information.

  18. 1 Oct 2019

    We designed the language to encode search spaces over computational graphs (e.g., deep learning architectures). We often have good inductive biases about which ops are useful, but don't know the exact sequence + hyperparameters. Our language gives us constructs to express this design uncertainty.

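That kind of design uncertainty ("these ops are probably useful, but in what order and how many?") can be made concrete with a tiny enumeration sketch. This is an illustrative stand-in, not DeepArchitect code; the op names and the `architectures` helper are assumptions for the example.

```python
# Hypothetical sketch: uncertainty over op sequence and depth as a search space.
from itertools import product

OPS = ["conv3x3", "conv5x5", "max_pool"]   # inductive bias: likely-useful ops

def architectures(max_depth):
    """All op sequences of length 1..max_depth over the repertoire."""
    for depth in range(1, max_depth + 1):
        for seq in product(OPS, repeat=depth):
            yield list(seq)

archs = list(architectures(2))
print(len(archs))   # 3 sequences of length 1 + 9 of length 2 = 12
print(archs[0])     # ['conv3x3']
```

A real search-space language replaces this brute-force enumeration with composable constructs and a searcher that samples the space, but the object being described is the same.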
  19. 1 Oct 2019

    It is extensible to any domain for which architecture search makes sense, and currently supports TensorFlow, Keras, and PyTorch. It is easy to add support for new domains. Code: Documentation: Examples:

  20. 1 Oct 2019
