Guillaume Chevalier

@guillaume_che

Passionate about the applications of . Loves to solve complicated problems and to innovate creatively.

Joined September 2016

Tweets

  1. Pinned Tweet
    May 2

    You can now "pip install conv", my first package uploaded on : a tiny library for doing clean and concise convolutional loops. Do neat convolution-styled loops on your lists. 1D and 2D convolutions are supported.

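    (The package's exact API isn't shown in the tweet; below is a minimal sketch of what a convolution-styled loop over a list can look like. The `convolved` name and its parameters are assumptions for illustration.)

        # Minimal sketch of a convolution-styled loop over a plain Python list.
        # The `convolved` name and signature are assumptions for illustration,
        # not necessarily the package's exact API.
        def convolved(iterable, kernel_size=1, stride=1, padding=0, default_value=None):
            """Yield windows of `kernel_size` items, moving by `stride`, with
            `padding` copies of `default_value` prepended and appended."""
            items = [default_value] * padding + list(iterable) + [default_value] * padding
            for i in range(0, len(items) - kernel_size + 1, stride):
                yield items[i:i + kernel_size]

        # Usage: a 1D convolution-style loop that averages each window of 3.
        for window in convolved([1, 2, 3, 4, 5], kernel_size=3):
            print(sum(window) / len(window))  # 2.0, 3.0, 4.0
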
  2. Aug 16

    My first solo paper on arXiv! The LARNN: Linear Attention Recurrent Neural Network. Yes, it LARNNs. (Okay, I wrote that in May, but anyway here it is on arXiv)

  3. Aug 1

    Pretty mind-blowing paper: 3D-printed representations of neural networks you can run inference on by shining light through them... literal *light speed* inference times. They only have classification networks so far, but I'd love to see pix2pix or a GAN soon.

  4. Jul 7

    I didn't expect this question to cause a buzz on StackExchange haha!

  5. Jul 2

    Just thought of a U-Net RNN. Wouldn't that yield awesome results on various tasks? I've been thinking a lot about transforming signals from one format to another, and a U-Net RNN sounds promising.

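    (The tweet gives no implementation; below is a minimal sketch under one plausible reading: a recurrent encoder-decoder where the decoder receives U-Net-style skip connections from the matching encoder resolution. All names and design choices here are guesses, not the author's architecture.)

        import torch
        import torch.nn as nn

        class UNetRNN(nn.Module):
            # Encoder GRUs at decreasing time resolutions; decoder GRUs going
            # back up, each concatenated with the matching encoder output
            # (the U-Net skip connection).
            def __init__(self, d=64):
                super().__init__()
                self.enc1 = nn.GRU(d, d, batch_first=True)
                self.enc2 = nn.GRU(d, d, batch_first=True)
                self.dec2 = nn.GRU(d, d, batch_first=True)
                self.dec1 = nn.GRU(2 * d, d, batch_first=True)

            def forward(self, x):  # x: (batch, time, d), time divisible by 2
                e1, _ = self.enc1(x)                    # full time resolution
                e2, _ = self.enc2(e1[:, ::2, :])        # downsample to half resolution
                d2, _ = self.dec2(e2)
                d2 = d2.repeat_interleave(2, dim=1)     # upsample back to full
                out, _ = self.dec1(torch.cat([d2, e1], dim=-1))  # skip connection
                return out

        model = UNetRNN(d=64)
        print(model(torch.randn(8, 32, 64)).shape)  # torch.Size([8, 32, 64])
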
  6. Jun 19
  7. Jun 14
  8. Jun 12
  9. May 26

    Launching and setting up my Deep Learning consulting company, ! 😀

  10. May 20

    7/ The fact that Multi-Head Attention might work better with activations added to its linear layers (at least in this context) is an interesting discovery. See the figure, where test accuracy over time is shown.

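    (The figure isn't reproduced here. As a sketch of the modification being described, here is a single attention head with a non-linearity applied right after the Q/K/V linear projections; the exact placement used in the experiments is an assumption.)

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ActivatedAttentionHead(nn.Module):
            # One scaled dot-product attention head, with the twist under test:
            # an activation applied after the Q/K/V linear projections.
            def __init__(self, d_model, d_head, activation=F.relu):
                super().__init__()
                self.q = nn.Linear(d_model, d_head)
                self.k = nn.Linear(d_model, d_head)
                self.v = nn.Linear(d_model, d_head)
                self.act = activation

            def forward(self, x):  # x: (batch, time, d_model)
                q = self.act(self.q(x))
                k = self.act(self.k(x))
                v = self.act(self.v(x))
                scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5
                return torch.softmax(scores, dim=-1) @ v
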
  11. May 16

    And here is the Annotated Multi-Head Attention Mechanism, an analysis of mine: printing the dimensions of the attention mechanism and its positional embedding, plus some experimental modifications!

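    (The annotated notebook isn't included here; as a flavour of the "printing the dimensions" part, a short sketch using PyTorch's nn.MultiheadAttention, with shapes following the Attention Is All You Need defaults.)

        import torch
        import torch.nn as nn

        batch, time, d_model, heads = 2, 10, 512, 8
        x = torch.randn(batch, time, d_model)

        mha = nn.MultiheadAttention(d_model, heads, batch_first=True)
        out, weights = mha(x, x, x)  # self-attention

        print(x.shape)        # torch.Size([2, 10, 512])  input
        print(out.shape)      # torch.Size([2, 10, 512])  output, same shape
        print(weights.shape)  # torch.Size([2, 10, 10])   attention, averaged over heads
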
  12. May 15

    6/ Here is the code for a concrete implementation of the LARNN I made. Yes, it LARNNs. It uses a Multi-Head Attention Mechanism as in Attention Is All You Need, and its equations are derived from the LSTM cell.

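    (The linked code isn't included here. As a rough sketch of the idea described, an LSTM-style cell that also attends over a window of its recent cell states; single-head attention is used below for brevity, and the equations are simplified guesses rather than the paper's exact cell.)

        import torch
        import torch.nn as nn

        class LARNNCellSketch(nn.Module):
            # LSTM-style gates, plus attention over the k most recent cell
            # states; single-head attention is used here for brevity.
            def __init__(self, d, window=5):
                super().__init__()
                self.window = window
                self.q = nn.Linear(d, d)
                self.gates = nn.Linear(2 * d, 4 * d)  # [x; context] -> i, f, g, o

            def forward(self, x, h, c, past_cells):  # past_cells: list of (batch, d)
                keys = torch.stack(past_cells[-self.window:], dim=1)  # (batch, k, d)
                q = self.q(h).unsqueeze(1)                            # (batch, 1, d)
                attn = torch.softmax(q @ keys.transpose(1, 2) / keys.size(-1) ** 0.5, dim=-1)
                context = (attn @ keys).squeeze(1)                    # (batch, d)
                i, f, g, o = self.gates(torch.cat([x, context], dim=-1)).chunk(4, dim=-1)
                c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
                h = torch.sigmoid(o) * torch.tanh(c)
                return h, c
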
  13. May 14
  14. May 8
  15. May 5

    It's fascinating to see what increases superlinearly (patents, GDP, electricity, crime, AIDS cases) and sublinearly (gas stations, road surface). And, of course, a priori it's not obvious that any of these should be functions so purely of population.

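    (Superlinear and sublinear here mean power laws Y ≈ c · N^β in population N, with β above or below 1. A quick sketch of how such an exponent is estimated, using made-up toy numbers:)

        import numpy as np

        # Made-up toy numbers, purely to illustrate the method: population N
        # and some city-level quantity Y.
        N = np.array([1e5, 5e5, 1e6, 5e6, 1e7])
        Y = np.array([2.1e3, 1.4e4, 3.2e4, 2.1e5, 4.8e5])

        # Fit Y = c * N ** beta by linear regression in log-log space.
        beta, log_c = np.polyfit(np.log(N), np.log(Y), 1)
        print(f"beta = {beta:.2f}")  # beta > 1: superlinear; beta < 1: sublinear
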
  16. May 5

    To all the machine learning researchers who sat on the sidelines during the "chihuahua or muffin?" wars, we need your support now more than ever.

  17. May 2

    We rely on 15 people to do our science. Without them, matplotlib, numpy, and pandas would not be maintained.

  18. Apr 30

    XKCD tackles Python packaging today 😀 

  19. May 2
  20. Apr 25

    Yesterday's class was more hands-on. We went through the new features introduced by 0.4.0 and analysed the DCGAN training example. Finally, we provided some pointers about the checkerboard artefacts, which are nicely fixed following

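    (The truncated link presumably points to the well-known resize-convolution fix from Odena et al.'s "Deconvolution and Checkerboard Artifacts"; a sketch of that swap in a DCGAN-style generator block, with channel sizes chosen arbitrarily:)

        import torch.nn as nn

        # Transposed convolution (kernel 4, stride 2) can leave checkerboard
        # artefacts because kernel windows overlap unevenly:
        old_up = nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1)

        # The usual fix: resize first, then apply an ordinary convolution, so
        # every output pixel gets the same amount of kernel overlap.
        new_up = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(128, 64, kernel_size=3, stride=1, padding=1),
        )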
