Trenton Bricken

@TrentonBricken

Interested in computational biology. Particularly Deep Learning and Biosecurity. Currently doing research in the Marks Lab at Harvard Medical School. Duke 2020.

London
Joined March 2014

Tweets

  1. Retweeted
    Jan 29

    genomes from China are accumulating a bit of genetic diversity (seen at ). This is expected given RNA virus error-prone replication and does not indicate functional differences. 1/3

  2. Jan 29
  3. Retweeted
    Jan 26

    I present Love Thy Nearest Neighbor: a Markov chain generator trained on the King James Bible and Kevin Murphy’s Machine Learning: A Probabilistic Perspective. Behold...

  4. Retweeted
    Jan 17

    Total self advertisement - feeling bitter and twisted of course :)

  5. Retweeted
    Jan 13

    Why you should always wash your hands: this is a petri dish of a handprint showing what microbes are on it

  6. Retweeted

    You: This one untested supplement will restore balance to my body
    Your body:

  7. Jan 2

    Cool piece surveying developments in giant 3D printers that are capable of making boats, bridges, buildings and rockets more efficiently and of a higher quality than conventional methods:

  8. Dec 26, 2019

    I really hope this happens because it seems obvious the public should have access to the results of research that they helped fund. I know the same policy is being considered in Europe. Curious to see how this plays out.

  9. Retweeted
    Dec 23, 2019

    A few ideas about the literature-reading side of research. I’m by no means an expert, but having had to teach myself a lot of this stuff, I’m surprised by how little has been written about it. Here are some things I wish I knew when I started! (Thread 1/n)

  10. Dec 20, 2019

    Cool new method from et al: "High-Throughput Mapping of B Cell Receptor Sequences to Antigen Specificity" They found some interesting broadly neutralizing antibodies for flu and HIV from HIV patients that hadn't been characterized before.

  11. Dec 19, 2019

    We need to use NLP to summarize legal documents, including terms and conditions, and detect any unusual terms. I think this would indirectly help hold more companies accountable for dubious practices.

    This Tweet is unavailable.
  12. Dec 18, 2019

    There is more work to be done, but this is a big deal given that the protein sequencing alternatives currently available are much worse.

  13. Dec 18, 2019

    The ability to sequence protein sequences is a big deal!

  14. Dec 17, 2019

    (neither Top-K nor Nucleus Sampling has been empirically validated before, probably for the very reasons I am finding it difficult!) More details on what I have tried and why this validation is hard are in the blog post :)

  15. Dec 17, 2019

    This work is currently a blog post rather than a paper because I have been unsuccessful in empirically validating Tail Free Sampling against Top-K and Nucleus Sampling.

  16. Dec 17, 2019

    I argue this approach explicitly finds the set of “replaceable” tokens for a particular context and that languages (including that of biology) have this replaceability property. If you’re interested please reach out and/or give me feedback.

  17. Dec 17, 2019

    Tail Free Sampling tries to ensure you sample diverse and high-quality sequences by finding where the probability distribution for the next token to be generated plateaus. Here is an example with different hyperparameters: 0.9 (green) and 0.95 (blue) tend to work well. (A rough code sketch of this plateau idea appears after the timeline below.)

  18. Dec 17, 2019

    Generating sequences from a language model using Ancestral, Top-K, or Nucleus Sampling? Consider using Tail Free Sampling instead! 👇Thread

  19. Dec 17, 2019

    This figure in particular is crazy

  20. Dec 17, 2019

    "While the model contains a large degree of uncertainty, it suggests that on average industry-affiliated AI scholars receive 34.6 (UK) times and 20.6 (US) times as many mentions as AI scholars without industry affiliation." Citations != Public influence

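The Tail Free Sampling thread above (items 14–18) describes keeping only the tokens that come before the sorted next-token distribution plateaus, with a tunable threshold (0.9 or 0.95 tend to work well). Below is a minimal sketch of that plateau idea in Python; the function name `tail_free_sample`, the use of the absolute discrete second derivative as the plateau detector, and the exact cutoff rule are illustrative assumptions, not the reference implementation from the blog post.

```python
import numpy as np

def tail_free_sample(logits, z=0.95, rng=None):
    """Sample one token id, keeping only tokens that come before the sorted
    next-token distribution flattens out into its low-probability tail."""
    rng = rng if rng is not None else np.random.default_rng()

    # Softmax over the logits to get next-token probabilities.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Sort tokens by probability, highest first.
    order = np.argsort(probs)[::-1]
    sorted_probs = probs[order]

    # Absolute discrete second derivative of the sorted probabilities as a
    # crude "curvature" signal: large where the curve is still bending,
    # near zero once the distribution has plateaued.
    curvature = np.abs(np.diff(sorted_probs, n=2))
    curvature /= curvature.sum()

    # Keep tokens up to where a fraction z of the curvature mass is used up.
    cutoff = int(np.searchsorted(np.cumsum(curvature), z)) + 1
    keep = order[: max(cutoff, 1)]

    # Renormalize over the kept tokens and sample.
    kept_probs = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=kept_probs))

# Toy example: a peaked head followed by a long, flat tail of unlikely tokens.
toy_logits = np.log(np.array([0.40, 0.30, 0.15, 0.05, 0.04, 0.03, 0.02, 0.01]))
print(tail_free_sample(toy_logits, z=0.95))
```

For comparison, Top-K keeps a fixed number of tokens and Nucleus Sampling keeps a fixed cumulative probability mass, whereas the idea sketched here lets the shape of the distribution decide where to truncate.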
