Trenton Bricken

@TrentonBricken

Interested in computational biology, particularly Deep Learning and Biosecurity. Currently doing research in the Marks Lab at Harvard Medical School. Duke 2020.

London
Joined March 2014

Tweets


  1. Retweeted
    Jan 29

    genomes from China are accumulating a bit of genetic diversity (seen at ). This is expected given RNA virus error-prone replication and does not indicate functional differences. 1/3

  2. Jan 29
  3. Retweeted
    Jan 26

    I present Love Thy Nearest Neighbor: a Markov chain generator trained on the King James Bible and Kevin Murphy’s Machine Learning: A Probabilistic Perspective. Behold...

  4. Retweeted
    Jan 16

    Total self advertisement - feeling bitter and twisted of course :)

  5. Retweeted
    Jan 13

    Why you should always wash your hands: this is a petri dish cultured from a hand, showing which microbes live on it

  6. Retweeted

    You: This one untested supplement will restore balance to my body Your body:

  7. Jan 1

    Cool piece surveying developments in giant 3D printers capable of making boats, bridges, buildings, and rockets more efficiently and at higher quality than conventional methods:

  8. I really hope this happens because it seems obvious the public should have access to the results of research that they helped fund. I know the same policy is being considered in Europe. Curious to see how this plays out.

  9. Retweeted
    23 Dec 2019

    A few ideas about the literature-reading side of research. I’m by no means an expert, but having had to teach myself a lot of this stuff, I’m surprised by how little has been written about it; here are some things I wish I had known when I started! (Thread 1/n)

  10. Cool new method from et al: "High-Throughput Mapping of B Cell Receptor Sequences to Antigen Specificity" They found some interesting broadly neutralizing antibodies for flu and HIV from HIV patients that hadn't been characterized before.

  11. We need to use NLP to summarize legal documents, including terms and conditions, and to detect any unusual terms. I think this would indirectly help hold more companies accountable for dubious practices.

  12. There is more work to be done, but this is a big deal given how much worse the existing alternatives for protein sequencing currently are.

  13. The ability to sequence proteins is a big deal!

  14. (Neither Top-K nor Nucleus Sampling has been empirically validated in this way before, probably for the very reasons I am finding it difficult!) More details on what I have tried and why this validation is hard are in the blog post :)

  15. This work is currently a blog post rather than a paper because I have been unsuccessful in empirically validating Tail Free Sampling against Top-K and Nucleus Sampling.

  16. I argue this approach explicitly finds the set of “replaceable” tokens for a particular context and that languages (including that of biology) have this replaceability property. If you’re interested please reach out and/or give me feedback.

  17. Tail Free Sampling tries to ensure you sample diverse, high-quality sequences by finding where the probability distribution over the next token plateaus. Here is an example with different hyperparameters: 0.9 (green) and 0.95 (blue) tend to work well (see the sketch after this list).

  18. Generating sequences from a language model using Ancestral, Top-K, or Nucleus Sampling? Consider using Tail Free Sampling instead! 👇Thread

  19. This figure in particular is crazy

  20. "While the model contains a large degree of uncertainty, it suggests that on average industry-affiliated AI scholars receive 34.6 (UK) times and 20.6 (US) times as many mentions as AI scholars without industry affiliation." Citations != Public influence

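Aside on the Tail Free Sampling tweets above (items 14-18): they describe the method only in words, so here is a minimal NumPy sketch of the idea as stated there, cutting the sorted next-token distribution where its second derivative has spent a fraction z of its mass. The function name tail_free_sample, the default z, and the exact off-by-one handling of the cutoff are illustrative assumptions, not code from the linked blog post.

```python
import numpy as np

def tail_free_sample(probs, z=0.95, rng=None):
    """Pick a next-token index using a sketch of Tail Free Sampling.

    probs: 1-D array of next-token probabilities (should sum to 1).
    z:     tail threshold; the tweets above suggest 0.9-0.95 work well.
    """
    rng = np.random.default_rng() if rng is None else rng
    probs = np.asarray(probs, dtype=np.float64)
    if probs.size < 3:
        # Too few tokens to estimate curvature; sample from the full distribution.
        return int(rng.choice(probs.size, p=probs / probs.sum()))

    # Sort probabilities from most to least likely.
    order = np.argsort(probs)[::-1]
    sorted_probs = probs[order]

    # Discrete second derivative of the sorted distribution; its absolute
    # values are normalised into a distribution of their own.
    curvature = np.abs(np.diff(sorted_probs, n=2))
    total = curvature.sum()
    if total == 0.0:
        # Perfectly flat distribution: there is no tail to cut.
        return int(rng.choice(probs.size, p=probs / probs.sum()))
    curvature /= total

    # The sorted distribution has "plateaued" once a fraction z of the
    # curvature mass has accumulated; everything past that point is the tail.
    cutoff = int(np.searchsorted(np.cumsum(curvature), z)) + 2

    # Renormalise the surviving head and sample from it.
    kept = sorted_probs[:cutoff]
    kept = kept / kept.sum()
    return int(order[rng.choice(kept.size, p=kept)])
```

In use you would call this on the model's softmax output at each generation step; larger z keeps more of the head of the distribution, smaller z trims the tail more aggressively.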

