Jason Eisner

@adveisner

Professor of CS at Johns Hopkins University, Director of Research at Microsoft Semantic Machines, ACL Fellow

Joined August 2017

Tweets

  1. Jan 8

    Just as an iff statement is stronger than an if statement, a claim about all nonn-cromulent objects is stronger than a claim about all non-cromulent objects.

  2. Jan 8

    The abbreviation "iff" = "if and only if" is common and extremely useful. Occasionally you see "whenn" = "when and only when." John Conway proposed "onne" = "one and only one." "Nonn" doesn't extend this pattern. But it DOES look like a mathematical shorthand akin to "iff."

  3. Jan 8

    People often say "non" in these settings but "nonn" is what they really mean. It's tempting to also coin innfinite and irrrational. However, I think nonn-finite and nonn-rational are safer!

  4. Jan 8

    How about "nonn" as shorthand for "not necessarily"? "Let F be any nonn-convex function" "The bounded region defined by these nonn-linear constraints" "The trouble is the nonn-decomposable path score" "REINFORCE works in nonn-Markov environments" "WFSAs are nonn-determinizable"

  5. 15 Oct 2021

    (People keep suggesting that blue states use the Texas device to pass unconstitutional gun control laws. But tit-for-tat constitutional violations are playing with fire. The law above makes the point neatly, via a right everyone holds dear, yet affects no one … ALMOST no one.😄)

  6. 15 Oct 2021

    Hey you states! Can someone quick pass a law? Make it illegal to express any positive opinion of any Supreme Court decision or Justice. State can't enforce, but anyone else can sue violators for $10K + costs. How long till SCOTUS strikes down this heinous abrogation of 1A rights?

  7. 8 Oct 2021

    Summer 2022 NLP/ML internships at Microsoft Semantic Machines! Fun with creating the most powerful, natural, and helpful human-AI interactions. We had a great time hosting a very talented collection of interns last summer, working on diverse challenges.

  8. Retweeted
    11 Jun 2021

    (paraphrased): "Early works on ML/stat-based NLP were basically people doing their homework in public: learning a new topic and finding some trivial application for it. This is how we got so many methods into our toolbox."

  9. Retweeted

    But our question for the overall community: are there good existing models for sharing assignments and teaching materials in a sustainable way? Please offer some pointers!

  10. 9 Jun 2021

    "Learning how to ask: Querying LMs with mixtures of soft prompts" In the Best Paper session, an entertaining short presentation from of . Today (Wed) at 12:30 PDT (session is 11:40-1:10 PDT).

  11. 9 Jun 2021

    Experimenting to find a language model prompt that works for your NLP task? Why not automate that? Prompts are made of word EMBEDDINGS ... so just tune those vectors by SGD. Even random init works fine. Bonus: Continuous "soft prompts" are more expressive than discrete ones.

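A minimal sketch of the idea in this tweet, in NumPy rather than a real LM stack: a frozen toy "model" scores the concatenation of prompt embeddings and an input embedding, and only the prompt vectors are tuned by gradient descent. The scoring model, dimensions, and objective here are all invented for illustration; the paper's actual setup tunes prompt embeddings feeding a pretrained language model.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_prompt = 8, 3

# Frozen toy "model": class scores = W @ mean(embeddings). W never changes.
W = rng.normal(size=(2, d))
x = rng.normal(size=(d,))                       # fixed input embedding
prompt = rng.normal(size=(n_prompt, d)) * 0.1   # random init works fine

def forward(prompt):
    """Softmax over class scores of [prompt; input] mean embedding."""
    h = np.vstack([prompt, x[None, :]]).mean(axis=0)
    logits = W @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()

target = 1          # the class we want the tuned prompt to elicit
lr = 0.5
for _ in range(200):
    p = forward(prompt)
    # gradient of -log p[target] w.r.t. the mean embedding h
    grad_h = W.T @ (p - np.eye(2)[target])
    # each prompt row contributes 1/(n_prompt + 1) to the mean,
    # so only the prompt rows receive (scaled) gradient updates
    prompt -= lr * grad_h[None, :] / (n_prompt + 1)
```

After tuning, `forward(prompt)[target]` is close to 1 even though the model weights were never touched; the continuous prompt vectors need not correspond to any discrete word sequence, which is the "more expressive" point above.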
  12. 4 Jun 2021

    Careful analysis here, featuring the complexity class P/poly: "Limitations of Autoregressive Models and Their Alternatives" talk, Session 14D (Wed 09 Jun 2021, 5 PM PDT)

  13. 4 Jun 2021

    It may be EASY to determine whether x is grammatical, yet IMPOSSIBLE to do so by testing p(x) > 0, for ANY p that is modeled autoregressively.😭 (The required autoregressive factors are uncomputable: halting problem!)

  14. 4 Jun 2021

    If your LM needs to reflect a speaker's reasoning or planning, you need a more powerful family of language models where next-word prediction isn't cheap by design, but requires lookahead or other marginalization.

  15. 4 Jun 2021

    Everyone's using big autoregressive language models. But ... they predict the next word with a polysized circuit (computation graph). So they can't accurately model settings where that prediction is NP-hard. 😢

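To make the thread's setup concrete, here is a tiny illustration of what "autoregressive" means: the model scores a whole string via the chain rule, p(x) = ∏_t p(x_t | x_<t), so scoring reduces to a sequence of individually cheap next-token predictions. The bigram tables below are invented for illustration; the thread's point is that such cheap per-step factors need not exist (and can even be uncomputable) when deciding string membership is itself hard.

```python
# Next-token distributions conditioned on the previous token
# (a toy bigram "language model"; probabilities sum to 1 per row).
P = {
    "<s>": {"a": 0.6, "b": 0.3, "</s>": 0.1},
    "a":   {"a": 0.2, "b": 0.5, "</s>": 0.3},
    "b":   {"a": 0.4, "b": 0.1, "</s>": 0.5},
}

def prob(tokens):
    """Chain-rule probability of the token sequence plus end-of-string."""
    prev, total = "<s>", 1.0
    for t in tokens + ["</s>"]:
        total *= P[prev][t]   # one cheap next-token factor per step
        prev = t
    return total

print(prob(["a", "b"]))  # 0.6 * 0.5 * 0.5 = 0.15
```

Each factor here is a constant-time table lookup; in a neural LM it is one polynomial-size forward pass, which is exactly the limitation the thread discusses.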
  16. 31 May 2021

    The Roads All Taken I shall be drawing this (Figure B) Somewhere pages and pages hence: Two paths diverged in a graph, and we Went nondeterministically; And then we assessed the difference.

  17. 19 Apr 2021

    Here's a version of the ad for even more senior people. Microsoft is committed to building a diverse workforce: we strongly encourage women and URMs to apply for both positions.

  18. 19 Apr 2021

    The Semantic Machines group @ Microsoft is looking to hire a few more extremely talented NLP/ML researchers. I can say firsthand it's an incredible team to work with! Wanna redesign dialogue systems to truly help human users via truly natural conversation?

  19. 16 Mar 2021

    Kudos to philosophy prof for minting+blurbing+selling this fine piece of conceptual art. My Ph.D. student kindly alerted me to it. Her dissertation research focuses on modeling the interplay of types and tokens in natural language: reusable linguistic units.

  20. 16 Mar 2021

    At least anyone can verify from the Ethereum blockchain that I do own *this* NFT (thanks for the help on that, and apologies that I didn't realize the environmental cost of Ethereum 1.0 transactions).

