Michael Dinitz

@mdinitz

Associate Professor, Department of Computer Science, Johns Hopkins University

Baltimore, MD
Joined April 2007

Tweets


  1. 15 Dec 2021

    One of the all-time great algorithms and papers!

  2. Retweeted
    6 Dec 2021

    So thrilled to present our paper "Burst-tolerant data center networks with " at the upcoming ACM '21 conference. Vertigo employs packet deflection to prevent drops when facing microbursts. Check out the paper:

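Packet deflection, as the tweet above describes it, can be sketched in a few lines. This is a generic illustration of the deflection idea (bounce a packet to a neighboring port instead of dropping it during a microburst), not Vertigo's actual mechanism; the class and method names are made up for this sketch.

```python
from collections import deque


class DeflectingPort:
    """Toy model of a switch port that deflects rather than drops.

    Generic illustration only (not Vertigo's design): when this port's
    queue is full during a burst, the packet is bounced to a neighboring
    port with spare capacity instead of being dropped.
    """

    def __init__(self, capacity, neighbor=None):
        self.queue = deque()
        self.capacity = capacity
        self.neighbor = neighbor
        self.dropped = 0

    def enqueue(self, pkt):
        # Normal case: there is room in this port's queue.
        if len(self.queue) < self.capacity:
            self.queue.append(pkt)
            return "queued"
        # Queue full: deflect to the neighbor if it has room.
        if self.neighbor is not None and len(self.neighbor.queue) < self.neighbor.capacity:
            self.neighbor.queue.append(pkt)  # deflect instead of drop
            return "deflected"
        # No room anywhere reachable: only now is the packet lost.
        self.dropped += 1
        return "dropped"
```

The point of deflection is visible in the toy model: a burst that exceeds one port's buffer only causes a drop once the deflection target is also full.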
  3. Retweeted
    1 Dec 2021

    roses are red indie bands love obscurity i would rate you in the top 1% (relative to other students i’ve interacted with at my home institution) in Emotional Maturity

  4. Retweeted
    29 Nov 2021

    Aside: this deservedly won best paper at the upcoming SODA, which puts Merav Parter on a 4-in-a-row best paper streak (including best student papers) 👀 is this some kind of record? n/n

  5. 25 Nov 2021

    The new result finds a really natural and neat counterexample: the complete binary matroid! I haven't read all the details yet, but I'm super excited that this has finally been resolved!

  6. 25 Nov 2021

    It seemed obvious that not all matroids have this property, but we never actually found a counterexample. I raised this question explicitly later in a survey:

  7. 25 Nov 2021

    Back in SODA '09 we (, , , and ) introduced the notion of matroids having a "partition property", and made the obvious observation that having this property was sufficient for the matroid secretary problem:

  8. 25 Nov 2021

    Great Thanksgiving present: new paper by Abdolazimi, Karlin, Kaplan, and solves a problem that's been bugging me since grad school!

  9. Retweeted
    4 Nov 2021

    I am looking for PhD students to join my effort on ML for trustworthy AI in fall 2022. Consider applying to our PhD program in the CS department at JHU. We have a great program and active multidisciplinary research. More info:

  10. Retweeted
    14 Oct 2021

    They should replace baseball umpires with cameras and robots, but with one extra robot whose job is to argue with and eject managers

  11. Retweeted
    14 Oct 2021

    UMass Amherst is hiring, specifically in Theoretical CS. Come and work with us.

  12. 9 Oct 2021

    It’s a little sad how excited this makes me. Love single column with no page limits. No reason to stick with horrible ACM and IEEE double-column formats.

  13. Retweeted
    30 Aug 2021

    20 faculty positions (including some in theory & algorithms) at Ohio State University CSE! Please spread the word and/or apply.

  14. 22 Jul 2021

    I'm very excited by this line of work: using ML to speed up traditional algorithms, particularly through "warm-start". Cool algorithms in both practice and theory, and some of the first formal justification for warm-start!

  15. 22 Jul 2021

    Improves significantly over the Hungarian algorithm in theory, and slightly over more complicated state-of-the-art algorithms. But improves massively in our experiments! I very rarely write papers with experiments, but we put a huge amount of effort into these, and I think they're convincing.

  16. 22 Jul 2021

    Lots of technical work (e.g., what if learned duals aren't feasible for your instance?), but at the end of the day get a learning algorithm which feeds into a modified Hungarian algorithm to compute matchings. Can prove running time based on accuracy of learned duals.

  17. 22 Jul 2021

    Maybe surprisingly, answer is yes! After not too many samples, can learn "reasonable" values for the *dual*. Can then feed these starting duals as a "warm-start" into the traditional Hungarian algorithm.

  18. 22 Jul 2021

    Much of the focus has been on online algorithms, but we go back to traditional running times. Basic question: if we're given a bunch of matching instances from some distribution, can we learn something so that in the future we can compute matchings much faster?

  19. 22 Jul 2021

    There's been a super interesting line of work on "algorithms with ML predictions", where the goal is to show that it's possible to combine ML with more traditional algorithms to get the best of both worlds: traditional worst-case guarantees, but great performance if the ML is accurate.

  20. 22 Jul 2021

    Super excited about a new preprint, "Faster Matchings via Learned Duals", with Sungjin Im, Thomas Lavastida, Ben Moseley, and . Long story short: we can use ML to massively speed up min-cost perfect matching computations!

