PLDI needs to start taking HCI evaluation seriously. Example: a paper claims its PL has "intuitive semantics" (https://dl.acm.org/doi/pdf/10.1145/3385412.3386007). The press release claims "the first intuitive programming language for quantum computers" (https://ethz.ch/en/news-and-events/eth-news/news/2020/06/the-first-intuitive-programming-language-for-quantum-computers.html). Let's see the proof.
WHAT? Are you kidding? You're getting "more intuitive" from "shorter"? Why don't you compare J and Python, then tell me which one's shorter, and which one's more readable and writable. For reference, J quicksort: quicksort=: (($:@(<#[), (=#[), $:@(>#[)) ({~ ?@#)) ^: (1<#)
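To make the contrast concrete: here's a plain Python quicksort as a point of comparison with the J one-liner above. This is a sketch I'm adding for illustration (it uses a middle-element pivot rather than J's random pivot), not code from the thread or the paper. More lines, but far easier to read and modify:

```python
def quicksort(xs):
    """Sort a list, written for readability rather than brevity."""
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    # Partition into three sublists relative to the pivot.
    less    = [x for x in xs if x < pivot]
    equal   = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

By the shorter-is-more-intuitive logic, the J one-liner should win; most readers would disagree.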
I do think the authors claimed this in good faith. They made sensible high-level arguments in Sec 2 about the value of their programming model ("automatic uncomputation"). They also argue in Sec 8.2 that their PL better captures the "intent" of an algorithm than Q#.
This shows the huge gap in evaluation between what we can quantitatively show and what we really care about. The authors have sophisticated PL tools, but primitive HCI tools. They either need better experimental methodologies or better cognitive models for testing comprehension.
For this paper, I think a better eval would be to pick 3 interesting programs and do a qualitative comparison of their lang vs. Q#. No LoC, no metrics, just a careful expert analysis of why they think their lang is a better representation.