Federico Cabitza

@cabitzaf

Health Informatics scholar (PhD), Assistant Professor of Human-Computer Interaction & Data Visualization @ University of Milano-Bicocca, Italy.

Joined May 2017

Tweets

  1. Pinned Tweet
    20 July 2017
  2. 2 hours ago

    Credit: Nathan Yau (); to me, an example of very well-designed interactive infoviz.

  3. 2 hours ago

    Only 10-15% of readers of interactive visualizations on the New York Times actually click buttons. Maybe, as suggested by , "dataviz people spend too much time thinking about the interactions themselves and less about the audience who is supposed to be using them."?

  4. 23 hours ago

    "It has become evident that automation doesn't supplant human activity; rather it changes the nature of the work that humans do, often in ways unintended and unanticipated by the designers of automation." Written in 1997 (by Parasuraman & Riley). Nothing new, but still current.

  5. 18 July

    From a systematic review on the use of in health care: "the few published studies were mainly quasi-experimental, and rarely evaluated efficacy or safety." Not much different from other AI applications...

  6. 12 July

    A study in Ann. Med. focuses on the range of strong assumptions, biases and limitations that affect even the top-cited and most influential RCTs. Vitriolic bottom line: is scientific understanding undermined in the name of computer-based randomisation?

  7. Retweeted
    11 July

    Given that for much clinical IT, including AI, we cannot guarantee safety (we delegate some autonomy to these systems, and not all operating contexts can be anticipated ahead of time), clinical AI needs to be designed to be resilient.

  8. 9 July

    Is it really too far-fetched to see this narrative of brain-boosting drugs as close to the AI augmentation one? Both narratives share the idea of the inadequacy of the human brain wrt modern time pressures and accuracy requirements...

  9. 8 July

    "Rather than outsourcing cognition, it’s about changing the operations and representations we use to think". Ambitious contribution to the evergreen narrative of AI as Intelligence Augmentation, by Google affiliates. In good company (w/ IBM, Deloitte...).

  10. 7 July

    A few questions by to ponder carefully, before even hoping, let alone advocating, that AI-powered decision support technology will spread in clinical settings any time soon.

  11. 7 July

    Lessons to assess AI fitness to medicine: "Nowadays, in addition to data interpretation and required knowledge, drs must be able to work as part of a team, communicate well [...] The perfectionist traits and personality types of those performing well in A level may not be suitable."

  12. 7 July

    The “Wizard of Oz design technique”: “You simulate what the ultimate experience of something is going to be. And... when it comes to AI, there is a person behind the curtain rather than an algorithm... to know if there was sufficient demand for it before making the investment."😮

  13. 3 July

    "the potential of AI to contribute to human discussion" (checking claims, recognizing types of arguments, summarizing points, offering alternative views and probing reasons): If I had to cheer for a practical application of AI, I would bet on this.

  14. 3 July

    "if the doll says it is cold and the child asks his or her parents to buy it a coat, is that advertising?"

  15. 1 July

    I don't mean to be nominalistic, but is a misnomer wrt . Bias doesn't lie in algos: in fact, it's all in the data itself. And ultimately it's us who discriminate against people, taking the machine output at face value. It's machine-induced bias, or credulity.

  16. 30 June

    Explainability advocates seem to overestimate the capability of users to make sense of AI explanations. However, these are but other data, output metadata, in the absence of a causal explanatory framework. Distrusting a black box is easier than distrusting a white box. Glassy automation bias.

  17. 30 June

    A conjecture re algo explainability: What if making medical AI more explicable made it generally more convincing, and hence more effective in inducing errors and bias when it's inaccurate? Plausible explanations may be decisive, but also for the worse. The white box paradox.

  18. 29 June

    The augmentation narrative of oracular AI in medicine considered harmful. For this technology aims to augment our cognition, not perception.

  19. 28 June
  20. Retweeted
    27 June

    Deceived by Design - How tech companies use dark patterns to discourage us from exercising our rights to privacy

  21. 26 June
