Federico Cabitza

@cabitzaf

An interactionist with a bent for Health Informatics; Associate Professor of Human-Computer Interaction @ University of Milano-Bicocca, Italy.

Joined May 2017.

Tweets


  1. Pinned tweet
    8 Dec 2018
  2. 1 Feb

    There are many reasons why an AI effort can fail. First of all, focusing on accuracy and performance instead of trust and practice. Then, decoupling AI from data governance: AI regards task automation, and its deployment is an agent of change.

  3. 28 Jan

    "In a 2010 Pew Research survey of some 400 prominent thinkers, more than 80 percent agreed that, “by 2020, people’s use of the Internet [will have] and they [will] become smarter and make better choices.” The year 2020 has arrived."

  4. 28 Jan

    A new paper addressing "ambiguity, intended as a characteristic of *any data expression* for which a unique meaning cannot be associated ... for either lack of information or multiple interpretations of the same configuration". Ground Truth, anyone?

  5. 27 Jan

    ‘How to Implement Responsible AI' is different from ‘How to Implement AI Responsibly’. "Was the WEF suggesting that AI should itself be held responsible for, say, bias or lack of transparency in its decisions?..."

  6. 23 Jan

    "Can we develop better methods for preparing the human part of the centaur to better manage the machine intelligence? [...] Instead of building smarter machines, we need to build machines that make us smarter".

  7. 15 Jan

    A study that "included data from approximately 100 million patient encounters with about 155 000 physicians from 417 health systems [found that] physicians spent an average of 16 minutes and 14 seconds per encounter [actively] using EHRs".

  8. 11 Jan

    "If doctors ask the wrong questions to begin with—if they put AI to work pursuing faulty premises—then the technology will be a bust. It could even serve to amplify our earlier mistakes." VIA

  9. 8 Jan

    There is little to add to this inspired and inspiring viewpoint by . We've recently focused on a specific type of cyber-social systems, those bound together by a sense of collaborative effort and common ground. We called 'em cyborks, to recognize their liquid nature.

  10. 8 Jan

    "The degree to which humans can control a cyber-social system depends on the nature of human-machine coalitions [...But] our understanding of how blended coalitions of humans and AI function is just as unevolved and requires interdisciplinary study".

  11. 5 Jan

    "A new field ... called “ML ops”, that focuses not on building or developing models, but rather managing them in operation. ML ops is focused on model versioning, governance, security, iteration, and discovery", that is on QA in the context of AI.

  12. 5 Jan

    The role of ethics in the AI discourse is, IMO, to cultivate a concern for the implications of automating something, and a sense of whether those consequences are good or bad. Thus, it is focused, practical, open, situated, and with the rare virtue of inconclusiveness. cc 🙏

  13. 30 Dec 2019

    Human-AI Interaction must go beyond the traditional model of HCI, which dates back to the '60s and is conditioned by the limits of "single-user" input devices, so as to enter the dimension of collaborative and collectively creative work (the only dimension where AI can be useful, IMO).

  14. 26 Dec 2019

    "Studies show, software with impressive results in a computer lab can founder when tested in real time [...] That’s because diseases are more complex and the health care system far more dysfunctional than many computer scientists anticipate."

  15. 22 Dec 2019

    “Having a robot write for you — it's a rather clever business plan, but it seems like a complete betrayal”. The 'mushin shodo' (a no-mind way of writing) of our time. Via

  16. 22 Dec 2019

    “All progress in increasing the fertility of the soil for a given time, is a progress towards ruining the lasting sources of that fertility.” (K. Marx, Capital, 1867, p. 555). Might this also apply to the fertility of the mind? Our children will tell us.

  17. 17 Dec 2019

    Once again: "special attention should be paid to the possible development of an emotional connection btw humans and robots ‒ particularly in vulnerable groups [for] the issues re the serious... impact that this emotional attachment could have on humans"

  18. 15 Dec 2019

    Quantified "feedback can lead to overtraining, poor results and unhealthy behaviors. The bottom line, many experts say, is that far too many athletes are overdependent on their devices." VIA

  19. 14 Dec 2019

    "Disclosures about AI pose their own risks: Explanations can be hacked, releasing additional information may make AI more vulnerable to attacks, and disclosures can make companies more susceptible to lawsuits or regulatory action."

  20. 13 Dec 2019

    "There’s little scientific basis to emotion recognition technology (ERT) so it should be banned from use in decisions that affect people’s lives" (the AI Now Institute in its annual report). It's wrong and dangerous, that's why ERT should be banned.

  21. 12 Dec 2019

    Since we entered the consumer age, and even more so the age of personalization, we have built around us a "comfort cage" from which it is very difficult to get out. It is wasted effort to try to escape the pleasure tarball for *our* humanity; let's try instead to understand where its offspring is headed.

