It's alarming that NeurIPS papers are being rejected based on "ethics reviews". How do we guard against ideological biases in such reviews? Since when are scientific conferences in the business of policing the perceived ethics of technical papers?
We already have computer science-wide ethical standards, set by ACM and others. It's not clear we need special ones for AI. Also, standards for research and standards for deployed applications are very different things.
-
ACM Code of Ethics: https://www.acm.org/code-of-ethics How many of these principles, especially 1.2 (avoid harm), 1.4 (take action not to discriminate), and 1.6 (respect privacy), are consistently respected in ML research or applications? Especially in the areas I mentioned, like person reID.
-
What are the consequences of breaking this code, given that many practitioners and some portion of researchers are not ACM members anyway? Should the principles of this code not be enforced during paper review?