Inspired by & tx for the shout-out, & because asked: here’s a long 🧵 about my DPhil, both the dissertation and the projects I did alongside. As a doctoral student, my process & path were atypical and productively messy.
(I reckon big research projects are messy; one part of the joy of writing is thinking about crafting a particular narrative out of that years-long experience. Without a fully-funded PhD, you often just pick up whatever you can work-wise; these will influence the final outcome.)
PhD-level work is humbling (in the best & worst ways) wrt the making of any knowledge. Here, one is asked to explicitly acknowledge how research contributes to a field. Hence, it makes fields;
and as such, it can be political in its use of methods and influences to refuse or reinforce gatekeeping about what is (in) a field.
So, reflexively, I became interested in how we are making knowledge about AI’s objects, how language and ideas get (re)shaped through the power of innovation, ‘science’, & tech. This influenced my inquiry.
My work was also influenced by coming to the academy in this "atypical" manner, and after being a researcher in NGOs outside it for so long. The learning and unlearning have been worth it. And so hard.
While I was finishing the dissertation, I did two projects related to knowledge-making about AI. These were not in the final dissertation text, tho they have influenced my thinking. A Dictionary of AI: A Is For Another is one of them: aisforanother.net
There’s also a piece forthcoming in ACM Interactions Mag about AI metaphors, based on similar ideas of knowledge-making about emergent technologies. The metaphors project and the 'Dictionary' are applied “cultural semiotics” (sort of) and Humanities projects
about who gets to speak about AI, and how, and what this reveals about the assumptions about this technology.
Representationalism, acc to Karen Barad, is the failure to make clear where we get our representations about the world from. About six years ago, I thought it was interesting how ‘ethics’ was taking shape around AI.
I argue that knowledge about ‘ethics’ (in the analytical philosophy traditions) is being produced and normalised through the logics and demands of computational systems (varieties of current AI/data tech), their business policies, and their organisations.
And vice versa. Strands of analytical philosophy-oriented approaches to ethics, influenced by behavioural economics, mathematics, game theory, social psychology, and cognitive sci research, are historically tied to these AI/ML technologies.
In the opening lines of this essay, Evgeny Morozov writes about how Web3 is taking shape thru the performative force of language, and how that happened with Tim O'Reilly's influential utterances
Utterances. Interpellations. Performativity. From prayers and chants, to the use of and limits of pronouns, there are social and economic apparatuses that enable ideas, people, communities to take shape. All that good poststructuralist stuff.
Science has its languages: numbers, measures and measurement devices, diagrams, and similar visual metaphors of mathematical relationships. Unpick these languages and you can understand the assumptions underlying them, and the worlds they create and shape.
I’m saying something that is well-established: that this scientific knowledge is an object of culture too. Some fine ideas about how we might track and know algorithms *in* and *as* culture have been super helpful in my work doi-org.ezp.lib.cam.ac.uk/10.1177/205395
The title of my work is The Ethical Apparatus. It investigates the socio-cultural, material, and performative practices that enable meaning-making about tech. How and why do we understand a car as 'driverless', or a technology as ‘ethical’?
This is where this work is not just about language and semiotics but also the situated, specific, material, and cultural contexts of that language’s emergence and shape. (I know this sounds like anthropology to some people, but it is not!)
Between the desire/need to regulate Silicon Valley, and Stuart Russell's ‘value alignment’ problem, i.e. building ML/reinforcement learning systems with goals aligned to human value systems, ‘ethics’ has taken on a strange disposition.
This work tracks the rise and fall of one such distinct discourse of ethics, that of the ethics of autonomous driving.
I'm saying that this frame around ethics is intentionally made, and I wanted to understand this process and its implications for knowledge-making and tech power, and for human bodies, societies, and spaces.
The 'ethical apparatus' is an analytical frame to think about the AV as device and dispositif, to show how the imaginaries and infrastructures of the technology engineer (unintended pun) measures of autonomy and ethics.
This work was most definitely *not* about arriving at a normative frame for ethics, but about how the figure of the driverless car (as imaginary, as infrastructure, as a highly regulated and regulating 20th-c media technology) influences the notion of the ethical, & ethical norms
I make all three cultural ontologies of this machine (imaginary, infrastructure, media) part of my analysis and writing. Convergently, my colleague brings a similar kind of framing to the term he leads, on Histories of AI, in our jointly-led MSt
I do a deep dive into proposals for Machine Ethics, the Trolley Problem, and the Moral Machine (MM) Project. I triangulate w/ expert interviews to understand their emergence and inter-relationships. And how the MM presages a shift of the ethical to the probabilistically optimal.
I discuss how stakeholders & experts brought their own disciplinary training and frothy epistemic-culture anxieties to their engagement with this artifact of the car. There’s also amplification of “the ethics of autonomous driving” by think tanks, academics, TED talks, tech press
Key contributions: the emergence of machine learning as its own regulator in the machine ethics frame; the computer becomes the destination of its own address. And the transformation of the ethical into the optimal.
The frame of incomputability actually holds together the work on the ethics discourse and the other big chapter, the one on the 'ironies of autonomy'. In the dissertation text, I did not dwell on 'incomputabilities' explicitly enough.
The ‘ironies of autonomy’ chapter was published in one version here at the end of 2020: nature.com/articles/s4159. I’m also on the podcast talking about it
Here I discuss auto-pilot technology and show the influence of aviation on this high-end mobility innovation, but also on how we think about machine-operator relationships. And how this shapes the AV as a car, but also as data infrastructure. I write about crashes here.
I write about the dissonance, the blur as Bratton puts it, in the shaping of this artifact, and the re-mediation in this. Lots of enjoyable media theory and tech theory writing in this, woven in with field experiences. I like this chapter a lot.
Key contributions: The multiple ironies of ‘autonomy’ in terms of a mapping of the substantial cognitive and data flows of embodied human work inside the large distributed infrastructures of crash accountability in automated driving.
And: the world always having to change to accommodate the robot/AI; in the case of advanced (-ing) tech, humans tend to be penalised for not accommodating the machine. What is referred to as the “moral crumple zone”.
Shout out also to the v insightful work on the 'flat ontology of vehicular navigation'. The AV reconstructs map as territory; there is no real outside the map for an AV-as-algorithmic infrastructure: elibrary.steiner-verlag.de/content/chapte
The crisis of the emergent driverless car, its crashes, its "ethics" is that it is a compression of many layers of incomputabilities. This work attempts to trace where these manifest, and how.
Incomputabilities: there is something unknowable to humans that only computation can achieve, and hence the interest in using algorithmic systems to compute at scale. Then there are the limits of knowing by algorithmic systems, where the world changes or simply presents realities
