Several data brokers including Epsilon (owned by French Publicis), Nielsen (UK), Neustar (TransUnion), Eyeota (Dun & Bradstreet), Oracle, Experian, LiveRamp sold lists of 'union members' via the Xandr data marketplace.
...which is highly problematic, not just under the GDPR.
The most striking aspect of Apple's Privacy Manifest developer workshop was its expansive definition of fingerprinting: "Fingerprinting is using signals from the device to try to identify the device or user." (1/X)
The marketing data industry is broken and out of control. These data brokers harvest personal data on billions of people every day and sell it to myriad parties. No one knows exactly where the data flows or how it is used. This destroys any trust in digital technology and must stop.
Machine learning models don't yield scientific understanding because they don't represent the underlying causal processes they predict.
Important exception: when that underlying causal process actually IS an ML model (e.g. third party external algorithm audits)
Headline in the sports section of the London edition of The Daily Worker, today 9 June 1936, calling for athletes and spectators to travel to Barcelona's antifascist People's Olympiad, not the racist Nazi Games in Berlin. Source: Warwick University
I'm afraid this confirms what I've thought for a while. The regulatory space around AI has been captured by the sector. No room for academics, third sector, or external policy advisors. Quotes here from execs at DeepMind, Anthropic, Palantir, MS, and of course the PM.
The UK will host the first global summit on AI safety.
Acting as an international nexus, the UK will lead conversation on safe, responsible and regulated AI that could have the power to change lives for the better.
Find out more below: https://gov.uk/government/news/uk-to-host-first-global-summit-on-artificial-intelligence…
This press release for the UK AI Safety Summit features DeepMind, Anthropic, Palantir, Microsoft and Faculty and not a single voice from civil society or academia, and no one with lived experience of algorithmic harms
on "Electronic Monitoring Smartphone Apps: An Analysis of Risks from Technical, Human-Centered, and Legal Perspectives." I did this work in collaboration with Anita Alem,
These photos are beautiful, almost enough to convince me our season wasn't so bad after all.
They capture the moments football is all about: nervous energy on the high road; subs scoring hat tricks off the bench; the milliseconds between ball hitting net and crowd roaring ...
Conference registration fee for in-person attendees: $300
Registration fee for virtual attendees: $260
Is the conference venue, food, etc, really, really, crap? Or the virtual platform really, really good?
A while back I gave a talk on whether monitoring wastewater for Covid measurement purposes could constitute processing personal data. From this thread it looks like in Ohio they are closing in on a uniquely risky coronopooper
Amid many government nudge campaigns released by Meta, we found counter-terror and counter-radicalisation nudge campaigns, and campaigns for people to put their bins out on time. But the level of microtargeting - and the use of proxies for sensitive category data - was often worrying.
- our research shows the Home Office and UK government using extensive microtargeting for digital 'Go Home' nudge campaign - but which also targeted many Arabic speakers in France and Belgium with terrifying ads.
NEW: In a bid to deter immigration, the U.K. government is targeting vulnerable communities with threatening and deeply invasive online messages, reports @JohnnyHistone in an exclusive for New Lines. https://newlinesmag.com/reportage/invisible-and-unaccountable-how-governments-communicate/…
Yep. Recent CDT call a good case in point. An entire cohort of PhD training limited to what interests industry. As if there aren't pressing social and humanitarian problems to research!
This is incredibly frustrating. As academics we are expected to have a positive social impact; one of the best ways to do that is by working directly with low-resource orgs, and yet research partnerships are expected to follow the model of big industrial firms with R&D budgets
There seems to be a presumption baked into funding from the Research Councils that everyone outside of academia is rolling around in piles of spare cash, hoping to be asked to work for free as part of the noble cause of furthering research. But! Sadly, that's not the case.
From September, research how we can audit privacy technologies and understand algorithmic risks for research in social and health science.
Apply by June 19th, or please share around!
What does climate inequality look like?
The top map shows national responsibility for climate breakdown. The bottom maps show projected climate-change attributable health and mortality risks in 2050.
To be clear, at this time and for the foreseeable future, there does not exist any AI model or technique that could represent an extinction risk for humanity. Not even in nascent form, and not even if you extrapolate capabilities far into the future via scaling laws.
What Kathleen Stock thinks about your true metaphysical gender is pretty trivial in comparison to Kathleen Stock campaigning to rescind your access to spaces you need to exist in public and for people to be free to treat you in antagonistic and humiliating ways at school and work
Given where I work and what I do, I feel like I ought to say:
- I don't believe AI is going to destroy humanity
- I don't identify with existential risk, effective altruism, transhumanism, etc.
- I don't take funding from big tech
And the same goes for many of my co-workers 😄
Written about how the "cost of living crisis" contains something worse: a "cost of adaptation crisis", the chaotic adjustment to a world undergoing severe ecological stress. 1/3
💯 !
data protection is not (just) about privacy, or individual control… Anyone who claims this is (knowingly or not) validating dishonest big tech narratives and actively harms the potential for what little actionable law we have to constrain AI/data power
Very interesting to see the FTC throwing around requirements to delete illegally gathered data while the EU, with all its powers, lags behind... even the Meta deletion order is unlikely to actually happen, and there is no clear indication deletion / retraining might be required in the ChatGPT cases
WOW: Under the proposed federal court order also filed by DOJ, Amazon will be required to delete inactive child accounts and certain voice recordings and geolocation information and will be prohibited from using such data to train its algorithms.
@lilianedwards @RDBinns twitter.com/FTC/status/166…
Data protection IS AI regulation. With appropriate enforcement, it can stop harmful AI at the point of data collection (and beyond). Which is why I'm still optimistic about the work we started while I was at the
Speaking of real AI regulation grounded in reality!
The part about Amazon being "prohibited from profiting from unlawfully collected consumer videos" is huge. Data protection IS AI regulation. & in this case will likely mean undoing datasets, retraining/disposing of models, etc. twitter.com/FTC/status/166…
FTC says Ring employees illegally surveilled customers, failed to stop hackers from taking control of users' cameras. Under proposed order, Ring will be prohibited from profiting from unlawfully collected consumer videos, pay $5.8M in consumer refunds: https://bit.ly/3qm0J4A /1
“England, the land of our Nativity, is to be a common Treasury of livelihood to all, without respect of persons.”
‘A declaration from the poor oppressed people of England’, a pamphlet written by Gerrard Winstanley and signed by 44 Diggers, was first published #OnThisDay 1649.
"...many of us may have occasionally felt that our bosses lacked empathy. With algorithmic management, the good news is that the level of empathy is constant and predictable. The bad news is that there won’t be any"
Imagine thinking that those of us who disagree are necessarily not experts. There are so many experts who don't agree with anything about this, and many who don't even think "AGI" is a meaningful concept. I mean, hell, even within the same Turing Award you've got one holdout
If you don't agree that AGI is coming soon, you need to explain why your views are more informed than expert AI researchers. The experts might be wrong -- but it's irrational for you to assert with confidence that you know better than them.
Came to a similar conclusion in 2018 that users have agency and create their own echo chambers - called it 'socio-technical recursion'. 0 citations and no interest. Win some, lose some. Also, big quant always wins.
A new study in @Nature finds that exposure to and engagement with #partisan or unreliable news on Google Search are driven by users’ own partisan choices rather than algorithmic curation. https://nature.com/articles/s41586-023-06078-5…
This suggests hyper-partisans create their own echo chambers.
As somebody who teaches on an AI programme, the students are all very sensible and their ideas are great. The only stumbling block, if any, to critical thought is that other academics have taught them Kurzweil's and other TESCREAL ideologies without balance or critical thought. 1/
2a / The "accidental" researcher, who did not or could not plan their career path. For those who did not have a career plan, the "accidental" career pathway generally took a welcome turn, e.g. being continuously employed by a research group successful at securing funding.
's Computer Science department has utilised a sophisticated AI system to reveal that if Leonardo da Vinci had continued painting his Mona Lisa on a larger canvas it would have included an LZ 127 Graf Zeppelin and two demons drinking pints of lager.