Botometer.org is an online tool to check the recent activity of a Twitter account and calculate the likelihood that it uses automation. Higher scores are more bot-like. Formerly known as BotOrNot, part of the Observatory on Social Media at Indiana University.
Botometer
@Botometer
Tool to calculate likelihood an account is automated (a bot). Formerly known as BotOrNot, part of OSoMe at IU. Pls read FAQ before asking about permissions!
Botometer’s Tweets
Today's #recommendedreading is this article analysing the relationship between #partisanship, #echochambers, and vulnerability to online #misinformation by studying #news sharing behavior on #Twitter.
misinforeview.hks.harvard.edu/article/right-
Check out the new Hoaxy tool from the Observatory on Social Media: now working with the Twitter API v2, a new 3D diffusion network visualization, faster labeling of likely automated accounts, and more!
🚨New dataset🚨
Studying online discussion about the 2022 US midterm elections? Check out our dataset MEIU22, a collection of social media posts from multiple platforms.
Paper: arxiv.org/abs/2301.06287
Data+code: github.com/osome-iu/MEIU22
1/5
Check out our op-ed: Using Science To Guide Social Media Regulation
Our article 'How Twitter data sampling biases U.S. voter behavior characterizations' was one of the top 5 most viewed #NetworkScienceOnlineSocialNetworks articles published in the journal in 2022! 😊
Submit your papers to the 4th Cyber Social Threats workshop!
🚨Select papers invited for an extended article in a Special Issue of EPJ Data Science‼️
CfP: bit.ly/3Pmeb0G
Website: cy-soc.github.io/2023/
*Deadline: Feb. 6, 2023*
#TheWebConf #misinformation
🚨🚨🚨 Weekly reminder to submit to our workshop 🚨🚨🚨
The deadline is approaching (February 6th)!
Quote Tweet
Working on #misinformation and #cyberthreats?
Submit to the fourth #CySoc2023 workshop at @TheWebConf!
CfP: bit.ly/3Pmeb0G
Website: cy-soc.github.io/2023/
Long (8 pg), short (4 pg), and demo papers (2 pg) accepted!
Deadline (6 Feb) is approaching!
Please RT!
Fil Menczer understands the problems that made Twitter a terrible source of information for many people. There's wonderful material there, but the algorithms aren't designed to push people to the most honest or informative of it.
Here 👇 our vision on the current climate of uncertainty in the Twittersphere (and beyond) + a summary of our initiative to support social media regulation 🔜 about to start with the CARISMA project (carisma-project.org)
Twitter was always a deeply deceptive operation. The algorithms were always geared to amplify divisive content, as well as random noise. Some researchers, such as Fil Menczer, have been studying how it all works:
Particularly worrisome given our study linking COVID vaccine misinfo to lower vaccination rates (doi.org/10.1038/s41598): Twitter ends enforcement of COVID misinformation policy
📢 Fri 12/2 4pm CT: "Hacking Online #Virality"
Filippo Menczer will 💬 #NetworkAnalytics #modeling #ML efforts to 1⃣ study viral spread of #misinformation 2⃣ develop tools for countering #OnlineManipulation of opinions. 🖱️ bit.ly/menczer_12_2 [hybrid]
We comment on one of the many changes announced by Elon Musk, and how it might make it harder to detect Twitter abuse:
Dashboard to track social media activity on elections | news - Indiana Public Media
It's time to step up for Luddy! Meet Fil Menczer, who steps up to make Luddy one-of-a-kind. Here he is kicking back in the Luddy AI Center, the home of AI at IU.
Wondering what it means to step up? Find out at bit.ly/LuddyStepUp.
#LuddyStepUp
Our team just launched the US #Midterm #Elections 2022 #dashboard. See the top websites, hashtags, images, & accounts disseminating election content across Twitter, Facebook, and Instagram (more platforms to be added later). Check it out!
1. Super happy to be back as the new executive director 🥳
2. We're looking for a new postdoc! Tell your friends! Tell your enemies! Okay, maybe not your enemies...
Join us to work on a super cool project on social media modeling and moderation policies:
carisma-project.org
We're hiring a new postdoc at the Observatory on Social Media to model the spread of harmful disinformation and other abuse of social media, and to evaluate regulatory policies: please join or help us spread the word!
The awesome Fil Menczer speaks about “Hacking Online Virality” and the #complexity of the news ecosystem: echo chambers, complex contagion, limited attention, and much more. #CCS2022
Next Keynote #CCS2022 talk is Prof. Filippo Menczer from Indiana University on “Hacking Online Virality”, misinformation and echo chambers! 📰🦠 “Awesome” topics
So excited to meet Prof. Filippo Menczer in person and hear the awesome talk at #CCS2022 Hacking Online Virality - Very cool echo chamber emergence demo osome.iu.edu/demos/echo/ game interacting with misinformation fakey.osome.iu.edu and bot detection
Very honored our work ⚡️ identifying and characterizing superspreaders of misinformation ⚡️ won the Best Student Extended Abstract award! 🕺🏻🕺🏻
More details are in this thread...
twitter.com/mdeverna2/stat
Quote Tweet
Can we find and predict which accounts spread the most #misinformation on Twitter? What is Twitter doing about misinformation #superspreaders?
We take a stab at this problem in our new working paper. doi.org/10.48550/arXiv
A 🧵 for results…
Check out the summary thread for more details!
twitter.com/mdeverna2/stat
Quote Tweet
80% of fake news shares on social platforms are from a mere .1% of misinformation superspreaders. Who are they? @mdeverna2 shares strategies for identifying and characterizing online misinformation superspreaders #TTOCON2022
More details can be found in our paper "Exposure to social bots amplifies perceptual biases and regulation propensity" Link: osf.io/ap2qf/
7/7
Participants prefer stricter regulations after exposure to bots. They express preferences for regulations targeting bot operators and social media companies, which may be due to uncertainty about one's ability to mitigate bot manipulation and fear of bot influence on others.
6/7
Participants perceive a higher bot influence on others than on themselves. Perceived influence on others and on self both increase after exposure to bots, and the gap widens, showing a stronger bias.
5/7
Participants are confident in their ability to identify bots, but become less confident after exposure to bots. Interestingly, their self-assessment is not a very reliable predictor of their actual performance. Exposure to bots exacerbates their unreliability.
4/7
Now the fun begins. Participants, on average, report that 32% of social media accounts are bots, which is substantially larger than the estimates provided by Twitter and even Musk. This number becomes 38% after exposure to bots.
3/7
First, the setup. We ran an experiment in which participants are asked to distinguish bot-like social media profiles from authentic users. They answer questions regarding their perceptions of bots before and after bot exposure.
2/7
The prevalence of social bots has been at the center of the dispute between Twitter and Elon Musk amid Musk's takeover. But how does the public perceive the issue of social bots? Our new working paper sheds some light.
🧵 1/7
Link: osf.io/ap2qf/
Lots more in these threads:
Quote Tweet
We've done a decent amount of research on the use of GAN-generated images over the last two years, mostly fake face "photos" such as those produced by thispersondoesnotexist(dot)com. Here are all of our related threads in one place.
cc: @ZellaQuixote twitter.com/conspirator0/s…
Can we incorporate detection of GAN-generated faces into the next version of Botometer? @conspirator0 do you just use the eye position or is there more? Do you have software for this?
Quote Tweet
Suggestion for Twitter, Facebook, and other social media companies: require any account with a GAN-generated face to prominently disclose that the image is artificially generated and does not depict a real person. Remove accounts that don't comply with this policy as inauthentic. twitter.com/conspirator0/s…