OSN bot - chenyang03/Reading GitHub Wiki

Bot Overview

  • {Ferrara16} Emilio Ferrara, Onur Varol, Clayton Davis, Filippo Menczer, Alessandro Flammini. The Rise of Social Bots. Communications of the ACM, July 2016, Vol. 59 No. 7, Pages 96-104. Today’s social bots are sophisticated and sometimes menacing. Indeed, their presence can endanger online ecosystems as well as our society.
  • {Salge17} Carolina Alves de Lima Salge and Nicholas Berente. Is that social bot behaving unethically? Commun. ACM, 2017, 60(9):29–31.
  • {Cresci17} Stefano Cresci, Roberto Di Pietro, Marinella Petrocchi, Angelo Spognardi, and Maurizio Tesconi. The Paradigm-Shift of Social Spambots: Evidence, Theories, and Tools for the Arms Race. Proc. of WWW Companion, 2017.
  • {Abulaish20} Muhammad Abulaish, Mohd Fazil. Socialbots: Impacts, Threat-Dimensions, and Defense Challenges. IEEE Technology and Society Magazine, 2020, 39(3):52-61. Socialbots are program-controlled online social network profiles that imitate human behavior to camouflage their identity and project themselves as human beings.
  • {Khaund22} Tuja Khaund, Baris Kirdemir, Nitin Agarwal, Huan Liu, and Fred Morstatter. Social Bots and Their Coordination During Online Campaigns: A Survey. IEEE Transactions on Computational Social Systems, 2022, 9(2):530-545. presents a detailed survey of social bots, their types and behaviors, and how they impact social media, identification algorithms, and their coordination strategies in OSNs
  • {Mei24} Qiaozhu Mei, Yutong Xie, Walter Yuan, and Matthew O. Jackson. A Turing test of whether AI chatbots are behaviorally similar to humans. Proceedings of the National Academy of Sciences of the United States of America, 2024, 121(9):e2313925121. We administer a Turing test to AI chatbots. We examine how chatbots behave in a suite of classic behavioral games that are designed to elicit characteristics such as trust, fairness, risk-aversion, cooperation, etc., as well as how they respond to a traditional Big-5 psychological survey that measures personality traits. ChatGPT-4 exhibits behavioral and personality traits that are statistically indistinguishable from a random human drawn from tens of thousands of human subjects from more than 50 countries.

Bot Behavior

  • {Varol17} Onur Varol, Emilio Ferrara, Clayton A. Davis, Filippo Menczer, Alessandro Flammini. Online Human-Bot Interactions: Detection, Estimation, and Characterization. Proc. of AAAI ICWSM, 2017. simple bots tend to interact with bots that exhibit more human-like behaviors
  • {Chavoshi17} Nikan Chavoshi, Hossein Hamooni, Abdullah Mueen. Temporal Patterns in Bot Activities. Proc. of WWW Companion, 2017. perform temporal pattern mining on bot activities in Twitter
  • {Stella18} Massimo Stella, Emilio Ferrara, and Manlio De Domenico. Bots increase exposure to negative and inflammatory content in online social systems. Proceedings of the National Academy of Sciences, 2018, 115(49):12435-12440.
  • {Ishowo-Oloko19} Fatimah Ishowo-Oloko, Jean-François Bonnefon, Zakariyah Soroye, Jacob Crandall, Iyad Rahwan, and Talal Rahwan. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nature Machine Intelligence, 2019, 1:517-521. bots do better than humans at inducing cooperation, but that disclosing their true nature negates this superior efficiency
  • {Kim20} Taewan Kim, Mintra Ruensuk, and Hwajung Hong. In Helping a Vulnerable Bot, You Help Yourself: Designing a Social Bot as a Care-Receiver to Promote Mental Health and Reduce Stigma. Proc. of ACM CHI, 2020. proposed a Facebook-based social bot displaying depressive symptoms and disclosing vulnerable experiences that allows users to practice providing reactions online
  • {Pozzana20} Iacopo Pozzana, Emilio Ferrara. Measuring Bot and Human Behavioral Dynamics. Frontiers in Physics, 2020, 8:Article 125. study the behavioral dynamics that bots exhibit over the course of an activity session, and highlight if and how these differ from human activity signatures
  • {Caldarelli20} Guido Caldarelli, Rocco De Nicola, Fabio Del Vigna, Marinella Petrocchi, Fabio Saracco. The role of bot squads in the political propaganda on Twitter. Communications Physics, 2020, 3:Article 81. Results show that social bots play a central role in the exchange of significant content. Indeed, not only the strongest hubs have a number of bots among their followers higher than expected, but furthermore a group of them, that can be assigned to the same political tendency, share a common set of bots as followers.
  • {Pastor-Galindo20} Javier Pastor-Galindo, Mattia Zago, Pantaleone Nespoli, Sergio López Bernal, Alberto Huertas Celdrán, Manuel Gil Pérez, José A. Ruipérez-Valiente, Gregorio Martínez Pérez, Félix Gómez Mármol. Spotting Political Social Bots in Twitter: A Use Case of the 2019 Spanish General Election. IEEE Transactions on Network and Service Management, 2020, 17(4):2156-2170. classified involved users as social bots or humans, and examined their interactions from quantitative (i.e., amount of traffic generated and existing relations) and qualitative (i.e., user’s political affinity and sentiment towards the most important parties) perspectives
  • {Wischnewski21} Magdalena Wischnewski, Rebecca Bernemann, Thao Ngo, and Nicole Krämer. Disagree? You Must be a Bot! How Beliefs Shape Twitter Profile Perceptions. Proc. of ACM CHI, 2021. Accounts that are opinion-incongruent are evaluated as relatively more bot-like than accounts that are opinion-congruent
  • {Chen21} Wen Chen, Diogo Pacheco, Kai-Cheng Yang & Filippo Menczer. Neutral bots probe political bias on social media. Nature Communications, 2021, 12:Article 5580. deploy neutral social bots who start following different news sources on Twitter, and track them to probe distinct biases emerging from platform mechanisms versus user interactions
  • {Mazza22} Michele Mazza, Marco Avvenuti, Stefano Cresci, Maurizio Tesconi. Investigating the difference between trolls, social bots, and humans on Twitter. Computer Communications, 2022, 196:23–36.
  • {Le22} Thai Le, Long Tran-Thanh, and Dongwon Lee. Socialbots on Fire: Modeling Adversarial Behaviors of Socialbots via Multi-Agent Hierarchical Reinforcement Learning. Proc. of WWW, 2022. we successfully demonstrate that indeed it is possible for adversaries to exploit computational learning mechanisms such as reinforcement learning (RL) to maximize the influence of socialbots while avoiding being detected
  • {He22} Hao He, Haonan Su, Wenxin Xiao, Runzhi He, and Minghui Zhou. GFI-Bot: Automated Good First Issue Recommendation on GitHub. Proc. of ESEC/FSE, 2022. a proof-of-concept machine learning powered bot for automated GFI recommendation in practice
  • {Kat22} Daniel Kats and Mahmood Sharif. "I Have No Idea What a Social Bot Is": On Users' Perceptions of Social Bots and Ability to Detect Them. Proc. of HAI, 2022. we conducted an online study with 297 users of seven OSNs to explore their mental models of bots and evaluate their ability to classify bots and non-bots correctly
  • {Na23} Seung Ho Na, Sumin Cho, and Seungwon Shin. Evolving Bots: The New Generation of Comment Bots and their Underlying Scam Campaigns in YouTube. Proc. of ACM IMC, 2023. presents a pioneering investigation into a novel form of scam advertising method on YouTube, termed "social scam bots" (SSBs). These bots have evolved to emulate benign user behavior by posting comments and engaging with other users, oftentimes appearing prominently among the top-rated comments
  • {Cai23} Meng Cai, Han Luo, Xiao Meng, Ying Cui, Wei Wang. Network distribution and sentiment interaction: Information diffusion mechanisms between social bots and human users on social media. Information Processing and Management, 2023, 60:103197. social bots tend to transmit information with negative sentiments more than positive sentiments; XGBoost-based detection model
  • {Mouronte-López24} Mary Luz Mouronte-López, Javier Gómez Sánchez-Seco & Rosa M. Benito. Patterns of human and bots behaviour on Twitter conversations about sustainability. Scientific Reports, 2024, 14:Article 3223.
  • {Baudier25} Patricia Baudier, Elodie de Boissieu. Are social media robot influencers credible? A cross-continental analysis in a fashion context. Computers in Human Behavior, 2025, 162:108447. The impact on attitude of expertise, physical attractiveness, content attractiveness, and trustworthiness is demonstrated. The anthropomorphic appearance of the robot influencer is considered a key element.
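
The temporal-pattern studies above ({Chavoshi17} in particular) build on the observation that automated accounts often post on near-periodic schedules, while human activity is bursty. A minimal, stdlib-only sketch of one such signal, the coefficient of variation (CV) of inter-event times, on fabricated timestamp lists:

```python
import statistics

def interevent_cv(timestamps):
    """Coefficient of variation of inter-event times (std / mean).

    Near-periodic, bot-like posting gives a CV close to 0; bursty
    human activity typically gives a CV around 1 or above.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.pstdev(gaps) / statistics.mean(gaps)

# A strictly periodic account: one post every 600 seconds.
periodic = [600 * i for i in range(20)]
# A bursty account: short bursts separated by long silences.
bursty = [0, 5, 9, 14, 3600, 3610, 3622, 7200, 7300, 7305]

print(interevent_cv(periodic))        # → 0.0
print(round(interevent_cv(bursty), 2))
```

Real detectors combine many such session-level statistics rather than thresholding a single one; this only illustrates why timing alone already carries signal.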

Bot detection

  • {Beskow19} David M. Beskow, Kathleen M. Carley. Its all in a name: detecting and labeling bots by their name. Computational and Mathematical Organization Theory, 2019, 25:24–35.
  • {Yao20} Tianjun Yao, Qing Li, Shangsong Liang, and Yadong Zhu. 2020. BotSpot: A Hybrid Learning Framework to Uncover Bot Install Fraud in Mobile Advertising. Proc. of ACM CIKM, 2020.
  • {Yang20} Kai-Cheng Yang, Onur Varol, Pik-Mai Hui, Filippo Menczer. Scalable and Generalizable Social Bot Detection through Data Selection. Proc. of AAAI, 2020. propose a framework that uses minimal account metadata, enabling efficient analysis that scales up to handle the full stream of public tweets of Twitter in real time
  • {Sayyadiharikandeh20} Mohsen Sayyadiharikandeh, Onur Varol, Kai-Cheng Yang, Alessandro Flammini, and Filippo Menczer. Detection of Novel Social Bots by Ensembles of Specialized Classifiers. Proc. of ACM CIKM, 2020. supervised learning techniques suffer severe performance deterioration when attempting to detect behaviors not observed in the training data; propose a new supervised learning method that trains classifiers specialized for each class of bots and combines their decisions through the maximum rule
  • {Rout20} Rashmi Ranjan Rout, Greeshma Lingam, D. V. L. N. Somayajulu. Detection of Malicious Social Bots Using Learning Automata With URL Features in Twitter Network. IEEE Transactions on Computational Social Systems, 2020, 7(4):1004-1018. a learning automata-based malicious social bot detection (LA-MSBD) algorithm is proposed by integrating a trust computation model with URL-based features for identifying trustworthy participants (users) in the Twitter network
  • {Martini21} Franziska Martini, Paul Samula, Tobias R Keller, and Ulrike Klinger. Bot, or not? Comparing three methods for detecting social bots in five political discourses. Big Data & Society, 2021, 8(2). While all three bot detection methods classified accounts as bots in all our cases, the comparison shows that the three approaches produce very different results
  • {Schuchard21} Schuchard RJ, Crooks AT. Insights into elections: An ensemble bot detection coverage framework applied to the 2018 U.S. midterm elections. PLOS ONE, 2021, 16(1): e0244309. findings suggest that social bot research efforts must incorporate multiple detection sources to account for the variety of social bots operating in OSNs, while incorporating improved or new detection methods to keep pace with the constant evolution of bot complexity
  • {Zhao21} Tianxiang Zhao, Xiang Zhang, Suhang Wang. GraphSMOTE: Imbalanced Node Classification on Graphs with Graph Neural Networks. Proc. of ACM WSDM, 2021. GraphSMOTE constructs an intermediate embedding space with a feature extractor, and trains an edge generator and a GNN-based node classifier simultaneously on top of that
  • {Feng21} Yebo Feng, Jun Li, Lei Jiao, Xintao Wu. Towards Learning-Based, Content-Agnostic Detection of Social Bot Traffic. IEEE Transactions on Dependable and Secure Computing, 2021. proposes a method called BotFlowMon that relies only on content-agnostic flow-level data as input to identify OSN bot traffic
  • {Fazil21} Mohd Fazil, Amit Kumar Sah, Muhammad Abulaish. DeepSBD: A Deep Neural Network Model With Attention Mechanism for SocialBot Detection. IEEE Transactions on Information Forensics and Security, 2021, 16:4211-4223. The DeepSBD models users' behavior using profile, temporal, activity, and content information. It jointly models OSN users' behavior using Bidirectional Long Short Term Memory (BiLSTM) and Convolutional Neural Network (CNN) architectures.
  • {Abdellatif22} Ahmad Abdellatif, Mairieli Wessel, Igor Steinmacher, Marco A. Gerosa, and Emad Shihab. BotHunter: an approach to detect software bots in GitHub. Proc. of MSR, 2022.
  • {Moghaddam23} Samaneh Hosseini Moghaddam, Maghsoud Abbaspour. Friendship Preference: Scalable and Robust Category of Features for Social Bot Detection. IEEE Transactions on Dependable and Secure Computing, 2023, 20(2):1516-1528. in this article, a new category of features, called friendship preference, is proposed. Friendship preference features are extracted from the profile attributes of the followers
  • {Li23a} Shilong Li, Boyu Qiao, Kun Li, Qianqian Lu, Meng Lin, and Wei Zhou. Multi-modal Social Bot Detection: Learning Homophilic and Heterophilic Connections Adaptively. Proc. of ACM MM, 2023. propose a multi-modal social bot detection method with learning homophilic and heterophilic connections adaptively (BothH for short)
  • {Arin23} Efe Arin, Mucahid Kutlu. Deep Learning Based Social Bot Detection on Twitter. To appear: IEEE Transactions on Information Forensics and Security.
  • {Mbona23} Innocent Mbona, Jan H P Eloff. Classifying social media bots as malicious or benign using semi-supervised machine learning. Journal of Cybersecurity, 2023, 9(1):tyac015. focus on identifying meaningful features indicative of anomalous behavior between benign and malicious bots
  • {Hays23} Chris Hays, Zachary Schutzman, Manish Raghavan, Erin Walk, and Philipp Zimmer. Simplistic Collection and Labeling Practices Limit the Utility of Benchmark Datasets for Twitter Bot Detection. Proc. of WWW, 2023. Our findings reveal that predictions are highly dependent on each dataset’s collection and labeling procedures rather than fundamental differences between bots and humans
  • {Yang23} Yingguang Yang, Renyu Yang, Hao Peng, Yangyang Li, Tong Li, Yong Liao, Pengyuan Zhou. 2023. FedACK: Federated Adversarial Contrastive Knowledge Distillation for Cross-Lingual and Cross-Model Social Bot Detection. Proc. of WWW, 2023.
  • {Li23b} Shudong Li, Chuanyu Zhao, Qing Li, Jiuming Huang, Dawei Zhao, Peican Zhu. BotFinder: a novel framework for social bots detection in online social networks based on graph embedding and community detection. World Wide Web, 2023, 26:1793–1809. We propose BotFinder, a framework to detect malicious social bots in OSNs. Specifically, it combines machine learning and graph methods so that the potential features of social bots can be effectively extracted.
  • {Zhou23a} Ming Zhou, Dan Zhang, Yuandong Wang, Yangli-ao Geng, and Jie Tang. Detecting Social Bot on the Fly using Contrastive Learning. Proc. of ACM CIKM, 2023. propose a Contrastive Learning-driven Social Bot Detection framework (CBD)
  • {Zhou23b} Ming Zhou, Wenzheng Feng, Yifan Zhu, Dan Zhang, Yuxiao Dong, and Jie Tang. Semi-Supervised Social Bot Detection with Initial Residual Relation Attention Networks. Proc. of ECML/PKDD, 2023. we analyze human-bot networks and propose SIRAN, which combines relation attention with initial residual connection to reduce and prevent the noise aggregated from neighbors to improve the capability of distinguishing different kinds of nodes on social graphs with heterophily
  • {Liu25} Feng Liu, Chunfang Yang, Zhenyu Li, Daofu Gong, Fenlin Liu. Accou2vec: A Social Bot Detection Model Based on Community Walk. To appear: IEEE Transactions on Dependable and Secure Computing. propose a social bot detection method, named Accou2vec, based on community walk. First, in order to cut off the attacking edges between the human and bot accounts, the deep autoencoder-like non-negative matrix factorization community detection algorithm is leveraged to divide the social graph into multiple subgraphs. Then, we design the community walk rule that controls the intra-community walk and inter-community walk differently, considering both the number of nodes and edges in the community. Subsequently, graph representation learning is used to learn the representation vector of each account. Finally, the representation vectors of labeled social bots and human accounts are used to train the classifier for social bot detection
  • {Zhang25} Xuan Zhang, Tingshao Zhu, and Baobin Li. Online Social Behaviors: Robust and Stable Features for Detecting Microblog Bots. To appear: IEEE Transactions on Computational Social Systems. introduces a category of online social behavior features (OSBF), derived from microblog behaviors such as emotional expression, language organization, and self-description.
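
Many of the detectors above start from cheap, content-agnostic features: {Beskow19} works from the account name alone, {Yang20} from minimal profile metadata. A stdlib sketch of a few such features; the `user` dict layout is an assumption for illustration, not any paper's actual schema:

```python
import math
from collections import Counter

def name_entropy(name: str) -> float:
    """Shannon entropy (bits/char) of a screen name. Randomly generated
    bot handles tend to score higher than names built from words."""
    counts = Counter(name)
    n = len(name)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def metadata_features(user: dict) -> dict:
    """Lightweight features in the spirit of {Beskow19} (name-based)
    and {Yang20} (minimal metadata). Field names are illustrative."""
    name = user["screen_name"]
    return {
        "name_length": len(name),
        "name_digits": sum(ch.isdigit() for ch in name),
        "name_entropy": round(name_entropy(name), 3),
        # +1 smoothing avoids division by zero for brand-new accounts.
        "follower_friend_ratio":
            user["followers_count"] / (user["friends_count"] + 1),
    }

print(metadata_features({
    "screen_name": "xk7q2r9w81", "followers_count": 3, "friends_count": 950,
}))
```

In practice such feature vectors feed a supervised classifier (random forests and gradient boosting are common choices in this literature); the extraction step itself is what keeps these detectors fast and scalable.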

Bot detection - Deep Learning

  • {Feng21b} Shangbin Feng, Herun Wan, Ningnan Wang, Jundong Li, Minnan Luo. SATAR: A Self-supervised Approach to Twitter Account Representation Learning and its Application in Bot Detection. Proc. of ACM CIKM, 2021.
  • {Feng22a} Shangbin Feng, Zhaoxuan Tan, Rui Li, Minnan Luo. Heterogeneity-aware Twitter Bot Detection with Relational Graph Transformers. Proc. of AAAI, 2022. leverages the topological structure of user-formed heterogeneous graphs and models varying influence intensity between users.
  • {Ng23} Lynnette Hui Xian Ng, Kathleen M. Carley. BotBuster: Multi-Platform Bot Detection Using A Mixture of Experts. Proc. of AAAI ICWSM, 2023. In this paper, we propose BotBuster, a social bot detector built with the concept of a mixture-of-experts approach. Each expert is trained to analyze a portion of account information, e.g. username, and the experts are combined to estimate the probability that the account is a bot.
  • {Liu23} Yuhan Liu, Zhaoxuan Tan, Heng Wang, Shangbin Feng, Qinghua Zheng, and Minnan Luo. BotMoE: Twitter Bot Detection with Community-Aware Mixtures of Modal-Specific Experts. Proc. of SIGIR, 2023. we propose BotMoE, a Twitter bot detection framework that jointly utilizes multiple user information modalities (metadata, textual content, network structure) to improve the detection of deceptive bots. Furthermore, BotMoE incorporates a community-aware Mixture-of-Experts (MoE) layer to improve domain generalization and adapt to different Twitter communities.
  • {Liu24b} Feng Liu, Zhenyu Li, Chunfang Yang, Daofu Gong, Haoyu Lu, Fenlin Liu. SEGCN: a subgraph encoding based graph convolutional network model for social bot detection. Scientific Reports, 2024, 14:4122. propose a subgraph encoding based GCN model, SEGCN, with stronger expressive power for social bot detection. Each node representation of this model is computed as the encoding of a surrounding induced subgraph rather than encoding of immediate neighbors only.
  • {Yang24a} Yingguang Yang, Qi Wu, Buyun He, Hao Peng, Renyu Yang, Zhifeng Hao, and Yong Liao. SeBot: Structural Entropy Guided Multi-View Contrastive Learning for Social Bot Detection. Proc. of ACM KDD, 2024. use structural entropy as an uncertainty metric to optimize the entire graph’s structure and subgraph-level granularity, revealing the implicitly existing hierarchical community structure
  • {Cai24} Zijian Cai, Zhaoxuan Tan, Zhenyu Lei, Zifeng Zhu, Hongrui Wang, Qinghua Zheng, and Minnan Luo. LMBot: Distilling Graph Knowledge into Language Model for Graph-less Deployment in Twitter Bot Detection. Proc. of ACM WSDM, 2024. propose a novel bot detection framework LMBot that distills the graph knowledge into language models (LMs) for graph-less deployment in Twitter bot detection to combat the data dependency challenge
  • {Qiao25} Boyu Qiao, Wei Zhou, Kun Li, Shilong Li, Songlin Hu. Dispelling the Fake: Social Bot Detection Based on Edge Confidence Evaluation. To appear: IEEE Transactions on Neural Networks and Learning Systems. Interaction with real accounts results in the graph structure containing camouflaged and unreliable edges. These unreliable edges interfere with the differentiation between bot and human representations, and the iterative graph encoding process amplifies this unreliability. In this article, we propose a social Bot detection method based on Edge Confidence Evaluation (BECE). Our model incorporates an edge confidence evaluation module that assesses the reliability of the edges and identifies the unreliable edges
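
The mixture-of-experts combination step used by BotBuster ({Ng23}) and BotMoE can be illustrated in a few lines: each expert scores one slice of account information, and the scores are merged into a single bot probability. The fixed weights below stand in for the learned gating network and are invented for the example:

```python
def combine_experts(expert_scores, weights=None):
    """Combine per-modality bot probabilities into one estimate.

    Each 'expert' scores one slice of account information (username,
    description, posting pattern, ...). In the real systems a learned
    gating network sets the weights per account; a weighted average
    with hand-picked weights is an illustrative stand-in.
    """
    if weights is None:
        weights = {name: 1.0 for name in expert_scores}
    total = sum(weights[name] for name in expert_scores)
    return sum(p * weights[name] for name, p in expert_scores.items()) / total

scores = {"username": 0.9, "description": 0.4, "temporal": 0.8}
print(round(combine_experts(scores), 3))                    # → 0.7
print(round(combine_experts(
    scores, {"username": 2.0, "description": 1.0, "temporal": 1.0}), 3))  # → 0.75
```

One practical appeal of this design, noted in {Ng23}, is robustness to missing modalities: an account with no description simply contributes no score from that expert.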

Bot detection - adversarial attacks

  • {Cresci19} Stefano Cresci, Marinella Petrocchi, Angelo Spognardi, Stefano Tognazzi. Better Safe Than Sorry: An Adversarial Approach to Improve Social Bot Detection. Proc. of ACM WebSci, 2019. manipulate generations of Twitter social bots, to obtain - and study - their possible future evolutions, with the aim of eventually deriving more effective detection techniques
  • {Cresci22} Stefano Cresci, Marinella Petrocchi, Angelo Spognardi, Stefano Tognazzi. Adversarial Machine Learning for Protecting Against Online Manipulation. IEEE Internet Computing, 2022, 26:47-52. Similarly to computer vision and automatic speech recognition, adversarial attacks can alter the features of social bots, without impacting their activity, thus allowing them to evade detection
  • {Wang23} Lanjun Wang, Xinran Qiao, Yanwei Xie, Weizhi Nie, Yongdong Zhang, and Anan Liu. My Brother Helps Me: Node Injection Based Adversarial Attack on Social Bot Detection. Proc. of ACM MM, 2023. first study exploring the resilience of bot detection through graph node injection
  • {Najari24} Shaghayegh Najari, Davood Rafiei, Mostafa Salehi, Reza Farahbakhsh. Adversarial botometer: adversarial analysis for social bot detection. Social Network Analysis and Mining, 2024, 14:220. we examine the behavior of a text-based bot detector across three key scenarios: adversarial interactions between bots and detectors, attack examples generated by bots to poison datasets, and cross-domain analysis of different types of bots
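
{Cresci22} observes that adversarial attacks can alter a bot's features without impacting its activity, letting it evade detection. A deliberately simple illustration against a hypothetical linear detector (the weights, feature names, and values below are invented; real detectors are nonlinear and far harder to game):

```python
def evade_linear_detector(features, w, b, margin=1e-3):
    """Minimal single-feature perturbation pushing a linear detector's
    score below its decision threshold (score > 0 means 'bot').

    Illustrates feature-space evasion as discussed in {Cresci22}; the
    detector here is a toy linear model, not any deployed system.
    """
    score = sum(w[k] * v for k, v in features.items()) + b
    if score <= 0:
        return features  # already classified as human
    # Perturb the feature with the largest-magnitude weight: it needs
    # the smallest change to move the score by the required amount.
    k = max(w, key=lambda name: abs(w[name]))
    evaded = dict(features)
    evaded[k] -= (score + margin) / w[k]
    return evaded

w = {"posts_per_day": 0.05, "follower_friend_ratio": -1.2}
b = -0.5
bot = {"posts_per_day": 90.0, "follower_friend_ratio": 0.1}
print(evade_linear_detector(bot, w, b))
```

The same one-step logic is why {Cresci22} argues detectors must be hardened adversarially: if the decision boundary is known (or can be approximated), crossing it costs the attacker very little.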

Bot detection (unsupervised / semi-supervised)

  • {Mendoza20} Marcelo Mendoza, Maurizio Tesconi, and Stefano Cresci. 2020. Bots in Social and Interaction Networks: Detection and Impact Estimation. ACM Trans. Inf. Syst., 2020, 39(1):Article 5. we propose a simple yet effective semi-supervised method that allows distinguishing between bots and legitimate users with high accuracy. The method learns a joint representation of social connections and interactions between users by leveraging graph-based representation learning
  • {Giorgi21} Salvatore Giorgi, Lyle H. Ungar, H. Andrew Schwartz. Characterizing Social Spambots by their Human Traits. Proc. of ACL/IJCNLP (Findings), 2021. even an unsupervised clustering using the same 17 attributes could yield nearly as accurate social bot identification (F1 = 0.925).
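
{Giorgi21} reports that plain unsupervised clustering over a handful of account attributes already separates spambots from humans surprisingly well. A toy stdlib k-means over two fabricated attributes (posting rate and follower/friend ratio) illustrates the idea; the feature values and cluster count are invented for the example:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on feature vectors (lists of floats)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        # Move each center to the mean of its group.
        centers = [
            [sum(col) / len(g) for col in zip(*g)] if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers

# Toy "accounts": [posts_per_day, follower_friend_ratio] (fabricated).
humans = [[3, 1.1], [5, 0.9], [4, 1.3], [2, 1.0]]
bots = [[80, 0.05], [95, 0.02], [70, 0.04], [88, 0.03]]
centers = kmeans(humans + bots, k=2)
print(sorted(round(c[0]) for c in centers))  # two well-separated activity levels
```

No labels are used at any point: the separation falls out of the feature geometry, which is the core claim of the unsupervised result in {Giorgi21}.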

Retweet network analysis / retweet bots

  • {Shao18} Chengcheng Shao, Giovanni Luca Ciampaglia, Onur Varol, Kai-Cheng Yang, Alessandro Flammini & Filippo Menczer. The spread of low-credibility content by social bots. Nature Communications, 2018, 9:Article number: 4787. We find evidence that social bots played a disproportionate role in spreading articles from low-credibility sources. Bots amplify such content in the early spreading moments, before an article goes viral. They also target users with many followers through replies and mentions.
  • {Schuchard19a} Ross Schuchard, Andrew T. Crooks, Anthony Stefanidis and Arie Croitoru. Bot stamina: examining the influence and staying power of bots in online social networks. Applied Network Science, 2019, 4:Article 55. employ a methodological framework that aggregates and fuses data from multiple global Twitter conversations with an available bot detection platform and ultimately classifies the relative importance and persistence of social bots in online social networks (OSNs)
  • {Schuchard19b} Ross Schuchard, Andrew Crooks, Anthony Stefanidis, Arie Croitoru. Bots fired: examining social bot evidence in online mass shooting conversations. Palgrave Communications, 2019, 5:Article number: 158. while social bots accounted for fewer than 1% of total corpus user contributors, social network analysis centrality measures identified many bots with significant prominence in the conversation networks, densely occupying many of the highest eigenvector and out-degree centrality measure rankings, to include 82% of the top-100 eigenvector values of the Las Vegas retweet network
  • {Yuan19} Xiaoyi Yuan, Ross J. Schuchard, and Andrew T. Crooks. Examining Emergent Communities and Social Bots Within the Polarized Online Vaccination Debate in Twitter. Social Media + Society, 2019, 5(3):1-12. bots prove themselves to be hyper-social in this conversation with 9,127 retweets being initiated by bots (i.e., the sum of "bot-retweets-bot" and "bot-retweets-human")
  • {Abu-El-Rub19} Noor Abu-El-Rub, Abdullah Mueen. BotCamp: Bot-driven Interactions in Social Campaigns. Proc. of WWW, 2019.
  • {Pacheco21} Diogo Pacheco, Pik-Mai Hui, Christopher Torres-Lugo, Bao Tran Truong, Alessandro Flammini, Filippo Menczer. Uncovering Coordinated Networks on Social Media: Methods and Case Studies. Proc. of AAAI ICWSM, 2021. We present a general approach to detect coordination, which can in principle be applied to any social media platform where data is available. Since the method is completely unsupervised, no labeled training data is required
  • {Pastor-Galindo22} Javier Pastor-Galindo, Félix Gómez Mármol, Gregorio Martínez Pérez. Profiling users and bots in Twitter through social media analysis. Information Sciences, 2022, 613:161–183. profile groups of users and bots of Twitter from seven perspectives; semi-automated accounts would be more threatening than fully automated ones
  • {Chang22} Ho-Chun Herbert Chang, Emilio Ferrara. Comparative analysis of social bots and humans during the COVID-19 pandemic. Journal of Computational Social Science, 2022, 5:1409–1425. compares the behavior of humans and bots politically and semantically during the pandemic
  • {Bruno22} Matteo Bruno, Renaud Lambiotte, Fabio Saracco. Brexit and bots: characterizing the behaviour of automated accounts on Twitter during the UK election. EPJ Data Science, 2022, 11:Article number: 17.
  • {Elmas22} Tuğrulcan Elmas, Rebekah Overdorf, Karl Aberer. Characterizing Retweet Bots: The Case of Black Market Accounts. Proc. of AAAI ICWSM, 2022. We characterize retweet bots that have been uncovered by purchasing retweets from the black market. We detect whether they are fake or genuine accounts involved in inauthentic activities and what they do in order to appear legitimate. We also analyze their differences from human-controlled accounts.
  • {Ng22} Lynnette Hui Xian Ng, Kathleen M. Carley. Online Coordination: Methods and Comparative Case Studies of Coordinated Groups across Four Events in the United States. Proc. of ACM WebSci, 2022. uncover coordinating communities through semantic, referral and social coordination | A link between two users represents the presence of a synchronized action within the time window, and the weight of the link represents the strength of coordination, calculated as the number of times the action is performed within a time window
  • {Xu22} Wentao Xu, Kazutoshi Sasahara. Characterizing the roles of bots on Twitter during the COVID-19 infodemic. Journal of Computational Social Science, 2022, 5:591-609. the segregated topology of their retweet networks
  • {Jiang23} Julie Jiang, Xiang Ren, Emilio Ferrara. Retweet-BERT: Political Leaning Detection Using Language Features and Information Diffusion on Social Networks. Proc. of AAAI ICWSM, 2023. We introduce Retweet-BERT, a simple and scalable model to estimate the political leanings of Twitter users. Retweet-BERT leverages the retweet network structure and the language used in users' profile descriptions.
  • {Luceri24} Luca Luceri, Valeria Pantè, Keith Burghardt, and Emilio Ferrara. Unmasking the Web of Deceit: Uncovering Coordinated Activity to Expose Information Operations on Twitter. Proc. of WWW, 2024. introduce novel models for identifying coordinated actors driving IOs
  • {Pacheco24} Diogo Pacheco. Bots, Elections, and Controversies: Twitter Insights from Brazil's Polarised Elections. Proc. of WWW, 2024. The data revealed a strong correlation between bot engagement and the number of replies during a single day (r = 0.66, p < 0.01).
  • {Unlu24} Ali Unlu, Sophie Truong, Nitin Sawhney, Jonas Sivelä, Tuukka Tammi. Long-term assessment of social amplification of risk during COVID-19: challenges to public health agencies amid misinformation and vaccine stance. Journal of Computational Social Science, 2024, 7:809–836. underscores the need to understand social media dynamics to counter misinformation and foster accurate public communication on COVID-19 and vaccination
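
Several of the studies above rank accounts by network centrality; {Schuchard19b}, for instance, finds bots occupying most of the top eigenvector-centrality ranks of a retweet network. The sketch below uses PageRank, a damped relative of eigenvector centrality that stays well defined on acyclic retweet graphs; the edge list is fabricated:

```python
def pagerank(edges, d=0.85, iters=50):
    """Toy PageRank on a directed retweet network, where an edge
    u -> v means u retweeted v (so v gains rank from being amplified).
    A damped stand-in for the eigenvector centrality in {Schuchard19b}."""
    nodes = sorted({n for e in edges for n in e})
    out = {n: [] for n in nodes}
    for u, v in edges:
        out[u].append(v)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - d) / len(nodes) for n in nodes}
        for u in nodes:
            if out[u]:
                for v in out[u]:
                    new[v] += d * rank[u] / len(out[u])
            else:
                # Dangling node (never retweets): spread its mass evenly.
                for v in nodes:
                    new[v] += d * rank[u] / len(nodes)
        rank = new
    return rank

# Three bot accounts amplify "hub", which in turn amplifies "news".
edges = [("b1", "hub"), ("b2", "hub"), ("b3", "hub"),
         ("hub", "news"), ("b1", "news")]
ranks = pagerank(edges)
print({n: round(s, 3) for n, s in sorted(ranks.items())})
```

Even in this tiny graph the amplified accounts outrank the amplifiers, which is the structural effect the bot-squad studies ({Caldarelli20}, {Schuchard19b}) measure at scale.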

Dataset

  • {Feng21a} Shangbin Feng, Herun Wan, Ningnan Wang, Jundong Li, Minnan Luo. TwiBot-20: A Comprehensive Twitter Bot Detection Benchmark. Proc. of ACM CIKM, 2021. TwiBot-20, a massive Twitter bot detection benchmark, which contains 229,573 users, 33,488,192 tweets, 8,723,736 user property items and 455,958 follow relationships
  • {Feng22b} Shangbin Feng, Zhaoxuan Tan, Herun Wan, et al. TwiBot-22: Towards Graph-Based Twitter Bot Detection. Proc. of NeurIPS, 2022. TwiBot-22 dataset (https://twibot22.github.io/)
  • {Yang24b} Kai-Cheng Yang and Filippo Menczer. Anatomy of an AI-powered malicious social botnet. Journal of Quantitative Description: Digital Media, 2024, 4:1-36. This paper presents a case study about a Twitter botnet that appears to employ ChatGPT to generate human-like content. Through heuristics, we identify 1,140 accounts and validate them via manual annotation. These accounts form a dense cluster of fake personas that exhibit similar behaviors, including posting machine-generated content and stolen images, and engage with each other through replies and retweets. [Fox8-23 Dataset]
  • IU Bot Repository