
Venue: Seminar Room, #01-03, innovation 4.0, 3 Research Link, Singapore 117602
Date: 28 April 2025, 9:00 AM to 4:35 PM
The Information Gyroscope Symposium on Mis-, Dis-, and Mal-Information (iGYRO SMDM 2025), organised by the
Centre for Trusted Internet and Community (CTIC) at the National University of Singapore (NUS),
is a pivotal event designed to tackle the pervasive issues of misinformation and disinformation in our digital ecosystems.
This symposium brings together thought leaders, researchers, and practitioners to enhance understanding, develop solutions, and
build partnerships to address complex digital information challenges.
Join us on Monday, April 28, at the i4.0 Seminar Room
for a dynamic, day-long programme featuring a mix of keynote speeches, oral presentations, and panel discussions,
punctuated with Q&A sessions to encourage interaction and dialogue. Seats are limited – register now to secure your spot!
Programme
Time | Schedule |
---|---|
9:00 AM – 9:30 AM | Registration |
9:30 AM – 9:40 AM | Welcome Address (Emcee: Asst. Prof. Jun Yu; Speaker: Prof. Tsuhan Chen) |
9:40 AM – 10:20 AM | Keynote 1 (Speaker: Prof. Preslav Nakov) |
10:20 AM – 10:40 AM | Tea Break |
10:40 AM – 12:00 PM | Panel: The Art & Science of Mitigating Mis-, Dis-, and Mal-Information in Today’s Digital Age (Panellists: Prof. Preslav Nakov, Prof. Jeannie Paterson, Prof. Noah Lim, Prof. Tsuhan Chen, Prof. Simon Chesterman; Moderator: Asst. Prof. Kokil Jaidka) |
12:00 PM – 1:30 PM | Lunch, with Demos & Posters |
1:30 PM – 2:10 PM | Keynote 2: Legal and Policy Choices in Making Platforms Liable for Mis-, Dis-, and Mal-Information (Speaker: Prof. Jeannie Paterson) |
2:10 PM – 3:10 PM | Oral Presentations 1: Real or Fake? Exploring Human Perception of AI-Generated Content (Dr. Shaojing Fan); The Elite Effect on Misinformation Susceptibility (Asst. Prof. Kokil Jaidka); SNIFFER: A Multimodal LLM for Explainable Out-of-Context Misinformation Detection System (Dr. Peng Qi) |
3:10 PM – 3:30 PM | Tea Break |
3:30 PM – 4:30 PM | Oral Presentations 2: Political Polarization Versus Veracity and Congruence as a Determinant of Believing and Sharing the News (Dr. Wencong Li); The Misapprehension of Online Harm Remedies in Misinformation and Disinformation (Dr. Eka Nugraha Putra); The Paradox of Women’s Digital Wellbeing on Reddit (Dr. Renae Loh) |
4:30 PM – 4:35 PM | Closing |
Keynote Speakers
The following speakers are invited to give keynotes at iGYRO SMDM 2025. Please click on their profile image to view their talk details.
Bio
Dr. Preslav Nakov is a Professor at Mohamed bin Zayed University of Artificial Intelligence. Previously, he was Principal Scientist at the Qatar Computing Research Institute, HBKU, where he led the Tanbih mega-project, developed in collaboration with MIT, which aims to limit the impact of "fake news", propaganda and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking. He received his PhD degree in Computer Science from the University of California at Berkeley, supported by a Fulbright grant. He is Chair-Elect of the European Chapter of the Association for Computational Linguistics (EACL), Secretary of ACL SIGSLAV, and Secretary of the Truth and Trust Online board of trustees.
Bio
Prof. Jeannie Marie Paterson is a Professor of Consumer Protection and Technology Law and Director of the Centre for Artificial Intelligence and Digital Ethics at the University of Melbourne. She holds a current practising certificate and is a Fellow of the Australian Academy of Law. Jeannie’s research and teaching expertise lies in the fields of consumer and data protection law, as well as regulatory design for responsible and safe AI. Much of her current work focuses on the regulatory and ethical challenges of AI, including concerns around misrepresentation, misinformation, and deepfake fraud. She is also interested in the realisation of normative values of fairness, transparency, and consent in law and regulation.
Title
Legal and Policy Choices in Making Platforms Liable for Mis-, Dis-, and Mal-Information
Abstract
Law is only one part of the response to the risks of harm to individuals and society from online mis-, dis-, and mal-information. Indeed, law may be a last line of defence, coming after other initiatives to improve information resilience. Nonetheless, law has an important role in setting the boundaries for the information environment and supporting the dissemination of trustworthy information online. Increasingly, laws concerned with the information environment are aimed not only at transgressing individuals but at digital platforms, as the intermediaries of harmful information. Digital platforms are accessible as a regulatory target and moreover play a role in disseminating and amplifying harmful information. There are complex policy choices in deciding what to regulate, and questions of ‘regulatory design’ in deciding how to tackle the information responsibilities of digital platforms, including through ex ante obligations and ex post liabilities. This talk will explore these issues with reference to regulatory initiatives in Singapore, Australia and the UK. It will compare approaches to online safety, marketing misinformation and misleading electoral communications, as well as considering how online ‘harm’ itself is understood and the role of transparency in responding to mis-, dis-, and mal-information concerns.
Panellists
The following panellists will participate in a panel discussion on The Art & Science of Mitigating Mis-, Dis-, and Mal-Information in Today’s Digital Age.
Bio
Prof. Noah Lim is the Director of the National University of Singapore (NUS) Global Asia Institute, Head of the Department of Marketing, and Provost's Chair Professor at NUS Business School. Previously, he held the John P. Morgridge Distinguished Chair in Business and was a Professor of Marketing at the University of Wisconsin-Madison. Prof. Lim is a behavioural economist whose research applies theories and methods from economics, statistics, and psychology to understand how customers, managers, and salespeople make decisions. In the field of marketing, he is renowned for his scholarly work on designing incentive contracts for salespeople. He is also an expert on pricing strategy and business models, having taught and consulted for numerous companies in the US and Asia on these subjects. At NUS, he leads research on behavioural change and oversees the $17M NUS programme to Improve Health in Asia (NIHA). His research has been published in leading journals such as the Journal of Marketing Research (JMR), Management Science, and Marketing Science.
Bio
Prof. Tsuhan Chen is a Distinguished Professor at the National University of Singapore (NUS) and an expert in pattern recognition, computer vision, and machine learning. He previously served as Deputy President (Research and Technology) at NUS and Chief Scientist of AI Singapore. Before joining NUS, he held leadership positions at Cornell University, Carnegie Mellon University, and Nanyang Technological University, where he championed faculty recruitment and interdisciplinary research initiatives. Prof. Chen received the Charles Wilts Prize for outstanding independent research at the California Institute of Technology in 1993. He was a recipient of the US National Science Foundation CAREER Award from 2000 to 2003. He received the Benjamin Richard Teare Teaching Award in 2006 and the Eta Kappa Nu Award for Outstanding Faculty Teaching in 2007, both at Carnegie Mellon University, and the Michael Tien Teaching Award in 2014 at Cornell University. Prof. Chen has published more than 300 technical papers and holds close to 30 US patents.
Bio
Prof. Simon Chesterman is David Marshall Professor and Vice Provost (Educational Innovation) at the National University of Singapore, where he is also the founding Dean of NUS College. He serves as Senior Director of AI Governance at AI Singapore and Editor of the Asian Journal of International Law. Professor Chesterman is the author or editor of more than twenty books, including We, the Robots? Regulating Artificial Intelligence and the Limits of the Law (CUP, 2021); One Nation Under Surveillance (OUP, 2011); You, the People (OUP, 2004); and Just War or Just Peace? (OUP, 2001). He is a recognized authority on international law, whose work has opened up new areas of research on conceptions of public authority — including the rules and institutions of global governance, state-building and post-conflict reconstruction, the changing role of intelligence agencies, and the emerging role of artificial intelligence and big data. He also writes on legal education and higher education more generally, and is the author of five novels including the Raising Arcadia trilogy and Artifice.
Bio
Prof. Preslav Nakov is a Professor at Mohamed bin Zayed University of Artificial Intelligence. Previously, he was Principal Scientist at the Qatar Computing Research Institute, HBKU, where he led the Tanbih mega-project, developed in collaboration with MIT, which aims to limit the impact of "fake news", propaganda and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking. He received his PhD degree in Computer Science from the University of California at Berkeley, supported by a Fulbright grant. He is Chair-Elect of the European Chapter of the Association for Computational Linguistics (EACL), Secretary of ACL SIGSLAV, and Secretary of the Truth and Trust Online board of trustees.
Bio
Prof. Jeannie Marie Paterson is a Professor of Consumer Protection and Technology Law and Director of the Centre for Artificial Intelligence and Digital Ethics at the University of Melbourne. She holds a current practising certificate and is a Fellow of the Australian Academy of Law. Jeannie’s research and teaching expertise lies in the fields of consumer and data protection law, as well as regulatory design for responsible and safe AI. Much of her current work focuses on the regulatory and ethical challenges of AI, including concerns around misrepresentation, misinformation, and deepfake fraud. She is also interested in the realisation of normative values of fairness, transparency, and consent in law and regulation.
Bio
Asst. Prof. Kokil Jaidka is an Assistant Professor in Computational Communication. She has a Bachelor's degree in Engineering from PEC University of Technology, India, and a PhD in Information Studies from Nanyang Technological University, Singapore. Prior to joining the National University of Singapore, Kokil was a Senior Data Scientist at Adobe Research (2013-2016), a postdoctoral researcher at the University of Pennsylvania (2016-2018), and a presidential postdoctoral fellow at Nanyang Technological University (2018-2019). She has several patents to her name, as well as first-author publications in the Proceedings of the National Academy of Sciences, Journal of Computer-Mediated Communication, and Journal of Communication, among other venues.
Oral Presenters
The following speakers will be giving oral presentations at iGYRO SMDM 2025. Please click on their profile image to view more details.
Bio
Dr. Wencong Li is a Research Fellow at the Global Asia Institute, National University of Singapore (NUS). Before joining GAI, she obtained her Ph.D. degree in Economics at NUS in 2024. Her research explores the intersection of behavioral and applied microeconomics, with a particular focus on how individuals update their beliefs in boundedly rational ways and how they process information received from others. She employs laboratory experiments to investigate these questions.
Title
Political Polarization Versus Veracity and Congruence as a Determinant of Believing and Sharing the News
Abstract
The research investigates the relationship between politically oriented news and the inclination of readers to share it with others. It explores how the veracity of a news headline and its alignment with a reader's political beliefs influence the decision to share it, focusing mainly on the differences between Democrat and Republican voters. We conducted a preregistered study with a between-subject design (Sender sample N = 2,200, Receiver sample N = 740) using a representative U.S. sample. The study uses a controlled experiment where participants are presented with various news headlines that are either true or fake and congruent or incongruent with their political views. The participants are then asked if they would share the headline with another participant and how much money they would be willing to accept for doing so. The study found that the veracity of a headline is a more significant factor in the decision to share than its political congruence. Both Democrats and Republicans were more willing to share true headlines that were incongruent with their political views than fake headlines that were congruent. However, the study also found that Republicans were more willing to share fake, congruent news than Democrats.
Bio
Dr. FAN Shaojing is currently a senior lecturer in the Department of Electrical and Computer Engineering and a research member of the iGYRO Project at the Centre for Trusted Internet and Community, National University of Singapore (NUS). Dr. Fan’s research focuses on the intersection of social psychology and artificial intelligence (AI), including cognitive vision, computational social science, and human-centered multimedia data analysis. She believes that understanding human cognition can enhance computational models and is passionate about bridging psychological and neurological insights with AI advancements.
Title
Real or Fake? Exploring Human Perception of AI-Generated Content
Abstract
As AI-generated content (AIGC) becomes more prevalent, concerns about its role in spreading misinformation are increasing. Our research explores human perception of AIGC and its misinformation risks, aiming to develop strategies for effective mitigation. We created the MhAIM Dataset, a collection of 154,552 media posts, including 111,153 AI-generated entries. A human study on MhAIM reveals varying levels of user sensitivity: participants were most attuned to AI-generated social media posts with both text and visuals, but less so to text-only content. While general distrust toward AIGC persists, well-crafted AI content could still elicit high receptivity and significant impact. Leveraging these insights, we developed T-Lens, a system that computationally models human responses (i.e., belief and sharing tendency) to online content. T-Lens also provides clear explanations for its predictions, enhancing credibility and reducing long-term misinformation effects. Experiments with real-world data show promising results. Our research highlights the interplay between humans, AI, and online media, supporting efforts for a more trustworthy digital information ecosystem.
Bio
Asst. Prof. Kokil Jaidka is an Assistant Professor in Computational Communication. She has a Bachelor's degree in Engineering from PEC University of Technology, India, and a PhD in Information Studies from Nanyang Technological University, Singapore. She has several patents to her name, as well as first-author publications in the Proceedings of the National Academy of Sciences, Journal of Computer-Mediated Communication, and Journal of Communication, among other venues.
Title
The Elite Effect on Misinformation Susceptibility
Abstract
While social media platforms have been widely blamed for increasing political polarization and spreading misinformation, the causal mechanisms behind these phenomena remain debated. Previous research has focused primarily on selective exposure and echo chambers, but evidence for their effects has been mixed. This study examines whether reducing exposure to political elites and partisan content on Twitter affects users' susceptibility to misinformation and levels of affective polarization. To investigate the causal impact of political content exposure on social media, we developed a mobile application that allowed for experimental manipulation of users' Twitter timelines. Rather than focusing on cross-cutting exposure or echo chambers, we examined the broader effects of reducing overall exposure to political content. This approach helps isolate whether routine exposure to political discourse on social media drives polarization and misinformation susceptibility. Analysis of our field experiment revealed that users who were prevented from viewing their self-selected political content and elite accounts for one month showed a significant decrease in both misinformation susceptibility and affective polarization, as measured by standard social distance and feeling thermometer metrics. These users were also less likely to engage actively or passively with content on X, although those with the most content blocked were likely to open the app more frequently every day compared to those in other conditions. These findings indicate that the basic structure of political discourse on social media, centered around constant engagement with political elites, may itself be contributing to broader patterns of misinformation susceptibility and partisan animosity.
Therefore, while social media platforms' efforts to combat misinformation and reduce political hostility are important, the routine amplification of elite political voices may be a more fundamental driver of these phenomena.
Bio
Dr. Peng Qi is a Research Fellow at the Centre for Trusted Internet and Community, National University of Singapore (NUS). Before joining NUS, she obtained her Ph.D. degree from the Institute of Computing Technology, Chinese Academy of Sciences in 2023. Her research interests mainly lie in misinformation detection and intervention, and multimedia content analysis.
Title
SNIFFER: A Multimodal LLM for Explainable Out-of-Context Misinformation Detection System
Abstract
Misinformation is a prevalent societal issue due to its potentially high risks. Out-of-context (OOC) misinformation, where authentic images are repurposed with false text, is one of the easiest and most effective ways to mislead audiences. Current methods focus on assessing image-text consistency but lack convincing explanations for their judgments, which is essential for debunking misinformation. While Multimodal Large Language Models (MLLMs) have rich knowledge and innate capability for visual reasoning and explanation generation, they still lack sophistication in understanding and discovering subtle cross-modal differences. SNIFFER is a novel multimodal large language model specifically engineered for OOC misinformation detection and explanation. SNIFFER employs two-stage instruction tuning on InstructBLIP. SNIFFER not only detects inconsistencies between text and image but also utilizes external knowledge for contextual verification. Our experiments show that SNIFFER surpasses the original MLLM by over 40% and outperforms state-of-the-art methods in detection accuracy. SNIFFER also provides accurate and persuasive explanations, as validated by quantitative and human evaluations.
Bio
Dr. Eka Nugraha Putra is a Research Fellow at NUS's Centre for Trusted Internet and Community and author of the upcoming book "Free Speech in Indonesia: Legal Issues and Public Interest Litigation" (Routledge). He holds an SJD from Indiana University Bloomington in the United States. He received a Fulbright Scholarship in 2018 and an AIFIS-Luce Fellowship (2019-2020).
Title
The Misapprehension of Online Harm Remedies in Misinformation and Disinformation
Abstract
The rapid proliferation of misinformation and disinformation in the digital age poses significant challenges for legal systems worldwide, yet current legal frameworks often fail to address these online harms adequately. This paper presents a qualitative analysis of legislative responses to misinformation and disinformation, focusing on the severity and efficacy of imposed punishments. Through a comparative examination of various countries' legal approaches—ranging from censorship and hefty fines to content removal and imprisonment—the study highlights a spectrum of punitive measures to curb the spread of false information. Despite the stringent nature of these laws, our analysis reveals that their effectiveness in reducing the dissemination of misinformation and disinformation remains questionable. By evaluating the balance between the severity of penalties and their impact on the prevalence of online falsehoods, this paper aims to provide insights into the alignment (or lack thereof) between legislative remedies and the real-world harm caused by misinformation and disinformation. The findings underscore the need for more nuanced and effective legal strategies to address the complex and evolving nature of harmful online information.
Bio
Dr. Renae Loh is a Research Fellow at the NUS Centre for Trusted Internet and Community. Her research revolves around the development and outcomes of digital skills, competences and literacy, as well as their policy implications, with a focus on youths and adolescents.
Title
The Paradox of Women’s Digital Wellbeing on Reddit
Abstract
In this ever-evolving digital landscape, both risks and opportunities abound and take on different forms. This paradox is apparent in women’s digital wellbeing, where the same features can facilitate both the suppression and the uplifting of women. On one hand, online spaces and digital affordances facilitate online misogyny, patriarchy, the radicalisation of disenfranchised men, and technology-facilitated (sexual) violence against women. On the other hand, online spaces and digital affordances can empower women and girls, allowing them to reclaim space, build resilience, and support each other through sharing and seeking advice. In this project, we analyse the posts and comments from three (women-only) subreddits and apply the Digital Wellbeing Indicator Framework (DWIF). The five domains of digital life that underpin digital wellbeing, as scaffolded by the DWIF, are varyingly salient in online discussions in these subreddits, with the empowerment driven largely by the women themselves.
Demos and Posters
Click below to see the paper list for the poster session.
Demos:
[1] Barid Xi Ai, Min-Yen Kan. Multilingual QACheck: A Demonstration System for Multilingual Question-Guided Multi-Hop Fact-Checking.
[2] Juan Hu, Sanjay Saha, Shaojing Fan, Terence Sim. Combating Deepfake Forgeries.
[3] Jundong Xu, Hao Fei, Liangming Pan, Qian Liu, Mong-Li Lee, Wynne Hsu. Faithful Logical Reasoning via Symbolic Chain-of-Thought. 62nd ACL 2024.
[4] Anthony Tung, Yiqun Sun, Yixuan Tang, Qiang Huang, YuanYuan Shi. KALEIDO: Top-K retrievAL of nEws with Interpretable Embedding and Diversified Ordering.
Posters:
[1] Harry Cheng, Yangyang Guo, Tianyi Wang, Liqiang Nie, Mohan Kankanhalli. Diffusion Facial Forgery Detection. ACMMM 2024.
[2] Lei Tan, Shuwei Li, Mohan Kankanhalli, Robby T. Tan. Aggregating Diverse Cue Experts for AI-Generated Image Detection.
[3] Svetlana Churina. Are shadowbans effective in checking the spread of misinformation?
[4] Yiqian Huang, Shiqi Zhang, Laks V.S. Lakshmanan, Wenqing Lin, Xiaokui Xiao, Bo Tang. Efficient and Effective Algorithms for A Family of Influence Maximization Problems with A Matroid Constraint.
[5] Tianjie Ju, Bowen Wang, Hao Fei, Zhenyu Shao, Mong-Li Lee, Wynne Hsu, Zhuosheng Zhang, Gongshen Liu. Investigating the Adaptive Robustness with Knowledge Conflicts in LLM-based Multi-Agent Systems.
[6] Fakhar Abbas, Simon Chesterman. Building Trust in Generative AI Era: A Systematic Analysis of Global Regulatory Frameworks to Counter Disinformation and Strengthen Digital Resilience.
[7] Shaleen Khanal, Hangzhou Zhang, Araz Taeihagh. Why and How is the Power of Big Tech Increasing in the Policy Process? The Case of Generative AI. Policy and Society, March 2024.
Organising Committee
Role | Name |
---|---|
Programme | Dr. Hao Fei (Lead), NUS CTIC |
Dr. Diego Salazar, NUS Global Asia Institute | |
Harry Cheng, NUS School of Computing | |
Xinyuan Lu, NUS CTIC | |
Dr. Xingguang Chen, NUS CTIC | |
Dr. Fakhar Abbas, NUS CTIC | |
Local Arrangements | Wendy Poh (Lead), NUS CTIC |
Phan Ying Ling, NUS CTIC | |
Meng Luo, NUS CTIC | |
Website | Jundong Xu, NUS CTIC |
Advisors | Prof. Tsuhan Chen, NUS School of Computing |
Prof. Mong Li Lee, NUS CTIC |
Location
3 Research Link, Singapore 117602
Contact Us
Please feel free to reach out if you have any inquiries: Hao Fei, Wendy Poh, and Mong Li Lee.