Wikimedia Research/Showcase
The Monthly Wikimedia Research Showcase is a public showcase of recent research by the Wikimedia Foundation's Research Team and guest presenters from the academic community. The showcase is hosted virtually every third Wednesday of the month at 9:30 a.m. Pacific Time / 18:30 CET and is live-streamed on YouTube. The schedule may change; see the calendar below for a list of confirmed showcases.
How to attend
We live-stream our research showcase every month on YouTube. The link will be in each showcase's details below and is also announced in advance via wiki-research-l, analytics-l, and @WikiResearch on Twitter. You can join the conversation and participate in Q&A after each presentation using the YouTube chat. We expect all presenters and attendees to abide by our Friendly Space Policy.
Upcoming Events

March 2026
No showcase. Join us for Wiki Workshop on March 25-26, 2026. Register (free) at:
Note from the Research Team
We are pausing the monthly Research Showcases starting in April 2026.
This hiatus will allow us to focus on other community initiatives (such as the Research Track at Wikimania) while giving our team space to reflect on years of organizing showcases. We want to use this time to figure out how to improve the showcase format for the future.
We'd value your input on this! If you have suggestions, please reach out to the Research Team at research-wmf@lists.wikimedia.org.
We expect to share more updates in August 2026.
Archive
For information about past research showcases (2013-present), you can search below or see the listing of all months here.
2026

February 2026
Time: Wednesday, February 25, 17:30 UTC. Find your local time here. (Please note that we moved to the fourth Wednesday of the month, instead of the third.)
Theme: AI and Communities
February 25, 2026
Video:
LLMs in Wikipedia: Investigating How LLMs Impact Participation in Knowledge Communities
By Moyan Zhou (University of Minnesota)
Large language models (LLMs) are reshaping knowledge production as community members increasingly incorporate them into their contribution workflows. However, participating in knowledge communities involves more than just contributing content - it is also a deeply social process shaped by members' level of expertise. While communities must carefully consider appropriate and responsible LLM integration, the absence of concrete norms has left individual editors to experiment and navigate LLM use on their own. Understanding how LLMs influence community participation across expertise levels is therefore critical in shaping future norms and supporting effective adoption. To address this gap, we investigated Wikipedia, one of the largest knowledge production communities, to understand participation in three dimensions: 1) how LLMs influence the ways editors gather knowledge, 2) how editors leverage strategies to align LLM outputs with community norms, and 3) how other editors in the community respond to LLM-assisted contributions. Through interviews with 16 Wikipedia editors with different levels of expertise who had used LLMs for their edits, we revealed a participation gap mediated by expertise in adopting LLMs in knowledge contributions across knowledge gathering, alignment with community norms, and peer responses. Based on these findings, we challenge existing models of novice editors' involvement and propose design implications for LLMs that support community engagement, highlighting opportunities for LLMs to sustain mentorship, knowledge transmission, and legitimacy building by scaffolding and feedback, process documentation, and LLM disclosure by good-faith editors.
Paper:
AI Didn't Start the Fire: Examining the Stack Exchange Moderator and Contributor Strike
By Yiwei Wu (University of Texas at Austin)
Online communities and their host platforms are mutually dependent yet conflict-prone. When platform policies clash with community values, communities have resisted through strikes, blackouts, and even migration to other platforms. Through such collective actions, communities have sometimes won concessions, but these have frequently proved to be temporary. Although previous research has investigated strike events and migration chains, the processes by which community-platform conflict unfolds remain obscure. How do community-platform relationships deteriorate? How do communities organize collective action? How do the participants proceed in the aftermath? We investigate a conflict between the Stack Exchange platform and community that occurred in 2023 around an emergency arising from the release of large language models (LLMs). Based on a qualitative thematic analysis of 2,070 messages from Meta Stack Exchange and 14 interviews with community members, we reveal how the 2023 conflict was preceded by a long-term deterioration in the community-platform relationship, driven in particular by the platform's disregard for the community's highly valued participatory role in governance. Moreover, the platform's policy response to LLMs aggravated the community's sense of crisis, triggering strike mobilization. We analyze how the mobilization was coordinated through a tiered leadership and communication structure, as well as how community members pivoted in the aftermath. Building on recent theoretical scholarship in social computing, we use Hirschman's exit, voice, and loyalty framework to theorize the challenges of community-platform relations evinced in our data. Finally, we recommend ways that platforms and communities can institute participatory governance to be durable and effective.
Blog post
January 2026

Time: Thursday, January 22, 17:30 UTC. Find your local time here.
Theme: Celebrating 25 Years of Wikipedia and the Research Behind It
Zoom link:
(Registration is required. This showcase will not be recorded.)
2025

December 2025

Time: Wednesday, December 10, 17:30 UTC. Find your local time here.
December 10, 2025
Video:
Experimentation on Wikipedia: Special Panel
By Morten Warncke-Wang (Wikimedia Foundation), Neil Thompson (Massachusetts Institute of Technology), and Yan Chen (University of Michigan)
November 2025

No showcase.

October 2025

Time: Wednesday, October 15, 16:30 UTC. Find your local time here.
Theme: Celebrating 13 Years: Wikidata’s Role in Learning and Culture
October 15, 2025
Video:
Lessons learned from a decade of implementing Wikidata into Academia
By Shani Evenstein-Sigalov (University of London; Wikimedia Foundation Board of Trustees)
The presentation will cover lessons learned from over a decade of working with Wikidata in Academia from three complementary perspectives: 1) Teaching: implementing Wikidata into the academic curriculum of a for-credit elective course; 2) Research: a PhD research project exploring Wikidata as a learning platform; 3) Outreach: using Wikidata / Wikibase as a platform for preserving academic heritage. The presentation will conclude with a note on the future and current work exploring how GenAI could be harnessed to help cultural and academic heritage institutions preserve knowledge through LOD platforms like Wikidata.
Using Wikidata in GLAM Institutions: a Labs Approach
By Gustavo Candela (University of Alicante)
GLAM (Galleries, Libraries, Archives and Museums) organisations host extensive digital collections that have been made available to the public for over three decades. Advances in technology have facilitated the reuse of digital collections as rich data sources. In this context, Wikidata has emerged as a leading, innovative, and collaborative platform that enriches the digital collections provided by GLAM institutions. However, a comprehensive analysis of the current status, potential and challenges of its use in the GLAM sector was still lacking. This presentation will introduce an overview of Wikidata use in GLAM institutions within the context of the work of the International GLAM Labs Community (glamlabs.io). This work was presented in 2024 at the TPDL conference. Looking into the future, in this talk we will describe additional research topics and discuss how Wikidata will continue to play a central role in GLAM.
September 2025

Time: Wednesday, September 24, 16:30 UTC. Find your local time here. (Please note that we made a one-time change to the fourth Wednesday of the month, instead of the third.)
Theme: Readers and Readership Research
September 24, 2025
Video:
Readers and Readership research: Key learnings from a decade of research and what’s ahead
By WMF Research Team
For over a decade, the Wikimedia Foundation's Research team and its collaborators have studied Wikipedia's readership and readers. In this showcase, we will synthesize the key insights from this body of work, illustrate how these learnings inform our current research, and present recommendations for future research into understanding Wikipedia's readers and readership globally.
Slides on figshare
August 2025

No showcase. See you at Wikimania.
July 2025

Time: Wednesday, July 16, 16:30 UTC. Find your local time here.
Theme: Examining the Impact of LLMs on Knowledge Production Communities
July 16, 2025
Video:
The Rise of AI-Generated Content in Wikipedia
By Creston Brooks and Denis Peskoff (Northwestern University)
In the age of online AI inundation, how frequently are people using LLMs when creating new Wikipedia articles and for what purposes? In the summer of 2024, we took initial steps towards addressing these questions by using two AI-generated text detectors, GPT-Zero and Binoculars, to compare detection scores from Wikipedia articles written in August 2024 to those created before the release of GPT-3.5 in March 2022. After calibrating each tool, we conducted a small case study, inspecting each of the 45 English articles flagged as AI-generated by both tools to better understand the motivations for using LLMs to create Wikipedia pages.
Paper: The Rise of AI-Generated Content in Wikipedia
The consequences of generative AI for online knowledge communities
By Gordon Burtch (Questrom School of Business, Boston University)
Generative artificial intelligence technologies, especially large language models (LLMs) like ChatGPT, are revolutionizing information acquisition and content production across a variety of domains. These technologies have a significant potential to impact participation and content production in online knowledge communities. We provide initial evidence of this, analyzing data from Stack Overflow and Reddit developer communities between October 2021 and March 2023, documenting ChatGPT’s influence on user activity in the former. We observe significant declines in both website visits and question volumes at Stack Overflow, particularly around topics where ChatGPT excels. By contrast, activity in Reddit communities shows no evidence of decline, suggesting that social fabric plays a crucial role as a buffer against the community-degrading effects of LLMs. Finally, the decline in participation on Stack Overflow is found to be concentrated among newer users, indicating that more junior, less socially embedded users are particularly likely to exit.
Papers:
The consequences of generative AI for online knowledge communities
Generative AI Degrades Online Communities
June 2025

Time: Wednesday, June 18, 16:30 UTC. Find your local time here.
Theme: Ensuring Content Integrity on Wikipedia
June 18, 2025
Video:
The Differential Effects of Page Protection on Wikipedia Article Quality
By Manoel Horta Ribeiro (Princeton University)
Wikipedia strives to be an open platform where anyone can contribute, but that openness can sometimes lead to conflicts or coordinated attempts to undermine article quality. To address this, administrators use “page protection”, a tool that restricts who can edit certain pages. But does this help the encyclopedia, or does it do more harm than good? In this talk, I’ll present findings from a large-scale, quasi-experimental study using over a decade of English Wikipedia data. We focus on situations where editors requested page protection and compare the outcomes for articles that were protected versus similar ones that weren’t. Our results show that page protection has mixed effects: it tends to benefit high-quality articles by preventing decline, but it can hinder improvement in lower-quality ones. These insights reveal how protection shapes Wikipedia content and help inform when it’s most appropriate to restrict editing, and when it might be better to leave the page open.
Paper:
Seeing Like an AI: How LLMs Apply (and Misapply) Wikipedia Neutrality Norms
By Joshua Ashkinaze (University of Michigan)
Large language models (LLMs) are trained on broad corpora and then used in communities with specialized norms. Is providing LLMs with community rules enough for models to follow these norms? We evaluate LLMs' capacity to detect (Task 1) and correct (Task 2) biased Wikipedia edits according to Wikipedia's Neutral Point of View (NPOV) policy. LLMs struggled with bias detection, achieving only 64% accuracy on a balanced dataset. Models exhibited contrasting biases (some under- and others over-predicted bias), suggesting distinct priors about neutrality. LLMs performed better at generation, removing 79% of words removed by Wikipedia editors. However, LLMs made additional changes beyond Wikipedia editors' simpler neutralizations, resulting in high-recall but low-precision editing. Interestingly, crowdworkers rated AI rewrites as more neutral (70%) and fluent (61%) than Wikipedia-editor rewrites. Qualitative analysis found LLMs sometimes applied NPOV more comprehensively than Wikipedia editors but often made extraneous non-NPOV-related changes (such as grammar). LLMs may apply rules in ways that resonate with the public but diverge from community experts. While potentially effective for generation, LLMs may reduce editor agency and increase moderation workload (e.g., verifying additions). Even when rules are easy to articulate, having LLMs apply them like community members may still be difficult.
Paper:
May 2025

No showcase. Join us for our 12th annual Wiki Workshop.

April 2025

Time: Wednesday, April 16, 16:30 UTC. Find your local time here.
Theme: Motivation of Wikipedia Editors
April 16, 2025
Video:
Motivating Experts to Contribute to Digital Public Goods: A Personalized Field Experiment on Wikipedia
By Yan Chen (University of Michigan)
We conducted a large-scale personalized field experiment to examine how match quality, recognition, and social impact influence domain experts' contributions to Wikipedia. Forty-five percent of the experts expressed willingness to contribute in the baseline condition, while 51% (a 13% increase over the baseline) expressed interest when they received a signal that an article matched their expertise. However, none of the treatments had a significant effect on actual contributions. Greater actual match quality between a recommended Wikipedia article and an expert's expertise (measured by cosine similarity), an expert's reputation, and the Wikipedia article length were the most important predictors of both contribution length and quality. These findings suggest that match quality between volunteers and tasks is critically important in encouraging contributions to digital public goods, and likely to volunteering in general.
Quantitative Analysis of Zambian Wikipedia Contributions: Assessing Awareness, Willingness, Motivation, and the Impact of Gamified Leaderboards and Badges
File:Quantitative Analysis of Zambian Wikipedia Contributions.pdf
By Lighton Phiri (University of Zambia)
Wikipedia is a widely recognized and valuable source of information. However, it encounters persistent challenges in attracting and retaining active contributors. It is recorded that only 10 people from Zambia contributed and created content on Wikipedia in May 2023. While a large number of people consume Wikipedia content, there is a noticeably low number of Wikipedians who contribute content on and about Zambia. This talk outlines a Facebook plugin, WikiMotivate, aimed at motivating Zambian Wikipedians to update pre-existing content, add new entries, and share their natural expertise. WikiMotivate utilizes leaderboard and badge gamification features to encourage and incentivize active Wikipedia content contribution. Using a mixed-methods approach, historical Wikipedia edit histories were used to quantify content contributed by Zambian Wikipedians. In addition, user surveys were conducted to determine relative levels of awareness about Wikipedia, willingness to contribute content on Wikipedia, and perceived motivating factors that affect content contribution on Wikipedia. Finally, in order to determine the most effective approach, a comparative analysis of leaderboards and badges was conducted with nine (9) expert evaluators. The results clearly indicate that a significant proportion of Wikipedia content on and about Zambia is authored by Wikipedians from outside Zambia, with only 11% of the 224 contributors originating from Zambia. In addition, study participants were largely unaware of the various editing practices on Wikipedia; interestingly enough, most participants expressed their willingness to contribute content if trained. In terms of motivating factors, “Information Seeking and Educational Fulfillment” was the key motivating factor. The comparative evaluation of the plugin suggests that incorporating leaderboards and badges is a more effective approach to motivating contributions to Wikipedia. This study provides useful insight into the landscape of Wikipedia content contribution in the Global South.
Project Website: Wikipedia Content Contribution in the Global South
Paper: Chalwe, C., Chanda, C., Muzyamba, L., Mwape, J., Phiri, L. (2024). Quantitative Analysis of Zambian Wikipedia Contributions: Assessing Awareness, Willingness, Motivation, and the Impact of Gamified Leaderboards and Badges. In: Gerber, A. (ed.) South African Computer Science and Information Systems Research Trends. SAICSIT 2024. Communications in Computer and Information Science, vol 2159. Springer, Cham. DOI: 10.1007/978-3-031-64881-6_2
March 2025

Time: Wednesday, March 19, 16:30 UTC. Find your local time here.
Theme: Gender Gaps
March 19, 2025
Video:
Online Images Amplify Gender Bias
By Douglas Guilbeault (Stanford University)
Each year, people spend less time reading and more time viewing images, which are proliferating online. Images from platforms like Google and Wikipedia are downloaded by millions every day, and millions more are interacting via social media like Instagram and TikTok that primarily consist of exchanging visual content. In parallel, news agencies and digital advertisers are increasingly capturing attention online through the use of visual content, which people process more quickly, implicitly, and memorably than text. In this paper, we show that the rise of images online significantly exacerbates gender bias, both in its statistical prevalence and its psychological impact. We examine the gender associations of 3,495 social categories (such as nurse or banker) in over one million images from Google, Wikipedia, and IMDb, as well as in billions of words from these platforms. We find that gender bias is stronger and more prevalent in images than text for both female- and male-typed categories. We further show that the documented underrepresentation of women online is worse in images compared to not only text, but also public opinion and US census data. Finally, we conducted a nationally representative, pre-registered experiment which shows that googling for images rather than textual descriptions of occupations amplifies gender bias in participants’ beliefs. Addressing the societal impact of this large-scale shift toward visual communication will be essential for developing a fair and inclusive future for the internet.
Measuring the Gender Gap (slides)
By Tianwa Chen (The University of Queensland)
In this presentation, I would like to present our three research works aimed at measuring the gender gap on Wikipedia through data-driven strategies. Our first study explores the estimation of gender completeness within Wikipedia, offering a new methodology for assessing content gaps. The second study analyses the evolution of gender diversity, employing visualizations to track the gender distribution in Wikipedia articles categorized under ‘Person’ over time. The third and ongoing study delves into the gender balancing efforts among Wikipedia editors. We are currently conducting interviews within the editor community and planning to develop a dashboard through a co-design approach. These studies collectively advance our understanding of gender representation and provide actionable insights to foster gender equality in the Wikipedia community.
Addressing Wikipedia’s Gender Gaps Through Social Media Ads
By Reham AL Tamime (University of Strathclyde)
Wikipedia’s well-documented gender gap remains a persistent challenge, with women underrepresented among contributors. While past efforts—such as Edit-a-thons, workshops, and social media campaigns—have aimed to bridge this gap, more targeted approaches remain under-explored. In this talk, I will present our project, which explores the use of social media advertising to reach and recruit women as Wikipedia editors. I will share preliminary findings from our targeted advertisements on LinkedIn, where we designed a survey to assess the effectiveness of the reach of the advertisement. Building on these insights, I will discuss how we have expanded our approach to include multiple social media platforms, refined targeting strategies, and developed various messages to increase reach and eventually participation in Wikipedia. (meta page)
February 2025

Time: Wednesday, February 26, 17:30 UTC. Find your local time here.
Theme: Wikipedia Administrators
February 26, 2025
Video:
Where have all the admins gone? Wikipedia administrator recruitment, retention, and attrition (slides)
By Eli Asikin-Garmager, Yu-Ming Liou, Claudia Lo, Caroline Myrick (Wikimedia Foundation)
Wikipedia administrators are users with extra rights who do work beyond editing, such as settling disputes and preventing repeated vandalism. The tens of thousands (sometimes hundreds of thousands) of administrative actions taken every month are vital to the healthy operation of any language version of Wikipedia. We will present results from a project investigating reasons administrators come, stay, and leave the role. We obtained baseline metrics of administrator in/outflow and activity, and conducted a mixed-method, cross-comparative study of potential, current, and former administrators. Results show that the number of active administrators on larger Wikipedias has been declining since 2018, with some exceptions. Our presentation will unpack these findings and explore topics such as what motivates administrators to stay in the role, and what barriers may prevent new administrators from joining. (Report on Meta-Wiki)
January 2025

Time: Wednesday, January 22, 17:30 UTC. Find your local time here.
Theme: Reader Attention and Curiosity
January 22, 2025
Video:
Collective Attention Across Wikipedia and the Web
By Patrick Gildersleve (University of Exeter)
Wikipedia, as one of the most popular websites globally, serves as an important indicator of collective attention online. Readers of news and social media often turn to Wikipedia as a secondary resource for supporting or clarifying information, and this is reflected in the patterns of page views and edits on the online encyclopaedia. Wikipedia is also not just a vast repository of information; it is a network of interconnected articles that exists within the broader ecosystem of the World Wide Web. To fully comprehend the dynamics of online popularity, we must study how individuals navigate between articles and how external platforms drive traffic to Wikipedia, not just Wikipedia articles (or alternative online records) in isolation. In this talk, I will review research on how major news events spark networked surges of collective attention to Wikipedia articles, how Twitter users both navigate and contribute to Wikipedia in response to viral social media content, and how we can combine data from Reddit and Wikipedia to study patterns of attention towards current events, influxes of traffic from social media towards Wikipedia, and the use of Wikipedia in discussions on social media.
Architectural styles of curiosity in global Wikipedia mobile app readership
By Dale Zhou (University of California, Irvine)
A historico-philosophical examination of texts over two millennia previously revealed three styles of curiosity: the wandering “busybody”, the targeted “hunter,” and the creative “dancer.” In this talk, I will review network signatures of these three styles from an analysis of 482,760 readers using Wikipedia’s mobile app in 14 languages from 50 countries or territories. By measuring the structure of knowledge networks constructed by readers weaving a thread through articles in Wikipedia, we expand upon prior work in the laboratory that found evidence for distinct knowledge network architectures constructed by each curiosity style. Moreover, we found associations, globally, between the structure of knowledge networks and population-level indicators of spatial navigation, education, mood, well-being, and inequality. This presentation will describe how these findings advance our understanding of Wikipedia’s global readership and demonstrate how cultural and geographical properties of the digital environment relate to different styles of curiosity.
Paper: