As we step into 2026, we are reflecting on a year of building, learning, and momentum at SimPPL.

In 2025, we reached several important milestones:
We expanded and launched projects with Spreeha Foundation in Bangladesh; Deutsche Welle Akademie in Germany, Southeast Asia, and Kenya; Cohere AI in North America; and Jagran New Media in India. We also won awards from Google, Omidyar Network, the Ford Foundation, and several others, jointly with our partners at Harvard University and MIT.
The year also ended on a particularly high note.
A special thanks to Sharda University School of Media, Film, and Entertainment, in collaboration with Jagran New Media, for hosting a talk on our work on information integrity.
Our contributions were recognized by the Collective Intelligence Project for developing one of the first LLM evaluations specific to multilingual reproductive health conversations, assessing medical accuracy, linguistic quality, and safety. This work highlights critical risks in real-world deployments as AI-mediated health guidance scales. We extend a special thanks to Faisal Lalani from CIP for his collaboration and support!
In 2026, we are excited about what's ahead: new Arbiter case studies developed with journalists exploring how AI discourse differs across African countries, how H-1B conversations reflect broader anxieties about labor and migration, and how digital governance debates are evolving as governments move to hold Big Tech accountable; along with deeper research partnerships and continued work at the intersection of AI, media, and public interest.
Keep an eye out for future newsletters as we share more from the year ahead at SimPPL.
Collection of Cyber Threat Intelligence sources from the deep and dark web - fastfire/deepdarkCTI
Reciprocal rank fusion (RRF) is a method for combining multiple result sets with different relevance indicators into a single result set. RRF requires...
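The RRF idea above fits in a few lines. This is an illustrative sketch, not the API of any particular search engine; the constant `k=60` is a commonly used default, and the document names (`bm25`, `vector`) are hypothetical inputs:

```python
def rrf(rankings, k=60):
    """Fuse several ranked lists of document IDs (best first) into one.

    Each document's fused score is the sum of 1 / (k + rank) over every
    list it appears in, so items ranked highly by multiple lists rise.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Return document IDs sorted by fused score, highest first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result sets from two retrievers with different signals.
bm25 = ["a", "b", "c"]
vector = ["b", "c", "d"]
fused = rrf([bm25, vector])
```

Because "b" is ranked by both lists, it outscores "a" even though "a" tops one list; that robustness to disagreeing relevance indicators is the point of RRF.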
SimpleMem: Efficient Lifelong Memory for LLM Agents - aiming-lab/SimpleMem
Have Chinese AI models pulled ahead of their global counterparts? Our latest issue brief analyzes China’s diverse open-weight model ecosystem and examines the policy implications of their widespread global diffusion. Read the key highlights and full report here: https://lnkd.in/g5Wvx68S This brief was authored by Caroline Meinhardt, S...
Based on nearly 60 interviews with staff at tech companies, critics of big tech, civil society groups impacted by tech-amplified social media, and new tech startups, research revealed three distinct but complementary narratives or approaches to thinking about polarization and social cohesion in digital spaces. The “User-Centered” Narrative d...
Dark patterns – deceptive designs that steer users into unintended actions – erode the trust that is essential to a healthy internet. To better understand this problem, we moved beyond rigid technical definitions to ask a simple question: How do people actually experience and perceive these designs?
Platform Interventions Literature Review Codebook. Platform Interventions: How Social Media Counters Influence Operations. Originally created July 22, 2020; this version created December 10, 2020. Author: Kamya Yadav. Field Description: Name/Short description. If the entry is a platform interv...
A study of personality convergence across language models
Despite remarkable progress in large language models, Urdu—a language spoken by over 230 million people—remains critically underrepresented in modern NLP systems. We introduce Qalb, achieving state-of-the-art performance with a weighted average score of 90.34.
AI co-scientists are emerging as a tool to assist human researchers in achieving their research goals. A crucial feature of these AI co-scientists is the ability to generate a research plan given a set of aims and constraints. The plan may be used by researchers for brainstorming, or may even be implemented afte...
Nature - Artificial intelligence boosts individual scientists’ output, citations and career progression, but collectively narrows research diversity and reduces collaboration, concentrating...
In collaboration with Nature, I investigated the impact of the Trump administration on US science one year after its return to office. More than 7,800 research grants were cancelled or frozen, affecting around 25,000 scientists and research staff and resulting in an estimated US$32 billion in lost funding. This project asks a methodological qu...
Create interactive, responsive, and beautiful data visualizations with the online tool Datawrapper — no code required.
A series of graphics reveals how the Trump administration has sought historic cuts to science and the research workforce.
Reddit's API is effectively dead for archival. Third-party apps are gone. Reddit has threatened to cut off access to the Pushshift dataset multiple times. But 3.28TB of Reddit history exists as a torrent right now, and I built a tool to turn it into something you can browse on your own hardware. The key point: This doesn't touch Reddit's servers...