
Digital Frontlines: Interoperable Semantic Profiling of Disinformation Incidents
Contact persons
About
Non-democratic states increasingly deploy coordinated campaigns—spanning human, media, and financial networks—to undermine public trust and sway opinion in target countries. These “hybrid” threats challenge legal norms and are difficult to define, measure, and counter. This research anchors the development of AI agents in robust knowledge representations, such as expert-curated ontologies and knowledge graphs, to improve the detection and understanding of such operations.
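As a flavour of what anchoring AI on knowledge representations can look like in practice, here is a minimal sketch using rdflib; the namespace, class, property names, and the incident/technique identifiers are purely illustrative placeholders, not an actual project ontology or the real DISARM catalogue:

```python
from rdflib import Graph, Namespace, Literal, RDF

# Illustrative namespace and terms only; real work would reuse an
# expert-curated ontology (e.g. DISARM-aligned classes) instead of ad-hoc names.
EX = Namespace("http://example.org/incident/")

g = Graph()
g.bind("ex", EX)

incident = EX["moldova-2024-001"]                        # placeholder incident identifier
g.add((incident, RDF.type, EX.InfluenceIncident))        # class assertion
g.add((incident, EX.targetsCountry, Literal("Moldova")))
g.add((incident, EX.usesTechnique, EX["TQ002"]))         # placeholder, DISARM-style technique ID
g.add((incident, EX.firstObserved, Literal("2024-10-20")))

# SPARQL lets downstream AI agents (or analysts) ask structured questions,
# e.g. "which incidents use a given technique?"
query = """
SELECT ?incident WHERE {
    ?incident ex:usesTechnique ex:TQ002 .
}
"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.incident)
```

Structured representations of this kind are what allow incidents documented by different partners to be linked, compared, and queried consistently.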
Focus and outcome
The student will gain hands-on experience with expert-curated data models (e.g., DISARM) and apply AI models to analyze media content and documented activities from recent high-stakes elections (e.g., Moldova, Côte d'Ivoire), using data provided by partner NGOs (e.g., the German Media Registry). The project involves designing, implementing, and evaluating human-in-the-loop AI pipelines to systematically assess AI’s strengths and limitations in interpreting complex, real-world influence operations from diverse data sources.
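To make the human-in-the-loop idea concrete, the following rough skeleton (plain Python 3.9+, with a keyword stub standing in for whatever classifier or LLM is actually used, and placeholder technique entries rather than the real DISARM catalogue) shows a model proposing technique labels for a media snippet and an analyst confirming them before anything is stored:

```python
from dataclasses import dataclass, field

# Placeholder technique entries; a real pipeline would load IDs and names
# from the DISARM framework rather than hard-coding them here.
TECHNIQUES = {
    "TQ001": "Amplify existing narratives",
    "TQ002": "Flood information channels",
}

@dataclass
class Annotation:
    snippet: str
    proposed: list[str]                                   # technique IDs suggested by the model
    confirmed: list[str] = field(default_factory=list)    # IDs approved by a human reviewer

def propose_labels(snippet: str) -> list[str]:
    """Stand-in for any classifier (keyword rules, fine-tuned model, prompted LLM)."""
    text = snippet.lower()
    return [tid for tid, name in TECHNIQUES.items() if name.split()[0].lower() in text]

def human_review(ann: Annotation) -> Annotation:
    """The human-in-the-loop step: an analyst keeps or overrides the proposed IDs."""
    print(f"\nSnippet:  {ann.snippet}\nProposed: {ann.proposed}")
    kept = input("IDs to keep (comma-separated, blank = accept all): ").strip()
    ann.confirmed = ann.proposed if not kept else [t.strip() for t in kept.split(",")]
    return ann

if __name__ == "__main__":
    docs = ["Networked accounts flood the information space with near-identical posts."]
    for ann in (human_review(Annotation(d, propose_labels(d))) for d in docs):
        print("Stored:", ann.confirmed)
```

Keeping the confirmation step explicit is what makes it possible to compare model proposals against analyst judgements, and hence to assess systematically where the AI helps and where it fails.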
The research also offers the student an opportunity to:
- Collaborate with students from MIT in Oslo during Summer ’26.
- Join an international collaboration of researchers from leading disinformation research labs (e.g., the Cambridge Social Decision-Making Lab and the Center for an Informed Public).
- Co-author results for high-impact publications.
Qualifications
- Exceptional motivation and a problem-solving mindset.
- Interest in pursuing an international-level PhD.
- Comfort working with Python, Linux, and open-source software modules.