Workshop with MediaFutures

Sign up

When

5 June

Time: 12:00 - 15:00


What

Together with its partners, SFI MediaFutures will highlight the ongoing research and innovation on disinformation and fake news in Norway. Since Russia's large-scale invasion of Ukraine in February 2022, the internet has been flooded with disinformation. This has led to renewed attention to the potential of automated fact-checking technologies for combating false information online.
In Norway, there are several initiatives from academia and industry researching and providing tools and insights in this area.

Together with Media City Bergen, SFI MediaFutures will host a workshop on 5 June where we bring together researchers, entrepreneurs, media tech professionals, and the media industry for presentations and discussions of current affairs and challenges with disinformation and fake news.

Program

12:00 - Welcome - Christoph Trattner
12:10 - Faktisk.no/IJ - Henrik Brattli Vold
12:40 - NORDIS - Laurence Dierickx
13:10 - Coffee and snacks
13:25 - MediaFutures - Duc Tien Dang Nguyen and Sohail Ahmed Khan
14:05 - Factiverse/UiS - Vinay Setty
14:35 - Discussion - Led by Duc Tien Dang Nguyen
15:00 - End

Descriptions of the talks and speakers 

Faktisk.no/IJ - How Faktisk Verifiserbar verifies the war

Since the start of the war in Ukraine, Faktisk Verifiserbar has been developing methods and workflows to counter propaganda and bring precise, verified information from social media into Norwegian newsrooms. Over the course of these months, the newsroom has trained more than 30 journalists in open-source intelligence (OSINT) gathering, and their expertise is being used to raise awareness of these methods in many large newsrooms across Norway. Henrik Brattli Vold will show you how they succeeded, and how they taught themselves to work with free or inexpensive tools to unmask stories from Ukraine.


Henrik Brattli Vold works at the Institute of Journalism in Pressens Hus, and he is also a fellow of the Faktisk Verifiserbar newsroom. He has experience from a wide range of fields and media, and he is now a journalism trainer.


NORDIS - Fact-checking the Ukraine war

As part of the Nordic Observatory for Digital Media and Information Disorder (NORDIS), the University of Bergen has focused on the tools and technologies likely to support or augment fact-checking practices. The research in this context encompasses state-of-the-art fact-checking technologies, a study of Nordic fact-checkers' user needs, and the design and development of a set of multimedia forensic tools to support a human-in-the-loop approach that takes end-user requirements into account.


Professional fact-checking can be schematized as a three-stage pipeline: identifying claims, verifying claims, and providing a verdict. However, this process is challenged by the complex application domains it relates to – scientific, economic, political, cultural, social – and by the nature of the fact to check, whether textual or audiovisual. Hence, to better understand fact-checkers' needs and requirements in context, we studied the challenges of the Russian-Ukrainian war, which relate to attempts to manipulate public opinion through propaganda. What particular challenges do fact-checkers face? Does the socio-professional context affect the difficulties they encounter? Do fact-checkers perceive that they have adequate resources to perform their job efficiently?
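To make the three-stage structure concrete, here is a minimal sketch in Python of such a claim-to-verdict pipeline. Every function name, heuristic, and threshold in it is a hypothetical placeholder for illustration, not a tool presented in this talk.

# Minimal, illustrative sketch of the three-stage fact-checking pipeline:
# (1) identify claims, (2) verify claims against evidence, (3) provide a verdict.
# All names and heuristics here are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    label: str            # e.g. "supported", "refuted", "not enough evidence"
    evidence: list[str]

def identify_claims(text: str) -> list[str]:
    """Stage 1: split text into sentences and keep the check-worthy ones."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    # Toy check-worthiness heuristic: keep sentences that contain a number.
    return [s for s in sentences if any(ch.isdigit() for ch in s)]

def retrieve_evidence(claim: str) -> list[str]:
    """Stage 2a: fetch candidate evidence (a stub; in practice, search or a curated corpus)."""
    return []  # placeholder

def verify_claim(claim: str, evidence: list[str]) -> str:
    """Stages 2b-3: compare the claim against the evidence and produce a label."""
    if not evidence:
        return "not enough evidence"
    # In practice this would be a trained natural-language-inference / stance model.
    return "supported" if claim in " ".join(evidence) else "refuted"

def fact_check(text: str) -> list[Verdict]:
    verdicts = []
    for claim in identify_claims(text):
        evidence = retrieve_evidence(claim)
        verdicts.append(Verdict(claim, verify_claim(claim, evidence), evidence))
    return verdicts

if __name__ == "__main__":
    for v in fact_check("The city was shelled 14 times in March. The sky is blue."):
        print(v)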

To answer these questions, the research method included structured interviews with fact-checkers and an online questionnaire distributed during the Global Fact 9 Conference, held in June 2022 in Oslo. In total, 85 fact-checkers from 46 countries participated in the survey. Initial results showed that the main challenges they face concern access to reliable sources on either side of the conflict. They also highlighted difficulties in verifying information presented in a manipulated context. Being part of a global fact-checking network is viewed as an asset for exchanging information. Fact-checkers had mixed views on the sufficiency of the tools at their disposal, but they agreed on the need for new technological tools to provide context and accurate translations.


Laurence Dierickx has a professional background in data and computational journalism. She holds a master's degree in information and communication science and technology and a PhD in information and communication science. She is a researcher at the Department of Information Science and Media Studies at the University of Bergen and a data journalism teacher at the Université Libre de Bruxelles (Belgium).


MediaFutures: The Future of Mis/Disinformation: The Risks of Generative Models and Detection Techniques for Countering Them

In this talk, we will explore the current progress of generative models and their potential for spreading misinformation and disinformation. In recent years, generative models, including GANs, diffusion models, and large language models, have demonstrated remarkable progress in generating realistic fake (deepfake) content such as images, audio, and text. We will highlight the potential dangers that come with these powerful models and provide insight into the research efforts devoted to detecting content generated by them. The challenges associated with detecting such content, and the various efforts under way to combat it, will also be briefly discussed. Overall, the talk will highlight the importance of staying vigilant against the spread of mis/disinformation and the critical role that detection models can play in mitigating its impact.


Sohail is a PhD candidate at MediaFutures and the University of Bergen. He holds an MSc in Cybersecurity and Artificial Intelligence from the University of Sheffield, UK. Prior to joining MediaFutures, Sohail worked as a research assistant at the Mohamed bin Zayed University of AI, Abu Dhabi, UAE. Before that, he worked as a remote research assistant at the CYENS Centre of Excellence, Nicosia, Cyprus. His research interests lie at the intersection of deep learning, computer vision, and multimedia forensics. Sohail is currently associated with MediaFutures' Work Package 3, Media Content Production and Analysis.


MediaFutures: Detecting Cheapfakes - Lessons Learned from Three Years of Organizing the Grand Challenge

This talk discusses the challenges and lessons learned from three years of organizing a grand challenge focused on detecting out-of-context (OOC) images. Cheapfakes, which refer to non-AI manipulations of multimedia content, are more prevalent than deepfakes and can be created with editing software or by altering the context of a piece of media through misleading claims. Detecting OOC media is much harder than detecting fake media because the images and videos themselves are not tampered with.


Our challenge aims to develop and benchmark models that can detect whether a given news
image and its associated captions are OOC, based on the recently compiled COSMOS
dataset. Participants have developed state-of-the-art methods, and we will discuss the
evaluation metrics used in the challenge. We have also learned valuable lessons on the
complexities and nuances of detecting OOC images and the importance of creating diverse
and representative datasets.
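To make the task concrete, here is a rough Python illustration of one common style of baseline for image-caption OOC detection: if both captions match the image visually but contradict each other textually, the pairing is flagged as out of context. This is a simplified heuristic using publicly available CLIP and sentence-embedding models; the thresholds are arbitrary, and it is not the COSMOS baseline or any method from the challenge.

# Rough illustration of an out-of-context (OOC) heuristic for a news image with two captions.
# Not the COSMOS baseline; models and thresholds are arbitrary choices for this sketch.

from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor
from sentence_transformers import SentenceTransformer, util

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = SentenceTransformer("all-MiniLM-L6-v2")

def is_out_of_context(image_path: str, caption_a: str, caption_b: str) -> bool:
    image = Image.open(image_path)

    # Image-text alignment: does each caption plausibly describe the image?
    inputs = clip_proc(text=[caption_a, caption_b], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = clip(**inputs).logits_per_image[0]  # similarity of the image to each caption

    both_match_image = torch.min(logits).item() > 20.0  # arbitrary threshold

    # Caption-caption agreement: do the two captions describe the same context?
    emb = text_encoder.encode([caption_a, caption_b], convert_to_tensor=True)
    captions_agree = util.cos_sim(emb[0], emb[1]).item() > 0.5  # arbitrary threshold

    # OOC if the image supports both captions, but the captions tell different stories.
    return both_match_image and not captions_agree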


Additionally, we will share insights on the interdisciplinary collaboration needed to combat
cheapfakes effectively. The talk highlights the significance of detecting OOC media in news
items, specifically the misuse of real photographs with conflicting captions.


Duc-Tien Dang-Nguyen is an associate professor of computer science at the Department of Information Science and Media Studies, University of Bergen. His main areas of expertise are multimedia forensics, lifelogging, multimedia retrieval, and computer vision. He is a member of MediaFutures WP3 - Media Content Analysis and Production in Journalism - and of the Nordic Observatory for Digital Media and Information Disorder (NORDIS). He is the author or co-author of more than 150 peer-reviewed and widely cited research papers, and a programme committee member of a number of conferences in the fields of lifelogging, multimedia forensics, and pattern recognition.


Factiverse/UiS: Explainable AI for Automated Fact-Checking

Automated fact-checking using AI models has shown promising results in combating misinformation, thanks to the several large-scale datasets that are now available. However, most models are opaque and do not provide reasoning behind their predictions. Moreover, with the recent popularity of LLMs such as GPT-3/4 by OpenAI, Llama by Meta, and Bard by Google, there is renewed worry about misinformation. In this talk, I will survey the existing approaches to explainable AI (XAI) for fact-checking and discuss the latest trends in this topic. The talk will also delve into what makes a good explanation in the context of fact-checking and identify potential avenues for future research to address the current limitations.
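As a toy illustration of the kind of explanation such approaches aim for, the sketch below trains a tiny bag-of-words claim classifier and uses LIME to show which words drive its verdict. The model, the handful of training claims, and the labels are made up for the example; this is not Factiverse's system or any production model.

# Toy illustration of explainable fact-checking: a tiny claim classifier plus a LIME
# explanation of which words pushed the prediction. Entirely illustrative.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Made-up training claims with labels (0 = refuted, 1 = supported).
claims = [
    "The vaccine was approved by the national regulator",
    "Official figures show unemployment fell last quarter",
    "The photo shows a staged scene, not the reported city",
    "The quote was fabricated and never said by the minister",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(claims, labels)

explainer = LimeTextExplainer(class_names=["refuted", "supported"])
explanation = explainer.explain_instance(
    "Figures show the photo was staged",  # made-up claim to explain
    model.predict_proba,
    num_features=5,
)
print(explanation.as_list())  # words with their positive/negative contribution to the verdict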


Vinay Setty is a co-founder and the CTO of Factiverse. He is also an associate professor at the University of Stavanger. His research area broadly includes natural language understanding (NLU), information retrieval (IR), and text mining over unstructured textual documents as well as structured knowledge graphs, with a specific focus on automated fact-checking, question answering, and conversational search over knowledge graphs. He won the SR-Bank Innovation Prize in 2020 for using deep neural networks for fake news detection. He holds a PhD from the University of Oslo and was a postdoctoral researcher at the Max Planck Institute for Informatics in Germany.

Where

Media Lab, MCB Tower 3, 9th floor

Sign up

Organizer

MediaFutures and Media City Bergen