Through its participation in the AI4Media project, Sound and Vision aims to address this question and to provide input to developers of AI-based tooling for use within the humanities. AI4Media is one of four EU-supported Networks of Excellence that bring together partners across Europe to explore how AI impacts Media, Society and Democracy. AI4Media develops AI-based tooling across the media landscape, ranging from journalism and games to filmmaking and the humanities. Sound and Vision is a use-case partner, providing user requirements to the technical partners and validating the research results with users from the social sciences and humanities. In part, we build on our experience facilitating research in the CLARIAH Media Suite, a research environment built by a team at Sound and Vision. Specifically for AI4Media, we have also started a series of semi-structured interviews with researchers who are potential users of AI-based tooling that could expedite their work.
Configurable AI-based tooling
Through these interviews we identified a more specific need among scholars working with media: to identify, quantify and challenge issues of bias, framing and representation in media. Being able to identify changes over time, and differences between media sources, holds great value for critical reflection on media and society. The challenge with these concepts, however, is their level of abstraction, and the fact that the definition and operationalization of these broad and complex societal issues is itself subject to debate. Researchers develop their own understanding and identify more specific instances of these broad phenomena: for example, the degree to which talk show hosts interrupt female versus male speakers, or the framing of violence in various news channels covering an international conflict. As one can imagine, the specific instances of framing, bias and representation can vary hugely, so facilitating such research requires a great degree of flexibility and configurability from AI-based tooling.
At its core, the tooling should enable researchers to perform basic search operations: detecting specific concepts across modalities within large multimodal (text, audio and moving-image) datasets. But the configurability should reach further. Researchers should be able to set a threshold for the confidence scores of results, specify the context in which concepts should occur and, in the case of time-based media, define how far apart occurrences may be. In the later stages of the AI4Media project, Sound and Vision will work on a demonstrator in which various tools developed in the project are brought together and evaluated with end users.
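To make this kind of configurability concrete, here is a minimal sketch of how a researcher-facing filter over concept detections might look. All names and parameters (`Detection`, `min_confidence`, `max_gap`) are illustrative assumptions for this post, not the actual API of any tool developed in AI4Media:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A single detected concept occurrence in a time-based media item (hypothetical)."""
    concept: str       # e.g. "interruption" or "female_speaker"
    modality: str      # e.g. "speech", "visual", "subtitle"
    start: float       # seconds from the start of the programme
    confidence: float  # detector score in [0, 1]

def filter_detections(detections, min_confidence=0.8, concepts=None, max_gap=30.0):
    """Keep detections at or above a confidence threshold, optionally restricted
    to a set of concepts, then group them into clusters whose consecutive
    occurrences lie at most `max_gap` seconds apart."""
    kept = sorted(
        (d for d in detections
         if d.confidence >= min_confidence
         and (concepts is None or d.concept in concepts)),
        key=lambda d: d.start,
    )
    clusters = []
    for d in kept:
        # Start a new cluster when the gap to the previous detection is too large.
        if clusters and d.start - clusters[-1][-1].start <= max_gap:
            clusters[-1].append(d)
        else:
            clusters.append([d])
    return clusters
```

In this sketch, a researcher studying interruptions of female speakers could raise `min_confidence` to reduce false positives, or widen `max_gap` to treat detections within the same segment of a talk show as one event.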
Preconditions for the use of AI in academia
Another part of AI4Media focuses on the trustworthiness of the developed AI-based tooling. Academic research holds the highest standards when it comes to the transparency of methodologies, the fairness of representation in machine learning training sets, and the reproducibility of research. Within our use case we will therefore also stress these preconditions for trustworthy AI-based tooling, so that scholars can confidently deploy it.
Please contact us if you have any questions about this project, or if you are a researcher with ideas on how AI can support humanities research focused on media.
Subscribe to the Research newsletter of Sound and Vision to stay informed of all meetings and activities we organise to make our collections accessible for research. The newsletter is in Dutch.