Themed Workshop 3 (Morning session)

From Prompts to Play: Creating Game Narratives with Large Language Models


Description: This workshop explores how Large Language Models (LLMs) can be used to generate narrative content for games and digital storytelling. It centres on prompting-based narrative generation and asks practical, research-driven questions: What kinds of stories do LLMs produce under different prompting techniques? How consistent, creative, and usable are these outputs for narrative design? To what extent do these techniques introduce bias, stereotypes, or problematic representations? The workshop aims to bring together researchers and practitioners interested in generative narrative systems, narrative evaluation, and responsible AI for creative media.


Session 1, led by Dr. Mustafa Can Gursesli, introduces the workshop's core framing, focusing on LLM prompting methods for narrative creation. It will discuss commonly used strategies such as zero-shot, few-shot, and structured prompting, as well as how these methods influence narrative quality, coherence, and character stability. This session will also highlight practical limitations of LLM-generated narratives, including inconsistencies, hallucinations, a lack of long-term structure, and the tendency to reproduce biased or culturally narrow story patterns. The goal is to establish a shared understanding of what it takes for a prompting technique to succeed in a narrative context.
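To make the distinction between these strategies concrete, the sketch below illustrates how zero-shot and few-shot prompts for narrative generation might be constructed. All function names, variables, and example texts are hypothetical and for illustration only; they do not come from the workshop materials.

```python
# Illustrative sketch (hypothetical names): building zero-shot and
# few-shot prompts for narrative generation, as discussed in Session 1.

def zero_shot_prompt(premise: str) -> str:
    """Zero-shot: ask the model directly, with no worked examples."""
    return f"Write a short game narrative based on this premise:\n{premise}"

def few_shot_prompt(premise: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: prepend premise->story pairs to steer style and structure."""
    shots = "\n\n".join(f"Premise: {p}\nStory: {s}" for p, s in examples)
    return f"{shots}\n\nPremise: {premise}\nStory:"

# Hypothetical usage:
examples = [
    ("A knight loses their sword.", "Sir Aldric woke to an empty scabbard..."),
]
print(zero_shot_prompt("A city floats above the clouds."))
print(few_shot_prompt("A city floats above the clouds.", examples))
```

The resulting strings would then be sent to an LLM of choice; structured prompting extends the same idea by constraining the requested output format (for example, scene lists or character sheets).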


Session 2, led by Prof. Georgios Yannakakis, is structured as an interactive, panel-style roundtable focused on group discussion and synthesis. This session invites participants to contribute perspectives and examples to consolidate shared insights and build a community-driven research agenda. It also explores how LLM-generated narrative outputs should be evaluated in both research and practice, including questions about measurable quality criteria, reproducibility, and methodological challenges. The discussion further addresses bias and fairness in generated narratives and how evaluation frameworks can capture harmful patterns beyond surface-level content. The main goal is to identify key open problems and jointly define future research directions. Participants are encouraged to contribute early-stage ideas, examples of prompting workflows, evaluation methods, case studies, or preliminary findings. The session will conclude with a synthesis of shared priorities and potential collaboration opportunities, which may lead to joint publications and future research initiatives.


Chairs: Mustafa Can Gursesli (can.gursesli@tuni.fi; main contact), Georgios Yannakakis (georgios.yannakakis@um.edu.mt)