LLM-Enabled Development: Cascading Workflows Using the Example of a Text Processing Tool

From spoken to written language – in three consecutive steps

As part of the experimental series on LLM-supported development, this article presents a project to build a text processing tool. In this project, focused, cascading prompts led to better results than complex all-in-one approaches.

The problem

The starting point was a practical challenge: an eight-hour training recording on LLM background information was available as a transcript. However, transcribed spoken language differs significantly from written text – it contains filler words, repetitions, interrupted sentences and colloquial expressions. Manual processing of eight hours of material was not feasible.

At the same time, there was an exploratory learning objective: How can multi-stage LLM workflows be technically implemented? Which patterns prove themselves in text processing?

The technical implementation

A tool with a three-stage processing cascade was developed:

Stage 1 – Cleaning: Correction of obvious transcription errors, completion of broken sentences, removal of filler words

Stage 2 – Revision: Reformulation of colloquial expressions into factual, professional language, improvement of text structure

Stage 3 – Formatting: Incorporation of headings, outline, Markdown structuring

The technical basis was provided by Gradio for the user interface, Python with asyncio for asynchronous processing, and Gearman for background job management. Longer texts are divided into sections using character-based chunking.
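The article does not show the chunking code itself; a minimal sketch of character-based chunking might look as follows. The window size and the preference for paragraph boundaries are illustrative assumptions, not details confirmed by the project:

```python
def chunk_text(text: str, max_chars: int = 4000) -> list[str]:
    """Split text into chunks of at most max_chars characters.

    Prefers to cut at the last paragraph break inside the current
    window; falls back to a hard cut if no break is found.
    (max_chars=4000 is an assumed default, not the tool's value.)
    """
    chunks = []
    pos = 0
    while pos < len(text):
        end = pos + max_chars
        if end >= len(text):
            chunks.append(text[pos:])
            break
        # Look for the last paragraph break inside the window.
        cut = text.rfind("\n\n", pos, end)
        if cut <= pos:
            cut = end  # hard cut: no boundary found
        chunks.append(text[pos:cut])
        pos = cut
        # Skip the newline characters that formed the boundary.
        while pos < len(text) and text[pos] == "\n":
            pos += 1
    return chunks
```

Each chunk is then sent through the cascade separately, which keeps every single LLM call well below the context limit.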

Development effort: Approximately 6-8 hours spread over several weeks, including three hours for prompt optimisation. Code size: 1,100 lines in three files. Approximately ten main iterations.

Key observations

Initial attempts with a single, comprehensive prompt produced severely damaged and truncated texts: the LLM attempted too many transformations at once, which led to information loss.

The chosen approach: each processing stage was given exactly one focused task. Instead of ‘clean up, revise and format the text’, three separate prompts were used, each with a clear goal. This cascading allowed for careful, precise text editing with minimal information loss.
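The cascade can be sketched as three single-purpose prompts applied in sequence. The prompt wording and the `call_llm` function below are illustrative assumptions, not the project's actual prompts or client code (the real tool runs asynchronously via asyncio and Gearman):

```python
# Illustrative single-purpose prompts -- not the project's actual wording.
PROMPTS = {
    "clean": "Correct obvious transcription errors, complete broken "
             "sentences, and remove filler words. Change nothing else.",
    "revise": "Reformulate colloquial expressions into factual, "
              "professional language and improve the text structure.",
    "format": "Add headings, an outline, and Markdown structure.",
}

def run_cascade(text: str, call_llm) -> str:
    """Apply the three focused stages in order.

    call_llm(prompt, text) -> str is a stand-in for whatever
    LLM client the tool actually uses.
    """
    for stage in ("clean", "revise", "format"):
        text = call_llm(PROMPTS[stage], text)
    return text
```

The design point is that each call carries exactly one instruction, so no stage has to trade one goal off against another.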

A transcript passage such as ‘LLMs never say, I don’t know’ becomes ‘LLMs never say: I don’t know’ through this pipeline – identical in content, but linguistically professional.

The development interface as a methodological tool

One helpful technique was the creation of an extended development interface. While the production version offers a reduced user interface, the development version allowed for:

  • Interactive adjustment of prompts during processing
  • Insight into intermediate results at each processing stage
  • Experimental iteration without deployment cycles

Only after optimising the prompts in the development interface was the final user interface derived.
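The core of such a development harness can be sketched in a few lines: run the cascade with caller-supplied prompts and record every intermediate result for inspection. In the project this sat behind a Gradio interface; the function and its names here are chosen for illustration only:

```python
def run_cascade_debug(text: str, prompts: dict, call_llm) -> list[tuple]:
    """Run the cascade with adjustable prompts and keep a trace.

    Returns one (stage, prompt, output) tuple per stage, so prompt
    variants can be compared side by side without a deployment cycle.
    """
    trace = []
    for stage, prompt in prompts.items():
        text = call_llm(prompt, text)
        trace.append((stage, prompt, text))
    return trace
```

Wiring the returned trace into per-stage output panels gives exactly the kind of insight into intermediate results described above.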

Opportunities and insights

One important insight concerns scalability: approximately 1,000 to 1,500 lines of code proved to be a practical limit for developments without more detailed specifications. Larger projects require more structured approaches and more detailed advance planning.

Further insights:

  • Clear, single-purpose prompts showed better results in this project than complex multi-task prompts
  • Prompt optimisation required significant time (here: 50% of development time)
  • Iterative development with experimental environments accelerated optimisation
  • Simple technical approaches (character-based chunking) proved to be sufficient

This is part of a series on methodological insights from LLM-supported development projects. The focus is on transferable patterns and observations about the approach, not on the tools themselves.