LLM-supported development: Text editing without prompt dialogue#

The tool and how it works#

As part of a series of experiments exploring LLM-supported coding, a Gradio-based interface for text editing was developed. The tool differs from conventional LLM interfaces in that it does not use a dialogue prompt: users work exclusively with an input and an output area, between which curated tools (correcting, translating, summarising, analysing, etc.) operate.

The architecture comprises 12 predefined tools that are activated via buttons. Each tool is based on a curated prompt optimised to return only the edited text, without meta-comments. Users can modify a tool with additional instructions or formulate their own commands entirely. A history function stores up to 15 editing steps, each titled automatically by a separate LLM call after the action completes.
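The core loop described above can be sketched in a few lines. This is a minimal, hypothetical reconstruction: the tool names, prompts and `llm_call` signature are illustrative assumptions, not the project's actual definitions.

```python
from collections import deque

# Curated prompts mapped to tool names (illustrative examples only).
TOOL_PROMPTS = {
    "correct": "Correct spelling and grammar. Return only the edited text.",
    "translate": "Translate to English. Return only the translation.",
    "summarise": "Summarise concisely. Return only the summary.",
}

MAX_HISTORY = 15  # the tool keeps at most 15 editing steps


def run_tool(tool, text, history, llm_call, extra_instructions=""):
    """Apply a curated tool to the input text and record the result."""
    prompt = TOOL_PROMPTS[tool]
    if extra_instructions:
        # Users can extend a tool with their own instructions.
        prompt += " " + extra_instructions
    output = llm_call(prompt, text)
    history.append({"tool": tool, "input": text, "output": output})
    return output


# Usage with a stub in place of a real LLM call; a bounded deque gives the
# "keep only the last 15 steps" behaviour for free.
history = deque(maxlen=MAX_HISTORY)
fake_llm = lambda prompt, text: text.strip().capitalize()
result = run_tool("correct", "hello world", history, fake_llm)
```

Using `deque(maxlen=...)` means the oldest step is silently dropped once the limit is reached, which matches the bounded-history behaviour described above.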

Experimental objective#

The primary learning objective was to evaluate whether text editing tools are practical without explicit user prompts. This was motivated by the observation that prompting for everyday tasks is often perceived as time-consuming and error-prone. The tool specifically addresses the most common use cases of general text editing and was designed to clarify whether an artefact-centred approach (an input/output area instead of chat) is suitable for recurring tasks.

Technical implementation and development process#

Development followed a specification-first principle: in 90 minutes, a detailed 14-page specification was created through structured interaction with an LLM. The process included describing the required functionalities, discussing implementation options and systematically selecting them. Particular focus was placed on avoiding overengineering, especially in heuristics.

For this project, an ‘Implementation_Status’ document was introduced for the first time, which the LLM updated after each implementation step. This artefact provides a complete overview of the implementation status at any time and enables stateless development: each new prompt interaction can begin with full context.
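A status artefact of this kind might look like the following sketch. The module names and entries are hypothetical, shown only to illustrate the idea of a document the LLM keeps current after every step.

```markdown
# Implementation_Status

## Done
- app_logic.py: tool dispatch, input/output handling
- prompts.py: 12 curated tool prompts, meta-comment suppression

## In progress
- export.py: PDF export (layout issues open)

## Open
- history titles via separate LLM call
```

Because the document always reflects the current state, a fresh prompt can simply include it and proceed without relying on earlier chat history.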

The implementation followed the modular structure laid out in the specification. The final code comprises approximately 2,000 lines, distributed across application logic (~800 lines), export functions (~500) and prompt definitions (~200). Total development time was around 3 hours: 90 minutes for the specification, 30 minutes for prompt development, and 60 minutes for implementation, deployment and documentation. The specification went through 2-3 iterations over two days.

Key methodological insights#

Specification quality as a success factor: The quality of the specification proved decisive for feasibility. A precise specification significantly reduces the number of implementation iterations.

Implementation_Status artefact: The systematically maintained status document proved to be a clear workflow improvement. It enables stateless development: every new prompt can be fully contextualised at any time.

Code structuring: LLMs manage code files well if they remain below 1000 lines (maximum 1500 lines). Clear modularisation proved to be central to LLM-supported coding.

Heuristics as a supplement: Even when using LLMs, heuristics used in moderation are helpful. The combination of prompt engineering (for LLMs with good prompt following) and downstream filtering proved effective in avoiding meta-responses.
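The "downstream filtering" idea can be sketched as a small post-processing step that strips common meta-preambles an LLM may prepend despite being told to return only the edited text. The patterns below are illustrative assumptions, not the project's actual heuristics.

```python
import re

# Matches a leading meta-comment line such as "Here is the corrected text:"
# (illustrative patterns only; a real filter would be tuned to observed output).
META_PREFIXES = re.compile(
    r"^(here is|here's|sure[,!]?|certainly[,!]?|below is)[^\n]*:\s*\n+",
    re.IGNORECASE,
)


def strip_meta(response: str) -> str:
    """Remove a single leading meta-comment line, if present, and trim whitespace."""
    return META_PREFIXES.sub("", response.strip(), count=1).strip()


strip_meta("Here is the corrected text:\n\nThe cat sat.")  # → "The cat sat."
strip_meta("The cat sat.")  # unchanged
```

The filter is deliberately conservative (one leading line, `count=1`): in moderation, such heuristics complement rather than replace careful prompt engineering.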

Prompt engineering strategy: For LLMs with good prompt following, clear structuring into individual aspects works well. For weaker prompt following, precise continuous text instructions are more effective.
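The contrast between the two styles can be illustrated with a pair of hypothetical correction prompts; neither is the tool's actual prompt text.

```python
# Style 1: structured into individual aspects — works well for models
# with strong prompt following.
STRUCTURED_PROMPT = """\
Task: Correct the text.
Rules:
- Fix spelling, grammar and punctuation only.
- Preserve the original wording and tone.
- Return only the corrected text, with no commentary.
"""

# Style 2: precise continuous text — more reliable for models with
# weaker prompt following, which may ignore list structure.
PROSE_PROMPT = (
    "Correct only the spelling, grammar and punctuation of the following "
    "text, preserving its wording and tone, and reply with nothing but "
    "the corrected text."
)
```

Both prompts encode the same constraints; only the packaging differs, which is the point of the strategy described above.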

Validation and use#

The tool was shared with several test users. Initial feedback on limitations was addressed through minor adjustments (primarily prompt refinements). The implementation proved robust; no significant structural changes were necessary.

The tool is used productively for everyday tasks: translations, text corrections, stylistic adjustments and linguistic analyses. The acceptance of the prompt-free approach confirmed the hypothesis: the advantage lies not in avoiding prompts (this option remains available via ‘Custom command’), but in working with input and output texts instead of dialogical interaction. The tool is particularly suitable for fast, recurring tasks with small to medium text lengths.

Conclusion#

The experiment demonstrates that artefact-centred tools without primary prompt dialogue are practicable and accepted by users for defined use cases. The methodological findings – in particular regarding the specification-first principle, the implementation_status artefact and code structuring – could be transferable to other LLM-based development projects. The short development time of 3 hours with functional robustness illustrates the potential of systematically specification-driven LLM development.