LLM-Enabled Development: From the Democratisation of Coding to Changed IT Practices#

Findings from a Series of Experiments#

Another Wave of Democratisation#

The history of IT tools shows recurring patterns of democratisation: digital music production replaced expensive recording studios, digital photography replaced the darkroom, and digital video editing replaced costly editing suites. Website creation, which used to require HTML knowledge, is now possible with content management systems that do not require programming skills. These developments did not replace the expertise of professional sound engineers, photographers, editors or web developers – they shifted their focus to more complex, demanding projects, while simpler tasks could be handled by a larger group of people.

LLM-supported software development points to a similar shift. For the first time, functional software can be built in a short period of time without writing the code yourself. The central question is not whether professional software development will become obsolete, but where the practical limits of these new possibilities lie, which areas of expertise remain helpful, and how access to software development will expand.

The series of experiments: systematic exploration of the possibilities#

To investigate these questions empirically, a series of development projects of varying complexity was carried out. The tools developed ranged from simple text editing interfaces to multi-agent systems and code analysis tools. Development times ranged from one to seven hours, with code volumes between 800 and 18,000 lines.

Examples from the project results:

  • Chat interface (90 minutes, 800 lines): An interface for local LLMs with history function and automatic title generation
  • Code analysis tool (3 hours, 5,000-6,000 lines): Hierarchical analysis of 1.2 million lines of Java code at file, package, module and system level
  • Presentation tool (2 hours of planning + 30-60 minutes of coding, 3,000 lines): Multi-agent system for structuring presentations based on uploaded documents
  • Text processing tool (6-8 hours, 1,100 lines): Three-stage pipeline for transforming transcripts into professional texts
  • Translation system (6-7 hours, 12,000 lines): Intelligent document translation with context management and structure preservation
  • Migration tool (20 minutes specification + 1 hour development): Content migration support for website relocations
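To give a sense of the kind of structure such tools have, the three-stage text processing pipeline can be skeletonised as follows. This is a minimal sketch: the stage names and transformations are invented stand-ins, not the tool's actual implementation (in particular, the final stage would be an LLM call rather than a pass-through):

```python
# Hypothetical sketch of a three-stage text pipeline; each stage takes the
# previous stage's text and returns a transformed version.

def clean_transcript(text: str) -> str:
    """Stage 1: remove filler words and normalise whitespace."""
    fillers = {"um", "uh", "like"}
    words = [w for w in text.split() if w.lower().strip(",.") not in fillers]
    return " ".join(words)

def restructure(text: str) -> str:
    """Stage 2: split into sentences and capitalise each one."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return ". ".join(s[0].upper() + s[1:] for s in sentences) + "."

def polish(text: str) -> str:
    """Stage 3: placeholder for an LLM call that rewrites the text in a
    professional register; here it simply passes the text through."""
    return text

def run_pipeline(transcript: str) -> str:
    """Chain the three stages; each stage's output feeds the next."""
    text = transcript
    for stage in (clean_transcript, restructure, polish):
        text = stage(text)
    return text
```

The value of such a skeleton is that each stage can be specified, implemented and tested in isolation – which matches the specification-first approach described below.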

These times contrast sharply with conventional development approaches. The translation system was initially estimated at 4-6 weeks – but was realised in 6-7 hours. The time savings range from hours to days compared with classic development, although code written by experienced developers would likely be of higher quality. It is important to note, however, that all the tools developed proved useful for their respective tasks.

Methodological findings#

1. Specification quality as an important success factor

Across all projects, the quality of the specification proved to be a particularly important success factor. The more clearly the functional requirements, technical dependencies and architectural decisions were worked out in advance, the fewer errors the implementation contained. The investment in specification quality – typically between 20 and 90 minutes – paid off in significantly reduced implementation effort.

A good specification goes deep: it describes the user interface down to individual elements, names the technologies to be used and defines the target architecture. This precision enables the LLM to implement the requirements far more accurately than vague instructions would.
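To make this concrete, an excerpt from such a specification might look like the following. The project and all details are hypothetical, purely for illustration:

```
## Functional specification (excerpt)

### User interface
- Single page: text input area (top), "Summarise" button, read-only output pane below
- Progress indicator while a request is running; errors shown inline under the button

### Technologies
- Python 3.11 backend with FastAPI; plain HTML/JavaScript frontend, no framework
- Local LLM accessed via an OpenAI-compatible HTTP endpoint

### Architecture
- One backend module per processing step; strict separation of API layer and logic
```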

The development process itself is changing: clarifying requirements, making architectural decisions and critically evaluating LLM proposals are becoming important tasks, while implementation details can be delegated to the LLM. However, this does not mean that IT expertise is becoming superfluous – on the contrary.

2. Active control against overengineering is helpful

LLMs often tend towards complex solutions that use code patterns known from their training data. This becomes problematic when the proposed architectures exceed the LLM’s replication capabilities – the model ‘knows’ complex patterns but cannot implement them reliably. Consistently demanding simple approaches (KISS principle: Keep It Small and Simple) therefore proved to be a helpful method during specification creation.

Focusing on simple approaches that can be described completely and clearly in a technical specification was particularly effective. The more precisely the desired solution is formulated in advance, the less room there is for overly complex suggestions from the LLM or excessive heuristics. Without this active control, LLMs often suggest complex solutions that appear elegant in theory but do not work cleanly in practice.

3. Structuring for LLM maintainability

Clear modularisation with size limits per file significantly improved both code quality and maintainability by LLMs. During development, it became apparent that files with 1,000-1,500 lines of code represent a practical limit for good maintainability and extensibility. The application should therefore be broken down into distinct modules from the outset to keep all files below this size. These guidelines are also helpful as development progresses, as LLMs tend to gradually expand existing code files – to a size that they can no longer process carefully or that exceeds the available context.

Greater complexity should therefore be deliberately broken down into manageable parts – a decomposition that LLMs cannot yet reliably perform themselves. A good understanding of software architecture is helpful here: not for coding yourself, but for specifying realistic and controllable solutions and defining appropriate module boundaries.
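A size rule like this can be checked mechanically. The following sketch is our own assumption, not part of the experiments; only the 1,500-line limit is taken from the observation above. It lists all files in a project tree that exceed the size LLMs were found to maintain reliably:

```python
from pathlib import Path

MAX_LINES = 1500  # practical upper bound for LLM maintainability

def oversized_files(root, pattern="*.py", limit=MAX_LINES):
    """Return (path, line_count) pairs for files over the limit, largest first."""
    results = []
    for path in Path(root).rglob(pattern):
        with path.open(encoding="utf-8", errors="ignore") as handle:
            count = sum(1 for _ in handle)
        if count > limit:
            results.append((str(path), count))
    return sorted(results, key=lambda item: -item[1])
```

Run after each iteration, such a check flags modules that the LLM has quietly grown past the maintainable size, signalling that a split is due.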

4. Iterative development with specific patterns

Successful projects typically followed a clear sequence: functional specification → technical specification → implementation → testing in a few iterations (2-5). The use of ‘Implementation_Status’ documents, updated after each step, enabled stateless development and fully contextualised re-entry at any time.
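An ‘Implementation_Status’ document can be very simple. A hypothetical template (section names and project details are invented for illustration) might look like this:

```
# Implementation_Status

## Completed
- Stage 1 (transcript cleaning): implemented, tested with two sample transcripts

## In progress
- Stage 2 (restructuring): API defined, implementation pending

## Open decisions
- Output format: Markdown vs. plain text

## Next step
- Implement Stage 2; keep the module under 1,000 lines
```

Because the document captures everything needed to resume, a fresh LLM session can pick up exactly where the previous one left off.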

Observed limits and scaling#

The experiments show the following patterns:

  • Without a detailed specification: approximately 1,000-1,500 lines of code
  • With a good specification: up to approximately 12,000 lines of code
  • With well-thought-out architectural structuring: indications of up to approximately 20,000-25,000 lines of code
  • Beyond that: further approaches need to be explored; there are indications that even larger applications may be possible with additional supporting documents

It became clear that, above a certain size, projects become too extensive for today’s LLMs: once the model’s context is exceeded, the developer must keep the content context, the file-level context, the interplay of the technologies used and the basic structure of the application in mind. This requires ensuring that the implementation and architecture of the application remain comprehensible even as it is successively extended.

New possibilities: The concept of ‘casual code’#

The significantly reduced development times open up new fields of application – and at the same time change the character of software code. Code is no longer exclusively a carefully created and maintained artefact; it increasingly takes on an ephemeral character. Code can be created for use over just a few hours, days or weeks – as a disposable prototype, an experimental tool or a temporary solution.

This new category of ‘casual code’ enables:

Rapid prototyping: Ideas can be transformed into functional demonstrators that serve as a basis for informed decisions. The migration tool was initially developed as a one-hour experiment – and is now being evaluated by several teams.

Exploratory development: The question ‘Does this approach even work?’ can be answered by working prototypes, not by theoretical considerations.

Didactic experiments: Simple testing of teaching/learning scenarios is possible directly, without time-consuming development cycles.

Internal tools and special solutions: Development of tools for specific tasks, software testing, implementation of interfaces to applications, web integration or HTML simulators for illustration purposes.

In addition, a new category of use cases is opening up: software development for everyday work tasks that were not previously associated with coding. The consolidation of different data sources, the visualisation of specific information or the automation of recurring processes can now be implemented in a short time – often within an hour. This changes the dynamics of collaboration: ideas no longer have to be discussed theoretically, but can be directly transferred into functioning prototypes and evaluated together. This immediate implementability enables new forms of iterative problem solving in teams.
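A typical everyday task of this kind – consolidating two data sources – can be sketched in a few lines. The data, column names and join logic below are invented for illustration:

```python
import csv
import io

def consolidate(people_csv: str, orders_csv: str) -> dict:
    """Count orders per person by joining two CSV inputs on the person id."""
    people = {row["id"]: row["name"]
              for row in csv.DictReader(io.StringIO(people_csv))}
    counts = {name: 0 for name in people.values()}
    for row in csv.DictReader(io.StringIO(orders_csv)):
        name = people.get(row["person_id"])
        if name is not None:
            counts[name] += 1
    return counts

# Example: two small CSV sources joined into one summary
people = "id,name\n1,Ada\n2,Ben\n"
orders = "order_id,person_id\n10,1\n11,1\n12,2\n"
summary = consolidate(people, orders)  # {'Ada': 2, 'Ben': 1}
```

Tools of exactly this modest scope are what an LLM can produce within the hour mentioned above – and what teams can then evaluate together immediately.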

This means that there will be much more code than before – and much more code with a limited lifespan. This development raises new questions: How do you deal with quality assurance for such code? What standards apply to the deployment and maintenance of temporary code? How do you document this type of code appropriately?

Changing requirements for development expertise#

Experiments show that a good understanding of software architecture, technology landscapes and system design is currently helpful for successful LLM-supported development. IT professionals with experience in system architectures can currently carry out such developments more easily. However, it is to be expected that these opportunities will also become accessible to other groups of people in the future – similar to previous waves of democratisation.

The expertise will be used differently than in classic development, and the following skills will come into focus:

  • Understanding of software architecture for specifying realistic solutions and defining appropriate module boundaries
  • Knowledge of the purposes, advantages and disadvantages of different technologies
  • Understanding of the complexity of different technologies and their areas of application
  • Careful specification coordination to avoid ambiguities
  • Critical evaluation of LLM proposals for appropriateness and feasibility

 

Areas of application and limitations#

The methodology shows its strengths in specific areas:

Suitable for:

  • Rapid prototyping and feasibility studies
  • Internal tools and utilities
  • Analysis tools without security-critical requirements
  • Exploratory development and experiments
  • Didactic scenarios and learning environments
  • Software testing and interface implementations
  • Web integration and HTML simulators for illustration

Not suitable for:

  • Safety-critical production systems
  • High-quality production software with strict quality requirements
  • Complex systems with extensive dependencies (without significant additional manual effort)
  • Projects that require long-term maintainability by different developers

The deployment challenge#

A specific challenge arises during deployment: when working with technologies that are not fully understood, productive deployment becomes significantly more difficult. It is therefore advisable to choose technologies in the specifications that have already been successfully operated and whose effects, advantages and disadvantages in operation are known.

This aspect becomes particularly relevant with the increasing amount of ‘casual code’: even temporary or experimental tools often have to run somewhere – and the question of appropriate deployment strategies for many small, short-lived applications arises anew.

Implications for IT organisations#

The opportunities observed also raise additional questions for IT organisations:

Development processes: How can rapid prototyping with LLMs be integrated into existing development processes? What quality standards apply to exploratory code?

Skills development: Which qualifications are becoming more important, and which are becoming less relevant? How do you qualify subject matter experts to develop specialised tools in their domain?

Deployment and operation: How do you deal with the increasing amount of ‘casual code’? Which infrastructure strategies are appropriate for many small tools?

Governance: Which guidelines apply to LLM-developed code? How do you evaluate its quality and security?

Conclusion#

The series of experiments shows that LLM-supported development is not a substitute for professional software development, but rather a supplement for exploratory phases, rapid prototyping and the quick validation of tool ideas. The significant time savings – from days to hours – are real and reproducible, but currently still require specific methodological skills and an understanding of architecture.

The democratisation of coding follows historical patterns: just as digital music production, photography and video editing made studios and darkrooms more accessible without replacing professionals, LLM-supported coding will enable a broader user base to develop functional tools – while professional development focuses on more complex, critical systems. Currently, IT professionals with architectural experience still have an advantage, but accessibility is expected to continue to increase.

The future lies not in either/or, but in the differentiated use of different development approaches for different requirement contexts. The systematic exploration of these boundaries and possibilities remains an important task for IT organisations – as does the development of appropriate strategies for quality assurance, deployment and governance of the emerging ‘casual code’.