‘No code’ does not mean ‘no engineer’

The media is currently filled with stories and announcements claiming that the current generation of Large Language Models (LLMs) can write code at such a level that they will replace software developers, and even that software can now write itself. Anthropic’s recent Claude Opus 4.6 announcement (February 2026) touts “the world’s best model for coding, enterprise agents, and professional work,” achieving state-of-the-art performance on Terminal-Bench 2.0 for agentic coding. Meanwhile, OpenAI’s GPT-5.3-Codex launch (also February 2026) claims “the most capable agentic coding model to date,” achieving new highs on SWE-Bench Pro and Terminal-Bench 2.0. These stories are not just hype either – LLMs have become very good at writing code, and that trend will undoubtedly continue. I do not doubt that in the future most software, even critical infrastructure, will be written by AI. But the question for this point in the journey is: are we there yet?


There is a plethora of no-code platforms springing up. Platforms such as Lovable, Bolt.new, Bubble, Replit, and Cursor allow anyone who can type their desires into a chat prompt to create functional, good-looking applications complete with databases, integrations with other applications, and all the features of a modern web application. Within a couple of hours, a non-technical user can build and deploy to the internet applications that are seriously impressive. Not that long ago, this would have taken a software developer days or weeks of development time, with all the associated costs.

It sounds too good to be true, so where’s the catch?

Well, there are indeed catches, and I would place them in two main categories: governance and technical.

The Hidden Costs of Moving Fast

When people hear the word “governance” they often think of legal documents, dull policies, and compliance. While those associations can be fair, when we talk about governance we’re really talking about the process of gaining assurance that the things you’re doing meet your organisational objectives. Assurance is a much better word than compliance – compliance is something you would not otherwise do and have forced upon you; assurance is something of value that you do willingly.
In this context, good governance is asking questions like: Are the applications created secure? Where is my data stored? Is this company using my data for their own purposes?

Not only are organisations required by data protection legislation to consider these questions, it also makes good business sense to do so!

When AI Writes Code, Who’s Checking Its Work?

The applications that these platforms produce can be tested by the user to confirm they provide the requested functionality (does it show me a list of recent orders, can I download a spreadsheet of data, and so on), but designing software involves far more than the functional requirements a typical user might define. Most people are not thinking about tenant isolation, user permission boundaries, floating-point arithmetic, or the correct use of Boolean logic, to name just a few. As a non-technical user, how can you be sure that the code written by the AI is well architected and secure?
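To make that concrete, consider tenant isolation, the first item on that list. The sketch below is purely illustrative – the data model, function names, and tenant IDs are all invented for this example rather than taken from any platform’s output – but it shows the shape of the problem: a lookup that does exactly what the user asked for while letting any caller read any tenant’s data.

```typescript
// Illustrative sketch only: the data model and function names are
// invented for this example, not taken from any platform's output.

type Order = { id: string; tenantId: string; total: number };

// A stand-in for the application's database.
const orders = new Map<string, Order>([
  ["o-1", { id: "o-1", tenantId: "acme", total: 120 }],
  ["o-2", { id: "o-2", tenantId: "globex", total: 75 }],
]);

// What generated code often looks like: it does what was asked
// ("show me an order"), but any caller can read any tenant's data.
function getOrderUnsafe(orderId: string): Order | undefined {
  return orders.get(orderId);
}

// What a review would insist on: every lookup is scoped to the
// caller's tenant, so one customer can never read another's orders.
function getOrder(orderId: string, callerTenantId: string): Order | undefined {
  const order = orders.get(orderId);
  return order?.tenantId === callerTenantId ? order : undefined;
}

// A user from "acme" probing another tenant's order ID:
console.log(getOrderUnsafe("o-2"));   // leaks globex's order
console.log(getOrder("o-2", "acme")); // undefined – correctly denied
```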


The Reality Check: What I Found

I’ve reviewed the output of some of these platforms, and the pattern is clear – in their current form, these platforms fall short on both governance and security. On governance, many of these platforms have terms that allow them to use any non-personal data you upload to them. Your commercial secrets can be used by these platforms as they see fit. Where your data is actually processed can also be difficult to determine.

While specific terms vary by platform and change over time, it’s essential to carefully review each platform’s data usage policies before uploading any proprietary or confidential information.


From a security perspective, there are several major concerns. First is the fact that, without very strict guidance and monitoring, the AIs generally write poor code. This is for a number of reasons: to offer value for money, many of these platforms use cheaper models, but more importantly, the AIs are required to be helpful and do what you ask above all else. If you tell one to make a function, it doesn’t ask the questions an engineer would (who should be able to use this? how does it interact with this other function?) and it doesn’t offer the challenge (this is a high-risk operation that should be handled differently). AI can do those things, but only if specifically asked, and only in the hands of a competent engineer. These platforms, when used by a non-engineer, produce generally poor-quality code that, in my experience, would fail any security review.
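As an illustration of that missing challenge, here is a hedged sketch – the roles and function names are invented for this example, not taken from any platform’s output – of the gap between a function produced exactly as requested and the same function as an engineer would insist on writing it:

```typescript
// Illustrative only: the roles and functions here are invented for
// this example, not taken from any platform's generated output.

type Role = "admin" | "member";
type User = { id: string; role: Role };

const users = new Map<string, User>([
  ["u-1", { id: "u-1", role: "admin" }],
  ["u-2", { id: "u-2", role: "member" }],
]);

// Asked to "make a function that deletes a user", a model will happily
// produce exactly that, with no thought about who may call it.
function deleteUserAsGenerated(userId: string): void {
  users.delete(userId);
}

// An engineer treats deletion as a high-risk operation: it demands an
// authorisation check, and a failed check is an error, not a no-op.
function deleteUser(caller: User, userId: string): void {
  if (caller.role !== "admin") {
    throw new Error("only admins may delete users");
  }
  users.delete(userId);
}

// An ordinary member removing the admin account: the generated
// version allows it without complaint; the reviewed one throws.
deleteUserAsGenerated("u-1");
// deleteUser(users.get("u-2")!, "u-1"); // Error: only admins may delete users
```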


Without careful and ongoing monitoring, there’s no way of telling whether a change made by the AI has smashed a hole through all your security and publicly posted your secrets to the world.
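As one hedged example of what that monitoring can look like in practice, the sketch below scans a file for strings that look like credentials – the sort of check you might run over everything the AI touched before it ships. The patterns are illustrative and far from exhaustive; a real project should use a dedicated secret scanner such as gitleaks or trufflehog.

```typescript
// Minimal illustrative secret scan; the patterns are examples, not a
// complete ruleset. Use a dedicated scanner for real projects.
import { readFileSync } from "node:fs";

const SECRET_PATTERNS: [string, RegExp][] = [
  ["AWS access key", /AKIA[0-9A-Z]{16}/],
  ["private key block", /-----BEGIN [A-Z ]*PRIVATE KEY-----/],
  ["hard-coded API key", /api[_-]?key\s*[:=]\s*["'][^"']{16,}["']/i],
];

// Returns a warning for each pattern found in the given source file.
export function findSecrets(path: string): string[] {
  const source = readFileSync(path, "utf8");
  return SECRET_PATTERNS
    .filter(([, pattern]) => pattern.test(source))
    .map(([name]) => `${path}: possible ${name}`);
}

// e.g. run it over every file the AI changed in its last edit:
// for (const file of changedFiles) console.log(findSecrets(file));
```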


The Path Forward

Given those concerns, you would be forgiven for thinking that my recommendation is to avoid these platforms. However, that’s not the case. These platforms can be used safely, and they can give non-technical users access to the power of custom software development previously available only to engineers.


The key is to ensure that you understand the risks you’re taking and can separate the marketing hype from the reality of protecting your data. This will add friction to any development process and slow things down, yes, but the result is that your applications will be more secure, more robust, and better fitted to your actual needs.


Making It Work Safely

The good news is that these platforms can be used safely and effectively – you just need to approach them with your eyes open. The key is implementing appropriate oversight without killing the speed and accessibility that makes these tools valuable in the first place.

This means ensuring someone technical reviews what’s being built, classifying applications by the risk they carry, conducting proper due diligence on the platform itself, and maintaining basic governance around who’s building what and where data lives. None of this is particularly onerous, but it does require organisations to think deliberately about how they adopt these tools rather than just letting them proliferate unchecked.
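To show how lightweight that record-keeping can be, here is a sketch of what a single entry in an application register might capture. The fields and risk tiers are my suggestion only – adapt them to your own organisation rather than treating this as a standard.

```typescript
// A suggested shape for a register entry; the fields and tiers are
// illustrative, not a formal standard.

type RiskTier = "low" | "medium" | "high";

interface AppRegisterEntry {
  name: string;
  owner: string;                // who is building it
  platform: string;             // which no-code tool it runs on
  dataLocation: string;         // where data is stored and processed
  handlesPersonalData: boolean; // triggers data protection duties
  riskTier: RiskTier;           // drives how much technical review it gets
  lastTechnicalReview?: string; // ISO date; higher tiers need one
}

const register: AppRegisterEntry[] = [
  {
    name: "Customer order tracker",
    owner: "jane@example.com",
    platform: "Lovable",
    dataLocation: "EU (per vendor terms)",
    handlesPersonalData: true,
    riskTier: "medium",
    lastTechnicalReview: "2026-02-01",
  },
];
```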


I’ve put together a detailed framework that walks through each of these steps practically – what to look for when evaluating platforms, how to classify applications by risk, what technical oversight actually looks like in practice, and how to maintain appropriate records without drowning in bureaucracy. You can find it here.


The principle is straightforward: no-code platforms can genuinely empower non-technical users to build valuable applications, but that power needs to be balanced with appropriate oversight. Get that balance right and you get the best of both worlds – the speed and accessibility of no-code development, with the assurance that what you’re building is actually fit for purpose.