When you spend a lot of compute on a language model but skip encryption

People often think of encryption as something expensive that should be used only when absolutely necessary. The assumption is that encryption burns a lot of CPU, adds overhead, and risks increasing latency. Because of that, teams sometimes choose to skip extra encryption, even when it would be the smart thing to do.

Public/private key cryptography does use more CPU than sending data in the clear, or than “just” sending everything over HTTPS without any extra layer. Symmetric encryption also has a cost, even if it is usually small on modern hardware. So yes, encrypting content is not free.

But when you compare that cost to what you are already spending to run a language model, the picture changes completely. A typical request that sends data to a model involves tokenization, network transfer, and, most importantly, heavy inference compute on GPUs or other accelerators. That inference step dominates the resource usage by a huge margin.

If a request/response workflow is mostly about sending data to and from a language model, is it really worth dropping encryption to save some CPU cycles? You are already paying for massive amounts of compute to run the model. In that context, the overhead of encrypting a few kilobytes or even megabytes of text is a rounding error.
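To make the "rounding error" claim concrete, here is a back-of-the-envelope comparison. All the throughput numbers are rough assumptions for illustration (roughly ~1 GB/s for symmetric encryption on one CPU core, ~50 tokens/s for model generation on a GPU), not measurements of any particular system:

```python
# Back-of-the-envelope comparison: symmetric encryption vs. model inference.
# The throughput numbers below are rough assumptions, not measurements.

def encryption_seconds(payload_bytes: float, throughput_bytes_per_s: float = 1e9) -> float:
    """Time to symmetrically encrypt a payload at an assumed ~1 GB/s on one CPU core."""
    return payload_bytes / throughput_bytes_per_s

def inference_seconds(output_tokens: float, tokens_per_s: float = 50.0) -> float:
    """Time a GPU spends generating a response at an assumed ~50 tokens/s."""
    return output_tokens / tokens_per_s

payload = 1_000_000               # a generous 1 MB of prompt plus response
enc = encryption_seconds(payload)  # ~0.001 s of CPU time
inf = inference_seconds(500)       # ~10 s of GPU time for a 500-token answer

print(f"encryption: {enc * 1000:.2f} ms, inference: {inf:.1f} s")
print(f"inference takes ~{inf / enc:,.0f}x longer than encryption")
```

Even if the assumed numbers are off by an order of magnitude in either direction, the inference step still dwarfs the encryption step.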

This is where the usual reasoning breaks down. People worry about “extra CPU” for encryption, and as a result avoid using public/private key technologies or additional encryption layers, even for sensitive prompts and responses. But if the data is important enough to send to a powerful model, it is usually important enough to protect properly on the way there and back.

A more realistic way to think about it is: if you can afford the compute cost of the model, you can almost certainly afford the CPU cost of encrypting the content around it. The trade-off is clear: a tiny increase in CPU usage versus a potentially large improvement in privacy and security.

So when your system is already spending a crazy amount of compute on running a language model, CPU overhead is rarely a good argument for skipping encryption. Instead of asking “Is encryption too expensive here?”, the better question is: given what we already spend on model compute, is it really worth not encrypting this data?

Is DRY Always a Good Idea?

In programming, we often become obsessed with reuse and with only having one of everything. The ideal is: define it once, reuse it everywhere, and then you only have to change it in one place. But this can easily go too far, to the point where it stops being good software design. Everything ends up hanging together with everything else, and you get a spaghetti of dependencies. In that kind of system, the idea that “you only need to change it in one place” loses its value, because that one place is connected to so many things that any change becomes risky and complex. So DRY is not a good idea in all situations; it always needs judgment and context.

If you look at other areas, like communication, documentation, and getting information across, “repeat yourself” is often a good idea. Repetition helps convey the message. The recipient doesn’t catch everything the first time. Restating key points underlines what is important and makes it more likely that people will remember it. In writing and teaching, never repeating yourself often makes things harder to understand, not easier.

Coming back to program code, things have changed here as well. With language models and other tools, we end up reading more code than we write. That makes the communication aspect of code more important. Code is not just instructions for machines; it is also communication for humans and for agents that need to understand the code. In that light, a bit of repetition or duplication can be useful if it makes the code easier to read and reason about in isolation. Being explicit in several places can be better than hiding everything behind one shared abstraction that connects unrelated parts of the system.
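As a small, hypothetical sketch of this trade-off: two validation rules that happen to look identical today, but belong to unrelated parts of the system. Keeping them as separate functions means each one can be read and changed in isolation, instead of both depending on one shared helper:

```python
# Hypothetical example: two validations that look similar today but serve
# unrelated parts of the system. The duplication is deliberate.

def validate_username(name: str) -> bool:
    # Usernames: 3-20 characters, letters and digits only.
    return 3 <= len(name) <= 20 and name.isalnum()

def validate_project_code(code: str) -> bool:
    # Project codes: 3-20 characters, letters and digits only -- for now.
    # Duplicated on purpose: if project codes later allow dashes,
    # usernames are unaffected, and each rule stays readable on its own.
    return 3 <= len(code) <= 20 and code.isalnum()
```

Merging these into one shared `validate_identifier` would save four lines today, at the cost of coupling two rules that have no reason to evolve together.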

This is why DRY quickly ends up being a principle with limited value on its own. It is one trade-off among many, not a goal in itself. Other aspects are often more important: clarity, maintainability, loose coupling, and how easy it is to understand code when you read it later. Sometimes the right choice is to repeat yourself a little in the code, so that both people and language models can understand what is going on without having to untangle a web of DRY abstractions.

Software is expensive. But the real question is: expensive where?

When people think about the cost of software, they usually picture developers writing code. That’s where the action is: new features, pull requests, tests. But if you look at the total cost of a software system or app over its lifetime, the picture changes. Coding is just one part of a much larger whole – and often not the biggest one.

The total cost is split into different areas. There is the direct build cost: developing the software, writing code, and testing. Then there is deployment or shipping: getting the system onto servers or delivering it to end users as apps. After that come the operational costs: hardware, cloud infrastructure, monitoring, backups, and everything needed to keep it running. Over time, there are also continuous changes: new features, bug fixes, regulatory updates, and adaptations as requirements evolve. Around all of this sits the cost of the organization itself: people, coordination, support functions, management, and the processes to keep everything moving. And the list goes on.

If you look at the total cost over time, the pure programming part is actually quite small. The rest – deployment, operations, change, and organizational overhead – quietly dominates. Language models are very good at automating all or parts of the programming work, and many teams already use them for that: generating code, writing tests, refactoring, or explaining tricky parts of the codebase. That is useful, but it only touches a small piece of the total cost.

Deployment and shipping is one area where there is a lot of manual work that could be reduced. Setting up pipelines, handling configuration, managing environments, preparing releases, and communicating changes all take time. A language model can help generate and update deployment scripts, explain existing setups, and create clearer release notes and runbooks based on commits and tickets.

Operations and infrastructure is another big cost center. Running servers and cloud resources, handling incidents, looking at logs and metrics, doing routine maintenance – all of this adds up. Here, a language model can help by turning scattered technical data into understandable summaries, suggesting possible root causes, and drafting or updating operational documentation.

Change over time is often where costs really grow. Every system accumulates history and complexity. People leave, documentation goes out of date, and nobody fully remembers why things were done in a certain way. A language model can help developers understand existing code, answer “where is this implemented?” questions, generate overviews from code and configuration, and support safer refactoring and impact analysis. Making it easier and safer to change a system can be more valuable than simply speeding up initial coding.

Then there is everything around the actual building and running of the system: the organization. Product management, security, legal, finance, HR, customer support, and internal support all contribute to the total cost. A lot of this work is about communication and information: writing and reading documents, reporting status, answering questions, and coordinating between roles. Language models can help by summarizing long threads and reports, drafting documentation and FAQs, assisting support staff with suggested replies, and helping people find relevant information faster.

If you only think of a language model as a “coding assistant”, you automatically limit its impact to a small slice of the cost. A better approach is to first ask: where do we actually spend time and money across the whole lifecycle of our systems? The biggest opportunities are often in repetitive processes, communication-heavy workflows, and areas where knowledge is locked in the heads of a few people.

Programming is just one part of the total cost of a software system. Over its lifetime, deployment, operations, change, and organizational work take up a much larger share. Language models are excellent for helping with code, but they might be even more valuable when used across these other areas that represent a bigger part of the real cost.

Software’s Unique Power of Easy Replication

When organisations talk about software systems, cloud platforms and vendors, the conversation often sounds like we’re dealing with unique, heavy, physical things that are hard to build more of and even harder to move. People say “we’re a [vendor] shop” or “we’re locked in” as if the system were a nuclear power plant or a railway line: a one-off construction that can’t realistically be copied.

But software is not like that. Software has a unique property that most other products do not have: it is easy to create copies. Once a system exists, making another instance is basically a matter of copying bits. You can run the same software in more than one place, and you can move it from one environment to another.

Even the hardware that runs software, while physical, is very different from big physical constructions like nuclear power plants, oil platforms or train lines. Modern hardware and infrastructure are built on well-established standards for servers, storage, networks and so on. This makes it far easier to replicate than large, bespoke physical constructions.

From a technical perspective, this means there is no fundamental reason you must be dependent on someone else’s system. In principle, you could run your own copy, or an equivalent system, if you wanted to. Technically, the system is not unique and immovable.

So where does the dependency come from? In practice it comes from other factors: licenses and ownership, people and competence, and the surrounding organisation. Licenses and contracts can restrict your right to copy, modify or run the software yourself. Intellectual property rules can mean you are only renting access, not owning what you use.

On top of that, there is the human side. Operating complex systems requires personnel, skills and experience. Many organisations do not have the competence to run certain systems themselves, or they do not prioritise building that competence. This makes them more dependent on vendors, not because the software cannot be replicated, but because the people and organisation around it are not prepared to do so.

Organisational structures and processes also play a big role. Workflows are built around specific tools. Risk aversion, “we’ve always used this provider”, and lack of incentives to change all contribute to staying with a given vendor. None of this is about what software can or cannot do technically; it is about how humans and organisations choose to structure things.

Because software and much of the hardware are relatively easy to replicate, it should be entirely possible to achieve real digital sovereignty and ownership. That means owning your own data, having control over your own digital services, and not being completely dependent on a single external provider for critical functions.

Achieving this is mainly about addressing the legal, organisational and competence barriers. Technically, the path is open: systems can be copied, moved and reimplemented. If we stop thinking about software as a unique physical object and start seeing it as something that can be replicated, it becomes much easier to imagine and design for digital sovereignty and true ownership of the digital services we depend on.

Which task to automate

When we think about automation, we often start with the wrong thing: the task that is closest to us. We look at what we do every day and ask: “How can I automate this?” We don’t always ask whether the task actually matters for the bigger goal of the process or the organisation. That means we risk spending time and effort automating work that does not really need to be done in the first place.

Imagine this chain of work. We decide to automate the programming of a program. That program is supposed to automate the sending of an email. That email is meant to remind someone that there is something they need to do. The person who receives the reminder then has a task: they go through a list of possible problems that have occurred in the last month and pick out which ones it is possible to do something about. This list is collected and reported from across the entire organisation.

If we stop here, it looks like the obvious thing to automate is the programming of the program that sends the email. It is close to us, it is visible, and it feels concrete. But that is only one step in a longer chain. Before we decide to automate, we should trace the chain backwards and ask why each step exists.

Why do we need the email reminder? Because otherwise the person might forget to review the list. Why do we need the monthly review of the list? To decide which problems we should act on. Why do we collect a list of possible problems from across the organisation? To have an overview of issues and opportunities for improvement. Why do we need that overview? To improve how the organisation works and support its overall goals.

When we walk the chain backwards like this, we might discover that some links are weak. Maybe the list is reviewed, but almost nothing is followed up. Maybe the same problems appear every month without any action. Maybe the review is something “we have always done”, but it does not actually lead to decisions that change anything important. In that case, automating the programming of the reminder system does not create much value. We are just making it cheaper and faster to do something that might not need to be done at all.

A more useful approach is to start from the goal instead of the task. What are we actually trying to achieve? Better service for customers, fewer incidents, lower cost, less risk? From there, we can ask which decisions support that goal, which information is needed for those decisions, and which tasks are needed to produce that information. Only then does it make sense to ask what should be automated.

Before automating any task, it can help to ask a few simple questions. What concrete goal does this task support? What would happen if we stopped doing it for a month? Is there a simpler way to reach the same outcome? If the task does not clearly connect to a real goal, or nothing would break if we paused it, maybe it should be changed or removed instead of automated.

Sometimes, when you follow the chain all the way back to the organisation’s purpose, you find that a whole series of tasks — collecting lists, reporting them, sending reminders, reviewing them — is not really needed. Then the best “automation” is to avoid building anything at all.

Quality of Knowledge and Information

There are different levels of quality in knowledge and information. Not everything we “know” is equally reliable. Some things are well-checked and stable, other things are based on quick impressions, misunderstandings, or old data. On top of that, different people can understand the same information in different ways.

One simple way to think about quality is to look at the degree of confirmed correctness. In practice, that means asking: how sure are we that this is actually true? Is it something someone just said once, or something that has been checked and confirmed? Everyone has experienced this in communication: what one person meant, what another person said, and what a third person understood are not always the same. Messages can easily be misunderstood or distorted when they are communicated and interpreted.

Because of this, it is important to have an explicit relationship to the quality of the information and knowledge we work with. Instead of treating everything as simply “true” or “false”, we can ask how certain we are, what this certainty is based on, and how likely it is that something has been misunderstood or misinterpreted.

This can be supported with practical tools and habits. For example, using gradings or levels of certainty (“uncertain”, “partly confirmed”, “confirmed”), doing simple checks and controls, and asking clarifying questions about source and context. We can also look at how new or old the information is, and mark recency or freshness (“last updated”, “based on data from…”), because information can lose relevance over time.
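One possible way to make these habits concrete is to attach the quality metadata directly to each piece of information. The sketch below is just an illustration, using the certainty levels named above plus a freshness date:

```python
# Illustration: tag each claim with a certainty level and a freshness date,
# so its quality is explicit rather than implied. Names are examples only.
from dataclasses import dataclass
from datetime import date

LEVELS = ("uncertain", "partly confirmed", "confirmed")

@dataclass
class Claim:
    text: str
    certainty: str        # one of LEVELS
    last_updated: date    # when was this last checked?
    sources: int = 1      # how many independent confirmations?

    def needs_review(self, max_age_days: int = 365) -> bool:
        """Flag claims that are uncertain or stale."""
        stale = (date.today() - self.last_updated).days > max_age_days
        return self.certainty == "uncertain" or stale

claim = Claim("Service X handles 1k req/s", "partly confirmed", date(2020, 1, 1))
print(claim.needs_review())  # data last checked in 2020 -> True
```

Even a lightweight scheme like this shifts the question from “Is this correct?” to “How sure are we, and how fresh is it?”.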

It can also help to ask for confirmations when something is important: who has checked this, and how? Has more than one person or source confirmed it? At the same time, we can consider how important the information is: some things can be approximate without causing problems, while other things really need to be correct because they are critical for decisions, safety, or legal reasons.

In everyday work, a small shift can make a big difference: instead of only asking “Is this correct?”, also ask “How sure are we that this is correct, and how do we know?”. By being more conscious of the quality of knowledge and information, we reduce misunderstandings, improve decisions, and communicate more clearly.

The Trial and Error Method

Most of us solve problems in a very down-to-earth way: we try something, see what happens, adjust, and try again. It’s simple, practical, and feels natural. You try, you fail, you get some feedback, you learn a bit, and you try again.

This “trial and error” method works surprisingly well as long as you get good feedback. Most people can work their way towards some kind of solution if they are told clearly when something has gone wrong, and if that feedback comes quickly enough to connect it to what they just did.

The method works best when feedback is fast and catches most of the errors. You see quickly that something has failed, and you understand that it has failed. Even if you don’t know the deep cause, you at least know that this attempt did not work. That makes it possible to adjust and improve over several rounds.

The problems start when feedback is bad, delayed, or incomplete. If feedback is slow, you may not notice that something has failed until much later, and it becomes hard to connect that failure to a specific action. If feedback only picks up some of the errors, or only parts of them, solutions can look correct even though they are actually wrong. You can end up with something that seems to work, because nothing obviously breaks, even though important things are failing quietly in the background.

In these situations, the trial and error method starts to fail. You keep trying and adjusting, but the learning is weak, because the signals you get back are unclear. The method depends completely on feedback, so when feedback is poor, the method becomes unreliable.

This is where good control systems are important. Control systems can help you in several ways: they can tell you that something has failed at all, they can give more detail about what exactly is wrong, and they can provide faster feedback that something is about to fail, not just that it already has.

With simple control mechanisms in place, your trial and error loop gets much stronger. You still work in the same practical way—try, see, adjust—but the feedback is clearer, faster, and more complete. That reduces the risk that you build confidence in a solution that only looks right, and increases the chance that you actually end up with something that is correct.
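A minimal sketch of what such a control mechanism looks like in code. The `attempt` function here is a stand-in for whatever you are tuning by trial and error; the point is the independent `control_check` that turns a silently wrong result into immediate, explicit feedback:

```python
# Sketch of a trial-and-error loop with an explicit control check.
# Without the check, an attempt that "doesn't crash" would be accepted
# even when its result is silently wrong.

def attempt(guess: int) -> int:
    # Stand-in for the operation being tuned: it silently misbehaves
    # (returns 0 instead of raising) for guesses that are too large.
    return guess * 2 if guess <= 10 else 0

def control_check(guess: int, result: int) -> bool:
    # The control system: an independent invariant the result must satisfy.
    return result == guess * 2 and result > 0

def trial_and_error(candidates):
    for guess in candidates:
        result = attempt(guess)
        if control_check(guess, result):  # fast, explicit feedback
            return guess, result
        # Failed attempt: we know it failed *now*, and can adjust.
    raise RuntimeError("no candidate passed the control check")

print(trial_and_error([50, 20, 7]))  # the first two fail the check; 7 passes
```

Without `control_check`, the first attempt would look like a success simply because nothing broke, which is exactly the quiet-failure mode described above.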

Complexity and Security Vulnerabilities

Most security incidents do not start with advanced attacks or unknown vulnerabilities. They start with a mistake.

Someone writes code with a bug.
Someone sets a configuration incorrectly.
Someone handles data in an unsafe way.

In practice, the most common cause of a cybersecurity weakness is that the people who develop, manage, and operate an IT system make an error. There are many types of errors: mistakes in code, mistakes in configuration, mistakes in data. The common factor is that the more complex the system is, the greater the chance that someone will do something wrong.

Cybersecurity is often treated as something applied from the outside of systems, after they are built. Security measures are added around the IT system to protect it from assumed weaknesses and threats. Organizations introduce security systems and routines outside the core solution: firewalls, access control layers, monitoring tools, approval processes, and so on. All of this is meant to make the system safer, but it also adds more components, more settings, and more things that can go wrong.

Security requirements are also often defined externally. They come from regulations, standards, corporate policies, or generic best practices. These requirements are not always adapted to the actual context and conditions of the system. The result is that teams build systems to satisfy external demands that may not fit how the system is really used. This can increase complexity without necessarily improving real security.

The idea of built-in security is good. In theory, we should think about security all the way while building an IT system, not just bolt it on at the end. But this can easily become a new driver of complexity. When built-in security is implemented as many extra frameworks, tools, and rules, the security requirements and measures themselves become what increases complexity. More security controls can mean more configuration, more policies, more integration points. That again increases the chance that someone makes a mistake.

This creates a kind of loop: new threats or incidents lead to new security measures, which make systems more complex, which makes it easier for people to make mistakes, which leads to new vulnerabilities and new measures. If we ignore the role of complexity, we risk ending up with systems that have more and more security features on paper, but are harder and harder to understand and operate safely in practice.

To actually improve security, we have to see complexity as a risk in itself. Security measures and requirements should be evaluated not only on how they protect against threats, but also on how much complexity they introduce and how likely they are to cause new errors. Otherwise, we risk building IT systems where the very security controls that were supposed to protect us become part of the problem.

Can Comments Help Language Models Use Code Correctly?

For a long time I’ve been deleting a lot of comments, both in my own code and in generated code. Especially the ones that feel obvious when you read the code. If a function name and a few lines of logic make the intent clear, why keep a comment that just repeats it?

Now that I use language models to read, change, debug, and reuse code, I’ve started to wonder if this habit might actually be a disadvantage. Could deleting comments make it harder for a model to understand and use the code correctly when it generates changes, looks for bugs, or plugs the code into other components?

Language models work by statistics. They generate what is most likely given the text they see. That text includes not only code, but also comments. Even a comment that looks “obvious” to a human and mostly repeats what the code says might help the model by reinforcing the intended meaning. Redundancy, which we often try to remove for human readers, can actually be useful as a signal for a model.

Think of it as saying the same thing in two different ways. The code expresses behavior. The comment can restate that behavior in natural language, and sometimes add assumptions and constraints that are not explicit in the code. When both line up, you increase the probability that the model understands how the code is supposed to be used and what must not change when it modifies it.

This can matter in several situations. When you ask a model to generate changes, comments that state assumptions and intent can help it preserve the right behavior instead of “simplifying” away something important. When you ask it to find errors, differences between what the comments say and what the code does can point to potential bugs. When you ask it to reuse a function in another component, a short comment describing expected inputs, constraints, and side effects can reduce the chance of misuse.

If this is true, it might actually be worth spending time to add comments and documentation, either manually or with help from a model. Manually written comments are more likely to capture the real intent and domain rules. Generated comments can be a quick way to bootstrap documentation, as long as you review them and remove or fix anything that is misleading or simply restates the code without adding meaning.

I don’t think the answer is to comment every line; low-value comments are still noise for both humans and models. But comments that clarify intent, constraints, and usage, even when they feel a bit redundant, might be more valuable than they used to be. In a world where language models are active users of our code, those “obvious” comments could increase the chances that the code is understood and used correctly.

Let Language Models Do What Humans Can’t Do Well

People often start with the wrong question:

“Can we replace this person with a language model?”

That is not a very useful way to think about it. A better question is:

“Which parts of this work are humans bad at – and can we let a language model handle those parts?”

Humans are good at some things, and bad at others. Language models are the same: they are good at some things and bad at others, but not at the same things as humans. The point is not to replace people, but to let models do the things humans are worst at.

Humans are good at judgement, context, and dealing with messy situations. We can understand nuance, read between the lines, and make choices when there is no clear right answer. We are also good at empathy, trust, and relationships. We know the people we work with, we feel their reactions, and we adjust our tone and message. And our creativity is tied to lived experience: we connect ideas from our own lives, culture, and values.

Humans are, on the other hand, consistently bad at repetitive and boring work. We lose focus when we have to do the same thing again and again. We struggle with large amounts of information: reading 50 pages of documentation, checking 200 rows in a sheet, or comparing 20 different options. We are not good at perfect consistency over time. And we are often bad at slow, tedious structuring: turning scattered notes into clear text, documentation, or clean summaries.

Language models have different strengths. They are very good at handling a lot of text quickly: reading, summarizing, comparing, and restructuring information. They are good at generating first drafts: emails, outlines, descriptions, and alternative formulations. They can apply clear instructions and formatting rules much more consistently than humans.

They also have weaknesses. A model has no real-world experience. It does not know your team, your history, or your unwritten rules. It can sound confident but still be wrong, so it must be checked. And it struggles when the goal is vague. “Summarize this in five bullet points for a manager” is much easier than “Do something useful with this”.

Because humans and models are good and bad at different things, you should not try to replace humans directly with a model. A job is not one thing; it is a collection of tasks. Some tasks need human judgement and relationships. Some are repetitive and text-heavy. Many can be split: the model does the first version, the human finishes it.

A practical way to think about it is like this: if a task depends on empathy, trust, or difficult decisions, keep it with a human and maybe let the model assist in the background. If a task is boring, repetitive, or full of text, let the model do as much as possible and let the human review. If a task needs a draft, let the model create it and the human refine it.

For example, when writing something, the human can decide what needs to be written, who it is for, and why it matters. The model can turn notes into an outline and a first draft. Then the human edits, adds real examples, and takes responsibility for the final result. The same pattern works for emails, reports, meeting notes, documentation, and many other things.

You can start simply. List the tasks you do in a typical day or week. Mark the ones you find boring, repetitive, or easy to postpone. Those are usually the ones humans are worst at and where a model can help. Ask the model to do the first pass on these tasks: summarizing, drafting, restructuring, or formatting. Then you review and correct. Over time, you will see which parts of your work are better done by a model and which should clearly stay with you.

The mindset shift is important: do not focus on replacing people. Focus on letting the model handle the parts of work humans are bad at, so humans can spend more time on what they are good at. Let the model do the repetitive, text-heavy, attention-heavy tasks. Let humans use their judgement, experience, and empathy.

The goal is not to copy humans with a language model. The goal is to combine different strengths.