
The Trial and Error Method

Most of us solve problems in a very down-to-earth way: we try something, see what happens, adjust, and try again. It’s simple, practical, and feels natural: you try, you fail, you get feedback, you learn a bit, and you go again.

This “trial and error” method works surprisingly well as long as you get good feedback. Most people can work their way towards some kind of solution if they are told clearly when something has gone wrong, and if that feedback comes quickly enough to connect it to what they just did.

The method works best when feedback is fast and catches most of the errors. You quickly see that something has failed, and you recognize it as a failure. Even if you don’t know the deep cause, you at least know that this attempt did not work. That makes it possible to adjust and improve over several rounds.

The problems start when feedback is bad, delayed, or incomplete. If feedback is slow, you may not notice that something has failed until much later, and it becomes hard to connect that failure to a specific action. If feedback only picks up some of the errors, or only parts of them, solutions can look correct even though they are actually wrong. You can end up with something that seems to work, because nothing obviously breaks, even though important things are failing quietly in the background.

In these situations, the trial and error method starts to fail. You keep trying and adjusting, but the learning is weak, because the signals you get back are unclear. The method depends completely on feedback, so when feedback is poor, the method becomes unreliable.

This is where good control systems become important. They can help you in several ways: they can tell you that something has failed in the first place, give more detail about what exactly is wrong, and provide earlier warning that something is about to fail, not just that it already has.

With simple control mechanisms in place, your trial and error loop gets much stronger. You still work in the same practical way—try, see, adjust—but the feedback is clearer, faster, and more complete. That reduces the risk that you build confidence in a solution that only looks right, and increases the chance that you actually end up with something that is correct.
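The difference a control mechanism makes can be sketched in a few lines. This is a minimal, hypothetical example (the function and field names are invented for illustration): the same processing step with and without a control check. Without the check, a bad input fails silently and the error surfaces far from the action that caused it; with it, feedback is immediate and specific.

```python
def process_order_silent(order):
    # No control: a missing price quietly becomes 0, and the error
    # surfaces much later, far from the action that caused it.
    return order.get("quantity", 0) * order.get("price", 0)

def process_order_checked(order):
    # Control: fail fast, and say exactly what is wrong.
    for field in ("quantity", "price"):
        if field not in order:
            raise ValueError(f"order is missing required field '{field}'")
        if order[field] < 0:
            raise ValueError(f"order field '{field}' must be non-negative")
    return order["quantity"] * order["price"]

bad_order = {"quantity": 3}  # price is missing
print(process_order_silent(bad_order))   # 0 -- looks fine, is wrong
try:
    process_order_checked(bad_order)
except ValueError as e:
    print(e)                             # immediate, specific feedback
```

The checked version is slightly more code, but every failed attempt now produces a clear, fast signal, which is exactly what the trial and error loop needs.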

Complexity and Security Vulnerabilities

Most security incidents do not start with advanced attacks or unknown vulnerabilities. They start with a mistake.

Someone writes code with a bug.
Someone sets a configuration incorrectly.
Someone handles data in an unsafe way.

In practice, the most common cause of a cybersecurity weakness is that the people who develop, manage, and operate an IT system make an error. There are many types of errors: mistakes in code, mistakes in configuration, mistakes in data. The common factor is that the more complex the system is, the greater the chance that someone will do something wrong.

Cybersecurity is often treated as something applied from the outside of systems, after they are built. Security measures are added around the IT system to protect it from assumed weaknesses and threats. Organizations introduce security systems and routines outside the core solution: firewalls, access control layers, monitoring tools, approval processes, and so on. All of this is meant to make the system safer, but it also adds more components, more settings, and more things that can go wrong.

Security requirements are also often defined externally. They come from regulations, standards, corporate policies, or generic best practices. These requirements are not always adapted to the actual context and conditions of the system. The result is that teams build systems to satisfy external demands that may not fit how the system is really used. This can increase complexity without necessarily improving real security.

The idea of built-in security is good. In theory, we should think about security all the way while building an IT system, not just bolt it on at the end. But this can easily become a new driver of complexity. When built-in security is implemented as many extra frameworks, tools, and rules, the security requirements and measures themselves become what increases complexity. More security controls can mean more configuration, more policies, more integration points. That again increases the chance that someone makes a mistake.

This creates a kind of loop: new threats or incidents lead to new security measures, which make systems more complex, which makes it easier for people to make mistakes, which leads to new vulnerabilities and new measures. If we ignore the role of complexity, we risk ending up with systems that have more and more security features on paper, but are harder and harder to understand and operate safely in practice.

To actually improve security, we have to see complexity as a risk in itself. Security measures and requirements should be evaluated not only on how they protect against threats, but also on how much complexity they introduce and how likely they are to cause new errors. Otherwise, we risk building IT systems where the very security controls that were supposed to protect us become part of the problem.

Can Comments Help Language Models Use Code Correctly?

For a long time I’ve been deleting a lot of comments, both in my own code and in generated code. Especially the ones that feel obvious when you read the code. If a function name and a few lines of logic make the intent clear, why keep a comment that just repeats it?

Now that I use language models to read, change, debug, and reuse code, I’ve started to wonder if this habit might actually be a disadvantage. Could deleting comments make it harder for a model to understand and use the code correctly when it generates changes, looks for bugs, or plugs the code into other components?

Language models work by statistics. They generate what is most likely given the text they see. That text includes not only code, but also comments. Even a comment that looks “obvious” to a human and mostly repeats what the code says might help the model by reinforcing the intended meaning. Redundancy, which we often try to remove for human readers, can actually be useful as a signal for a model.

Think of it as saying the same thing in two different ways. The code expresses behavior. The comment can restate that behavior in natural language, and sometimes add assumptions and constraints that are not explicit in the code. When both line up, you increase the probability that the model understands how the code is supposed to be used and what must not change when it modifies it.

This can matter in several situations. When you ask a model to generate changes, comments that state assumptions and intent can help it preserve the right behavior instead of “simplifying” away something important. When you ask it to find errors, differences between what the comments say and what the code does can point to potential bugs. When you ask it to reuse a function in another component, a short comment describing expected inputs, constraints, and side effects can reduce the chance of misuse.
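As a concrete illustration, here is a small, entirely hypothetical function where the comments restate the behavior and add constraints the code alone does not express. This is the kind of "redundant" documentation the argument above suggests might help a model modify or reuse the code safely.

```python
def normalize_scores(scores):
    """Scale scores to the 0..1 range.

    Assumptions a model (or a colleague) should not "simplify" away:
    - scores is non-empty; an empty list is a caller bug.
    - All scores are >= 0; negative inputs indicate upstream corruption.
    - The relative order of scores must be preserved.
    """
    peak = max(scores)  # assumes non-empty input, per the contract above
    if peak == 0:
        return [0.0 for _ in scores]  # all-zero input stays all zeros
    return [s / peak for s in scores]

print(normalize_scores([2, 4, 8]))  # [0.25, 0.5, 1.0]
```

A human reader could infer most of this from the code, but spelling it out gives a model (and a reviewer) an explicit contract to check changes against.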

If this is true, it might actually be worth spending time to add comments and documentation, either manually or with help from a model. Manually written comments are more likely to capture the real intent and domain rules. Generated comments can be a quick way to bootstrap documentation, as long as you review them and remove or fix anything that is misleading or simply restates the code without adding meaning.

I don’t think the answer is to comment every line; low-value comments are still noise for both humans and models. But comments that clarify intent, constraints, and usage, even when they feel a bit redundant, might be more valuable than they used to be. In a world where language models are active users of our code, those “obvious” comments could increase the chances that the code is understood and used correctly.

Let Language Models Do What Humans Can’t Do Well

People often start with the wrong question:

“Can we replace this person with a language model?”

That is not a very useful way to think about it. A better question is:

“Which parts of this work are humans bad at – and can we let a language model handle those parts?”

Humans are good at some things, and bad at others. Language models are the same: they are good at some things and bad at others, but not the same things as humans. The point is not to replace people, but to let models do the things humans are worst at.

Humans are good at judgement, context, and dealing with messy situations. We can understand nuance, read between the lines, and make choices when there is no clear right answer. We are also good at empathy, trust, and relationships. We know the people we work with, we feel their reactions, and we adjust our tone and message. And our creativity is tied to lived experience: we connect ideas from our own lives, culture, and values.

Humans are, on the other hand, consistently bad at repetitive and boring work. We lose focus when we have to do the same thing again and again. We struggle with large amounts of information: reading 50 pages of documentation, checking 200 rows in a sheet, or comparing 20 different options. We are not good at perfect consistency over time. And we are often bad at slow, tedious structuring: turning scattered notes into clear text, documentation, or clean summaries.

Language models have different strengths. They are very good at handling a lot of text quickly: reading, summarizing, comparing, and restructuring information. They are good at generating first drafts: emails, outlines, descriptions, and alternative formulations. They can apply clear instructions and formatting rules much more consistently than humans.

They also have weaknesses. A model has no real-world experience. It does not know your team, your history, or your unwritten rules. It can sound confident but still be wrong, so it must be checked. And it struggles when the goal is vague. “Summarize this in five bullet points for a manager” is much easier than “Do something useful with this”.

Because humans and models are good and bad at different things, you should not try to replace humans directly with a model. A job is not one thing; it is a collection of tasks. Some tasks need human judgement and relationships. Some are repetitive and text-heavy. Many can be split: the model does the first version, the human finishes it.

A practical way to think about it is like this: if a task depends on empathy, trust, or difficult decisions, keep it with a human and maybe let the model assist in the background. If a task is boring, repetitive, or full of text, let the model do as much as possible and let the human review. If a task needs a draft, let the model create it and the human refine it.

For example, when writing something, the human can decide what needs to be written, who it is for, and why it matters. The model can turn notes into an outline and a first draft. Then the human edits, adds real examples, and takes responsibility for the final result. The same pattern works for emails, reports, meeting notes, documentation, and many other things.

You can start simply. List the tasks you do in a typical day or week. Mark the ones you find boring, repetitive, or easy to postpone. Those are usually the ones humans are worst at and where a model can help. Ask the model to do the first pass on these tasks: summarizing, drafting, restructuring, or formatting. Then you review and correct. Over time, you will see which parts of your work are better done by a model and which should clearly stay with you.

The mindset shift is important: do not focus on replacing people. Focus on letting the model handle the parts of work humans are bad at, so humans can spend more time on what they are good at. Let the model do the repetitive, text-heavy, attention-heavy tasks. Let humans use their judgement, experience, and empathy.

The goal is not to copy humans with a language model. The goal is to combine different strengths.

When Someone Else’s Solution Becomes Your Problem

Someone has a problem. A real one. They’re under pressure, they need to deliver, and they don’t have the time or space to explore all the options. So they pick a solution that works for them right now. It’s a bit short-sighted, but it does the job from their point of view.

The catch is that this solution rarely stays with just them. Very often, it quickly becomes a problem for someone else. And that someone else doesn’t only get the original problem in a new form; they also inherit new problems created by the chosen solution. Instead of removing the original issue, it gets buried under extra complexity.

This happens when external and surrounding conditions are ignored. The original decision doesn’t consider who else will be affected, how the solution fits with other tools or processes, or how long it will be around. The focus is on “does this work for us?” rather than “what happens to others when we do this?”

That’s how problems and solutions spread. One team’s convenient workaround becomes another team’s constraint. A custom format, a one-off integration, a parallel process — each local fix reshapes the environment for everyone else. Over time, people start building solutions to other people’s solutions, not to the original problem.

The pattern repeats: someone solves their own problem in isolation, others adapt around it, and each adaptation adds new side effects. You end up with layers of workarounds: decisions made years ago still dictating how things must be done today, even if the original reasons are gone or unclear.

Eventually, what’s left are wicked problems. Not wicked in a theoretical sense, but in the everyday, frustrating way: there are no clean solutions, only trade-offs. Every option has serious downsides. Any attempt to “fix” things means choosing between different bad workarounds. You can’t touch one part of the system without creating pain somewhere else.

At that stage, you’re not just dealing with the original problem anymore. You’re dealing with the accumulated consequences of many short-sighted solutions that never took the wider context into account. And your new decisions risk becoming the next layer in the same pattern.

Breaking this cycle starts with a small shift: when you solve your own problem, think about where your solution ends up. Who will have to live with it? What other systems or people will it affect? Could this become someone else’s problem on top of their existing ones?

You can’t avoid all side effects, and you can’t design the perfect answer. But you can be more deliberate. Treat your solution not just as a fix for you, but as something that enters a shared environment. Otherwise, over time, all that’s left are wicked problems and bad workarounds to choose between.

When Errors Become the Norm, Control Breaks

Much of quality control is based on patterns. We assume we know what “normal” looks like, and we treat deviations from that pattern as possible errors. This applies in many areas: routines in a workplace, data entry formats, how a report usually looks, or typical outputs from a language model. As long as most of what we see is correct, this works reasonably well. Deviations are useful signals.

The problem starts when the error rate gets too high. When mistakes are no longer rare, they stop standing out as deviations. The pattern itself becomes polluted by errors. If you keep relying on “difference from the pattern” as your signal, the whole control system begins to fail. At some point, seeing an anomaly no longer reliably means “something is wrong.”

Once errors are common, something counterintuitive happens: correct behavior starts to look like the deviation. A correct entry in a dataset where most values are wrong looks suspicious. A person who follows the proper procedure in a team that has normalized shortcuts looks like they are breaking the routine. A language model output that is actually correct can appear “off” compared to the wrong but consistent answers everyone has gotten used to. What is right becomes the exception.
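The inversion can be shown with a toy sketch. Assume (hypothetically) a quality check that flags whatever deviates from the majority value in a dataset. While most entries are correct, it catches the error; once most entries are wrong, the one correct entry is exactly what gets flagged.

```python
from collections import Counter

def flag_deviations(values):
    """Flag entries that differ from the most common value."""
    majority, _ = Counter(values).most_common(1)[0]
    return [v for v in values if v != majority]

# Hypothetical ground truth: the correct code is "B2".
mostly_correct = ["B2", "B2", "B2", "X9", "B2"]
print(flag_deviations(mostly_correct))  # ['X9'] -- the error is caught

mostly_wrong = ["X9", "X9", "X9", "B2", "X9"]
print(flag_deviations(mostly_wrong))    # ['B2'] -- the correct value is flagged
```

The check itself never changed; only the error rate did. That is enough to turn a useful signal into a misleading one.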

At the same time, recurring mistakes can start to look like they follow the pattern. If the same error happens often enough, it stops being treated as an error and becomes “how we do things.” The wrong value, the incorrect process, or the misleading answer becomes familiar. Instead of flagging it, people defend it: “That’s how the system works,” “We’ve always done it this way.” Errors are then perceived as normal.

When this happens, pattern-based quality control doesn’t just weaken; it can invert. The logic quietly shifts from “pattern ≈ correct, deviation ≈ error” to “pattern (including errors) = normal, deviation (often correct) = suspicious.” The mechanism that was supposed to catch mistakes now protects them and pushes back on corrections. The system starts treating the right thing as the problem and the wrong thing as the standard.

To avoid this, you need something more solid than “what we usually see.” That can mean checking samples against clear criteria instead of just visual similarity, comparing current routines to documented requirements, or using trusted reference data or test cases. It also means paying attention when people notice that the same “small” error appears everywhere, or when someone doing the right thing keeps being told they are doing it “wrong” simply because it doesn’t fit the current pattern.

The core point is simple: when error rates get too high, you can no longer trust patterns alone. If you keep using deviations from a flawed pattern as your main signal, you risk flipping reality: errors look normal, and correctness looks like the mistake.

Software for the 20%

Most software is built for the 80%—but the real leverage is in the 20%.

Most software and systems are designed for the majority of users: the simple, common use cases that most people have. Interfaces are optimized for “typical” users, workflows are linear and straightforward, and features are built to cover what most teams need most of the time.

That makes sense commercially. Simple sells. It’s easier to explain, easier to demo, and easier to support. You can get pretty far by solving the basic cases well for most users.

But what about the other 20%?

The remaining 20% are the more advanced and specialized cases. These are power users, domain experts, and teams with complex, non-standard workflows. They don’t fit neatly into predefined steps. They hit the edges of the system quickly and are forced into workarounds, spreadsheets, exports, scripts, or even building their own tools.

This is often where the real work happens—and where the real value is.

These advanced users are usually doing the most critical and complex tasks in their organizations. They create outsized value, and they feel the limitations of generic tools much more sharply. When software only covers the simple cases, these users end up doing the most important parts of their job outside the system.

Ignoring this 20% has a cost. They lose time to manual fixes and copy-paste. They make mistakes because the tool doesn’t quite match reality. Over time they may see the product as something that is fine for basic things, but not for serious, high-value work. That perception is hard to change once it sticks.

Focusing more deliberately on this 20% does not mean making the product complicated for everyone. It means keeping things simple for most users, while offering depth and flexibility for those who need it. For example, you can keep default workflows straightforward, but allow advanced configuration, custom fields, automation, integrations, or scripting for specialized needs. The key is to make complexity optional and layered, not forced on everyone.
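One way "optional and layered" complexity can look in code is a single entry point with simple defaults plus opt-in depth. This is a hypothetical API sketch (all names invented): the 80% case is one argument, while advanced behavior is reached through keyword-only options that basic users never see.

```python
def export_report(data, format="pdf", *, template=None,
                  custom_fields=None, post_process=None):
    """Simple by default; depth is opt-in via keyword-only arguments."""
    rows = [dict(row) for row in data]
    if custom_fields:                 # advanced: extra computed columns
        for row in rows:
            for name, fn in custom_fields.items():
                row[name] = fn(row)
    if post_process:                  # advanced: user-supplied hook
        rows = post_process(rows)
    return {"format": format, "template": template or "default", "rows": rows}

# 80% case: one argument, sensible defaults.
basic = export_report([{"amount": 10}])

# 20% case: same entry point, optional depth.
advanced = export_report(
    [{"amount": 10}],
    format="csv",
    custom_fields={"with_vat": lambda r: r["amount"] * 1.25},
)
print(advanced["rows"])  # [{'amount': 10, 'with_vat': 12.5}]
```

The point of the design is that the advanced options cost the basic user nothing: they are invisible until someone needs them.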

The advanced and special cases are where tools can move from “nice to have” to truly useful and valuable. By understanding and supporting these users better, you close the gap between how the system works and how the work actually happens. And that is where the real utility and value of software live.

Try Reversing Cause and Effect

We often look at a situation, quickly decide what causes what, and then move on. “This happened because of that.” “She’s stressed because of the deadline.” “The project failed because the planning was poor.” But it’s not always true that we’ve got the relationship right. The idea here is simple: as an alternative way to analyze and think, deliberately try reversing cause and effect. Use it as a way to see new sides of a situation.

We’re used to thinking in straight lines: A causes B. “I’m tired because I slept badly.” “The team is quiet because they don’t care.” Once that story feels right, we build everything on top of it: we design solutions, argue from it, and rarely question it. But often the connection is more complicated. Sometimes B is driving A. Sometimes A and B reinforce each other in a loop. If we’ve misunderstood the direction, we end up misdiagnosing problems and choosing ineffective solutions.

A practical way to use this is to turn it into a small mental routine. First, state your assumption clearly as “A causes B”: “We have low engagement because our meetings are boring.” “I procrastinate because I’m lazy.” “The product isn’t growing because our marketing is weak.” Then flip it: “Our meetings are boring because we have low engagement.” “I feel lazy because I procrastinate.” “Our marketing is weak because the product isn’t growing.” Don’t force yourself to believe the flipped version; treat it as a hypothesis and ask: if this were true, what would that explain? What would I see? What would I do differently?

Take performance and motivation. The usual assumption is: “I perform well because I’m motivated.” That leads to the idea that you first need motivation, and then you can act. Flipped, it becomes: “I’m motivated because I perform well.” That suggests motivation can follow action: small wins and visible progress create motivation. The practical move changes from “wait to feel motivated” to “start with tiny actions and let motivation catch up.”

Or look at meetings and misalignment. The usual thought is: “We have misalignment because we don’t meet enough.” So the solution becomes more meetings, longer agendas, more updates. Flipped, it becomes: “We don’t meet enough because there’s misalignment and unclear ownership.” Maybe people avoid meetings because they feel unproductive or tense. In that case, the real problem is unclear structure and responsibility, not the number of meetings.

Another example is learning and confidence. We often think: “I feel confident because I know enough.” So we wait to speak up or contribute until we feel fully prepared. Flipped: “I learn more because I feel confident.” If that’s partly true, then confidence helps you ask questions, try things, and learn faster. The focus shifts from “I’ll join in when I know enough” to “I’ll learn by joining in earlier in a safe environment.”

This way of thinking is especially useful when you feel stuck, when the same pattern repeats, or when you deal with behavior, motivation, and relationships. It’s not about proving the original assumption wrong. It’s about admitting that we might not always have the relationship right, and using that to open up alternative explanations and options.

There are some limits. Reversing cause and effect doesn’t automatically make the flipped version true. Some things really are one-way, and some problems are structural, not just psychological. The point is not to replace one dogma with another, but to have more than one way of looking at what’s going on.

You can turn this into a habit with a few simple questions: What do I think is causing what here? What if I’ve got the direction wrong? If I flip it, does it reveal anything interesting or useful? Use it in your own reflections, in conversations, in retrospectives. You don’t need to do it all the time—just often enough to catch the places where your first story about cause and effect might be upside down.

The core idea is straightforward: try reversing cause and effect as an alternative way of analyzing and thinking, to see new sides of a situation. We don’t always have the relationship right. By flipping it on purpose, you give yourself a simple tool to challenge that and discover something you might otherwise miss.

Vibe Coding and the Bloat Trap

Vibe coding is that feeling when development suddenly becomes fast and fluid. You describe what you want, a language model helps generate the code, you tweak a bit, and you’re done. Coding takes less time. Features that used to take days now take hours or minutes.

That feels powerful, but it creates a new kind of pressure. When coding becomes this fast, there’s an expectation—sometimes from yourself, sometimes from others—that you should deliver more features. “If it only takes a short time, why not just build it?” Output becomes the focus. You produce a lot.

The problem is that you also get worse at prioritizing. When things move slowly, you’re forced to choose. You feel the cost of every feature. You ask: is this worth the effort? Is this more important than something else we could do? That natural friction forces you to decide and to say no.

With vibe coding, that friction disappears. When it’s easy to build, it’s harder to stop and ask whether something should be built at all. You end up producing a lot of software that few or no people really need. There are no clear boundaries on how much unnecessary software you can create, because it no longer feels costly in the moment.

The result is bloat. Costs. Trash. Slop. You get more and more features, options, and code paths that don’t provide real value. They clutter the product, make it harder to maintain, and slowly increase the long-term cost of development. What felt fast at the beginning makes everything slower later, because the product is weighed down by things that shouldn’t have been there in the first place.

When you’re constrained by having to move slowly, you’re forced to choose more carefully. You’re forced to prioritize. Think of the product as a garden: slowness forces you to pull weeds early, or not plant them at all. That’s the hidden benefit of being limited: the garden doesn’t fill up with unnecessary plants just because you had the time and tools to plant them.

Vibe coding isn’t bad in itself. Using a language model to speed up coding can be extremely useful. The danger appears when speed removes the natural limits that used to make us think. If you don’t add any new constraints, it becomes very easy to produce endless amounts of software that nobody really needs.

The core challenge isn’t how fast you can write code anymore. It’s how good you are at choosing what not to build, even when building it would be easy.

Drop in Quality in a Complex System

A complex system is something that involves many actors, components, or steps working together to produce a result. The more things that are involved, the more complex the system becomes. This could be a technical system, an organization, or a workflow where many people and tools have to coordinate to deliver an outcome.

In this context, quality can be seen as the rate at which errors happen in the system. That can mean the error rate of each individual component or actor, or it can mean the error rate of the system as a whole. A system with high quality has a low rate of errors; a system with lower quality has a higher rate of errors.

A key property of complex systems is that components depend on each other. An error in one actor or component does not stay isolated. It propagates to others that are dependent on it. If one part produces bad or incorrect output, the parts that rely on that output will either also fail or have to spend effort dealing with the consequences.

Because of this, when the error rate in a system increases, even by just a few percent in each part, the overall quality of the entire system can drop significantly. The effect is multiplicative: every component has to work for the whole to work, so small per-component drops compound. In a system with 10 actors or components, a 1% drop in the quality of each one might sound small, but the chance that all 10 work correctly falls to about 0.99^10 ≈ 90%, roughly a 10% drop in the quality of the whole system.
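The compounding can be made concrete with a few lines of arithmetic. Assuming each of n components succeeds independently with probability p, and the result requires all of them, the whole system succeeds with probability p**n:

```python
def system_quality(component_quality, n_components):
    """Probability that all n components work, assuming independence."""
    return component_quality ** n_components

# 10 components, each dropping from 100% to 99% quality:
q = system_quality(0.99, 10)
print(f"{q:.3f}")      # 0.904 -- roughly a 10% drop overall
print(f"{1 - q:.1%}")  # 9.6% system-level error rate
```

The independence assumption is a simplification; in real systems, errors that propagate between dependent components can make the drop even steeper.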

This is why a small decline in quality in individual parts can feel like a large decline in the result the system produces. Even if each single actor or component is “almost as good as before,” the experience of the full system can be noticeably worse.

Possible ways to address this are either to reduce the complexity of the system or to reduce the quality error rate in the individual parts. Reducing complexity means having fewer actors, components, or steps involved, so there are fewer places where errors can occur and propagate. Reducing the error rate of each part means improving the quality of each actor or component, so that fewer errors enter the system in the first place. Both approaches aim to prevent small local problems from turning into large system-wide drops in quality.