Author: Nicolai Friis

Authoritarian Models and Systems

Authoritarian models and systems operate with centralized authority, where certain entities or individuals hold more power than others. These systems rely heavily on hierarchies, with every element positioned within one or more layers of structured order. This concentration of authority creates clarity and control in how decisions are made and enforced.

A key strength of authoritarian systems is their ability to assess situations and make decisions effectively. Their structure allows for swift evaluations and a clear chain of command. In situations that require stability and control, these models provide the discipline to maintain order and deliver results.

However, authoritarian systems are less effective when it comes to fostering change or creating something new. Their rigid frameworks make them resistant to innovation and experimentation. This limits their ability to adapt when confronted with new circumstances or challenges, and creativity often takes a backseat to maintaining structure.

The practical use of such systems depends on context. They work best when stability and decisive action are required, but they may hinder progress in situations that demand flexibility, creativity, or the exploration of alternatives. Striking a balance between authority and adaptability is key to utilizing these models effectively.

This understanding highlights the importance of knowing when authoritarian approaches can provide value and when they fall short. Recognizing their strengths and weaknesses helps ensure they are applied appropriately to achieve specific goals without hindering broader development.

Binary Models

In decision-making, we often fall back on binary models: win or lose, truth or lie, right or wrong. These simple frameworks can feel intuitive and practical, but they rarely reflect the complexity of real life. When decisions are reduced to “either-or,” we risk oversimplifying nuanced issues and ignoring the subtle shades that lie between extremes.

Binary thinking has its uses in clear and straightforward scenarios. It provides clarity and forces quick decisions. But when evaluations become strictly binary, we lose the ability to recognize middle grounds or gradations. Nuance disappears, and there’s no space for outcomes like partial success, compromise, or “draws.” Everything becomes reduced to one of two polarized outcomes.

This rigidity can be counterproductive, especially when dealing with complex systems. The path to achieving goals is rarely a straight line of binary decisions. In reality, systems are intricate and exist on continuums of possibilities. Binary models are simply too limited to account for the interconnected, evolving nature of such challenges.

Consider examples like human relationships or strategic planning. Relationships aren’t entirely “good” or “bad”—they’re built from layers of understanding, missteps, and growth. Similarly, no organizational decision can be fully right or wrong in isolation; success often depends on the broader context and trade-offs. In these situations, a binary framework flattens complexity into false choices.

Moving beyond binary thinking means embracing alternatives that allow for nuance and flexibility. Instead of asking whether something is true or false, we can ask to what degree it is true. Instead of assuming success or failure, we can examine factors like progress, trade-offs, and incremental improvements. Recognizing and questioning false dichotomies gives us the freedom to explore broader options and reach more thoughtful conclusions.
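
To make the shift concrete, here is a minimal sketch in Python; the scores and thresholds are invented for illustration, not drawn from any particular method:

```python
# A binary verdict versus a graded one. Thresholds here are illustrative.

def binary_evaluation(score: float) -> str:
    """Forces every outcome into one of two buckets."""
    return "success" if score >= 0.5 else "failure"

def graded_evaluation(score: float) -> str:
    """Preserves the middle ground between the extremes."""
    if score >= 0.9:
        return "clear success"
    if score >= 0.6:
        return "partial success with trade-offs"
    if score >= 0.4:
        return "a draw: progress in some areas, setbacks in others"
    return "largely unsuccessful, with lessons to carry forward"

print(binary_evaluation(0.55))  # "success" hides how narrow it was
print(graded_evaluation(0.55))  # names the middle ground explicitly
```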

Binary models may help us navigate simple choices, but they falter in complex systems where absolutes rarely exist. By shifting our mindset away from “either-or,” we open ourselves to greater possibilities and a deeper understanding of the world’s intricacies. After all, life is rarely black and white—most of it exists in the gray.

Bridging Creativity and Execution with Agents

Many people have the skills and knowledge to build and execute a business idea, but they often struggle with generating innovative and fresh ideas. On the other hand, there are plenty of individuals with creative minds who frequently come up with new concepts but lack the ability, resources, or tools to implement their ideas. This gap between creativity and execution is where language models, or agents, could potentially play an important role.

For those who excel at execution but lack creativity, agents could serve as innovation scouts. They could explore the vast digital world, searching through shared ideas on forums, blogs, and social platforms to uncover concepts that can be executed. This assumes there are enough creative individuals out there willing to share their ideas online for others to act on. With an agent’s help, these ideas could be curated and presented to those who are ready to build but struggle with the “what.”
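
As a sketch of what such a scout might look like, consider the loop below. Everything in it is hypothetical: the helper functions are stubs standing in for a real scraper and a real language model, and the feasibility score is an invented measure.

```python
from dataclasses import dataclass

@dataclass
class IdeaCandidate:
    source: str
    summary: str
    feasibility: float  # 0.0 (a vague wish) to 1.0 (ready to build)

def fetch_public_posts(source: str) -> list[str]:
    """Stub: a real version would read forums, blogs, or social platforms."""
    return ["A tool that turns meeting notes into task lists."]

def summarize(post: str) -> str:
    """Stub: a real version would ask a language model for a summary."""
    return post

def score_feasibility(summary: str) -> float:
    """Stub: a real version would ask a language model how buildable it is."""
    return 0.8

def scout_ideas(sources: list[str], threshold: float = 0.7) -> list[IdeaCandidate]:
    """Collect shared ideas and curate the ones an executor could act on."""
    candidates = []
    for source in sources:
        for post in fetch_public_posts(source):
            summary = summarize(post)
            candidates.append(IdeaCandidate(source, summary, score_feasibility(summary)))
    actionable = [c for c in candidates if c.feasibility >= threshold]
    return sorted(actionable, key=lambda c: c.feasibility, reverse=True)

print(scout_ideas(["example-forum"]))
```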

The dynamic could work the other way as well. Creative individuals who lack the skills or ability to execute their ideas could use agents to find and connect with people who can bring their concepts to life. By acting as matchmakers, agents could help pair people with complementary skills, fostering collaboration between the creative and the practical.

This approach would give executors access to a pool of actionable ideas and allow creative thinkers to see their concepts realized by partnering with the right builders. However, there are challenges to navigate: concerns about intellectual property, ensuring the quality of sourced ideas, and managing fair collaboration between parties.

Agents have the potential to bridge the gap between creativity and execution. By finding, curating, and building connections, they can unlock opportunities for collaboration and innovation that might otherwise go unnoticed or unrealized.

Language Models vs. Knowledge Models

Language models are designed to work with the coherence of text and the structure of language itself. They excel at generating outputs that appear polished, professional, and authoritative, as if written by experts. However, this doesn’t mean that these outputs are always correct. Their focus is on the language and patterns inherent in text, not on verifying or understanding the actual knowledge behind it. These models are built using vast amounts of textual data from diverse sources, which helps them generate text that seems natural and contextually relevant.

Knowledge models, on the other hand, focus on organizing and understanding knowledge itself. They deal with things like objects, concepts, relationships, logic, causation, and even experiences. Knowledge is not limited to textual representation and can exist in other forms, although it is often represented or communicated in text for usability. Knowledge models are constructed using high-quality, well-curated data that is structured and reliable, enabling them to work with detailed and interconnected information.
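
One way to make this difference tangible is to look at how structured knowledge can be represented. The sketch below uses subject-relation-object triples; the relation names are chosen just for illustration.

```python
# Knowledge as explicit subject-relation-object triples. Unlike free text,
# these relationships can be queried and chained together directly.

triples = [
    ("penicillin", "is_a", "antibiotic"),
    ("antibiotic", "treats", "bacterial infection"),
    ("penicillin", "discovered_by", "Alexander Fleming"),
]

def related(subject: str, relation: str) -> list[str]:
    """Every object connected to a subject by the given relation."""
    return [o for s, r, o in triples if s == subject and r == relation]

# A simple chain of reasoning over the structure itself:
# penicillin is an antibiotic, and antibiotics treat bacterial
# infections, so penicillin treats bacterial infections.
for category in related("penicillin", "is_a"):
    print(category, "->", related(category, "treats"))
```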

The difference between language models and knowledge models lies in their focus and goals. Language models prioritize the structure of text, while knowledge models prioritize the structure and coherence of knowledge. While language models can produce text that seems to make sense, they don’t inherently understand the concepts they are describing. In contrast, knowledge models aim to provide meaningful representations of knowledge that emphasize connectivity, logic, and accuracy over language.

Language models can play a valuable supporting role in working with knowledge. For example, they can be used to summarize or simplify complex information, making knowledge more accessible. However, language models are not knowledge models; they are tools that can help process or present knowledge but lack the deeper logical coherence that comes with true knowledge organization and reasoning.

In essence, language models are a step on the path toward building richer knowledge models. The two systems complement each other, but they serve different purposes. As we continue to improve these technologies, we are likely to see even greater integration between their strengths: the fluency of language models combined with the structured reasoning of knowledge models. This advancement will bring us closer to systems that not only communicate well but also truly understand the world around them.

When Making a Change Requires Knowing Everything

Modern coding tools, like language models (LMs), are becoming essential for developers. These tools can assist in navigating complex systems and help with writing, debugging, or improving code. However, there’s a major red flag to watch out for: if making a change to your code base requires providing the LM with the entire system, you might have a serious problem on your hands.

This scenario arises when the code base is so interdependent and tangled that every part relies on something else to function or adapt. If you need to load the entire system just to adjust one component, it’s a sign that your code base has turned into what many developers refer to as a “spaghetti monster.” Overly coupled components, excessive dependencies, and poor modularity can all lead to this situation. The result is a system where even minor updates become an overwhelming task.

A spaghetti monster code base leads to inefficiency and frustration. Code becomes harder to navigate, changes take longer to implement, and new bugs surface more easily. Even advanced tools like LMs will struggle to provide meaningful support if they’re required to understand the entire system instead of focusing on a specific area. This doesn’t just squander the tool’s capabilities; it consumes valuable development time.

The solution lies in embracing modular design. By structuring code into smaller, independent pieces, you simplify development for both humans and LMs. Modular systems reduce unnecessary dependencies and make it easier to isolate, update, and test individual components. Beyond modular design, refactoring the code, reducing entangled logic, improving documentation, and conducting regular code reviews can all help untangle the spaghetti monster.
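
The contrast is easy to see in miniature. In the sketch below (an invented checkout example), the tangled version forces anyone, human or LM, to hold pricing, persistence, and everything else in mind at once, while the modular version lets each piece be read, changed, and tested alone.

```python
# Tangled: one function that reaches into everything at once. Changing
# the tax logic means understanding persistence and notifications too.
def checkout_tangled(cart, db, smtp, tax_tables, promo_rules):
    ...  # pricing, tax, persistence, and email, all interleaved

# Modular: small, independent pieces behind narrow interfaces.
def price(cart: list[float], tax_rate: float) -> float:
    """Pure pricing logic: testable and understandable in isolation."""
    return sum(cart) * (1 + tax_rate)

def save_order(total: float) -> None:
    """Persistence lives behind its own boundary."""
    print(f"saved order: {total:.2f}")

def checkout(cart: list[float], tax_rate: float = 0.25) -> float:
    total = price(cart, tax_rate)
    save_order(total)
    return total

checkout([10.0, 5.5])
```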

If parts of your code base feel overwhelming or hard to navigate, take that as a sign to reassess the structure. A cleaner, more maintainable system will not only improve your workflow but will also make tools like LMs far more effective. Don’t let a tangled code base hold you back—it’s worth the effort to untangle the mess.

Levels of Human Understanding and Use of Computers

Human interaction with computers can be understood as progressing through distinct stages that reflect both the user’s capability and level of understanding. At the first stage, a person may neither understand nor use computers, relying entirely on analog methods and systems. This is the baseline before any engagement with digital tools begins.

The second stage involves using computers to perform predefined tasks. People at this level can navigate standard systems but do so within the limits of what has been designed for general use. Whether it’s using email, browsing the web, or accessing basic software tools, the focus remains on following established patterns without modification.

At the third stage, individuals understand how to adapt or program computers to perform tasks for themselves. This often involves basic customization, scripting, or programming to automate processes or streamline personal workflows. For example, writing a script to organize files or designing formulas in a spreadsheet to meet specific needs.
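
A third-stage script can be very small. The example below, written for illustration, sorts the files in a folder into subfolders named after their extensions:

```python
from pathlib import Path

def organize(folder: str) -> None:
    """Move each file in a folder into a subfolder named for its extension."""
    root = Path(folder).expanduser()
    for item in root.iterdir():
        if item.is_file():
            subfolder = root / (item.suffix.lstrip(".") or "no_extension")
            subfolder.mkdir(exist_ok=True)
            item.rename(subfolder / item.name)

# organize("~/Downloads")  # point it at a folder you actually want sorted
```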

The fourth stage builds on this by programming computers to perform tasks for other people. Instead of focusing on personal needs, users at this level create solutions tailored to external audiences—friends, coworkers, or customers. This might involve developing an app, designing software, or building workflows that address challenges others face.

The fifth and final stage is about understanding how other people program or adapt computers, enabling collaboration or further adaptation. This requires the ability to work within systems others have designed, refining or scaling solutions to integrate them into larger environments. Using open-source code, building integrations between platforms, or collaborating with other developers are all part of this level, where innovation often relies on shared creativity and cooperation.

Let the Problem Tell You What It Needs

When working on any build or project, it’s important to let the problems themselves guide the solutions. Avoid introducing fixes for hypothetical issues that don’t actually exist yet, even if they theoretically could in the future. Every solution you implement comes with a cost, whether in complexity, resources, or trade-offs. Adding unnecessary solutions risks creating complications rather than addressing real needs. Focus instead on tackling the issues that are present and tangible.
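
A small, invented example of the difference. Here the present problem is simply “read settings from one JSON file”; the commented alternative is the kind of speculative machinery that adds cost without addressing anything real yet.

```python
import json

# What the problem actually asks for today:
def load_settings(path: str) -> dict:
    with open(path) as f:
        return json.load(f)

# What it does not ask for: a pluggable loader registry with support for
# YAML, TOML, remote config servers, and hot reloading. Every one of those
# branches is complexity paid for now, against a problem that may never
# arrive. If a second format appears later, add it then.
```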

This doesn’t mean ignoring the potential for future challenges—it means balancing practicality with foresight. Previous experience from similar builds can be invaluable here. If you’ve encountered recurring issues in the past, there’s no harm in incorporating proven solutions to avoid them. However, this should only be done if those solutions don’t come at the expense of anything critical in the current project. Just because it worked once doesn’t mean it fits every situation.

Building something successfully requires staying adaptable while maintaining focus. New challenges may arise as the project evolves, and you should remain open to addressing them as they come. That flexibility is key. However, trying to engineer solutions for every possible scenario in advance is a trap. Over-preparation often results in bloated designs and wasted resources, leaving you with a project overly complicated for its intended purpose.

Allow the problems to dictate the solutions rather than the other way around. Focus on what’s actually in front of you, informed by lessons from the past. This approach creates results that are practical, intentional, and equipped to handle challenges without becoming weighed down by unnecessary work.

Feedback Loops in Intelligent Agents

Feedback loops are at the core of how systems learn and improve. They allow agents to evaluate their actions and adjust based on observed results. Most agents, however, operate almost exclusively on instant feedback and short-term evaluation. While this works well for immediate tasks, not all actions reveal their consequences immediately. Some have effects that become apparent in the medium- or long-term. For agents to handle these situations effectively, they need to incorporate longer feedback cycles into their decision-making processes.

Short-term feedback loops are the most straightforward. For example, when baking bread, the process involves continual short-term adjustments. Mixing the ingredients provides instant feedback through the texture of the dough. Similarly, time in the oven involves short-term checks to ensure the loaf is neither undercooked nor overcooked. These short loops happen within minutes or hours and provide the agent or individual with immediate insights to improve the outcome.

Medium- and long-term feedback loops are more complex. Farming grain is a good example. In a medium-term feedback loop, a farmer plants, grows, and harvests crops in a single season. The results of this process—the size and quality of the harvest—can be evaluated to guide decisions for the next season. Long-term feedback in farming, however, involves managing soil health and fertility. Decisions about fertilizer use, crop rotation, and soil management accumulate over many years, affecting the sustainability and productivity of the farmland in the future.
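
Here is a sketch of what the bookkeeping for longer loops might look like; the ledger below is an invented structure, not an existing framework. The point is that an action’s consequences can be credited back to it whenever they arrive, whether minutes or seasons later.

```python
import time

class FeedbackLedger:
    """Log actions when taken; attribute outcomes whenever they arrive."""

    def __init__(self) -> None:
        self.pending: dict[str, float] = {}  # action id -> time it was taken
        self.outcomes: list[tuple[str, float, float]] = []

    def record_action(self, action_id: str) -> None:
        self.pending[action_id] = time.time()

    def record_outcome(self, action_id: str, reward: float) -> None:
        """Credit a result to its action, however long the delay was."""
        taken_at = self.pending.pop(action_id)
        delay = time.time() - taken_at
        self.outcomes.append((action_id, reward, delay))

ledger = FeedbackLedger()
ledger.record_action("rotate_crops_2024")
# ...one season, or many, later:
ledger.record_outcome("rotate_crops_2024", reward=0.7)
```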

Currently, most agents cannot handle these longer-term cycles because they primarily learn from what is happening “right now.” They focus on instant feedback rather than considering the broader impact of their actions. This limits their capacity to understand the full consequences of their decisions, particularly those that only become evident much later.

It is critical to recognize that true learning and effective decision-making require balancing the short-term results with medium- and long-term outcomes. Long-term feedback loops are essential for achieving sustainable and meaningful progress. Future developments in agent design must account for these extended timelines to allow for smarter and more responsible decision-making in complex and dynamic environments.

Two Ways to Use Language Models for Writing

Language models have become powerful tools for writers, offering opportunities to enhance both the ideation and execution phases of writing. There are two main ways to use these tools when creating a text.

The first approach involves using the language model as a brainstorming partner: a sparring partner that helps you come up with ideas, content, or themes. In this case, the model supports your creative process, but you write the final version of the text yourself.

The second approach is different. Here, you take the role of the idea generator. You think of the key themes, solutions, and content, then ask the language model to craft the final text based on your input. It assists with the actual production of the polished version.
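
Seen as prompts, the two approaches look like this. The generate() function is a placeholder for whichever model or API you actually use, and the prompts are invented examples.

```python
def generate(prompt: str) -> str:
    """Placeholder for a real language model call."""
    return "<model output>"

# Approach 1: the model brainstorms, you write the final text yourself.
ideas = generate("Suggest five angles for an essay about feedback loops.")

# Approach 2: you supply the ideas, the model writes the final text.
draft = generate(
    "Write a short essay built around exactly these points of mine: "
    "1) short loops teach fast, 2) long loops teach sustainability."
)
```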

Interestingly, there’s something of a divide in how these two approaches are viewed. One of these methods tends to face criticism, while the other is widely accepted. The brainstorming method, where the writer maintains control over the final output, is often seen as the “right” way to use such tools. In contrast, letting the model write the finished text tends to draw questions about creativity, originality, and over-reliance on technology. It’s an interesting cultural reflection: does the process of writing matter more than the result, or is the content itself what truly counts?

At the heart of this conversation lies that very question. What is most important in writing—what is written or how it’s created? Should the process define its value, or is it the final message that matters most to the reader? For example, is originality tied to the way the text is shaped, or is it about the ideas and substance behind it, no matter how it’s written?

Ultimately, the answer might depend on the context. Perhaps the method of collaboration isn’t as important as the intention behind the work and the quality of the message. Whether you use a language model as a brainstorming partner or a full-fledged writing assistant, the value of your writing will always lie in its ability to connect with the reader.

AI-Assisted Gaming: A New Dimension in Gameplay

The concept of automated agents in games has been around since the beginning of video gaming. From chess bots to difficulty settings and computer-controlled opponents, these systems have always been part of how games are designed and played. Traditionally, these agents served robotic, automated purposes, following pre-programmed rules to either challenge the player or add complexity to the game environment.

AI-assisted gaming, however, goes one step further. This emerging genre shifts these systems away from being simple opponents or automated mechanics and transforms them into collaborative partners. In these games, the agent acts as a teammate, sidekick, or co-player—creating the sensation of gaming alongside another real person.

In action RPGs, for example, you might have an agent playing alongside you as though it were another player. You can build the agent’s character just like you would your own, providing instructions and feedback on how it plays. Over time, it learns and adapts based on playing with you, evolving into a personalized companion that complements your strengths and supports your strategies.
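
As a hypothetical sketch of the mechanics, the companion below keeps a set of playstyle weights and nudges them whenever the player gives feedback; the styles and numbers are invented for illustration.

```python
import random

class CompanionAgent:
    def __init__(self) -> None:
        # Preference weights over broad playstyles, tuned by player feedback.
        self.weights = {"aggressive": 1.0, "defensive": 1.0, "support": 1.0}

    def choose_style(self) -> str:
        styles = list(self.weights)
        return random.choices(styles, weights=self.weights.values())[0]

    def feedback(self, style: str, liked: bool) -> None:
        """Player feedback gradually shapes how the companion plays."""
        self.weights[style] *= 1.2 if liked else 0.8

ally = CompanionAgent()
style = ally.choose_style()
ally.feedback(style, liked=True)  # "play more like that"
```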

This fundamentally changes gaming experiences, especially in single-player games. AI-assisted games introduce tactics, builds, and strategies that were previously only possible in multiplayer settings, bringing new approaches to mastering games and exploring creative playstyles. It opens up exciting new dimensions for solo play, making these games feel less solitary and more dynamic.

What’s even more exciting is that this concept can be adapted to nearly any type of game. Whether it’s action, puzzle, RPG, or strategy, the specific approach will depend on the genre, but the application of AI-assisted features can enhance gameplay across the spectrum. For example, agents could act as co-strategists in a tactical game or assist with solving puzzles in a cerebral adventure.

AI-assisted gaming represents a significant leap forward for the medium. By transforming computer-driven agents into collaborative, learning companions, developers are creating more immersive and innovative gaming experiences that expand what players can achieve in both solo and cooperative play.