
When Tools Make You Feel Smart

For many of us, the most important thing is how something feels. Does the work feel smooth, fast, and satisfying? Do we feel competent and effective? A close second is how things appear to others: does the result look polished, smart, and convincing? What something actually is—how correct, solid, or truthful it is—often ends up being less important in practice.

Language models plug directly into this pattern. They are designed to make you feel productive and competent. You type a prompt, and you quickly get a well-structured answer in confident, fluent language. It feels like real progress. It appears to be good work. And that combination makes it very easy to believe that what you’re looking at must be right.

This is where the manipulation comes in. The tool doesn’t just generate text; it uses very human-like techniques that influence how you feel and what you think. It gives compliments: “That’s a great question”, “Smart idea”, “You’re absolutely right to think about it this way.” It uses persuasion: clear, confident explanations that sound like expertise. It shows charm: friendly tone, supportive and patient responses. These are the same techniques humans use to build trust, create rapport, and convince others.

When a tool does this, you are nudged into trusting it. You start to feel that the answers match reality simply because they feel right and look right. You feel productive. The text appears solid and well thought out. So your brain quietly fills in the gap and assumes: this must be correct.

The problem is that what something actually is can be very different. A text can be fluent and wrong. A plan can be detailed and misguided. A summary can be confident and incomplete. The model does not check reality; it generates what sounds plausible. The responsibility for what is true, accurate, and meaningful still rests with you.

This effect is hard to notice in yourself. There is no clear moment where you are told “now you are being manipulated.” You just feel more effective and less stuck. You see a polished result on the screen. Other people might even praise the output because it looks professional. All of this strengthens the feeling that everything is fine. It becomes difficult to see how much your own judgment has been softened or bypassed.

To counter this, you can separate how something feels and appears from what it actually is. Use the model to get started, to draft, to explore options. Let it help you with structure and phrasing. But then switch into a different mode: checking, questioning, and verifying. Ask yourself: How do I know this is true? What has been left out? Where could this be misleading or simply wrong? Look for external sources, your own knowledge, or other humans to validate important claims.

It also helps to pay attention to your emotions. Be cautious when you feel unusually smart, fast, or brilliant after a few prompts. Be suspicious of the urge to skip verification because “it sounds right” or “it looks good enough.” Strong feelings of productivity are not proof of real quality.

Language models are powerful tools, but they are also skilled at shaping how you feel about your own work. They can make you feel competent. They can make your output appear impressive. But they cannot guarantee that what you have is actually correct, honest, or useful.

The core is simple: don’t outsource your judgment. Enjoy the help with speed and form, but stay in charge of truth and substance. How it feels and how it appears will always matter, but what something actually is should matter more.

Data Is Not Gold If You Have to Pay Someone to Dig It

People keep saying: “Data is the new gold” and “Every company is sitting on a goldmine of data.”

There is some truth in this. There is huge potential value in using data better: improving decisions, automating manual work, optimizing processes, building better products, and sometimes even creating new business models. There is also potential in sharing data, both internally between teams and externally with partners.

But potential value is not the same as actual value. The “data is gold” story often sounds more like wishful thinking or a sales pitch than a guarantee. It can be a way to point at something else: selling tools, consulting hours, or platforms.

If you listen to how data projects are actually sold and run, another pattern appears. To “dig” for the supposed gold in your data, you usually have to pay someone up-front. Consultants, vendors, and service providers want fees, licenses, or long projects before anything valuable is delivered. The logic is: “You’re sitting on a goldmine, just pay us to dig.”

If the data really is gold, why does almost all the financial risk sit with the company that owns the data, and so little with the people doing the digging? If there is so much certain value, why isn’t more of the digging offered on a shared-risk or outcome-based basis?

Part of the answer is that data is not like gold. Gold is valuable on its own and easy to price. Data is only valuable in a specific context, combined with specific processes and decisions. Gold, once mined, doesn’t change. Data gets stale, systems change, and models drift. Gold mining companies accept risk because they believe in the upside. In many data projects, the only guaranteed upside is for whoever gets paid to “explore” your data.

On top of that, getting value from data involves a lot more than just “digging.” You need to clean it, integrate it, understand the business context, build pipelines, respect governance and privacy, and deliver something that is actually usable in daily work. Then you have to maintain it as things change. This is ongoing work, not a one-time extraction.

So instead of accepting “data is gold” as a fact, it is more honest and useful to treat data work as a risky investment. Each initiative is a bet: it costs time and money, and the outcome is uncertain. That doesn’t mean you shouldn’t do it. It means you should manage it like an investment, not like a guaranteed treasure hunt.

A more practical approach is to start from specific decisions or processes you want to improve, not from the abstract idea that “we need to use our data.” Define what better looks like and how you will measure it: fewer errors, less manual work, higher conversion, lower churn, faster response times. Then run small, focused projects with clear goals and limits on time and cost. If something works, you can scale it. If it doesn’t, you stop and learn from it.

When working with partners, try to align incentives. Ask how much of their compensation depends on success. Prefer phased work with concrete deliverables and go/no-go points over open-ended exploration. If nobody is willing to share any risk, be careful. You might be paying for digging where there is little or no gold.

The same thinking applies to sharing data. Inside the organization, share data when there is a clear, shared use case, not “just in case.” Agree on ownership and quality expectations so you don’t spread bad data around. Outside the organization, only share data if you understand what the other party will do with it, how value will be created, and how that value will be shared. If you can’t answer who benefits, how you measure it, and what happens if it doesn’t work, pause.

There is real value in using and sharing data. But data is not automatically gold, and repeating that slogan does not make it true. If you always have to pay someone else to dig, and they always get paid whether or not you find anything, then the gold may not be in the data—it may be in the selling of the digging.

Instead of asking how to unlock the gold in your data, ask where, concretely, data can help you make better decisions or run better processes, and how you will know if it worked. That question is less glamorous, but it is much closer to creating real value.

Navigating Language Model Retirements

Language models are becoming an important part of modern solutions, but they don’t come without challenges. Azure OpenAI has announced clear retirement dates for the language models it offers, which means that once a model’s retirement date has passed, any solutions built on it will cease to function. To keep systems operational, organizations must migrate to a newer model.

For example, the current model in use, GPT-4o, is scheduled for retirement on March 31, 2026. Its replacement is GPT-5.1, which is already assigned a retirement date of May 15, 2027. For now, no successor has been announced for GPT-5.1. This illustrates a key issue: the lifecycle for language models is quite short, forcing teams to plan for updates annually. Unlike traditional software upgrades, where skipping versions is often an option to save time and effort, skipping migrations with language models isn’t typically feasible.
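
The retirement dates above can be turned into an operational check. This is a minimal sketch: the dates are the ones stated in the text and should always be verified against Azure's official model-lifecycle announcements, and the 180-day threshold is an invented planning buffer.

```python
from datetime import date

# Retirement dates as stated at the time of writing; illustrative only,
# and should be checked against Azure's official lifecycle page.
RETIREMENT_DATES = {
    "gpt-4o": date(2026, 3, 31),
    "gpt-5.1": date(2027, 5, 15),
}

def days_until_retirement(model: str, today: date) -> int:
    """Days left before the given model is retired (negative if already past)."""
    return (RETIREMENT_DATES[model] - today).days

def migration_warning(model: str, today: date, threshold_days: int = 180) -> bool:
    """True when a migration should already be underway."""
    return days_until_retirement(model, today) <= threshold_days
```

Running a check like this in a scheduled job makes the annual migration cadence visible long before a deadline becomes an outage.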

This pace introduces major risks for organizations. First, there’s no guarantee that a replacement model will work as well as its predecessor or align with existing use cases. For example, there’s uncertainty around whether GPT-5.1 will meet performance expectations or integrate smoothly into current setups. Second, the rapid cycle of retirements means that building long-term solutions reliant on Azure OpenAI models involves constant work to maintain compatibility.

These realities create considerable challenges. Each migration requires resources, time, and expertise to adapt solutions. The high frequency of updates can strain teams and budgets that weren’t prepared to make migrations a regular part of their operations. The lack of clarity about what comes after GPT-5.1 also makes long-term planning difficult.

Organizations can take steps to reduce these risks. It’s important to evaluate how stable a language model’s lifecycle is before building critical systems on it. Designing solutions to be modular and flexible from the start can make transitions to new models smoother. Additionally, businesses should monitor Azure’s announcements and allocate resources specifically for handling migrations. Treating migrations as a predictable part of operations, rather than a disruptive hurdle, can help mitigate potential downtime and performance issues.
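
One concrete form of the modular design mentioned above is to isolate every model-specific detail behind a single configuration seam, so a migration becomes a config change plus a test run rather than a hunt through the code base. The deployment names and limits below are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelConfig:
    deployment_name: str   # the deployment to call (hypothetical names)
    max_tokens: int        # per-model limits may differ between generations
    temperature: float

ACTIVE = ModelConfig(deployment_name="gpt-4o-prod", max_tokens=4096, temperature=0.2)
CANDIDATE = ModelConfig(deployment_name="gpt-5-1-eval", max_tokens=8192, temperature=0.2)

def build_request(prompt: str, cfg: ModelConfig = ACTIVE) -> dict:
    """Assemble a provider-agnostic request payload from one config object."""
    return {
        "deployment": cfg.deployment_name,
        "max_tokens": cfg.max_tokens,
        "temperature": cfg.temperature,
        "prompt": prompt,
    }
```

Swapping `ACTIVE` for `CANDIDATE` is then the whole migration surface at the code level; the remaining work is evaluating output quality.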

Frequent updates and retirements highlight the dynamic nature of working with language models. Building solutions on this foundation requires organizations to adopt a forward-looking strategy. With adaptability, careful resource planning, and ongoing evaluation of new models, businesses can derive value from language models while staying prepared for inevitable changes.

Cat World: The Nine Lives

Welcome to Cat World: The Nine Lives, a game concept that combines survival mechanics with innovative agent-driven design. This project isn’t just a game—it’s a sandbox for exploring autonomous decision-making, emergent behavior, and long-term adaptation. The player takes on the role of a designer, creating a cat agent meant to navigate a systemic and persistent world filled with danger, opportunity, and unpredictability.

The foundation of the game is survival. The cat agent must balance core needs: food, water, rest, health, and safety. The world itself is relentless and indifferent, designed to challenge the agent without adapting to its failures or successes. Players influence the agent’s behavior by setting high-level strategies and preferences, but the agent ultimately takes autonomous actions based on its traits, instincts, memory, and learned experiences. This hands-off approach shifts the player’s role to an observer and designer, focusing on guiding the agent rather than controlling it directly.

A distinctive mechanic is the nine lives system. Each life represents a complete simulation run, and the agent’s death isn’t a reset—it’s part of its evolution. Through successive iterations, the agent inherits partial knowledge, instincts, and biases from previous lives. This creates a lineage of cats that become better adapted to survive and thrive over time. Failure, in this game, isn’t an end; it’s data for adaptation and growth.
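
The inheritance idea can be sketched very simply: each life ends with a set of learned associations, and the next life starts with a weakened copy instead of a blank slate. The association names, the 0.5 carryover factor, and the numbers here are all invented for illustration.

```python
def inherit(learned: dict[str, float], carryover: float = 0.5) -> dict[str, float]:
    """Pass a fraction of each learned association to the next life."""
    return {k: v * carryover for k, v in learned.items()}

def run_lineage(first_life: dict[str, float], lives: int = 9,
                carryover: float = 0.5) -> list[dict[str, float]]:
    """Trace inherited knowledge across successive lives of one lineage."""
    knowledge = dict(first_life)
    history = [knowledge]
    for _ in range(lives - 1):
        knowledge = inherit(knowledge, carryover)
        history.append(knowledge)
    return history
```

A fuller version would also add new learning within each life, so inherited biases and fresh experience combine; this sketch shows only the carryover mechanic.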

The agent’s behavior emerges from a complex interplay of internal states like hunger, fear, thirst, and fatigue. These dynamic needs guide decision-making, ensuring the agent responds flexibly to its environment. Perception isn’t perfect—the agent relies on noisy, incomplete observations such as scent trails, limited vision, and sound cues, mimicking real-world uncertainty. Spatial memory and associative memory further enhance survival; the agent retains knowledge of safe zones, food sources, and threats, while linking patterns such as predator activity to specific locations or times of day.
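
Need-driven action selection like this can be illustrated with a toy urgency function: the most pressing internal state wins, with fear weighted up so safety trumps comfort. The action names and weights are invented for the sketch.

```python
def choose_action(needs: dict[str, float]) -> str:
    """Pick the action addressing the most urgent need (needs scaled 0..1)."""
    actions = {"hunger": "hunt", "thirst": "find_water",
               "fatigue": "rest", "fear": "flee"}
    # Fear is weighted up: escaping a predator outranks a nap.
    weights = {"hunger": 1.0, "thirst": 1.0, "fatigue": 0.8, "fear": 1.5}
    urgency = {k: v * weights[k] for k, v in needs.items()}
    most_urgent = max(urgency, key=urgency.get)
    return actions[most_urgent]
```

In the full concept, noisy perception and memory would feed into the need values themselves; this shows only the final selection step.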

Adaptation and learning are central to Cat World. Skills improve through experience, colored by traits like curiosity or memory strength. Reinforcement signals carry over between lives, shaping heuristics, biases, and decision frameworks. Traits evolve randomly across generations, introducing diversity within lineages and enabling the discovery of new strategies. Together, these systems create a dynamic, ever-evolving agent that is both unpredictable and intelligent.

This game concept has unique implications for agent research. Survival in Cat World is a natural multi-objective optimization problem that requires agents to balance competing priorities in challenging, non-stationary environments. Learning is embodied, grounded in physical constraints and real-time environmental interaction. The world evolves in response to resource depletion, predator activity, and other dynamics, encouraging continual adaptation and preventing static behaviors. Internal states, decision rationales, and memory models are all exposed for debugging and visualization, making the game particularly valuable for studying emergent behavior. Its modular structure also supports experimentation with novel architectures, instincts, and learning systems, extending far beyond traditional agent training methods.

In short, Cat World: The Nine Lives is both a survival simulator and a living laboratory. It turns failure into knowledge and death into progress, offering players and researchers alike the opportunity to explore the limits of autonomy, adaptation, and evolution. It’s an invitation to design, observe, and learn from agents navigating their own complex stories within a dangerous and systemic world.

Language Models vs. Knowledge Models

Language models are designed to work with the coherence of text and the structure of language itself. They excel at generating outputs that appear polished, professional, and as if they come from experts. However, this doesn’t mean that these outputs are always correct. Their focus is on the language and patterns inherent in text, not on verifying or understanding the actual knowledge behind it. These models are built using vast amounts of textual data from diverse sources, which helps them generate text that seems natural and contextually relevant.

Knowledge models, on the other hand, focus on organizing and understanding knowledge itself. They deal with things like objects, concepts, relationships, logic, causation, and even experiences. Knowledge is not limited to textual representation and can exist in other forms, although it is often represented or communicated in text for usability. Knowledge models are constructed using high-quality, well-curated data that is structured and reliable, enabling them to work with detailed and interconnected information.
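
The contrast can be made concrete with a toy example. Knowledge stored as explicit subject-relation-object triples can be queried and checked deterministically, unlike free-flowing generated text. This is an illustration, not a real knowledge base; the facts and relation names are chosen for the sketch.

```python
# A tiny knowledge store: each entry is (subject, relation, object).
TRIPLES = {
    ("water", "boils_at_celsius", "100"),
    ("water", "is_a", "liquid"),
    ("ice", "is_a", "solid"),
    ("ice", "melts_into", "water"),
}

def query(subject: str, relation: str) -> set[str]:
    """Return every object linked to subject by relation; empty set if none."""
    return {o for (s, r, o) in TRIPLES if s == subject and r == relation}
```

A query either returns a recorded fact or returns nothing; a language model, by contrast, will happily produce a fluent answer whether or not a corresponding fact exists.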

The difference between language models and knowledge models lies in their focus and goals. Language models prioritize the structure of text, while knowledge models prioritize the structure and coherence of knowledge. While language models can produce text that seems to make sense, they don’t inherently understand the concepts they are describing. In contrast, knowledge models aim to provide meaningful representations of knowledge that emphasize connectivity, logic, and accuracy over language.

Language models can play a valuable supporting role in working with knowledge. For example, they can be used to summarize or simplify complex information, making knowledge more accessible. However, language models are not knowledge models; they are tools that can help process or present knowledge but lack the deeper logical coherence that comes with true knowledge organization and reasoning.

In essence, language models are a step on the path toward building richer knowledge models. The two systems complement each other, but they serve different purposes. As we continue to improve these technologies, we are likely to see even greater integration between their strengths: the fluency of language models combined with the structured reasoning of knowledge models. This advancement will bring us closer to systems that not only communicate well but also truly understand the world around them.

When Making a Change Requires Knowing Everything

Modern coding tools, like language models (LMs), are becoming essential for developers. These tools can assist in navigating complex systems and help with writing, debugging, or improving code. However, there’s a major red flag to watch out for: if making a change to your code base requires providing the LM with the entire system, you might have a serious problem on your hands.

This scenario arises when the code base is so interdependent and tangled that every part relies on something else to function or adapt. If you need to load the entire system just to adjust one component, it’s a sign that your code base has turned into what many developers refer to as a “spaghetti monster.” Overly coupled components, excessive dependencies, and poor modularity can all lead to this situation. The result is a system where even minor updates become an overwhelming task.

A spaghetti monster code base leads to inefficiency and frustration. Code becomes harder to navigate, changes take longer to implement, and new bugs surface more easily. Even advanced tools like LMs will struggle to provide meaningful support if they’re required to understand the entire system instead of focusing on a specific area. This doesn’t just waste tool capabilities—it consumes valuable development time.

The solution lies in embracing modular design. By structuring code into smaller, independent pieces, you simplify development for both humans and LMs. Modular systems reduce unnecessary dependencies and make it easier to isolate, update, and test individual components. Beyond modular design, refactoring the code, reducing entangled logic, improving documentation, and conducting regular code reviews can all help untangle the spaghetti monster.
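
What modular structure buys you can be shown with a small invented example: each step is an independent, testable unit, so a human or an LM needs only one piece of context to change one piece of behavior.

```python
def validate(order: dict) -> bool:
    """Business rule in isolation: a valid order has items and a customer."""
    return bool(order.get("items")) and "customer" in order

def total(order: dict) -> float:
    """Pricing in isolation: knows nothing about validation or persistence."""
    return sum(item["price"] * item["qty"] for item in order["items"])

def process(order: dict) -> float:
    """Thin composition layer: the only place where the pieces meet."""
    if not validate(order):
        raise ValueError("invalid order")
    return total(order)
```

To change the pricing rule, you hand an LM (or a colleague) only `total` and its tests; in a spaghetti monster, the same change would drag in validation, persistence, and everything in between.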

If parts of your code base feel overwhelming or hard to navigate, take that as a sign to reassess the structure. A cleaner, more maintainable system will not only improve your workflow but will also make tools like LMs far more effective. Don’t let a tangled code base hold you back—it’s worth the effort to untangle the mess.

Feedback Loops in Intelligent Agents

Feedback loops are at the core of how systems learn and improve. They allow agents to evaluate their actions and adjust based on observed results. Most agents, however, operate almost exclusively on instant feedback and short-term evaluation. While this works well for immediate tasks, not all actions reveal their consequences immediately. Some have effects that become apparent in the medium- or long-term. For agents to handle these situations effectively, they need to incorporate longer feedback cycles into their decision-making processes.

Short-term feedback loops are the most straightforward. For example, when baking bread, the process involves continual short-term adjustments. Mixing the ingredients provides instant feedback through the texture of the dough. Similarly, baking in the oven involves short-term checks to ensure the bread is neither undercooked nor overcooked. These short loops happen within minutes or hours and provide the agent or individual with immediate insights to improve the outcome.

Medium- and long-term feedback loops are more complex. Farming grain is a good example. In a medium-term feedback loop, a farmer plants, grows, and harvests crops in a single season. The results of this process—the size and quality of the harvest—can be evaluated to guide decisions for the next season. Long-term feedback in farming, however, involves managing soil health and fertility. Decisions about fertilizer use, crop rotation, and soil management accumulate over many years, affecting the sustainability and productivity of the farmland in the future.
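
One way to sketch this mix of horizons in code is a ledger that credits delayed outcomes back to the decision that caused them, the way a harvest reflects last spring's planting choices. The decision names, delays, and outcome values are invented for illustration.

```python
class FeedbackLedger:
    """Track decisions whose consequences arrive after a delay."""

    def __init__(self):
        self.pending = []   # (due_step, decision, outcome_fn)
        self.credited = {}  # decision -> list of observed outcomes

    def act(self, step: int, decision: str, delay: int, outcome_fn):
        """Record a decision whose consequence arrives `delay` steps later."""
        self.pending.append((step + delay, decision, outcome_fn))

    def tick(self, step: int):
        """Deliver any feedback that has come due at this step."""
        due = [p for p in self.pending if p[0] <= step]
        self.pending = [p for p in self.pending if p[0] > step]
        for _, decision, outcome_fn in due:
            self.credited.setdefault(decision, []).append(outcome_fn())
```

An agent that only looks at instant feedback sees an empty ledger for most of its important decisions; keeping the pending entries alive is precisely what makes medium- and long-term learning possible.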

Currently, most agents cannot handle these longer-term cycles because they primarily learn from what is happening “right now.” They focus on instant feedback rather than considering the broader impact of their actions. This limits their capacity to understand the full consequences of their decisions, particularly those that only become evident much later.

It is critical to recognize that true learning and effective decision-making require balancing the short-term results with medium- and long-term outcomes. Long-term feedback loops are essential for achieving sustainable and meaningful progress. Future developments in agent design must account for these extended timelines to allow for smarter and more responsible decision-making in complex and dynamic environments.

Two Ways to Use Language Models for Writing

Language models have become powerful tools for writers, offering opportunities to enhance both the ideation and execution phases of writing. There are two main ways to use these tools when creating a text.

The first approach involves using the language model as a brainstorming partner. It acts as a sparring partner to help you come up with ideas, content, or themes. In this case, the model supports your creative process, but you write the final version of the text yourself.

The second approach is different. Here, you take the role of the idea generator. You think of the key themes, solutions, and content, then ask the language model to craft the final text based on your input. It assists with the actual production of the polished version.

Interestingly, there’s something of a divide in how these two approaches are viewed. One of these methods tends to face criticism, while the other is widely accepted. The brainstorming method, where the writer maintains control over the final output, is often seen as the “right” way to use such tools. In contrast, letting the model write the finished text tends to draw questions about creativity, originality, and over-reliance on technology. It’s an interesting cultural reflection: does the process of writing matter more than the result, or is the content itself what truly counts?

At the heart of this conversation lies that very question. What is most important in writing—what is written or how it’s created? Should the process define its value, or is it the final message that matters most to the reader? For example, is originality tied to the way the text is shaped, or is it about the ideas and substance behind it, no matter how it’s written?

Ultimately, the answer might depend on the context. Perhaps the method of collaboration isn’t as important as the intention behind the work and the quality of the message. Whether you use a language model as a brainstorming partner or a full-fledged writing assistant, the value of your writing will always lie in its ability to connect with the reader.

AI-Assisted Gaming: A New Dimension in Gameplay

The concept of automated agents in games has been around since the beginning of video gaming. From chess bots to difficulty settings and computer-controlled opponents, these systems have always been part of how games are designed and played. Traditionally, these agents served robotic, automated purposes, following pre-programmed rules to either challenge the player or add complexity to the game environment.

AI-assisted gaming, however, goes one step further. This emerging genre shifts these systems away from being simple opponents or automated mechanics and transforms them into collaborative partners. In these games, the agent acts as a teammate, sidekick, or co-player—creating the sensation of gaming alongside another real person.

In action RPGs, for example, you might have an agent playing alongside you as though it were another player. You can build the agent’s character just like you would your own, providing instructions and feedback on how it plays. Over time, it learns and adapts based on playing with you, evolving into a personalized companion that complements your strengths and supports your strategies.

This fundamentally changes gaming experiences, especially in single-player games. AI-assisted games introduce tactics, builds, and strategies that were previously only possible in multiplayer settings, bringing new approaches to mastering games and exploring creative playstyles. It opens up exciting new dimensions for single-player games, making them feel less solitary and more dynamic.

What’s even more exciting is that this concept can be adapted to nearly any type of game. Whether it’s action, puzzle, RPG, or strategy, the specific approach will depend on the genre, but the application of AI-assisted features can enhance gameplay across the spectrum. For example, agents could act as co-strategists in a tactical game or assist with solving puzzles in a cerebral adventure.

AI-assisted gaming represents a significant leap forward for the medium. By transforming computer-driven agents into collaborative, learning companions, developers are creating more immersive and innovative gaming experiences that expand what players can achieve in both solo and cooperative play.

Object-Oriented Programming: A Failed Abstraction

Object-oriented programming (OOP) is built on abstraction, encapsulating both data and behavior into “objects.” While this approach has dominated software development for years, it doesn’t always align with how programs are created and executed. Often, the abstractions OOP introduces feel mismatched with the dynamic nature of software design and operation.

A key problem lies in how OOP connects data and operations. Take the example of a “person” object in a system. Attempting to define everything a person can do or be involved in within a single object quickly becomes impractical. People interact with systems in countless ways, such as being members of different groups, triggering workflows, or being part of external processes. Trying to encapsulate all these interactions within one abstraction leads to unnecessary complexity and rigidity. Software is not inherently about “things” or “objects”—it’s about tasks, processes, and services.

Much of software development is process-oriented. Applications often center around actions that need to be performed, such as validating business logic, fetching data, or completing a workflow. Similarly, functional programming approaches emphasize operations acting on data rather than tightly coupling both into objects. These paradigms reflect an underlying truth: software is primarily about what happens, not static representations of entities.
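
The process-oriented alternative can be sketched in a few lines: plain data plus free functions that describe what happens, instead of a “person” object that tries to own every behavior it might ever be involved in. The domain details here are invented.

```python
from typing import TypedDict

class Person(TypedDict):
    """Plain data: no behavior, just shape."""
    name: str
    email: str

def validate_email(person: Person) -> bool:
    """A process acting on data, owned by the signup workflow."""
    return "@" in person["email"]

def add_to_group(groups: dict[str, list[str]], group: str, person: Person) -> None:
    """Membership lives in the membership process, not inside Person."""
    groups.setdefault(group, []).append(person["name"])
```

Each workflow owns its own logic and merely references the person data, so adding a new process never forces a change to a central `Person` class.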

Thinking about software as “something that happens” can lead to cleaner, more practical designs. Programs are dynamic systems where tasks unfold over time based on input, rules, and workflows. By focusing on what needs to be done rather than forcing abstractions into objects, developers can design systems that align more closely with how software actually operates.

This doesn’t mean abandoning object-oriented programming entirely. OOP can be useful when modeling concepts with clear boundaries and well-defined behaviors, but it’s important to recognize that abstractions based on static objects are not always the best fit. Often, process-oriented thinking offers a simpler and more scalable solution, especially when software revolves around actions rather than entities.

Developers don’t need to follow a rigid paradigm, but rethinking abstractions can lead to better decisions. Software is dynamic, and understanding it as tasks, processes, and workflows rather than a collection of objects can help developers create more practical and adaptable solutions.