Author: Nicolai Friis

Automating a Business from Day One

Starting a business begins with a clear focus on delivering products and services that create value for customers. This focus is the foundation upon which everything else is built. To ensure resources are directed toward your core objectives, tasks that are not part of the core business should be automated from the very first day. By adopting a mindset where automation is the default choice for solving challenges, you create a lean and efficient operation from the outset.

Automation should also be considered within the core business itself. The first step is always to explore whether a task can be handled by automated systems. Only when automation is not feasible should manual solutions be examined. If neither automation nor manual solutions meet your needs, hiring someone to perform the task becomes the last resort. This progression ensures that staffing decisions are made carefully and only when absolutely necessary.
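The progression described above (automate first, fall back to a manual solution, hire only as a last resort) can be sketched as a small decision function. This is a minimal illustration; the names and the boolean inputs are my own, not a prescribed implementation:

```python
from enum import Enum

class Approach(Enum):
    AUTOMATE = "automate"
    MANUAL = "manual"
    HIRE = "hire"

def choose_approach(can_automate: bool, manual_is_sufficient: bool) -> Approach:
    """Apply the automation-first progression: automate if feasible,
    fall back to a manual solution, and hire only when neither works."""
    if can_automate:
        return Approach.AUTOMATE
    if manual_is_sufficient:
        return Approach.MANUAL
    return Approach.HIRE
```

The point of encoding the order explicitly is that hiring can never be reached while a cheaper option remains on the table.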

The result is a business with few or no employees, relying instead on efficient systems and processes. For tasks that cannot be effectively handled internally, a strong emphasis should be placed on using external partners or services. This helps maintain a streamlined structure while addressing specialized needs without overburdening internal resources.

An automation-first approach lowers operational costs, reduces dependency on human labor, and makes your business more scalable and adaptable. By choosing automation as a guide for decision-making from the very beginning, you create a business designed to thrive with efficiency and focus.

Navigating Levels of Abstraction in Knowledge Work

In knowledge work, understanding the level of abstraction you’re operating on can make all the difference. A famous example helps illustrate the concept: “This is a picture of a painting of a painting of a pipe.” At the simplest level, we have the concrete—the pipe itself. Beyond that, we move to representations: the painting of the pipe, which is one level removed, and then the painting of the painting, which adds yet another layer of abstraction.

The concrete level is the easiest to grasp—it’s direct and tangible. However, the higher levels of abstraction, those that deal with representations of representations or broader conceptual thinking, are harder to understand and often tricky to apply appropriately. Knowing when and how to move beyond the concrete level is a skill, one that isn’t always intuitive.

Humans and language models alike face challenges in handling abstraction. We can easily mix up the layers, treating abstract representations as if they were concrete objects. Some individuals and systems struggle to work beyond the concrete level at all, sticking only to the simplest, most tangible concepts. This can lead to oversimplified results when the task or concept at hand requires more nuanced thinking across abstract layers.

These difficulties also create challenges for building tools that support knowledge work. Tools must be designed to navigate and present information at multiple levels of abstraction, making them both accessible and capable of handling complexity. This is especially vital when creating systems or agents intended to work alongside humans, as their ability to handle abstraction impacts their usefulness and relevance in knowledge-intensive tasks.

Understanding levels of abstraction isn’t just a theoretical exercise—it’s an essential skill for working smarter. By recognizing these layers and the challenges they bring, we can design better tools, make better decisions, and approach problems with greater clarity. Mastering abstraction enables us to connect the concrete with the conceptual, leading to more effective knowledge work overall.

Human Capacity for Synergy with Advanced Tools

Learning or applying knowledge without access to a language model or advanced agent will soon feel like an unnecessary handicap. It’s a bit like flipping through physical books or copying information out by hand—technically possible, but hardly practical in a world where faster, more intelligent tools are readily available. As these tools become more integrated into everyday life, working without them will seem out of place, limiting our efficiency and ability to keep pace.

In the coming years, education systems, exams, tests, and even job interviews are likely to revolve around collaboration with tools. The focus will shift from simply testing what an individual can accomplish alone to what they can achieve in partnership with advanced systems. These tools are no longer just add-ons; they’re becoming central to how we learn, apply knowledge, and grow professionally. The key question in this future won’t be whether someone can scale knowledge on their own, but how well they work alongside systems that amplify their potential.

Success will hinge on our ability to collaborate effectively with agents and tools rather than solely relying on individual effort. These systems scale our capabilities, allowing us to process information, solve problems, and innovate in ways that are unattainable alone. The challenge will be learning how to navigate this partnership—understanding how to use these tools wisely and creatively to reach new heights. Those who embrace this synergy will unlock greater possibilities than ever before.

When Automation Reveals What Doesn’t Need to Be Done

As organizations work on automating tasks using tools like language models, it often becomes clear that many of these tasks don’t actually need to be done. The process forces us to step back and question why certain workflows exist, and whether they truly add value. Sometimes, the answer reveals that these tasks were never necessary in the first place.

When tasks are identified as unnecessary, they don’t need to be automated. There’s no benefit in optimizing something that shouldn’t exist. Instead, organizations can choose to stop performing these tasks entirely. Eliminating them frees up time, energy, and resources that can be focused on work that matters.

This realization naturally impacts employees. Some roles may no longer be needed when tied to redundant tasks, leading to difficult decisions like letting go of staff. However, employees can often shift their focus to more relevant work instead of being sidelined. This reallocation allows the organization to preserve talent while ensuring efforts align with meaningful goals.

Automation isn’t just about improving existing workflows—it’s about challenging the assumptions that created them. By focusing on what truly matters, organizations can streamline their operations and build a foundation for lasting value.

Balancing Resource Allocation Between Platform Stack and App Domain

When building software, a key question often arises: how much of your development resources should go towards maintaining the tech and platform stack versus focusing on features and functionality for the app domain? Ideally, only about 10% should be spent on work outside the app domain, while 90% should be dedicated to creating improvements that directly benefit your users. This balance ensures that the product evolves in meaningful ways, addressing user needs and maintaining competitiveness.

However, reality often looks different. Teams sometimes allocate the bulk of their attention to the platform stack, neglecting app functionality. Features the users care about may be delayed or underdeveloped because too much time is spent on infrastructure and internal systems. This tendency can create a disconnect between the system’s technical brilliance and what users experience.

The rise of DevOps has also played a role in these misaligned priorities, as merging development and infrastructure responsibilities has blurred traditional boundaries. While the intention is to streamline workflows, it can lead to a drain on resources. Developers are often absorbed by platform stack tasks that become overly complex and expensive without necessarily adding proportional value to the users.

The temptation to over-engineer is a common pitfall. Teams often ask themselves if the stack needs to be sophisticated or costly, but many times the answer is no. A simpler, lean stack can be more maintainable and sufficient for current needs while leaving room for future adjustments. Overcomplicating things usually comes at the cost of focus on app functionality.

An important factor to consider is the psychological comfort developers find in tech-only tasks. Working in the platform stack feels safer, as mistakes are often less visible than in user-facing areas. The app domain, on the other hand, can feel risky. Errors in the app can directly impact users, making developers more hesitant to focus their efforts there. Yet, this avoidance often leads to resource misallocation and slower progress where it matters most.

To restore balance, teams should prioritize user value when allocating resources. Metrics that emphasize feature use and customer satisfaction rather than internal stack achievements can help shift focus back to user needs. Collaboration between developers working on the tech stack and those focused on app functionality can ensure both areas remain aligned. Additionally, fostering a culture where developers feel safe tackling user-facing challenges can ease the fear of mistakes and encourage innovation.

Finding the right balance is critical. The stack should empower the app domain, not overshadow it. By dedicating the majority of resources to building and improving features that matter to the users, teams can create products that deliver real impact while keeping the underlying infrastructure lean and efficient.

Does the Quality of Language Model Output Reflect User Competence?

Language models are becoming widely used for tasks like writing, brainstorming ideas, or solving problems. However, the quality of their output can vary significantly from user to user. Some report achieving results that are almost perfect and consistently reliable, while others experience chaotic, low-quality attempts that fall far short of their expectations. This wide range of outcomes raises an interesting question: does the competence of the user directly impact the quality of the outputs from a language model?

The competence of a user seems to be an important factor. A skilled user often knows how to craft a clear and precise prompt, provide enough context, and refine the model’s responses until they reach their desired result. In contrast, less experienced users might write vague, unclear, or overly broad instructions, which can lead to results that feel like a poor imitation of what they wanted. Essentially, the quality of the output often mirrors the quality of work the user could plausibly have produced on their own.

Getting reliable and consistent results requires understanding how to effectively interact with a language model. Competent users tend to excel because they treat the model like a collaborator, shaping and guiding its output step by step. For example, they might clarify their request with specific formats, break complex tasks into smaller parts, or offer examples of what they’re looking for. Meanwhile, users who don’t have this approach sometimes struggle to communicate their needs or expect the model to “read their mind.”
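As a rough illustration of those techniques—stating the task, pinning down the output format, and supplying worked examples—here is a small prompt-building helper. The function and its parameters are hypothetical, just one way to make the guidance above concrete:

```python
def build_prompt(task: str, output_format: str,
                 examples: list[tuple[str, str]]) -> str:
    """Assemble a prompt that states the task, fixes the expected
    output format, and includes worked input/output examples
    (often called few-shot prompting)."""
    lines = [f"Task: {task}",
             f"Respond in this format: {output_format}",
             ""]
    for given, expected in examples:
        lines.append(f"Input: {given}")
        lines.append(f"Output: {expected}")
        lines.append("")
    lines.append("Input:")  # the user's actual input is appended here
    return "\n".join(lines)
```

A vague one-line request and a prompt built this way can produce very different results from the same model, which is the gap between users the essay describes.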

The good news is that any user can improve their ability to work with language models over time. By becoming more deliberate in their process—rewriting prompts for better clarity, breaking tasks into smaller steps, or providing examples—they will often see improvements in the results. Experimentation is key, as is the patience to refine the interaction instead of expecting perfection on the first try. Whether experienced or not, the effort users put into guiding the model can ultimately make all the difference.

The relationship between user competence and output quality highlights that these tools are bridges rather than shortcuts. They are only as effective as the guidance they are given, and even users starting at a low level can learn to achieve better outcomes with practice. Familiarity with how to communicate with language models unlocks their true potential, allowing anyone to move closer to achieving near-perfect results.

Revolutionizing Software Development with Application Specification Language (ASL)

Application Specification Language (ASL) introduces a higher-level approach to software development, focusing on describing what an application does instead of how it is implemented. Unlike programming languages, ASL operates at a level above traditional coding, providing a formal abstraction of software that is technology-agnostic. It offers a structured way to represent applications, allowing developers to define functionality without being tied to specific frameworks or languages.

ASL can be generated from existing code or derived directly from human-readable requirements and specifications. This flexibility allows it to bridge the gap between conceptual design and implementation. By capturing the software’s high-level concepts, ASL becomes the foundation for creating new implementations in any programming framework or language. It enables both upgrades and migrations, making it an effective tool for transitioning legacy systems to modern platforms.

Once ASL code is generated—either manually or by using a language model—it serves as a formal specification of the software system. From this specification, actual code can be created within diverse technologies. This abstraction dramatically simplifies development processes, as the focus shifts from implementation details to broader system functionality.
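The text does not fix a concrete syntax for ASL, but the idea can be sketched: a technology-agnostic description of entities and operations, from which stubs in any target language could be generated. Everything below—the field names, the spec shape, the generator—is an illustrative assumption, not a defined ASL standard:

```python
# A hypothetical ASL-style specification: it describes WHAT the
# application does (entities and operations), not HOW it is built.
spec = {
    "application": "task-tracker",
    "entities": {
        "Task": {"title": "string", "done": "boolean"},
    },
    "operations": [
        {"name": "create_task", "input": {"title": "string"}, "output": "Task"},
        {"name": "complete_task", "input": {"id": "string"}, "output": "Task"},
    ],
}

def to_stub(spec: dict, language: str = "python") -> str:
    """Render the technology-agnostic spec as function stubs in one
    target language -- the implementation step that a code generator
    or language model would perform from the specification."""
    lines = [f"# {spec['application']} ({language} stubs)"]
    for op in spec["operations"]:
        args = ", ".join(op["input"])
        lines.append(f"def {op['name']}({args}): ...  # returns {op['output']}")
    return "\n".join(lines)
```

Because the spec carries no framework details, the same `spec` could in principle drive generators for other languages, which is the migration and upgrade story the essay outlines.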

The practical benefits of ASL are significant. Its technology-neutral nature ensures that applications can be adapted, migrated, or upgraded without requiring extensive rewrites. It boosts productivity by automating code generation and reduces the errors that come with manual coding. Additionally, it future-proofs applications by decoupling them from specific languages or frameworks, making them more adaptable to emerging technologies.

ASL is particularly useful for migrating legacy systems, prototyping new software, and developing applications for diverse ecosystems. It streamlines workflows by automating repetitive tasks and ensures a consistent structure throughout development cycles. While ASL has many advantages, human oversight is still critical to ensure accuracy, especially when generating specifications from complex systems or existing codebases.

By revolutionizing how developers approach software design, ASL simplifies the development process and ensures smoother transitions between frameworks and technologies. Its focus on high-level abstraction empowers developers to innovate and create adaptable, sustainable systems for the future of software engineering.

Unlocking New Opportunities Through Enhanced Productivity in Software Development

The growing capabilities of advanced language models have led to significant improvements in software development, making the process faster, more affordable, and more efficient. These changes aren’t limited to software alone—they apply to other fields as well, enabling a wide range of disciplines to benefit from enhanced productivity. This raises important questions about how this shift will impact the demand for developers and professionals.

Some worry that increased productivity will reduce the need for skilled workers. After all, if more can be accomplished with less, wouldn’t the demand for people decline? However, a closer look suggests the opposite. Instead of decreasing the need for professionals, the efficiency brought by language models opens the door to digitalize and automate tasks that were previously considered too expensive or impractical. Areas where software development wasn’t cost-effective or valuable enough in the past now have the opportunity to flourish.

Lower costs and higher productivity allow industries to explore new solutions that were once out of reach. This change creates countless opportunities, from tackling previously unprofitable areas to empowering smaller businesses and underserved sectors. Developers and experts won’t become obsolete; instead, they’ll have the capacity to accomplish much more. The focus shifts from competing with technology to leveraging it as a powerful collaborator.

What’s clear is that these advancements amplify, rather than replace, human potential. Language models help reduce time spent on repetitive tasks and free up space for creativity and innovation. They serve as tools that magnify what professionals can achieve, whether that’s creating affordable solutions, addressing global challenges, or driving digital transformation across industries.

In the end, greater productivity isn’t about doing more with less—it’s about expanding what’s possible. Developers and experts remain essential to this process, unlocking new horizons as technology empowers them to achieve extraordinary results. Far from reducing opportunity, this transformation points toward a future where human ingenuity and technological collaboration redefine what can be accomplished.

Scaling the Use of Organizational Knowledge

Knowledge is one of the most valuable assets in any organization. It lives in different places—within data, processes, and people—and serves as the foundation for decision-making, efficiency, and innovation. But how do organizations effectively scale the use of this knowledge? By understanding where it is found and taking practical steps to make it accessible, reusable, and impactful, companies can unlock its full potential.

Knowledge resides in three main areas. First, it exists in the data and information within an organization: the structured and unstructured content stored in systems, ranging from databases to emails. This data holds significant value, but only when it is well-organized and easy to access. Second, knowledge is embedded in the systems and processes an organization uses to perform its work. These workflows and methodologies reflect accumulated experience and best practices. Finally, and perhaps most importantly, knowledge exists in the minds of people. Employees bring expertise, creative problem-solving, and critical insights grounded in their experience and skills.

Scaling the use of knowledge means finding ways to capture, share, and apply it across the organization. To start, data and information should be structured and centralized so it can be easily searched and retrieved. Systems and processes should be designed not only for consistency but also for adaptability, ensuring that they can evolve with the organization’s needs. Knowledge that resides in people can be scaled through collaboration, mentoring, and cultivating a culture of openness and knowledge-sharing.

Technology can play a significant role in making knowledge more accessible at scale. Tools such as language models and other digital systems can help extract, summarize, and organize information, allowing employees to focus on more creative and strategic tasks. However, scaling knowledge shouldn’t rely solely on technology—it’s equally about empowering people and creating an environment where expertise can flow freely.

In short, the key to scaling knowledge lies in understanding where it lives, finding ways to unlock it, and building systems that ensure its usefulness grows along with the organization. By bridging the knowledge found in data, systems, and individuals, companies can create a powerful foundation for growth, resilience, and innovation.

Testing Thought Processes with a Language Model

There’s an interesting way to experiment with reasoning by using a language model. Start by breaking your thought process into clear points. For example, you can structure your reasoning with a beginning, middle, and conclusion. Then, remove one part—whether it’s the start, middle, or end. Once you’ve removed a section, you can test the model to see if it guesses or reconstructs the missing part.
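A minimal sketch of this experiment, assuming the reasoning is kept as three named sections; the helper builds the fill-in-the-blank prompt one would then hand to a model (the section names and placeholder text are my own):

```python
def mask_section(reasoning: dict, section: str) -> str:
    """Build a fill-in-the-blank prompt from a structured argument
    by masking one section (beginning, middle, or conclusion)."""
    parts = []
    for name in ("beginning", "middle", "conclusion"):
        text = ("[MISSING -- please reconstruct]"
                if name == section else reasoning[name])
        parts.append(f"{name.capitalize()}: {text}")
    parts.append("Fill in the missing section so the argument is coherent.")
    return "\n\n".join(parts)
```

Comparing the model’s reconstruction against the section you actually wrote is the interesting part: agreement suggests your step was the statistically expected one, divergence may point at a gap or an original move in your reasoning.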

The fun of this approach lies in seeing how the model handles incomplete reasoning. It might surprise you by filling in the blanks with unexpected ideas. You could end up gaining new insights or learning what the majority might think is the “correct” continuation based on patterns it has learned. This isn’t just about creativity; the results might even suggest what you should have thought, statistically speaking, by leveraging the shared logic of many thought processes.

If you’re curious about sharpening your reasoning or getting inspiration for new ways of thinking, this method is worth a try. Structure your ideas, remove a section, and let the model fill in the gaps. You never know—you might walk away with a better understanding of your own thought patterns or discover something entirely unexpected.