Author: Nicolai Friis

Building a Platform for Automating Control Processes

Control processes are essential for ensuring consistency, compliance, and reliability in business operations. Automating these procedures can save significant time, reduce errors, and improve overall efficiency. This guide outlines the key elements needed to design a platform that supports the creation, execution, and testing of control processes.

The platform should allow users to log in securely and create new control processes. A straightforward setup makes it easier for users to begin their workflows. Once logged in, users can start by selecting the subject of the control process. Subjects might include physical or digital objects, individuals, documents, data, or even larger business processes. Providing flexibility in subject selection ensures broad applicability across different use cases.

From there, the platform should help users identify relevant rules, laws, and regulations connected to the selected subject. This step is crucial for ensuring compliance. Whether referencing internal rules or external regulations, users need tools to locate these frameworks easily.

Defining control outcomes is an essential step. Outcomes represent the end goals of the control process and ensure the work done has measurable and actionable results. Success might be defined as confirming compliance, identifying gaps, or achieving specific metrics like reduced error rates or improved processing times.

The workflow design, or control flow, forms the core of the control process. It maps out how tasks are executed—from sequential steps to branching pathways that account for conditional outcomes. This structure needs to be intuitive but flexible, allowing users to adapt workflows as requirements evolve. Supporting templates and visual design tools can simplify the creation process further.

Before deployment, control processes should be tested and validated within a simulation environment. Simulation helps identify weaknesses or inaccuracies while ensuring the process handles typical use cases and edge cases effectively. Users can iterate on their workflows based on test results, reducing the risk of issues when processes go live.

Building and refining automation for control processes is an ongoing effort. A well-constructed platform empowers users to create robust workflows while maintaining compliance and improving efficiency. Following these steps lays the groundwork for a system that evolves with organizational needs while consistently delivering value.

Automating Control Processes Using Language Models

Many industries rely on control processes to ensure operational accuracy, maintain quality, and comply with regulations. Common examples of these processes include deviation control, quality control, compliance checks, fraud detection, and documentation control. These checks often happen at different stages, such as pre-controls, post-controls, or through mapping workflows. Traditionally, these processes have been performed manually, which is time-consuming and error-prone.

Language models offer a new way to automate control processes without needing to specify or code every detail explicitly. Instead of relying on predefined rules, language models work by identifying patterns. This makes them effective at detecting deviations or irregularities on their own. Specialized versions of these models can be fine-tuned to focus on specific tasks, such as fraud detection or anomaly identification, making them powerful tools for modern automation.
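To make the idea concrete, here is a minimal sketch of prompting a language model to flag deviations against a rule. The `call_model` parameter is a stand-in for whatever LM client is actually used; the prompt wording and the stubbed reply are illustrative assumptions, not a tested recipe.

```python
# Hypothetical prompt template for rule-based deviation detection.
DEVIATION_PROMPT = """\
You are a control-process assistant.
Rule: {rule}
Record: {record}
Does the record deviate from the rule? Answer DEVIATION or OK, then explain.
"""

def build_prompt(rule: str, record: str) -> str:
    return DEVIATION_PROMPT.format(rule=rule, record=record)

def check_record(rule: str, record: str, call_model) -> bool:
    """Returns True when the model flags a deviation."""
    reply = call_model(build_prompt(rule, record))
    return reply.strip().upper().startswith("DEVIATION")

# Stubbed model call, for illustration only.
fake_model = lambda prompt: "DEVIATION: amount exceeds the stated limit."
flagged = check_record("Invoices must not exceed 10,000.",
                       "Invoice #42: amount 12,500.", fake_model)
```

Note that no per-rule code is written here: the rule is stated in plain language, and the model does the pattern matching.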

To automate control processes using language models, it’s helpful to take a step-by-step approach. First, identify what needs to be controlled, what data is required, and where this data resides in systems and processes. This involves close collaboration with domain experts such as lawyers, engineers, or healthcare professionals, depending on the field. It’s important to focus on areas with high potential for improvement, where automation can have the greatest impact.

Next, determine which control steps and processes are suitable for automation. Processes where there are large data volumes, significant manual effort, or readily available data are often good candidates. Once areas for automation are identified, the next step is to test with a proof of concept. Starting with simple examples in a secure sandbox environment helps validate the model’s capabilities. Testing different language models is essential to finding the best fit for specific needs.

If the proof of concept shows promise, the next step is to run a limited pilot program. A subset of real-world data can be used to experiment with automated controls while comparing different approaches. The results should be carefully analyzed to assess whether automation delivers measurable improvements. Pilots should function as separate processes to avoid disrupting ongoing workflows while testing scalability and reliability.

When automated controls prove valuable in pilot testing, the final step is scaling up for full production. Successful solutions can be integrated into live systems to streamline workflows and handle larger data volumes. Monitoring and refinement are critical during this stage to ensure continued effectiveness and adaptability.

While automating control processes offers significant advantages, practical challenges need to be addressed. Collaboration with subject matter experts ensures that automation captures all critical requirements. Reliable, accurate datasets are key to achieving good results. Additionally, building trust among stakeholders is crucial to gaining buy-in and ensuring that automated controls are accepted. Finally, successful implementation relies on starting small, testing thoroughly, and scaling gradually.

The potential for automating control processes with language models is immense. By reducing manual workload and improving accuracy, organizations can increase efficiency and build smarter workflows. Starting with smaller tests and scaling gradually provides a clear path to unlocking these benefits while maintaining quality and compliance.

Training a Language Model for Text Comparison

Text comparison represents a unique and challenging use case for language models. Unlike tasks such as question answering, searching for information, or generating content, text comparison focuses on analyzing and identifying subtle differences and patterns between two or more pieces of text. This process is geared towards detecting how one text deviates from another, whether in structure, tone, or meaning.

The model’s focus is not on answering questions but rather on recognizing patterns of deviation—an area that traditional models often overlook. These deviations can reveal meaningful insights and are particularly useful in contexts where precision and detail matter. For instance, a text comparison model can identify subtle linguistic shifts, rephrased sections, or even structural differences between similar documents.

This use case stands apart from typical applications like chat, search, and writing assistance. While those tasks focus on interaction, retrieval, or generation, text comparison prioritizes subtle analysis. Detecting nuances often requires a tailored approach, one that emphasizes detail over generalized functionality.

The training process involves equipping the model to capture and interpret these patterns effectively. This requires specialized datasets where textual pairs highlight similarities and differences. Examples might include rephrased paragraphs, altered clauses in contracts, or variations in translated content. Training the model to identify these deviations ensures it is uniquely suited for tasks like plagiarism detection, legal document review, or content consistency verification.
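One possible shape for such a dataset is a list of labeled text pairs that can be flattened into input/target records for fine-tuning. The field names and label vocabulary below are assumptions for illustration, not a fixed standard.

```python
# Illustrative pairwise training data for text comparison.
pairs = [
    {"text_a": "The supplier shall deliver within 30 days.",
     "text_b": "The supplier shall deliver within 60 days.",
     "label": "deviation",    # a substantive change (altered clause)
     "kind": "meaning"},
    {"text_a": "Payment is due on receipt of the invoice.",
     "text_b": "The invoice must be paid upon receipt.",
     "label": "equivalent",   # rephrased, same meaning
     "kind": "paraphrase"},
]

def to_training_example(pair: dict) -> dict:
    """Flatten a labeled pair into an input/target record for fine-tuning."""
    return {
        "input": f"Compare:\nA: {pair['text_a']}\nB: {pair['text_b']}",
        "target": pair["label"],
    }

examples = [to_training_example(p) for p in pairs]
```

The key property is that each example pairs two texts and asks the model to classify the relationship between them, rather than to answer a question about either text alone.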

Applications for this type of specialized model are vast. In academia, it can help detect cases of paraphrased plagiarism. In the legal field, it ensures that slight shifts in agreement wording don’t go unnoticed. For content creators working across languages or platforms, the model can maintain consistency with the original material while catching deviations in tone or meaning.

By training a language model specifically for text comparison, we can address challenges that generalized systems struggle to handle. This tailored approach ensures accuracy, reliability, and meaningful insight for industries and tasks that rely on precision. The development of such focused use cases underscores the potential for innovation in language modeling and opens up exciting opportunities for problem-solving in critical domains.

Give me a problem

“What’s your biggest problem right now?” This question can be one of the most powerful tools for selling, whether it’s yourself, a technology, or a service. The idea is simple: by directly addressing a real and concrete issue the recipient is struggling with, you immediately demonstrate your value, engage them emotionally, and create a productive dialogue.

The key is to work with an actual, specific problem the recipient cares about. This isn’t about vague concepts or abstract ambitions—it’s about something tangible and relatable. A real problem resonates because it’s something the audience can logically understand and emotionally feel. It should be specific enough that it can be broken down into actionable details, allowing you to dive into solutions that matter.

Here’s where choosing the right level of detail is crucial. For example, saying, “I want to cure cancer” is too broad and lofty to be actionable. However, if you refine it to, “I want to cure cancer, but I’m struggling with researchers spending too much time writing reports to secure funding,” it becomes a problem you can tackle. Similarly, “The company needs to increase revenue” is too generic, whereas “We need to boost revenue, and several consultants currently don’t have ongoing projects” is grounded and solvable. The same applies in tech: “The development team isn’t delivering all the functionality users want” is abstract, but “The team spends most of their time on non-functional tasks and can’t deliver new features quickly enough” identifies specific challenges and barriers.

Starting with the recipient’s biggest problem allows you to demonstrate your skills in problem-solving, analysis, and creativity. By breaking the problem into smaller components, you can sketch alternatives, evaluate solutions “on paper,” and engage them in a focused conversation about possibilities. It’s not just about providing answers—it’s about encouraging clear thinking and collaboration.

This method has wide-ranging applications. In a job interview, you can show your ability and expertise by addressing the company’s key challenges. In sales, you can frame your product or service as the solution to their major pain points. During demonstrations, you can showcase your capabilities by working with relatable, real-world examples.

At its core, this approach builds credibility and trust. It’s a way to prove your ability to navigate and tackle tough challenges while focusing on what truly matters to the recipient. By connecting with someone’s most pressing issue, you encourage collaboration, deepen the dialogue, and position yourself as someone who delivers results.

So next time you’re pitching yourself, a product, or an idea, try starting with this question: “What’s your biggest problem right now?” It’s a way to cut through distractions and focus on creating real value. Sometimes, the simplest approach is the most impactful.

Overcoming the Pitfalls of Generic Domain Models

Domain models are essential tools for structuring complex systems, ensuring they align with the needs of their domain and are usable for developers, users, and stakeholders. However, a frequent problem arises when these models become overly generic, favoring abstract, universal representations over specific, meaningful ones. While high abstraction might seem appealing for flexibility, it can introduce significant challenges in usability, implementation, and maintenance.

When the abstraction level is too high, everything starts to blur into vague concepts like “data” or “thing.” These overly generic representations fail to capture the unique aspects of the domain and offer little guidance for users or developers. The model becomes hard to learn and impossible to interpret without extensive documentation: users cannot glean its intended use or functionality from the model itself and must rely on external instructions instead. It sacrifices usability for flexibility in a way that ultimately benefits neither.

In implementation, generic domain models also create complexity. Developers struggle to map abstract concepts to concrete systems, which can lead to errors. Without clear distinctions and definitions, data often gets mixed up or incorrectly applied, increasing the risk of bugs and reducing the system’s reliability. These models also tend to be incompatible across implementations, as different teams interpret them in varying ways, resulting in inconsistencies in how they’re applied.
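The contrast is easy to see in code. Below is a small sketch (the invoice domain and all field names are invented for the example): the generic version tells a developer nothing about what the fields mean, while the specific version documents the domain in its own structure.

```python
from dataclasses import dataclass
from datetime import date

# Overly generic: everything is a "thing" holding untyped "data".
generic_invoice = {"type": "thing", "data": {"v1": "ACME", "v2": "2024-05-01"}}
# Nothing here says what "v1" or "v2" mean, which fields are required,
# or what operations make sense - all of that lives in external documentation.

# Domain-specific: the model itself names the concepts and their rules.
@dataclass
class Invoice:
    supplier: str
    issued: date
    amount_cents: int

    def is_overdue(self, today: date, terms_days: int = 30) -> bool:
        """Payment terms are part of the domain, so they live on the model."""
        return (today - self.issued).days > terms_days

invoice = Invoice(supplier="ACME", issued=date(2024, 5, 1), amount_cents=125_000)
```

The specific model is not less flexible where it matters: new invoice fields can be added without ambiguity, whereas the generic dictionary accumulates unlabeled keys that different teams will interpret differently.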

Striking the right balance between generalization and specificity is critical. If the abstraction is too high, the model becomes generic and error-prone. If it’s too low, it becomes overly tailored to a single use case, losing the flexibility necessary for broader applications. The goal is to find an abstraction level that captures the essence of the domain, while remaining intuitive and adaptable.

To find this optimal balance, clarity should always be prioritized. It’s important to identify the central concepts and relationships unique to the domain and avoid reducing them to overly vague terms. Collaboration with domain experts, users, and developers is also key. These stakeholders can help validate the model’s design, ensuring it meets the needs of its intended audience and aligns with real-world use cases.

Refining the model should be an iterative process. Feedback from users and testing in practical scenarios can reveal whether the abstraction level provides sufficient guidance while maintaining flexibility. Real-world validation ensures that the model works in practice, not just in theory.

Overgeneralized domain models undermine their own purpose. They create unnecessary complexity, increase the risk of errors, and fail to guide developers and users. A well-crafted domain model finds the middle ground—specific enough to provide meaningful structure, yet flexible enough to grow alongside evolving requirements. Thoughtful design and collaboration are the keys to making domain models both effective and intuitive, ensuring they act as strong foundations for long-lasting systems.

Exploring Model Context Protocol (MCP) and kontekst.cloud

Model Context Protocol (MCP) is an open protocol designed to standardize how applications connect with language models (LMs). Think of MCP as being similar to a USB-C port, not for hardware, but for AI-driven systems. It provides a structured way for applications to interact efficiently with data sources, workflows, and tools. The three main features of MCP are resources, prompts, and tools. Resources consist of context and data that the user or model can utilize. Prompts are templated messages and workflows that guide interactions. Tools are functions that a language model can execute to complete specific tasks. This standardized approach makes MCP useful for integrating applications in a clear and repeatable way.

The concepts in MCP have noticeable similarities with kontekst.cloud, a platform that organizes systems around the central concept of “context.” Most features in MCP map directly to kontekst.cloud’s terms. Resources in MCP correspond to content in kontekst.cloud. Tools translate to actions, and prompts could align with agents or actions. However, prompts are tricky to define in kontekst.cloud since they are used differently. One suggestion is to treat them purely as templated messages and separate workflows as their own distinct concept. Unlike MCP, kontekst.cloud introduces threads that capture logs and process information, extending beyond the limited technical logging seen in MCP. This ability to store execution histories helps define workflows and track processes in greater detail.
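The mapping can be written down as a simple translation table. This is a sketch of the correspondence described above, using only the article's own terminology; kontekst.cloud's actual API may name these concepts differently.

```python
# MCP feature -> closest kontekst.cloud concept (per the discussion above).
MCP_TO_KONTEKST = {
    "resources": "content",   # context and data the user or model can use
    "tools": "actions",       # functions a language model can execute
    "prompts": "agents",      # uncertain mapping: could also be plain
                              # templated messages, with workflows split
                              # out as their own concept
}

def translate_feature(mcp_feature: str) -> str:
    """Map an MCP feature name onto its closest kontekst.cloud concept,
    falling back to the original name when no mapping is known."""
    return MCP_TO_KONTEKST.get(mcp_feature, mcp_feature)
```

Notably, "threads" has no entry in the reverse direction: it is a kontekst.cloud concept with no MCP counterpart, which is exactly where the platforms diverge.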

Some challenges exist with terms like “resources” and “data,” as they are too broad and often end up encompassing everything. Kontekst.cloud has made efforts to be more precise by splitting features into content, process data, and actions. The platform uses an endpoint called /data to store all information related to features, but alternatively, /resources could be used. However, the generic nature of these terms still poses some risk of overlap between concepts. Despite this, the flexibility built into kontekst.cloud allows substantial customization, which makes implementing MCP on the platform relatively straightforward.

Kontekst.cloud’s design also enables support for alternative protocols like SOLID or other semantic web technologies. By adding a compatible layer, the platform can easily integrate standards like MCP while retaining the ability to work with other options. This adaptability positions kontekst.cloud as a versatile tool for building interoperable systems. Whether working with structured standards like MCP or experimenting with decentralized architectures supported by protocols like SOLID, kontekst.cloud provides the foundation for highly flexible implementations.

An important distinction between MCP and kontekst.cloud lies in the concept of context itself. In kontekst.cloud, context operates as the central organizing principle and can be seen as the “server” that ties together content, actions, workflows, and threads. MCP lacks this central concept and instead ties resources and tools to individual servers. To bridge this gap, kontekst.cloud could represent each context as its own independent server, assigning a root URL to each. This modular approach enhances scalability and allows workflows to be tied directly to user-specific or application-specific contexts, creating a more personalized experience.

Although MCP excels as a standardized integration protocol, kontekst.cloud takes these concepts further by emphasizing context as the foundation for organizing data and processes. This focus enables richer workflows and simplifies the design of reusable systems. With its ability to support MCP and other protocols, kontekst.cloud isn’t limited by any single system but instead embraces interoperability as a core strength. By combining the standardization provided by MCP with the context-driven modularity of kontekst.cloud, developers can build more scalable and flexible applications tailored to diverse needs.

A Practical Solution for Local Shopping and Service Searches

Finding specific products in local stores can often feel like an impossible task. Whether you’re looking for a niche item like black, 34 cm round chair cushions or something slightly out of the ordinary, there’s no straightforward way to figure out which store has exactly what you need nearby. Despite their dominance, search engines like Google have never been particularly good at helping users discover local products and services.

What’s missing is a dedicated service that makes searching for local products simple and efficient. Imagine a tool where every store could upload their inventory into an easily manageable catalog system, which they could keep updated with minimal effort. This would allow people to search for specific items and instantly see where they can find them locally, instead of wasting time going from store to store or trawling through irrelevant online results.
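At its core, the service is attribute-based search over store catalogs. Here is a toy sketch of that idea; the catalog schema, store names, and search function are all invented for the example.

```python
# A toy in-memory catalog; a real service would index uploads per store.
catalog = [
    {"store": "Nordic Living", "item": "chair cushion",
     "color": "black", "size_cm": 34, "shape": "round"},
    {"store": "Home & Garden", "item": "chair cushion",
     "color": "grey", "size_cm": 40, "shape": "square"},
]

def find_local(catalog: list[dict], **criteria) -> list[str]:
    """Return the stores stocking items that match every given attribute."""
    return [entry["store"] for entry in catalog
            if all(entry.get(k) == v for k, v in criteria.items())]

# The niche query from the example above: black, 34 cm, round cushions.
stores = find_local(catalog, item="chair cushion", color="black",
                    size_cm=34, shape="round")
```

The point of the sketch is the query shape: exact attributes like size, color, and shape are first-class search criteria, which is precisely what general-purpose search engines handle poorly.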

Such a service would be especially helpful for precise shopping needs that aren’t based purely on price or broad functionality. Sometimes you need something highly specific, like a particular size or design, and the usual tools simply don’t deliver these results. A focused local search tool could address this gap, empowering shoppers to save time while highlighting the unique offerings in their surrounding area.

It would also benefit local businesses tremendously. For smaller shops that might struggle to stand out against big retailers, this tool could be a game-changer. By connecting their product catalog directly to nearby customers searching for specific items, they could increase foot traffic and reach people who might otherwise have overlooked them.

In essence, the ability to connect people with local goods and services quickly and effortlessly would strengthen both shopping experiences and community ties. It’s time for a better way to search and shop locally—one that prioritizes both convenience and personal connection.

How Thinking in Reverse Unlocks Hidden Insights

Some connections can be difficult to understand because we aren’t looking at them from the right angle—or, more specifically, the wrong way around. What we see as cause and effect might actually be reversed, with the effect influencing the cause in ways we hadn’t considered. When approaching situations, challenges, or questions, our usual forward-thinking mindset might miss something crucial simply because it follows familiar paths.

A useful way to uncover these hidden connections is to reverse your thought process. By intentionally flipping events or sequences—whether in terms of time, cause-and-effect relationships, or other types of order—you force yourself to view things from a new perspective. This shift can reveal ideas or explanations that might otherwise remain unnoticed.

Looking at things backwards isn’t just about questioning assumptions; it opens up new possibilities for problem-solving and understanding. Reversing your thinking could lead to entirely different explanations for events or inspire alternative ways of approaching challenges. What seems like a clear sequence might turn out to be something completely different when viewed in reverse. Letting go of linear thinking and embracing the opposite can lead to breakthroughs that redefine how we understand the world around us.

Understanding Task Completion Through Communication

Why do we get different responses when we ask someone to “do something,” “finish something,” or “completely finish something”? The phrasing of a request can drastically change how people interpret and approach it. Saying “do something” often implies starting the task without necessarily focusing on completion. For example, asking someone to “clean the living room” might only result in tidying up visible clutter. “Finish something” shifts the focus to completing the task, but the level of thoroughness can vary. If the instruction is to “finish cleaning the living room,” one person might vacuum and dust, while another might only consider the task done once everything is spotless. The most explicit phrasing, “completely finish something,” leaves little room for misinterpretation. It clarifies that the task should be done thoroughly and to the highest standard—whether that means scrubbing every corner or polishing every surface. These distinctions show how subtle wording changes create different expectations about what “finished” actually means.

Context adds even more depth to how we interpret tasks. When you ask someone to “clean the kitchen,” the details of the situation influence understanding. Are you expecting a quick wipe-down before guests arrive, or do you need a deep clean of every cabinet and appliance? Without context, the task may result in a completely different outcome than intended. Clear communication benefits from specifying what “finished” looks like in a given scenario. Context also provides purpose—it lets people know why the task is important and what role completion plays in the larger picture, whether it’s preparing for an event or meeting a deadline.

The way we phrase requests also has a significant impact on how we think about and approach tasks. Slight adjustments in wording can shift focus. Asking someone to “start vacuuming” emphasizes getting the task underway, while asking them to “finish vacuuming thoroughly” sets an expectation of both progress and thoroughness. This isn’t just about clarity—it’s also about psychology. The phrasing we use creates mental cues that guide our actions, whether it’s a simple reminder to begin something or a directive to wrap it up completely.

By being mindful of the words we choose, the context we create, and the clarity we aim for, we can reduce misunderstandings and align expectations more effectively. When asking someone to complete a task is as clear as possible, collaboration flows more smoothly, and everyone involved can focus on what truly needs to be done.

Stories – A Way to Transfer Knowledge

Since the dawn of civilization, storytelling has been our primary way of sharing and preserving knowledge. From oral traditions filled with myths and legends to written texts, films, and interactive media, stories have shaped how we understand the world.

But why are stories so effective? Because they create experiences. Instead of just presenting isolated facts, they embed knowledge in a context, making it easier to understand, remember, and apply. This principle isn’t just useful for humans—it can also transform how we train language models.

How Stories Shape Learning

Stories are more than just entertainment. They act as cognitive frameworks, helping us connect new information to what we already know. Think about how we learn history—not through a list of dates and events, but through narratives about the people who lived them. The same applies to scientific discoveries, moral lessons, and even problem-solving strategies.

By structuring knowledge within a story, we make it relevant and engaging. A well-crafted narrative provides context, emotion, and meaning, making learning a natural and immersive experience.

Using Stories to Train Language Models

The way we train language models today often relies on vast amounts of structured and unstructured data. But what if we approached this process more like teaching a human?

Instead of feeding language models disconnected data points, we can frame information within meaningful stories. This method allows the model to understand not just words and syntax but also the deeper relationships between concepts. Context-rich learning could lead to more intuitive and adaptable language models, capable of reasoning and responding in more human-like ways.
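One simple way to operationalize this is to wrap bare facts in a narrative template before they enter the training data. The template and its fields below are illustrative assumptions, not a method drawn from practice.

```python
# Hypothetical template for embedding a fact in a narrative context.
STORY_TEMPLATE = (
    "{person} faced a problem: {problem}. "
    "After trying {attempt}, they discovered that {fact}."
)

def frame_as_story(fact: str, person: str, problem: str, attempt: str) -> str:
    """Embed a bare fact in a short narrative, giving it cause and context."""
    return STORY_TEMPLATE.format(person=person, problem=problem,
                                 attempt=attempt, fact=fact)

example = frame_as_story(
    fact="water boils at a lower temperature at high altitude",
    person="A mountain guide",
    problem="pasta stayed undercooked at base camp",
    attempt="longer cooking times",
)
```

The same fact could be stated as an isolated sentence, but the framed version ties it to a motivation and a consequence, which is the relational structure the section argues for.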

A Future Built on Narrative Learning

Imagine a world where language models learn through carefully curated stories—absorbing knowledge in the same way we do. This could revolutionize fields like education, research, and communication.

By embracing storytelling as a core method for training, we’re not just improving language models. We’re reinforcing the fundamental truth that knowledge, when placed in the right context, becomes something more than just data—it becomes wisdom.