Author: Nicolai Friis

How to Catch Hidden Assumption Errors in Your Code—And Can a Language Model Help?

Every developer has encountered a bug that “shouldn’t have happened.” Often, these bugs stem from hidden assumptions in the code.

Take the example of a system handling substitute employees. It assumes that every substitute is assigned to replace someone. But in reality, substitutes may exist because no one currently holds the position. This faulty assumption leads to a null pointer exception, and a database constraint failure makes things worse.

These issues could have been caught earlier. But how? Could a language model (LM) help uncover such flawed assumptions before they break production?


Understanding Assumption-Based Bugs

An assumption-based bug happens when code is built on an unchecked belief about how the system works.

In the substitute employee example:

  • The system assumes every substitute has a direct assignment.
  • In some cases, no one holds the position, making the assumption false.
  • This leads to a null pointer exception and a database constraint failure.

Such bugs are common because assumptions often go unchallenged during development.
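To make the substitute example concrete, here is a minimal sketch (all names and the fallback behavior are hypothetical) of the buggy assumption and a guarded version:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Substitute:
    name: str
    # The hidden assumption, made explicit in the type: a substitute may
    # replace no one, e.g. when the position is currently vacant.
    replaces: Optional[str] = None

def notification_target(sub: Substitute) -> str:
    # The buggy version assumed `replaces` was always set:
    #     return sub.replaces.lower()   # crashes when replaces is None
    if sub.replaces is None:
        return "hr-inbox"  # hypothetical fallback for vacant positions
    return sub.replaces.lower()
```

Encoding the assumption as an `Optional` type is exactly the kind of signal a type checker, or an LM reviewing the code, can latch onto.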


Can a Language Model Help Detect These Issues?

LMs could assist by:

  1. Extracting assumptions from code and documentation.
  2. Identifying weak spots by comparing those assumptions to patterns from past failures.
  3. Suggesting fixes by pointing out missing cases or alternative logic.

While today’s LMs aren’t perfect at reasoning, they can help detect patterns and highlight potential problem areas.


Practical Ways to Reduce Assumption-Based Bugs

Even without an LM, there are ways to catch these issues early:

  • Document Assumptions – Clearly state system assumptions and challenge them.
  • Use Static Analysis Tools – Linters and type checkers can catch logic inconsistencies.
  • Implement Defensive Programming – Always check for null values and validate inputs.
  • Explore AI-Assisted Code Review – Emerging tools can help flag logical inconsistencies.

Conclusion

Many software bugs come from flawed assumptions rather than syntax errors. While LMs may assist in uncovering them, developers can take proactive steps today: document assumptions, use static analysis tools, and test for edge cases.

AI-Powered Blog Writing Agent

In today’s fast-paced digital world, consistently creating high-quality blog content is a challenge. Whether you’re a business, a marketer, or an individual looking to publish thought-provoking articles, writing takes time, effort, and creativity.

Our AI-powered blog writing agent is designed to automate, enhance, and optimize content creation—from ideation to publication—so you can focus on what truly matters: sharing valuable insights with your audience.


Key Features

Basic Capabilities

Our AI agent simplifies blog creation with intelligent automation:

  • Effortless Content Generation – Provide an idea, rough notes, an example post, a concept, or a vision, and the AI will generate a structured, well-written blog post. It can even include relevant images.
  • Flexible Writing Styles & Formats – Choose from different styles, including short teasers, long-form articles, press releases, visionary thought pieces, or technical breakdowns.
  • Smart Meta-Tagging – Automatically generates relevant tags, categories, and metadata to optimize searchability and SEO performance.
  • Seamless Publication – Offers full or semi-automated publishing, so you can review and approve or let the AI handle the entire workflow.

Advanced Capabilities

For those looking for more control, variation, and optimization, our AI offers powerful content evaluation and management tools:

  • Multi-Proposal Generation & Evaluation – Instead of producing just one draft, the AI creates multiple versions of a post and evaluates them, allowing you to select the best for publication.
  • Diverse Input Pipelines – Generate completely different blog posts from multiple topics and ideas, so you always have fresh, varied articles to choose from and can maintain a dynamic publishing strategy.
  • Authoring Process Archive – Keep a detailed record of revisions and iterations for each post, enabling easy tracking of changes, content strategy insights, and future repurposing.
  • Smart Trash-Can Feature – Discarded posts aren’t lost—they go into a special archive where they can be analyzed for trends, revisited for future use, or evaluated for performance improvements over time.

Why Use This AI Agent?

With this AI-powered writing tool, content creators can:
✔ Save time and effort on writing and editing
✔ Generate multiple blog post options across different topics
✔ Improve content quality through AI-driven evaluation and selection
✔ Maintain full control over the publishing process

Whether you’re a business scaling content marketing, a tech writer streamlining production, or a visionary sharing new ideas, this AI agent enhances creativity, productivity, and content quality like never before.

Ready to Revolutionize Your Content Creation?

Try our AI-powered blog writing agent today and experience faster, smarter, and more effective content generation. 🚀

Design Alternatives for Using Path, HTTP Methods, and Actions

When designing an API, choosing how to structure endpoints and model the interaction between client and server is a critical design decision. The three alternatives outlined – data-driven, object-oriented, and action/process-driven – represent different approaches with distinct strengths and weaknesses. The choice of approach should be based on both technical and business needs, as well as user expectations and workflows.


1. Data-Driven Approach

Description

This approach focuses on data as the primary entity in the API. Clients perceive the API as a system for storing and retrieving data, without directly interacting with actions or processes. Business logic and processing happen invisibly on the backend, and clients only see the results through the data produced.

Characteristics

  • Clear separation between data and processes.
  • Clients interact only with resources (e.g., submissions) and their lifecycle.
  • Process statuses are represented as fields in the data.
  • Resembles a CRUD (Create, Read, Update, Delete) approach.

Advantages

  • Simple for clients – they only retrieve and store data without needing to understand domain logic.
  • Fewer endpoints with a consistent URL structure.
  • Well-aligned with REST principles.

Disadvantages

  • Business logic can be difficult for clients to understand and discover.
  • Risk of logic being spread across clients if the API does not provide enough guidance.
  • Less suitable for complex processes involving multiple steps or data types.

Example

GET /data/submission
GET /data/submission/1199930
POST /data/submission
PATCH /data/submission/1199930

When is this approach suitable?

  • For simple systems where processes are not highly complex.
  • When clients primarily work directly with data (e.g., case handlers).
  • When minimal coupling between clients and domain-specific business logic is desired.
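From the client's side, the data-driven style means never calling an action: the client writes data and later reads back a status field the backend has set. A hedged sketch (the host and field names are placeholders) of building such requests with Python's standard library:

```python
import json
import urllib.request

BASE = "https://api.example.com"   # placeholder host, not a real endpoint

def build_create_request(payload: dict) -> urllib.request.Request:
    """POST /data/submission - the client just stores data; any
    processing happens invisibly on the backend."""
    return urllib.request.Request(
        f"{BASE}/data/submission",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def build_status_request(submission_id: int) -> urllib.request.Request:
    """GET /data/submission/{id} - the process status is read as a
    plain field on the resource, e.g. {"status": "processed"}."""
    return urllib.request.Request(f"{BASE}/data/submission/{submission_id}")

# Sending is one call away, e.g.:
#     urllib.request.urlopen(build_status_request(1199930))
```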

2. Object-Oriented Approach

Description

In this approach, each resource is treated as an object that has both data and associated operations (methods). Clients can not only retrieve and update data but also trigger specific actions on each resource. This makes business logic more explicit in the API.

Characteristics

  • Each resource has its own set of operations/actions.
  • Clients must understand domain-specific concepts and processes.
  • The approach resembles object-oriented systems, where objects have methods.

Advantages

  • Clearer process support – clients receive explicit signals about available actions.
  • Easier for clients to navigate business logic.
  • Well-suited for resources with many specific actions governed by business rules.

Disadvantages

  • Can lead to an explosion of endpoints when multiple resources have multiple actions.
  • Maintaining a consistent structure across various object types can be challenging.
  • Can become cumbersome if many actions are not resource-specific but apply across multiple resources.

Example

POST /data/submission/search
POST /data/submission/submit
POST /data/submission/1199930/selectPractice
POST /data/submission/1199930/cancel

When is this approach suitable?

  • When resources have specific, business-related operations.
  • When it is important for clients to understand the processes around resources.
  • When the API is part of a larger domain application with domain-oriented users.

3. Action/Process-Driven Approach

Description

This approach explicitly separates actions from data. Clients retrieve and manage data in one way, while business processes and operations are modeled as separate process resources or services. This allows actions to involve multiple data types simultaneously and handle more complex workflows.

Characteristics

  • Clear distinction between data and processes.
  • Processes have dedicated endpoints that handle multiple resources and complex logic.
  • Suitable for larger, cross-cutting processes.
  • Often inspired by Command-Query Responsibility Segregation (CQRS).

Advantages

  • High flexibility in modeling business logic.
  • Easier to version or modify process logic without changing data models.
  • Well-suited for systems with complex, multi-step workflows.

Disadvantages

  • Can create uncertainty about which data the processes operate on.
  • Requires more documentation and client adaptation.
  • May result in an artificial separation of data access and process handling, even when logically connected.

Example

POST /process/submitReimbursementClaim
POST /process/updateReimbursementClaim
POST /search

When is this approach suitable?

  • When processes involve multiple different data types.
  • When processes have high complexity and multiple steps.
  • When processes should function as “black box” operations with clear input and output.
  • When supporting both manual and automated workflows via the same interface.

Summary Evaluation

Approach               | Client Simplicity      | Flexibility  | Process Support | Suitable for Complex Domains
Data-Driven            | ✅ Very simple         | ❌ Limited   | ❌ Weak         | ❌ Not well-suited
Object-Oriented        | ⚠️ Moderate            | ⚠️ Moderate  | ✅ Good         | ⚠️ Partially suitable
Action/Process-Driven  | ⚠️ Requires learning   | ✅ High      | ✅ Very good    | ✅ Highly suitable

Recommendation

Choosing an approach should be based on:

  • The complexity of the domain.
  • How self-sufficient clients need to be.
  • How clearly processes need to be defined for clients.
  • Whether the API is primarily a CRUD interface or a process-driven system.

In many cases, a hybrid model may be the best solution, where basic data is managed using a data-driven approach, while more complex workflows are exposed via process-driven endpoints. This provides both simple data handling and flexible process support.
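As a rough illustration of the hybrid model (all handler, field, and status names are hypothetical, and in-memory dictionaries stand in for a real storage layer and web framework), basic data access stays CRUD-like while the complex workflow gets its own process endpoint:

```python
submissions = {}      # in-memory stand-in for a database
_next_id = 1199930

def create_submission(payload: dict) -> dict:
    """POST /data/submission - data-driven: the client just stores data."""
    global _next_id
    sid = _next_id
    _next_id += 1
    submissions[sid] = {"id": sid, "status": "draft", **payload}
    return submissions[sid]

def get_submission(sid: int) -> dict:
    """GET /data/submission/{id} - the status is exposed as a plain field."""
    return submissions[sid]

def submit_reimbursement_claim(sid: int, practice: str) -> dict:
    """POST /process/submitReimbursementClaim - process-driven: one call
    runs a multi-step workflow that may touch several resources."""
    sub = submissions[sid]
    sub["practice"] = practice      # step 1: record the selected practice
    sub["status"] = "submitted"     # step 2: advance the workflow state
    return {"result": "ok", "submission": sub}
```

Clients that only need the data see a plain CRUD surface; clients driving the workflow call the process endpoint and let the backend coordinate the steps.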

Knowledge-Augmented Model Training (KAMT)

Knowledge-Augmented Model Training (KAMT) is a structured approach to transforming a Foundation Language Model (FLM) into a Specialized Language Model (SLM) by incorporating domain-specific knowledge. This process leverages Knowledge Packs (KPs)—curated datasets containing expert-level information—to enhance the model’s proficiency in targeted areas.

By systematically integrating structured knowledge, KAMT ensures that AI models maintain their foundational language capabilities while gaining deep expertise in specific fields. This makes it a powerful strategy for organizations looking to build high-performance AI systems without the need to train models entirely from scratch.

Key Components of KAMT

1. Foundation Language Model (FLM)

At the core of KAMT is the FLM, a pre-trained general-purpose language model with broad linguistic knowledge. This model serves as the starting point and provides strong baseline capabilities in natural language understanding and generation. However, its general nature means it lacks deep expertise in specialized areas.

2. Knowledge Packs (KPs)

Knowledge Packs (KPs) act as modular data units containing structured domain-specific information. These are designed to systematically enhance the FLM’s knowledge in a particular field. A KP may include:

  • Industry-Specific Literature – Research papers, textbooks, whitepapers
  • Technical Documentation – Manuals, software documentation, engineering specifications
  • Expert-Curated Datasets – Annotated corpora, structured knowledge bases
  • Real-World Data – Case studies, financial reports, patient records (where applicable)
  • Interactive Feedback – Human-in-the-loop refinements and reinforcement learning

3. Specialization Training Process

KAMT involves a structured fine-tuning process that adapts the FLM using the KPs. The key steps include:

  • Supervised Fine-Tuning – The model is exposed to high-quality labeled data to refine its accuracy in a given domain.
  • Reinforcement Learning with Human Feedback (RLHF) – Expert reviewers evaluate and adjust the model’s outputs to improve reliability.
  • Knowledge Injection Techniques – The model learns to integrate structured knowledge without erasing its foundational understanding.
  • Task-Specific Optimization – The SLM is fine-tuned for specialized applications such as legal analysis, medical diagnosis, or scientific research.
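The stages above can be sketched as a pipeline. This is purely illustrative: each stub stands in for a real training step, the model is represented as a list of applied stages, and all names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class KnowledgePack:
    """A curated, domain-specific dataset (contents are illustrative)."""
    domain: str
    documents: list

def supervised_fine_tune(model: list, kp: KnowledgePack) -> list:
    # Placeholder: the real step trains on the KP's labeled data.
    return model + [f"sft:{kp.domain}"]

def rlhf_refine(model: list) -> list:
    # Placeholder: the real step applies expert feedback (RLHF).
    return model + ["rlhf"]

def kamt_pipeline(flm: list, knowledge_packs: list) -> list:
    """Turn a Foundation LM into a Specialized LM, one KP at a time."""
    model = flm
    for kp in knowledge_packs:
        model = supervised_fine_tune(model, kp)
        model = rlhf_refine(model)
    return model
```

The modular shape is the point: adding a new Knowledge Pack later just appends another pass through the same pipeline.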

4. Specialized Language Model (SLM)

The result of KAMT is a Specialized Language Model (SLM)—a version of the FLM that is finely tuned for a specific domain. The SLM offers:
  • Enhanced Accuracy – Greater precision in handling complex domain-specific queries.
  • Deep Context Understanding – Improved comprehension of industry terminology and specialized concepts.
  • Task-Specific Adaptability – Optimized for use cases such as research assistance, legal document processing, medical diagnosis, or financial modeling.
  • Scalability and Continuous Learning – Additional KPs can be integrated over time, keeping the model up to date with new knowledge.

Why Use KAMT?

KAMT provides a scalable, cost-effective, and modular approach to AI specialization. Instead of building models from scratch, organizations can leverage pre-trained FLMs and enhance them with domain knowledge, resulting in a faster, more efficient, and adaptable AI solution.

Use Cases

  • Healthcare & Medicine – Specialized AI for medical diagnostics, patient data analysis, and research.
  • Law & Compliance – AI systems that understand legal language, contracts, and regulatory requirements.
  • Finance & Trading – AI-driven market analysis, risk assessment, and fraud detection.
  • Engineering & Technology – Enhanced AI assistants for software development, manufacturing, and automation.
  • Education & Research – Custom AI tutors and academic research assistants.

Conclusion

Knowledge-Augmented Model Training (KAMT) is a powerful paradigm for AI specialization, bridging the gap between general-purpose language models and expert-level AI systems. By leveraging KPs and targeted training processes, organizations can rapidly develop domain-specific AI models that offer superior accuracy, contextual understanding, and adaptability in real-world applications.

European Union Launches OpenEU-LM: The First Truly Open and Efficient Language Model Matching the Best in AI

Here’s a vision of a press release for the announcement of OpenEU-LM:


FOR IMMEDIATE RELEASE

European Union Launches OpenEU-LM: The First Truly Open and Efficient Language Model Matching the Best in AI

Brussels, [Date] – The European Union today announces the first release of OpenEU-LM, a groundbreaking large language model (LLM) that rivals industry leaders such as GPT-4, Gemini, and DeepSeek while setting new standards in openness, adaptability, and efficiency.

Developed as part of the EU’s commitment to technological sovereignty and transparency, OpenEU-LM is the first fully open-source language model where the entire development process—including tools, code, and training data—is publicly available. Anyone can not only access the model but also reproduce its training from scratch, ensuring maximum transparency and fostering innovation across Europe and beyond.

Key Advantages of OpenEU-LM:

  • Truly Open Source: Unlike proprietary models, OpenEU-LM allows researchers, businesses, and developers full access to its architecture, datasets, and training methodologies.
  • Domain-Specific Adaptability: The model can be customized for specialized domains—such as healthcare, law, and finance—without requiring a full retraining process.
  • Unprecedented Efficiency: OpenEU-LM’s training process demands just 1/1000th of the hardware and energy consumption compared to other state-of-the-art LLMs.
  • Minimal Compute Requirements: Once deployed, OpenEU-LM can run on 1/10,000th of the hardware resources typically needed for similar AI models, making it an ideal choice for edge computing and energy-efficient applications.
  • Enterprise Cloud Service: To support businesses and public institutions, OpenEU-LM will also be offered as a secure, high-performance cloud service across the EU.

A Milestone for AI in Europe

OpenEU-LM represents the EU’s commitment to ethical, sustainable, and inclusive AI development. By eliminating reliance on closed-source, resource-intensive AI models, OpenEU-LM empowers governments, startups, and enterprises with a transparent and customizable alternative that aligns with Europe’s digital strategy.

“OpenEU-LM is more than just a language model—it is a declaration of technological independence and innovation,” said [EU Official]. “With this initiative, we are ensuring that AI in Europe is open, accessible, and built to serve the public good.”

Availability and Next Steps

The first release of OpenEU-LM is available today at [website/repository link], where developers, researchers, and enterprises can access, test, and contribute to its continuous improvement. Enterprise cloud solutions will be launched in Q3 2025.

For more information, visit [official EU AI page] or contact [press contact details].


Full App State

In many Apps the full user-interface state is only kept for what is currently being displayed, and the App code handles navigation from page to page or view to view, opening, closing, and re-creating views, state, and data as needed.

In the Kontekst Apps we keep the full App state as if all pages and views are open at the same time. This allows us to easily adapt the Apps to different devices and a growing display surface, from the minimal view of a mobile phone to a full multi-monitor widescreen desktop setup.

The App state is separated from the display code and hooked in using event streams, where each stream carries the current data for a page or part of a view. The display code listens to a stream and, whenever an event arrives, updates the view with the event's data. When the user interacts with the display, an event is created and sent to a sink (the input to a stream). The state code handles incoming events on sinks and creates outgoing events on streams according to the business logic of the App. A single incoming event can result in multiple outgoing events updating different parts of the user-interface.

Take an App for accessing multiple e-mail accounts at the same time. On mobile it would consist of a list of inboxes where the user can select one to open. The App would navigate to a list of e-mails in that inbox, and further on to open one of the e-mails. On devices with a larger display, the view would change to show the list of e-mails and an open e-mail at the same time, even expanding to show the list of inboxes, the e-mails, and an open e-mail.
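A minimal sketch of this stream/sink wiring, using the e-mail App as the example (class and stream names are hypothetical):

```python
class Stream:
    """An outgoing event stream that display code can listen to."""
    def __init__(self):
        self._listeners = []
    def listen(self, callback):
        self._listeners.append(callback)
    def emit(self, event):
        for callback in self._listeners:
            callback(event)

class MailAppState:
    """Holds the FULL App state; each view only renders what fits on the
    current display, but every stream is always live."""
    def __init__(self, inboxes):
        self.inboxes = inboxes                 # {inbox name: [e-mails]}
        self.inbox_list_stream = Stream()      # data for the inbox-list view
        self.email_list_stream = Stream()      # data for the e-mail-list view
    def open_inbox(self, name):
        """Sink handler: one incoming user event produces multiple
        outgoing events updating different parts of the UI."""
        self.inbox_list_stream.emit({"selected": name})
        self.email_list_stream.emit({"emails": self.inboxes[name]})
```

A mobile layout would subscribe to one stream at a time; a widescreen layout subscribes to several at once, with no change to the state code.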

 

One-time key encryption of communication


Kontekst uses a form of one-time key encryption for communication between users, agents and Apps. A new encryption key is used for each message and once the message has been received the key is destroyed. If someone were to monitor or intercept messages sent and received they would have to gain access to the key in almost real-time to be able to decrypt the message. Even if they gained access to one key, it would only unlock a single message.

There are two variations on the one-time key encryption concept. (Authentication, i.e. assuring the identities of A and B, is left out.)

Agent A wants to send a message to agent B (push)

  1. Agent A sends a request for a one-time key to agent B.
  2. Agent B generates a one-time key consisting of a private key, a public key and a GUID. Then sends the public key and GUID to agent A and stores the private key and GUID.
  3. Agent A encrypts the message using the public key and sends it to agent B along with the GUID.
  4. Agent B receives and decrypts the message with the private key belonging to the GUID.
  5. Agent B destroys the private key.

Agent A requests information from agent B (pull)

  1. Agent A generates a one-time key consisting of a private key, a public key and a GUID. Then sends a request for information to agent B, including the public key and GUID.
  2. Agent B encrypts the reply message using the public key and sends it to agent A along with the GUID.
  3. Agent A receives and decrypts the message with the private key belonging to the GUID.
  4. Agent A destroys the private key.
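The push flow above can be sketched as follows. This is a toy: the asymmetric key pair is replaced by a symmetric XOR pad (a real implementation would use e.g. RSA or an ECIES scheme, where only the public half leaves the receiver); the point is the key lifecycle of generate, use once, destroy. The pull flow is the same with the roles reversed.

```python
import secrets
import uuid

def _xor(key: bytes, data: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class Agent:
    def __init__(self):
        # GUID -> private key, kept only until the message arrives.
        self._pending = {}

    def issue_one_time_key(self):
        """Step 2: generate a one-time key and GUID, keep the private
        half, and hand out the rest. (In this toy the 'public' key
        equals the private key; real code would differ.)"""
        guid = str(uuid.uuid4())
        key = secrets.token_bytes(32)
        self._pending[guid] = key
        return guid, key

    def receive(self, guid: str, ciphertext: bytes) -> bytes:
        """Steps 4-5: decrypt with the stored key, then destroy it."""
        key = self._pending.pop(guid)   # pop() = use once, then gone
        return _xor(key, ciphertext)

def encrypt(public_key: bytes, message: bytes) -> bytes:
    """Step 3: the sender encrypts with the received one-time key."""
    return _xor(public_key, message)
```

After `receive()` returns, the key no longer exists anywhere, so intercepting the ciphertext later yields nothing.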

In both cases the private key that can decrypt a message never leaves the receiver and only exists during the message exchange. A time-to-live value can be set on a key to further reduce the risk of interception. In reality gaining access to the private key would mean having access to the device the agent is running on, which most likely would mean being able to read the decrypted messages directly.