When we build systems of agents based on language models, we often start with a simple idea: split a big problem into smaller parts, give each agent a clear task, and let them work together. But each agent will still optimize for something, and what it optimizes for matters a lot. If those goals are badly chosen or unbalanced, the overall system will miss its main purpose, even if each agent “does its job.”
Agents have different objectives, and they make choices and evaluations based on them. Some agents have very local goals, limited to their own specific task. They focus on short-term, concrete outcomes like “summarize this document,” “classify this ticket,” or “extract these fields.” Other agents have broader goals and consider the larger whole, not just a single subtask. They might care about things like “improve user satisfaction” or “help the user solve their problem effectively.”
For a system of agents to function and reach an overarching goal, you need a combination of these local, constrained goals and more composite, system-level goals. The local goals give clarity and focus, while the global goals ensure that the system is moving in the right direction as a whole. Together, they should form the basis for the evaluations and decisions the agents make.
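One way to make this combination concrete is to let an agent score its candidate outputs against both levels at once. The sketch below is a minimal illustration, not a fixed recipe: the scoring functions, the normalization to [0, 1], and the weight are all assumptions introduced here for the example.

```python
def combined_score(local_score: float, global_score: float,
                   local_weight: float = 0.6) -> float:
    """Blend a task-specific score with a system-level score.

    Both scores are assumed to be normalized to [0, 1]; the weight
    controls how much the local goal dominates the decision.
    """
    if not 0.0 <= local_weight <= 1.0:
        raise ValueError("local_weight must be in [0, 1]")
    return local_weight * local_score + (1 - local_weight) * global_score

# An output that nails the subtask but hurts the overall outcome
# ranks below one that balances both levels.
narrow = combined_score(local_score=0.95, global_score=0.30)    # 0.69
balanced = combined_score(local_score=0.80, global_score=0.85)  # 0.82
```

Even this crude weighted blend captures the core idea: neither score alone decides the outcome, so an agent cannot “win” purely on its local metric.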
If you only have agents with narrow, local, short-term goals, the system easily becomes unbalanced. Those agents will tend to reach their local goals: tasks will be completed, short-term metrics will look good, and each agent can claim success. But the overall outcome is often worse. You can end up with fast but unhelpful answers, lots of extracted data that is not actually useful, or content that is technically correct but misses what the user really needs. The main goal of the system is not reached, even though each agent hits its own target.
The opposite imbalance also creates problems. If you only prioritize global, overarching goals like “maximize user value” or “ensure project success,” agents may make poor evaluations and decisions in practice. Global goals are often abstract and not clearly connected to local, short-term realities. When an agent has only a broad mission, it may not know how to act in a specific situation: Should it be brief or detailed? Strict or flexible? Conservative or creative? Different agents might interpret the same global goal in different ways, and decisions become inconsistent and hard to control.
The key is to connect local and global goals explicitly. Each agent should have a clear local objective that defines its own task: what it is responsible for, when its task is “done,” and under what constraints. At the same time, that local goal should be designed so it supports the overarching purpose of the system, and is limited by constraints that come from the global goal.
For example, in a support system, a triage agent might have the local goal “classify and route tickets accurately,” while a response agent has the local goal “provide a clear, actionable answer.” Both should be grounded in a higher-level goal such as “resolve user issues effectively without unnecessary delay.” That global goal can add constraints: routing should prioritize correctness over speed when in doubt, and responses should prioritize resolving the issue over being as short as possible.
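The support-system example above can be written down as data, so the link between each local goal and the global goal is explicit rather than implicit. This is a hypothetical sketch: the `AgentGoal` structure, field names, and constraint strings are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentGoal:
    agent: str
    local_goal: str            # what this agent is responsible for
    done_when: str             # concrete completion criterion
    constraints: tuple[str, ...]  # limits derived from the global goal

# The shared, system-level goal every local goal must support.
GLOBAL_GOAL = "resolve user issues effectively without unnecessary delay"

TRIAGE = AgentGoal(
    agent="triage",
    local_goal="classify and route tickets accurately",
    done_when="ticket is assigned to a queue with a confidence label",
    constraints=("prefer correctness over speed when routing is uncertain",),
)

RESPONSE = AgentGoal(
    agent="response",
    local_goal="provide a clear, actionable answer",
    done_when="answer addresses the reported issue with concrete steps",
    constraints=("prefer resolving the issue over minimizing length",),
)
```

Writing goals this way forces each agent definition to state its completion criterion and the global constraints it operates under, which is exactly the explicit connection the previous paragraph calls for.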
If the system becomes unbalanced, you get predictable patterns. With too much focus on local, short-term goals, every part looks fine, but the overall result is poor: the system reaches local targets but fails the main purpose. With too much focus on global, abstract goals, decisions become vague and ungrounded: agents struggle to translate the overarching aim into good local decisions, and the connection between what they do now and what the system should achieve later is unclear.
Designing a good agent system means thinking about both levels at the same time. You define the overarching goal of the system, and then design local goals for each agent that clearly contribute to this goal and are consistent with it. This balance between local and global goals is what allows many agents with different responsibilities to work together and move the whole system toward its intended outcome.
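One practical habit that follows from this is a design-time consistency check: verify that every agent's local goal declares a link to the overarching goal, and flag any agent that does not. The sketch below assumes a simple dict-based agent registry with a hypothetical `contributes_to` field; adapt it to however your system actually describes agents.

```python
def unlinked_agents(agents: list[dict], global_goal_id: str) -> list[str]:
    """Return names of agents whose local goal does not reference the
    given global goal, so purely local goals are caught early."""
    return [a["name"] for a in agents
            if global_goal_id not in a.get("contributes_to", ())]

agents = [
    {"name": "triage", "contributes_to": ("resolve-issues",)},
    {"name": "response", "contributes_to": ("resolve-issues",)},
    {"name": "metrics", "contributes_to": ()},  # local-only goal: flagged
]
print(unlinked_agents(agents, "resolve-issues"))  # ['metrics']
```

Running a check like this when the system is assembled turns the balance between local and global goals from a design intention into something that is actually enforced.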