When we run a process or a traditional program, we expect certain things. We expect a fast answer. We expect an accurate answer. And we expect that the same input will always give the same result. If a calculator takes several seconds to answer 2 + 2, or sometimes says 5, we don’t think “close enough” – we think it’s broken.
This is how we normally relate to software: as tools that should be deterministic and reliable. A banking app should always show the correct balance. A ticket booking system should clearly confirm or reject your order. Speed, precision, and consistency are the baseline expectations.
Something interesting happens when we start giving programs more human-like qualities and calling them “agents”, often powered by a language model. We stop thinking of them only as tools and begin to relate to them more like we relate to people. We “ask” them for help. We say they “didn’t understand” or that they “misinterpreted” something.
With that small shift in language and framing, our expectations change. It suddenly feels more acceptable that the agent takes a bit longer to respond, as if it were “thinking”. We tolerate that it might be less precise, giving approximate or partial answers. And we accept that it does not always give exactly the same result every time, even for the same question.
In other words, when we see something as a traditional program, we expect fast, accurate, and consistent answers. When we see it as an agent built on a language model, we make room for slower, less precise, and less consistent behavior. The core functionality may not have changed much, but the way we talk and think about it makes delay and inaccuracy easier to accept.