I recently tried to build an intelligent app to leverage LLMs' increasing
intelligence. I found that one of the key principles for writing an intelligent app is to
use the LLM in the most flexible and dynamic way possible.
The idea is simple: to get the most out of the model's intelligence, instead of
thinking about how to control the LLM's behavior, think about how to stay out of
its way. A loose structure is almost always more intelligent than
a tight structure, because a loose structure makes fewer assumptions about
what users' requests will be.
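To make the contrast concrete, here is a minimal sketch in Python. The `call_llm` helper is hypothetical, a stand-in for whatever model client you actually use; the point is only the difference in shape between the two handlers.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to your LLM of choice and return its reply."""
    raise NotImplementedError  # wire up your actual model client here


# Tight structure: we enumerate the requests we expect and route them through
# fixed branches. Anything outside our assumptions gets rejected.
def handle_request_tight(user_request: str) -> str:
    intent = call_llm(
        f"Classify this request as 'summarize', 'translate', or 'other': {user_request}"
    )
    if "summarize" in intent.lower():
        return call_llm(f"Summarize this: {user_request}")
    if "translate" in intent.lower():
        return call_llm(f"Translate this to English: {user_request}")
    return "Sorry, I can't help with that."


# Loose structure: state the goal, hand over the raw request, and let the
# model decide how to respond. Fewer assumptions, so more of the model's
# intelligence actually reaches the user.
def handle_request_loose(user_request: str) -> str:
    return call_llm(
        "You are the assistant inside our app. "
        f"Respond in whatever way best serves this request:\n{user_request}"
    )
```

The tight version only handles requests we predicted in advance; the loose version degrades gracefully into whatever the model can do.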
The more explicit rules we put on the LLM, the more deterministic its behavior
becomes, and the less intelligent it becomes. The same principle shows up in LLM
training: "just try to get out of the model's way; the model just wants to learn."
In some sense, our explicit directions and biases become the barrier that keeps
models from reaching their full potential.
This also echoes the Bitter Lesson: focus on the most general methods. The right
approach to training an LLM is not to teach it explicitly, but to set up a
trajectory on which it can succeed.
Traditionally, when we build SaaS products, the user experience
is very well defined. In intelligent apps, it's the opposite: the app's
behavior is inherently non-deterministic. Instead of scripting every step, we need
to align on the results the agent delivers at the end.
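One way to do that, sketched below, is to leave the agent's steps unconstrained and only check the final result against the task. The retry loop and the yes/no check are illustrative assumptions, not a prescribed design, and `call_llm` is the same hypothetical helper as in the earlier sketch.

```python
def call_llm(prompt: str) -> str:
    """Same hypothetical helper as above: send a prompt to your model, get text back."""
    raise NotImplementedError


def run_agent(task: str, max_attempts: int = 3) -> str:
    """Let the agent work freely, but align on the outcome before returning it."""
    result = ""
    for _ in range(max_attempts):
        # No scripted steps: the agent decides how to approach the task.
        result = call_llm(f"Complete this task however you see fit:\n{task}")

        # Alignment happens at the end, on the delivered result.
        verdict = call_llm(
            f"Task: {task}\n\nResult: {result}\n\n"
            "Does the result fully satisfy the task? Answer yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            return result
    return result  # best effort after max_attempts
```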