The logic of today's AI development is strikingly similar to that of
policymaking. Many of its ideas are borrowed from systems engineering. For
example, policymakers must consider how to improve a system as a whole, which
means they cannot simply optimize local parts in isolation. At the same time,
policymakers aim to make as few changes as possible, and the simplest changes
possible, to move the system toward a better state. Policymakers also think
about how to shape the broader environment so that an ecosystem can thrive.
This closely parallels building a solid foundational environment for LLMs and
then letting them succeed on their own. Why is this the case? Perhaps because
of the "bitter lesson": for model capabilities to scale well, you can't do too
much hand-tuning. This idea is very similar to what Jason posted today.
In the future, the most important thing domain experts can do is build the
infrastructure that lets LLMs fully demonstrate their potential. Once that
infrastructure exists, each model iteration can scale effectively with the
LLM's underlying compute and search abilities.
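
To make that last point concrete, here is a minimal sketch, with entirely
hypothetical names (`Environment`, `pose_task`, `verify`, `best_of_n`), of what
such infrastructure might look like: the domain expert supplies only how tasks
are posed and how candidate answers are verified, while a generic search loop
improves purely by spending more compute (a larger `n`), with no
domain-specific hand-tuning.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Environment:
    """Hypothetical interface a domain expert implements.

    The expert encodes domain knowledge only here: how to pose a task
    and how to score a candidate solution. No model internals involved.
    """
    pose_task: Callable[[], str]         # produce a task prompt
    verify: Callable[[str, str], float]  # score (task, candidate) in [0, 1]


def best_of_n(llm: Callable[[str], str],
              env: Environment,
              n: int) -> Optional[str]:
    """Generic search loop: sample n candidates, keep the best-scoring one.

    Quality scales with n (i.e., with compute), not with hand-tuned,
    domain-specific logic. `llm` is any text-in, text-out model call.
    """
    task = env.pose_task()
    candidates: List[str] = [llm(task) for _ in range(n)]
    scored = [(env.verify(task, c), c) for c in candidates]
    best_score, best = max(scored, key=lambda sc: sc[0])
    return best if best_score > 0 else None
```

Under this framing, improving the system means improving `Environment` (better
tasks, sharper verification) and raising `n`, rather than rewriting the loop
for each new domain.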