I think you're right, and the future is not so much "AGI" as "what problems does X system solve?"
More generally, I imagine we'll see a few paths, and I'm not sure which of them (or maybe all) will endure. One path is finding the places where LLMs fit well. I think they are vastly overestimated today, but the hype should recede as they settle into specific use cases.
A second path is growth in other AI approaches, such as conceptual / symbolic / "ontological" AI. Those are more difficult than the grab-and-train black-box approach of LLMs but are also more plausibly powerful. What is unclear is whether, and when, they will achieve large-scale success.
A third path is the various mash-ups between the two, which are already happening: LLMs for generality, plus targeted symbolic AI for depth in a particular domain.
Kat Kime
Software Engineer @ LinkedIn
Beautiful. Love it! Such a thorough explanation of why AGI is not here, and organized in a way that I can easily share with colleagues when hype interferes with reason.
A comment I once saw on Twitter was that we don't need AI to actually do these things; we just need it to be convincing enough to be useful (i.e., the Turing test you already mentioned, but rated against each item you listed).
Do you think a "convincing enough" AGI is in the future?