• Pennomi@lemmy.world · 4 hours ago

    Right, simply scaling won’t lead to AGI; there will need to be some algorithmic changes. But nobody in the world knows what those are yet. Is it a simple framework on top of LLMs, like the “atom of thought” paper? Or are transformers themselves a dead end? Or is multimodality the secret to AGI? I don’t think anyone really knows.