
I suspect LLMs are not the model you want here; I have yet to see an LLM with a genuine feedback loop to the real world (beyond the Mechanical Turk level of paying lots of human beings to fine-tune them), or any hint of a developing reasoning capability. Deep learning gets much more interesting when it has those feedback loops, as when Google used it to optimize the cooling in a data center. Something like that might use an LLM front end to generate human speech and summarize it, but I don't think the LLM would be doing the heavy lifting, and the hallucinations could make things pretty bumpy along the way.
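To make the distinction concrete, here is a minimal toy sketch of the closed-loop pattern I mean: a controller acts on the world, measures what actually happened, and adjusts its next action accordingly. Everything here is hypothetical (the `SimulatedDataCenter` stand-in, the controller gain, the numbers), and it is a crude accumulating controller, not whatever DeepMind actually ran on Google's cooling plant; an ordinary LLM call, by contrast, emits text once and never sees the consequences.

```python
import random

class SimulatedDataCenter:
    """Stand-in for the real world: temperature responds to the fan setting."""
    def __init__(self):
        self.temperature = 30.0  # degrees C

    def step(self, fan_speed: float) -> float:
        # Higher fan speed cools things down; a little noise stands in for reality.
        self.temperature += (25.0 - self.temperature) * 0.1 - fan_speed * 0.5
        self.temperature += random.uniform(-0.2, 0.2)
        return self.temperature

def control_loop(env: SimulatedDataCenter, target: float = 24.0, steps: int = 50) -> float:
    """Closed feedback loop: each action is corrected by the measured result,
    which an open-loop generator never gets to see."""
    gain = 0.3
    fan_speed = 0.0
    for _ in range(steps):
        measured = env.step(fan_speed)                   # act on the world, observe the result
        error = measured - target                        # compare to the goal
        fan_speed = max(0.0, fan_speed + gain * error)   # adjust the next action
    return env.temperature

if __name__ == "__main__":
    final_temp = control_loop(SimulatedDataCenter())
    print(f"temperature after closed-loop control: {final_temp:.1f} C")
```

The interesting part is the correction step, not the model class: any learner wired into a loop like this gets graded by reality on every iteration, whereas an LLM's output only gets graded when a human happens to check it.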
