Friday, August 2, 2024

Do LLMs reason or think?


In a post on "Eight to Late", the question is posed: do large language models think, or are they just a communications tool?

The really short answer from Eight to Late is "no, LLMs don't think". No surprise there; I would imagine most people hold that general view.

However, if you want more cerebral reasoning, here is the concluding paragraph:
Based, as they are, on a representative corpus of human language, LLMs mimic how humans communicate their thinking, not how humans think. Yes, they can do useful things, even amazing things, but my guess is that these will turn out to have explanations other than intelligence and / or reasoning. For example, in this paper, Ben Prystawski and his colleagues conclude that “we can expect Chain of Thought reasoning to help when a model is tasked with making inferences that span different topics or concepts that do not co-occur often in its training data, but can be connected through topics or concepts that do.” This is very different from human reasoning which is a) embodied, and thus uses data that is tightly coupled, i.e., relevant to the problem at hand, and b) uses the power of abstraction (e.g. theoretical models).
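For readers unfamiliar with the chain-of-thought prompting the quote refers to, here is a minimal sketch of the idea: instead of asking the model for an answer outright, you ask it to write out intermediate steps first, which (per Prystawski and colleagues) helps most when the facts to be connected rarely co-occur in the training data. The query_llm function below is a hypothetical placeholder for whatever LLM client you happen to use, not any particular vendor's API.

    # Minimal sketch: direct prompting vs. chain-of-thought prompting.
    # query_llm is a hypothetical stand-in for an actual LLM client call.

    def query_llm(prompt: str) -> str:
        """Hypothetical placeholder: send `prompt` to an LLM, return its reply."""
        raise NotImplementedError("Wire this up to your own LLM client.")

    QUESTION = ("Alice's meeting starts at 9:30 and runs 2 hours 45 minutes. "
                "When does it end?")

    # Direct prompting: ask for the answer in a single step.
    direct_prompt = QUESTION + "\nAnswer with just the time."

    # Chain-of-thought prompting: ask for explicit intermediate steps that
    # link the given facts (start time, duration) before the final answer.
    cot_prompt = (QUESTION + "\nThink step by step: first add the hours, "
                  "then the minutes, then state the final time on its own line.")

    if __name__ == "__main__":
        for name, prompt in [("direct", direct_prompt),
                             ("chain-of-thought", cot_prompt)]:
            print("--- " + name + " prompt ---")
            print(prompt)
            # print(query_llm(prompt))  # uncomment once query_llm is implemented

The only difference between the two prompts is the request for intermediate steps; the quote's point is that this helps for a narrower, more mechanical reason than genuine human-style reasoning.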


