Why talking to LLMs has improved my thinking – Vallified

I am more surprised by one aspect of using large language models than any other.

They often put into words things that I have understood for a long time but have never been able to write down clearly. When this happens, it feels less like learning something new and more like being recognized: a "yes, this is it" moment.

I haven't seen much discussion of this effect, but I think it matters. I also think it is how talking to LLMs has improved my thinking.

Most of what we know is tacit

As programmers and developers, we build a great deal of understanding that is never made explicit.

You know a design is wrong before you can explain why. You sense the bug before you can reproduce it. You recognize a bad abstraction immediately, even if it takes an hour to explain the problem to someone else.

This is not a failure. This is how experience works. The brain compresses experience into patterns that are optimized for action, not for speech. Those patterns are real, but they are not stored as sentences.

The problem is that thinking, planning, and teaching all require language. If you can't express an idea, you can't easily examine or improve it.

LLMs are good at the inverse problem

Large language models are built to do exactly this: turn implicit structure into words.

When you ask a good question and the response resonates, the model is not inventing insights. It is mapping a latent structure to language in a way that aligns with your own internal model.

That alignment is what creates the sense of recognition. I already had the shape of the idea. The model supplied a clean verbal form for it.

Putting things into words changes thinking

Once an idea is written down, it becomes easier to work with.

Vague intuitions turn into named distinctions. Underlying assumptions become visible. Then you can test them, reject them, or refine them.

This is not new. It is what writing has always done for me. What is different is the speed. I can test half-baked ideas, discard bad formulations, and try again almost instantly. That encourages a kind of thinking I would otherwise have abandoned.

Feedback loops matter

When you repeatedly see good expressions of your ideas, you start thinking in that same articulate style.

Over time I have noticed that I now do this even without an LLM. Can I explain in precise language what I am thinking, feeling, or believing in this moment, and why?

In that sense, the model is not directly improving my thinking. It is improving the interface between my thinking and language. And since reasoning depends heavily on what you can articulate clearly, that improvement feels like a real increase in clarity.

The more I do this, the better I get at paying attention to what I’m actually thinking.

