srinivas raghav's blog

on pseudoism

large language models have grown far more capable in the past few years. this is due to various ideas like incorporating reinforcement learning, scaling laws in action, improvements in post-training, and some better architectural decisions.

but through the lens of model improvement things look like progress, while in the broader sense of things, a lot of the intellectual pain that builds our ability to think is completely fading.

this is because we humans are delegating the tasks that required reading and writing to models. in return, the thinking and the clarity developed through the process of reading and writing is being lost.

by being lost, i mean that the models are in a sense doing the hard stuff that the human must do. but it is in doing that hard stuff that improved cognition emerges, turning mindlessness into something more meaningful.

llms can't be refused, there is so much economic shift happening in the world that the delegation of core cognitive skills to large language models is already underway.

using llms as helpers for enabling critical thinking, and for not staying stuck on something for too long, should be the goal.

that is, if you face a problem whose solution is trivial, and the obstacle is only a lack of skill, and an llm could solve it in a single pass, it would still be better for us to spend at least 20-30 minutes on it before passing it on.

we should think, and grow the ability to think in that sense. that keeps our neurons active and improving, while also not burning ourselves out.

patience is really important in this process.

writing and reading will become scarce in a way. in the age of llms, doing them will seem different and novel to some, and stupid to others.

the main concept here is emergence.

reading a lot of books, slowly and carefully, and understanding them gives a human being a lot of information. over a long chain of such work something like emergence happens: a sudden burst of high understanding and knowledge. it is crucial to think deeply, to rationalize, and to have opinions of one's own. having opinions will become important, not in a left-wing or right-wing sense, but in general, about history, math, physics, and science.

and if an llm generates the output, then the nature of the opinion depends heavily on how biased the model is toward certain concepts. this will raise too many pseudo-intellectuals, and that would be the end of deep thinking and the beginning of more mindlessness. in some sense it's really amazing, but given how human nature is, it will mostly be delegation for comfort, not originality and development.

i am sure tools will exist for such a practice, but working in the original way is still different, because some things are best done that way. they could be done smarter, but only with an ai that disagrees and is opinionated, and that is not the goal of alignment.