dcn. jay quinby's scribbles &c

The Social Edge

Bright Simons points out that the miracle of an LLM is less a measure of its own ‘smartness’ and more a reflection of the civilization that produced it, because it’s the civilization’s output which trained it.

What all this implies for AI is straightforward. Every token in a training corpus is a fossil of social interaction—a trace of negotiation, argumentation, institutional meaning-making, or cultural transmission. The intelligence that AI systems exhibit was never individual to begin with. It was forged in the spaces between people.

And if those spaces are allowed to shrink through over-dependence on human-machine interaction, we have trouble. If the interactions that generate rich language become rarer, shallower, or more homogeneous, then the intelligence that depends on them will slowly degrade. We will not hear any bangs, true, but we will notice a gentle, almost imperceptible narrowing over time. The machines to which we are fast entrusting the future of discovery will slow down when it matters most.

The more we cognitively offload to the LLM, the more our own competence erodes. We hand the higher-end tasks to the AI, and so eventually we run out of fresh material to feed it.

This intersects very nicely with Reclaiming Human Agency, mentioned below, reaching several of the same conclusions by purely secular paths.