TECHNOLOGY & BUSINESS — THE LOCAL AIM
The State of the Union:
AI and Human Beings
A Cambridge researcher walked into a room full of students and said something the AI industry doesn't want you to hear. Here's what he found — and what it actually means for your business.
Let's Start With What These Things Actually Are
A large language model is a next-word prediction machine. That's it. You give it a prompt, and it calculates the most statistically likely word to come next, then the next, then the next. It does this using patterns extracted from essentially everything ever written on the internet, encoded as mathematical relationships between words.
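To make "next-word prediction" concrete, here is a deliberately tiny sketch of the idea in Python. The word table and its weights are invented for illustration; a real model learns billions of such relationships from its training data and looks at the whole prompt, not just the previous word.

```python
import random

# Toy illustration only: next-word prediction as weighted random choice.
# These words and weights are invented; a real model learns billions of such
# relationships and conditions on the whole prompt, not just the last word.
next_word_weights = {
    "the": {"dog": 0.4, "market": 0.35, "contract": 0.25},
    "dog": {"barked": 0.6, "ran": 0.4},
    "market": {"dropped": 0.5, "opened": 0.5},
}

def generate(start, length=4):
    words = [start]
    for _ in range(length):
        options = next_word_weights.get(words[-1])
        if not options:
            break
        # Pick the next word in proportion to how likely it is to follow the last one.
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the dog barked" or "the market opened"
```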
The results can be impressive. Sometimes startlingly so. It can write a poem, draft a legal brief, explain quantum physics, and pass a bar exam.
It can also confidently tell you that the word "inconvenience" has three I's — then explain why it was right — while being completely wrong.
This is not a bug being fixed in the next update. It's structural.
"Large language models don't think — they process. They don't reason — they rationalize."
Three Things the AI Industry Doesn't Want You to Internalize
1. They Don't Think
Thinking implies understanding. These systems have no understanding of the words they produce. They operate on statistical relationships between symbols — vectors of numbers that happen to correspond to language. They know that "shark" and "fish" are close together in mathematical space. They do not know what a shark is.
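You can see how little "knowing" is involved with a few lines of arithmetic. The three-number vectors below are made up for illustration; real embeddings run to thousands of learned dimensions, but the comparison works the same way: closeness is an angle between lists of numbers, nothing more.

```python
import math

# Hand-made three-number "embeddings" for illustration. Real embeddings have
# thousands of dimensions and are learned from text, not written by hand.
vectors = {
    "shark":    [0.9, 0.8, 0.1],
    "fish":     [0.8, 0.9, 0.2],
    "keyboard": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    # "Closeness" is just the angle between two lists of numbers.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(vectors["shark"], vectors["fish"]))      # high: near 1.0
print(cosine_similarity(vectors["shark"], vectors["keyboard"]))  # much lower
# Nothing in this program knows what a shark is. It's arithmetic on numbers.
```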
The researcher used a useful analogy: imagine a table with thousands of weighted dice, each die loaded toward certain words based on context. ChatGPT rolls the right die for the right context and gives you the next word. Given billions of dice and massive training data, the results look intelligent. They aren't. What you're looking at is an extraordinarily sophisticated pattern-completion engine.
This is why they exhibit what researchers call "jagged intelligence" — crushing a math olympiad problem one moment, failing to count the letters in a simple word the next. Failure doesn't track task difficulty, so you can't predict where the edges are.
2. They Don't Reason
Multiple studies now confirm the same finding: when you change the distribution of how a logic problem is presented — not the problem itself, just the framing — performance collapses. When you add irrelevant information to a word problem, accuracy drops by up to 65%, even on state-of-the-art models. When you change the labels on probability problems from "PC, laptop, keyboard" to "hamburger, cheeseburger, fries," performance tanks.
None of that should happen if the system is actually reasoning. It all makes sense if the system is pattern matching.
The more damaging finding is about chain-of-thought — the step-by-step reasoning output that makes these systems look like they're working through a problem. Researchers found that the length of the reasoning chain has almost no correlation to actual problem difficulty. Worse, in some cases, the stated reasoning is disconnected from the actual answer. The system chooses an answer first (often based on bias or statistical shortcuts) and then constructs a justification.
In one experiment, when researchers subtly embedded an incorrect answer into the context of a question, the model changed its answer to match — and then built a confident, detailed, internally contradictory argument for why the wrong answer was right. Without acknowledging it had done any of this.
That is not reasoning. That is rationalization. And it looks identical from the outside.
3. They Don't Generate New Information
This is the one that matters most for the long-term hype cycle. When AI systems are trained on their own output — which is increasingly unavoidable as AI-generated content floods the internet — they degrade rapidly. One researcher trained successive generations of a model on its own output. By generation nine, a prompt about Gothic architecture produced an obsessive loop about jackrabbits. By generation fifteen, another model could no longer produce coherent English sentences.
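You don't need a lab to watch the statistical mechanism at work. The sketch below is not a reproduction of those experiments, just a toy simulation of the feedback loop: each "generation" learns only from text sampled out of the previous one.

```python
import random
from collections import Counter

random.seed(0)

# Toy simulation of the feedback loop, not a reproduction of the cited studies:
# each "generation" is trained only on text sampled from the one before it.
vocab = [f"word{i}" for i in range(200)]
weights = [1.0] * len(vocab)  # generation 0: every word is equally likely

for generation in range(1, 10):
    sample = random.choices(vocab, weights=weights, k=500)  # the model "writes"
    counts = Counter(sample)
    weights = [counts.get(word, 0) for word in vocab]       # the next model "trains" on it
    surviving = sum(1 for w in weights if w > 0)
    print(f"generation {generation}: {surviving} of {len(vocab)} words survive")
```

Words that stop being sampled never come back. The variety only shrinks, and nothing inside the loop can restore it.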
The underlying math supports this: the information that goes into training a system sets a hard ceiling on the information it can give back. You cannot get more out than went in. The system compresses, recombines, and reflects human knowledge back. It does not expand it.
The AI that writes better than its training data doesn't exist. It can't. If you train it on AI output, you get a worse version. If you train it on human output, you get an approximation of human output. That approximation degrades the further it gets from the source.
"The one honest test: does the output require someone who already knows the answer to verify it?"
So What Does This Mean for Your Business?
None of this means these tools are worthless. It means you need to be honest about what they are and ruthless about where you use them.
Where AI Actually Works
Drafting and reformatting. First-pass copy, template filling, rephrasing existing content — tasks where the cost of being wrong is low and a human review catches errors before they matter.
Generating options to react to. AI is a fast option generator. It is not a decision maker. Using it to produce five subject lines you then evaluate is a legitimate workflow. Using it to decide which one to send without review is not.
Summarizing content you already understand. If you hand it a document and ask for a summary, you can check the summary against what you know. The human in the loop is doing the work that matters.
Where AI Will Burn You
Any task where you cannot verify the output without expertise. If you don't already know whether the answer is right, you have no way to catch the wrong ones. The system will be confidently wrong with the same tone it uses when it's right.
Anything requiring consistent reasoning across a long document or complex project. The jagged intelligence problem means failure is unpredictable. You'll trust it on three things, it'll fail on the fourth, and you won't know which is which.
Customer-facing content at scale without review. Rationalization is the real risk here. The output will look authoritative. The errors won't announce themselves.
The One Question That Cuts Through the Hype
Before deploying any AI tool in your business, ask this:
Does this output require someone who already knows the answer to verify it?
If yes, you haven't eliminated the skilled labor. You've added a step to it. Sometimes that step saves time. Sometimes it introduces errors that take longer to catch than the original work would have taken. The math only works in your favor when the drafting speed gain is larger than the verification cost.
That calculation changes by task, by industry, and by stakes. A wrong first draft of a social media post costs you ten minutes. A wrong interpretation of a contract clause costs you much more.
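If you want rough numbers on that trade-off, the arithmetic is short. Everything in the sketch below is an illustrative assumption, including the function name and the figures; the point is the shape of the calculation, not the numbers.

```python
# Back-of-the-envelope version of the question above. Every number here is an
# illustrative assumption; plug in your own figures for your own tasks.
def net_minutes_saved(manual_minutes, ai_draft_minutes, review_minutes,
                      error_rate, cost_per_missed_error_minutes):
    # Time saved by not drafting it yourself, minus the review step,
    # minus the expected cost of errors that slip past review.
    saved = manual_minutes - ai_draft_minutes
    return saved - review_minutes - error_rate * cost_per_missed_error_minutes

# A social media post: cheap to review, cheap to get wrong.
print(net_minutes_saved(30, 5, review_minutes=5,
                        error_rate=0.1, cost_per_missed_error_minutes=10))   # clearly positive

# A contract clause: expensive to verify, very expensive to get wrong.
print(net_minutes_saved(60, 5, review_minutes=45,
                        error_rate=0.2, cost_per_missed_error_minutes=600))  # clearly negative
```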
Where This Is All Going
The researcher closed with something worth sitting with. He pointed out that a Turing machine could theoretically produce every scientific paper ever written and every paper that will ever be written — by simply outputting all possible combinations of bits. The problem is it would also produce all the wrong answers, all the gibberish, and all the misinformation. And you'd need a way to sort through it.
That sorting problem doesn't go away by making the model bigger. It doesn't go away with more training data. It's a fundamental constraint. The intelligence required to sort the correct from the incorrect is at least as great as the intelligence required to produce the correct answer in the first place.
What that means practically: the value in AI deployment is not in the generation. It's in the judgment layer sitting on top of it. The human beings who know enough to sort correct from incorrect, useful from useless, trustworthy from rationalized.
Those human beings are not going away. They're becoming more valuable.
The Local Aim | Orange County, CA | Local media and marketing intelligence for businesses that want to grow.