Nobody Is Watching the Code
The Local Aim Due Diligence Desk — April 2026

This week Google's CEO announced that AI now writes 75 percent of the company's software. He said engineers review it all before it ships. A lot of people read that and felt better. They shouldn't have.

Here's the honest picture. No technical background required.

What "Review" Actually Means at Google's Scale

When a Google engineer "reviews" AI-generated code, they are not sitting down and reading it the way you'd read a contract before signing. They're looking at a before-and-after snapshot — here's what existed, here's what changed — and deciding whether it looks obviously broken.

That takes seconds, not hours. At a company with tens of thousands of engineers producing millions of lines of code every day, that's the only review that's physically possible.

Plain English version: imagine you hired someone to answer all your customer calls. They handle 75 percent of them now. But your "review" is that they send you a two-sentence text afterward and you reply "looks fine." You approved it. But you weren't on the call. You don't know what was actually said.

That is what "engineers review AI-generated code" means at Google's volume. The accountability is real on paper. The oversight is a formality in practice.

The Next Step Is Already Being Built

Here is the part that didn't make the headlines: Google's own AI is now reviewing code that Google's own AI wrote. One system generates the software. Another checks it. The human in the middle is increasingly there to press a button.

This is not speculation. Microsoft's chief technology officer said publicly last year that he expects 95 percent of all code to be AI-generated within five years. Nobody pushed back. Nobody asked who reviews it then.

The numbers tell the story fast. In October 2024, 25 percent of Google's code was AI-generated. Today it's 75 percent. That's eighteen months. If that pace holds — and there's no sign it won't — the review step goes AI too. Probably within two years.

We've Seen This Movie Before

Before the 2008 financial crisis, banks were packaging thousands of mortgages into complex financial products and having humans "review" them before selling them to investors. The volume was too large for any person to actually understand what they were approving. The reviews became a formality. The approvals kept coming. Then everything collapsed — because the problems weren't obvious on the surface. They were buried in details nobody had time to read.

Code that is subtly wrong is not going to crash your phone on Tuesday. It's going to behave slightly incorrectly in edge cases, in ways that are very hard to trace back to the source. The failures won't look like failures at first. They'll look like things working — until they suddenly don't.
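For readers who want a concrete picture of "subtly wrong," here is a hypothetical sketch — the function and numbers are invented for illustration, not drawn from any real system. The code looks fine at a glance and works on the obvious inputs, but silently misbehaves on small ones:

```python
# Hypothetical example: code that passes a quick review
# but fails quietly in an edge case.

def apply_discount(price_cents, percent_off):
    """Return the discounted price in cents."""
    # Floor division silently rounds a small discount down to
    # zero -- nothing crashes, the price is just wrong.
    discount = price_cents * percent_off // 100
    return price_cents - discount

# Looks right at a glance: a $100 item with 20% off.
print(apply_discount(10_000, 20))  # 8000

# Edge case: a 5% discount on a 10-cent item is dropped entirely.
print(apply_discount(10, 5))       # 10
```

A reviewer skimming a before-and-after snapshot would see a reasonable-looking discount calculation and approve it. The bug only shows up for prices small enough that the rounded discount is zero — exactly the kind of failure that looks like things working until someone traces a billing discrepancy months later.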

The desk's finding: the 75 percent figure is real and verified. The claim that humans meaningfully review it all is not false — but it is misleading. "Approved by an engineer" at Google's scale means an engineer confirmed it didn't look obviously broken. It does not mean a human understood what the code does, why it does it, or what happens when it fails.

What This Means If You Own a Small Business

The software your business runs on — your booking system, your payment processor, your review platform — is increasingly built by AI and reviewed by someone who glanced at it before lunch. That's not a reason to panic. It is a reason to pay attention to what breaks and not assume it's your fault when it does.

More importantly: watch what happens to the vendors selling you AI-powered services. The same acceleration happening inside Google is happening inside every marketing platform, every review tool, every SEO agency that pivoted to "AI-powered" last year. The output is moving faster than any human quality check can follow.

The question to ask every vendor is the same one nobody asked Google: who is actually checking this before it affects my business?

Four questions worth asking any AI-powered vendor before you write a check:

1. Who reviews the output before it reaches my customers? "Our AI handles it" is not an answer. A named human responsible for quality is an answer.

2. What happens when it's wrong? If they don't have a clear answer, they haven't thought about it. That becomes your problem.

3. How do you know it's working? A dashboard showing activity is not proof of results. Ask for the specific number that tells you the service is doing its job.

4. Can I see an example of what it produces? Any vendor confident in their AI's work will show you. Hesitation is a signal.

The One Thing AI Still Can't Do

The engineers who survive this shift aren't the ones who wrote the most code. They're the ones who understood what the code was supposed to do — and could tell when something was wrong even when it looked right on the surface.

That judgment only comes from knowing what the right answer looks like from the outside.

For a local business, that's the whole game. No AI can call your customer, hear them describe the job you did, and produce a review that says exactly what the next person searching needs to read before they decide to call you. That requires a real conversation. A real person. A real relationship.

The businesses that protect those parts of their operation are the ones that will be findable, trusted, and busy in 2027. The ones that hand everything to a platform because it's cheaper are going to find out what "approved by an engineer who glanced at it" actually means when it's their customers on the receiving end.

AI writes the first draft of almost everything now. The question is who's responsible for what it says.

The Local Aim Due Diligence Desk — Orange County, CA — April 2026 thelocalaim.com — kirby@thelocalaim.com
