Lorin Hochstein
Nobody knows how the whole system works
One of the surprising (at least to me) consequences of the fall of Twitter is the rise of LinkedIn as a social media site. I saw some interesting posts I wanted to call attention to: First, Simon W…
Lorin Hochstein:
This is the fundamental nature of complex technologies: our knowledge of these systems will always be partial, at best. Yes, AI will make this situation worse. But it’s a situation that we’ve been in for a long time.

The issue with not understanding how AI-generated systems work has nothing to do with not reviewing the code an AI generates. It's the total abdication of responsibility in favor of productivity. Yes, I don't know how the telephone network works. That's part of what I pay AT&T, a company of ~140,990 people, for every month. They understand how most of it works, and the parts they don't, somebody at some point in history did. I implicitly trust that person's understanding because I make calls and don't think about how they get connected. It's the same reason I don't think twice about the cold chain when I'm at Aldi, or the inner workings of the FAA when I'm boarding an airplane.
These systems have earned my trust, in part because I believe the people responsible for developing and operating them, throughout their entire history, have understood what they were doing. What has been forgotten, experts can rediscover. At no point was magic involved.
AI is magic because nobody understands its inner workings. We don't know why it makes the decisions it does, or whether it has biases or ulterior motives, and that scares me.
I’m not worried about AI messing up your React frontend. I’m worried about AI developing critical systems that people rely on without the people who are responsible for those systems understanding how they work.
This quote from an internal IBM training presentation in 1979 sums up the entirety of my point, and what I think responsible commentators on AI-generated systems should always understand: "A computer can never be held accountable, therefore a computer must never make a management decision." If you are building something, you are entirely responsible for it. AI cannot be responsible, so by not understanding how the system works, you are staking your reputation and assuming all liability for the actions of a black box.
That is fine for your blog, but is it okay for an Electronic Health Records (EHR) management platform? Let's not kid ourselves into thinking nobody would ever cut corners and use AI to generate sensitive code or processes. When the big industrial accident inevitably happens, will "I didn't know" be a good enough excuse?