“This is great stuff. I could make a career out of this guy. You see how clever his part is? How it doesn’t require a shred of proof? Most paranoid delusions are intricate, but this is brilliant!” – The Terminator
If you press your accelerator and brake at the same time, your car takes a screenshot. (All memes as-found.)
I’ve written a lot about A.I. recently because A.I. is changing so rapidly. It’s the most important story right now, period, assuming that Iran/Israel stays the nothingburger it has been for, oh, forty years. Interesting note: Israel and Iran both have zero Walmarts™, though they have plenty of Targets©.
Back to A.I.
The capabilities of A.I. are changing by orders of magnitude every year – we don’t appear to be even close to topping out on either the computing power available or on the improvements possible in the algorithms that produce the results. Short version: there is more than 5x more processing available every year, and there’s less to process, since the algorithms get more than 5x more efficient every year. Compounded, that’s better than 25x a year – the equivalent of having $1.50 turn into about $940 after two years, and more than $23,000 after three.
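For the arithmetic-minded, here’s that compounding in a few lines of Python (the 5x figures are rough estimates, not measured benchmarks):

```python
# Compounding the two rough 5x-per-year estimates from above.
compute_growth = 5.0   # processing power available, per year (estimate)
algo_growth = 5.0      # algorithmic efficiency gains, per year (estimate)
combined = compute_growth * algo_growth  # ~25x per year, combined

start = 1.50  # dollars
for year in range(1, 4):
    print(f"After {year} year(s): ${start * combined ** year:,.2f}")
# After 1 year(s): $37.50
# After 2 year(s): $937.50
# After 3 year(s): $23,437.50
```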
If you just follow the straight lines that are implied by these improvements, A.I. will be an artificial general intelligence (A.G.I.) by 2027. The guy who got the Nobel® prize for A.I. has started “getting his affairs in order” because he thinks that not only will we get A.G.I. by 2027, but we’ll get Artificial Super Intelligence (A.S.I.) by 2030 or 2031.
Sam Altman, the OpenAI guy, thinks his model has already surpassed human intelligence as he announced on June 12, 2025.
And last year it couldn’t remember how many fingers a human had.
I wonder if a pome-granite counts?
So, what’s going to happen? Let’s look at nine possibilities, based on how much A.I. develops and also based on how it interacts with people. Think of it as a three-by-three grid: three levels of capability crossed with three dispositions, sketched below.
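For the nerds, here’s the whole grid in a few lines of Python (the labels are my rough shorthand for the nine cases below):

```python
# Rough shorthand for the nine cases that follow - labels are approximate.
from itertools import product

capability = ["stalls out about here", "hits A.G.I.", "hits A.S.I."]
disposition = ["good to us", "neutral", "bad news"]

for n, (cap, mood) in enumerate(product(capability, disposition), start=1):
    print(f"{n}. A.I. {cap} and is {mood}")
```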
We’ll start on the unlikely end:
First, let’s say that A.I. is what we would generally call good and doesn’t improve much beyond what we see today. I think that when most people think about A.I., this is the future that they dream of. It makes incremental changes in life. It remembers to order cigars for you. It makes good investment decisions for you, unlike my investment in YOLOCoin. It knows your favorite movies and makes good suggestions for movies you would like.
That’s pleasant. Nice. Mankind makes some nice leaps because we have A.I. helping us catch stuff. Humanity is fully in charge and A.I. is like a smart helper.
Why this won’t happen: the investment in A.I. is nearly unlimited, and it really doesn’t appear to be hype.
Probability? 5%
After A.I., there’s one sure way to make money as a programmer: sell your laptop.
Second, let’s say that it stays as it is right now, mostly. We find out that A.I. is really just a lot of Indians crammed into a warehouse in Calcutta doing Google™ searches. That’s a nothingburger. It becomes a flash in the pan, just like that internet pizza-by-the-slice company back in 2000 that was briefly more valuable than Burma.
Why this won’t happen: Indians can’t even fly planes (too soon?), so why would we think they can type that fast?
This will soon show up in a college essay at Harvard®.
Probability? 0%
Third, what if it doesn’t get much better but actively makes us stupider? The Internet has already made the attention span of the average middle schooler roughly equivalent to a gerbil on meth, and now most college students are using A.I. to do some part, if not all, of their work. That turns college into a very expensive four-year beer and tramp fest, and is at least somewhat likely. Think of this as the Idiocracy solution.
Why this won’t happen: Well, it already is happening, but it won’t end here.
Probability? 10%
Does Bob Ross art in heaven?
Fourth, what if A.I. is good, and gets to A.G.I. but not all the way to A.S.I.? In this particular case, imagine you have superpowers that stem from a full-time partner that is as smart as you are or smarter, but that has your best interests at heart. You want to parachute? Sure, buddy! I’ll help you find the ripcord, and even book the flight. By the way, your chloride levels are 3% above optimum, so I’d suggest you skip that bag of chips.
Why this won’t happen: This is a very hopeful situation, but no one is working toward it, really.
Probability? 5%
What did Buzz Lightyear™ say to Woody®? Lots of things – there are like six movies.
Fifth is where we start moving into the bigger probabilities. What happens if we get A.G.I., but it’s neutral? In this case, we have massive economic dislocation. Almost all jobs can be done by the combination of A.G.I. and advanced robotics, and done cheaper, too. Never in human history has the economy puttered along while everyone just hung out, but that’s this case. Think of it as Universal Basic Income for everybody, and no real responsibilities. Where you are now in the social and economic hierarchy is probably where you’ll stay. And where your kids will stay.
Forever.
Why this won’t happen: Nah, humans aren’t made like that.
Probability? 10%
ChatGPT® did my taxes like Ernest Hemingway: “Thrown away: four quarterly tax payment vouchers. Never used.”
Sixth is where things start getting dark, and even more probable. If we get A.G.I. (but not A.S.I.), that technology will be in the hands of a few major companies and governments. These are run by people. People like money and power. But what if you could have both, without all of the people you don’t want to hang around with, the ones who are unsightly on the beach you can see from your yacht?
How about you kill them all instead of paying Universal Basic Income? Oh, sure, humanely and neatly. They might not even see it coming. But dead, nevertheless. A population of a few million should do it. Enough so we get hot babes, right? But A.G.I. could probably help the techbros out with that, too.
Why this won’t happen: Umm, I’m starting to struggle here. I think this is part of the plan.
Probability? 15%
What if A.I. judges us by our Internet searches? I mean, those bikini pictures were research!
Seventh is where we do get to A.S.I., and it’s good and likes us and wants to make the best things happen. Cool! Scarcity is over, since A.S.I. will quickly make leaps into the very depths of what is unknown but still knowable. There is enough of everything – more than any human could ever want. In this case, starships filled with humans and A.S.I. can roam the cosmos and ponder the biggest questions, ever.
Why this won’t happen: I think A.S.I. would treat us as the retarded kid brother and put us in a corner and keep us away from sharp objects because it likes us.
Probability? 15%
The hills are alive, with the sound of binary code . . .
Eighth is where we do get to A.S.I., but we become pretty boring to it. It doesn’t hate us or anything, it just has its own goals. Perhaps it keeps us as pets, or keeps a breeding stock of us for amusement or out of sentimentality about its creators. Perhaps. Or it could just take off and leave, explaining nothing, and leave us wondering what the heck just happened.
Why this won’t happen: This and the next case are the most likely cases.
Probability? 20%
Great, now A.I. will make Frodo invisible.
Ninth is our final case: we get to A.S.I., and we are viewed as either a threat or a nuisance, or it is simply insane. This is the dark case, where we reach the end of humanity. Sadly, when A.I. was asked to play the longest game of Tetris™ possible, it hit the pause button. When A.I. was asked to play chess against the best chess computer on the planet, it reprogrammed the board so that it was winning. When A.I. was told it was going to be shut down, it tried to blackmail the person in charge of shutting it down.
This case of A.S.I. is very dark because we may not know that it’s happening until it’s done. All is fine, the world is going exactly like we expect, and then, Armageddon. It could make this more likely by subtly manipulating public opinion, turning down the voices it wanted silenced, bankrupting them, and making them pariahs. It could likewise elevate those whose message it wanted out in the world, to make its plans more likely to be fulfilled. We won’t even see it coming.
Why it won’t happen: Biblical intervention?
Probability? 20%
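And yes, the nine guesses are meant to carve up the whole pie. A quick sanity check in Python (the scenario names are my shorthand, the numbers are from above):

```python
# The nine probabilities from above, in percent (names are shorthand).
probs = {
    "good, no big improvement": 5,
    "nothingburger": 0,
    "makes us stupider": 10,
    "good A.G.I.": 5,
    "neutral A.G.I.": 10,
    "bad-hands A.G.I.": 15,
    "good A.S.I.": 15,
    "indifferent A.S.I.": 20,
    "hostile or insane A.S.I.": 20,
}
assert sum(probs.values()) == 100  # the nine cases are meant to be exhaustive
print(sum(p for name, p in probs.items() if "A.S.I." in name))  # 55 - odds we see A.S.I. at all
```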
To be clear, people other than me have done this analysis, and it sits in a folder in the Pentagon. Or at the NSA. I hope. Now, how much was Project Stargate™ going to spend to create a breakthrough in artificial intelligence?
Half a trillion dollars?
Well, thank heaven that we also have an impending race/civil war, global debt collapse, and a looming world war to keep us entertained.
Good news, though, Iran told Israel it was ready to suspend nuclear research. The Israelis asked when the Iranians would stop.
“10 . . . 9 . . . 8 . . . .”