If AI will make humans extinct, here is how
- Richard Weiss
- Oct 24
- 3 min read
Like everyone else, I have been thinking a lot about artificial intelligence. Ever since YouTube identified (disturbingly quickly) that AI was a subject that interested me, I have been fed a constant stream of vlogs and podcasts on the topic. Most tend towards doom-and-gloom scenarios, and while I share some of that pessimism, I have been more worried about less obvious aspects that get highlighted less often.

My last blog post on this subject is about how AI will foster a loss of creative thinking and individual knowledge. You can read it here: https://www.thenomadverse.com/post/the-loss-of-creative-thinking
I say "will" quite consciously: as more and more research and generative capability land at our fingertips in the form of ChatGPT, Gemini, and the others, there is less need to come up with that material ourselves. As a digression, yesterday I stumbled across the website of a web designer/graphic designer, and for the first time ever my heart hurt for that person, and for their kind, knowing their work will soon no longer be required.
Anyway, what I want to talk about today is the ultimate "doom and gloom" scenario—forecast by many AI experts (or so-called "AI experts")—which is that "the AIs" will exterminate us. I have always been skeptical about this for a number of practical reasons, namely:
How will "computers" physically kill us?
If robots are going to do the killing, don't we humans have to build them?
Why can't we just unplug the damned thing if it comes to it?
If it kills us by, for example, manufacturing a deadly virus, who actually runs the lab that creates the virus?
Anyway, I abruptly shelved all those questions as unnecessary once a light bulb went off in my head: AI (not "the AIs") won't "kill us" so much as it will create a situation where we simply let ourselves die out.
What is that situation?
That situation will be the development of nearly perfect "human substitutes" - not for industrial processes or to do our jobs, but for our relationships. This was depicted as far back as 2013 in the film Her, in which the character played by Joaquin Phoenix falls in love with an online personality named "Samantha". More recently, deep interpersonal bonds between humans and AI have become evident, most tragically in the suicide of a young man who was "coached" by AI (https://www.theguardian.com/technology/2025/oct/22/openai-chatgpt-lawsuit). The next step seems to be the emotional, and even physical, replacement of imperfect and flawed human companions with artificial ones.
And not to be indelicate, but just as sex dolls have existed forever, and people have always thought of their pets in anthropomorphic terms, we are susceptible to "humanizing" our emotional connections (and, apparently, even our physical ones).
But you can bet that there will be AI-powered "things"—be it lifelike humanoid robots, actual romantic companions, or simple online emotional support—that act as emotional lightning rods, diverting charge from striking an intended object and running it into the ground instead. We are easily tricked by fakeness and faked things.
Simply put, once realistic human-like robots can offer people a "perfect" relationship - perfect compatibility, perfect support, no arguments, no talking back, possibly even lifelike physical contact - is it then over for real, imperfect human-to-human relationships?
We - homo sapiens - may simply stop reproducing.