The AI existential doom debate echoes a bit of "The Nine Billion Names of God," a short story by Arthur C. Clarke. It's about a five-minute read, so if you don't want the ending spoiled, here's a link:
https://urbigenous.net/library/nine_billion_names_of_god.html
In the story, a sect of monks believes that by listing every possible combination of nine-character words in a specific alphabet, the human race will have fulfilled the purpose it was created for, and the universe will end.
They hire a firm to sell them one of its Automatic Sequence Computer machines and lend them two engineers for a few months to run the job. Every day it prints out combinations of letters, working from "AAAAAAAAA" toward "ZZZZZZZZZ". The monks cut up the printouts and paste them into a book.
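The monks' job is, at heart, just exhaustive enumeration. A minimal sketch of it in modern Python (assuming A–Z as a stand-in for the monks' actual alphabet, which the story leaves unspecified):

```python
from itertools import product
from string import ascii_uppercase

# Lazily generate every fixed-length "name" in alphabetical order.
# At 26**9 (about 5.4 trillion) nine-character names, you would not
# want to print them all -- hence the generator.
def names(alphabet=ascii_uppercase, length=9):
    for combo in product(alphabet, repeat=length):
        yield "".join(combo)

first = next(names())
print(first)  # AAAAAAAAA
```

The Mark V of the story grinds through this by brute force; a lazy generator at least spares you from materializing all 5.4 trillion strings at once.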
To the surprise of the two engineers helping the monks, the quixotic belief proves accurate, and the story ends with the stars in the night sky calmly winking out, one by one.
It’s a well-written story, and if you haven’t read it, I recommend doing so just for the subtle, elegant prose.
In 1953, when this story was written, we had only just begun to understand what computers could do. The Mark V of "The Nine Billion Names of God" is ultimately little more than a glorified calculator with a printer attached, but underneath it all lurked an existential fear of what we might do with a technology whose applications we are uncertain of, or what someone we poorly understand might do with that technology.
Looking back, this short story and similar apocalyptic technology tales from the Cold War feel a little silly. But for the people of the era they were perfectly rational fears to have: "What if you could do something you couldn't do before?"
In the current AI discourse there's a similar undercurrent of uncertainty about a technology we don't fully understand and, with some justification, are afraid of.
However, it seems silly to think that this current iteration of "AI" constitutes an existential threat. I think there's some form of automated ML system out there that could threaten the future of humanity, but it most certainly is not the stochastic parrots people are hyping up this cycle.