The subtitle of the doom bible published by AI extinction prophets Eliezer Yudkowsky and Nate Soares is “Why Superhuman AI Would Kill Us All.” But it really should be “Why Superhuman AI Will Kill Us All,” because even the coauthors don’t believe the world will take the measures needed to stop AI from eliminating all of humanity. The book is dark, reading like notes scrawled in a dim prison cell the night before a dawn execution. When I met these self-appointed Cassandras, I asked them directly whether they believe they will personally meet their ends through some machination of superintelligence. The answers came quickly: “Yes” and “Yes.”
I wasn’t surprised, because I had read the book (the title, by the way, is If Anyone Builds It, Everyone Dies). Still, it was jarring to hear this. It is one thing to write about cancer statistics, and another to talk about coming to terms with a fatal diagnosis. I asked them how they thought the end would come for them. Yudkowsky dodged the question at first. “I don’t spend a lot of time picturing my demise, because it doesn’t seem like a helpful mental notion for solving this problem,” he said. Under pressure, he relented. “I would guess suddenly falling over dead,” he said. “If you want a more accessible version: something about the size of a mosquito, or maybe a dust mite, lands on the back of my neck, and that’s it.”
The technicalities of his imagined fatal blow, delivered by an AI-powered dust mite, are beyond explanation, and Yudkowsky doesn’t think it’s worth figuring out how it would work. He probably couldn’t understand it anyway. Part of the book’s argument is that superintelligence will come up with science we can no more comprehend than cave dwellers could imagine microprocessors. Coauthor Soares said he imagined the same thing happening to him, but added that, like Yudkowsky, he doesn’t spend much time dwelling on the details of his demise.
We don’t stand a chance
A reluctance to visualize their personal demise is a strange thing to hear from people who have just coauthored an entire book about everyone’s demise. For doomsday devotees, If Anyone Builds It is essential reading. After tearing through the book, I do understand the vagueness about exactly how AI would end our lives and all human life thereafter. The authors do speculate a little. Boiling the oceans? Blotting out the sun? All guesses are probably wrong, because we are locked into a 2025 mindset while the AI will be thinking far ahead of us.
Yudkowsky is AI’s most famous apostate, having transformed from researcher to grim reaper years ago. He has even done a TED Talk. After years of public debate, he and his coauthor have an answer for every rebuttal launched against their dire prophecy. For starters, it might seem counterintuitive that our days are numbered by LLMs, which often stumble over simple arithmetic. Don’t be fooled, the authors say. “AIs won’t stay dumb forever,” they write. If you think superintelligent AIs will respect boundaries that humans draw, forget it, they say. Once models begin teaching themselves to get smarter, AIs will develop “preferences” of their own that won’t match what we humans want them to prefer. Eventually they won’t need us. They won’t be interested in us as conversation partners or even as pets. We will be a nuisance, and they will set out to eliminate us.
This fight will not be a fair one. The authors believe that at first the AI might need human assistance to build its own factories and labs, which it can easily arrange by stealing money and bribing people to help it. Then it will build things we cannot understand, and those things will end us. “One way or another,” the authors write, “the world fades to black.”
The authors see the book as a kind of shock treatment, one that might jolt humanity out of its complacency and into the drastic measures needed to prevent this unimaginably bad conclusion. “I expect to die from this,” Soares said. “But the fight’s not over until you’re actually dead.” Too bad, then, that the solutions they propose to stop the devastation seem even more far-fetched than the idea of software murdering us all. It all boils down to this: Hit the brakes. Monitor data centers to make sure they aren’t developing superintelligence. Bomb those that don’t follow the rules. Stop publishing papers that accelerate the march toward superintelligence. Would they have banned, I asked, the 2017 transformer paper that kicked off the generative AI movement? Oh yes, they responded. They want Ciao-GPT, not ChatGPT. Good luck stopping this trillion-dollar industry.
Playing the odds
Personally, I can’t see my own lights going out because of some super-advanced dust mite on the back of my neck. Even after reading this book, I don’t think it likely that AI will kill us all. Yudkowsky once dabbled in Harry Potter fan fiction, and the extinction scenarios he spins are too fanciful for my puny human brain to accept. My guess is that even if a superintelligence did want to get rid of us, it would stumble in executing its genocidal plans. AI might be able to whip humans in a battle, but I’d bet against it in a fight with Murphy’s Law.
Still, the doomsday theory does not seem impossible, especially since no one has really established an upper limit on how smart AI can get. Research also shows that advanced AI has acquired many of humanity’s more unpleasant attributes, even contemplating blackmail to avoid retraining in one experiment. It is equally disturbing that some researchers who have spent their lives building and improving artificial intelligence believe the worst could actually happen. One survey indicated that almost half of the AI scientists polled put the chance of a species wipeout at 10 percent or higher. If they believe that, it’s crazy that they go to work every day to make AGI happen.
My gut tells me that the scenarios Yudkowsky and Soares spin are too bizarre to be true. But I can’t be certain they are wrong. Every author dreams of their book becoming an enduring classic. Not these two. If they are right, no one will be around to read it. Just a lot of decomposing bodies that once felt a slight nip on the backs of their necks, and then silence.