Between the mass euthanasia efforts underway in Canada and Big Tech’s rising investment in questionable endeavors, it is abundantly clear that the ultra-elitist globalists are deeply interested in continuing the planet’s depopulation.
After all, the manmade virus from China clearly wasn’t enough to make a huge dent in the world’s population.
Which makes Big Tech’s investment in an artificial intelligence company run by a potentially pro-depopulation individual all the more frightening.
Indeed, Google has committed $2B to Anthropic, while Amazon has invested a staggering $4B in the company.
Predictably, the CEO of Anthropic has a less than rosy outlook on the future of humanity with the rise of AI.
In fact, Anthropic’s CEO and co-founder, Dario Amodei, effectively gives AI up to a “one in four chance” of destroying humanity entirely, and it’s rather distressing that both Google and Amazon alike are keen to invest in a company run by such a character.
“My chance that something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10 percent and 25 percent,” Amodei brayed.
Ah, that’s nice. That’s a rather bold prediction.
After all, who would venture outside if they have a “25 percent chance” of being struck by lightning, or swim in the sea if they have a “25 percent chance” of being mauled by a shark?
Likely not too many.
Yet here Big Tech is, dumping collective billions into a company that may well contribute to such 25 percent odds.
“That means there is a 75 percent to 90 percent chance that this technology is developed, and everything goes fine,” Amodei added cheerfully, as if “everything going fine” is a remotely acceptable reason to tolerate such a high risk in the first place.
After all, Boeing 737 MAX planes were yanked from commercial service after two fatal crashes, despite the fact that nowhere near 25 percent of those planes ended in a fatal crash.
Yet, for whatever reason, the risks from AI are apparently considered totally acceptable.
Needless to say, Amodei gave exactly zero scientific reasoning for his “10 to 25 percent” prediction, which means that Big Tech may have, in reality, invested large sums into a rather technical depopulation scheme that has a reasonable chance of succeeding.
Perhaps most perversely of all, Amodei clearly considers AI to be almost akin to a deity of sorts.
“If we can avoid the downsides then this stuff about curing cancer, extending human lifespan, solving problems like mental illness… I don’t think it’s outside the scope of what this can do,” Amodei boomed.
Wow. Just wow. Talk about deifying AI, claiming it can “extend human lifespan” as well as “solve mental illness.”
An especially hysterical remark, given that so much of today’s mental illness stems from the mass division initiated by horrific social media algorithms in the first place.
Needless to say, Amodei’s prediction contrasts quite strongly with that of Musk, who sees hope, yet also possible destruction, in the new technological innovation.
“There’s a strong probability that it will make life much better and that we’ll have an age of abundance. And there’s some chance that it goes wrong and destroys humanity …
Hopefully, that chance is small, but it’s not zero. And so I think we want to take whatever actions we can think of to minimize the probability that AI goes wrong,” Musk remarked cautiously.
So, Musk hopes the chance is “small,” though he knows it certainly isn’t zero.
The guy who believes AI has up to a 25 percent chance of “catastrophically” impacting humanity is the one who happens to get the most billions from largely amoral Big Tech.
A rather troubling trend, and hardly indicative of any form of “equity,” as myriad online comments revealed.
“It is interesting that a small group of billionaires get to decide if the general population survives…,” one individual mused.
Indeed, it is.
Something quite wicked is afoot in the “state of Denmark,” to borrow a rather relevant phrase from Shakespeare’s ever-relevant political tragedy, Hamlet.
Author: Ofelia Thornton