I think we can all agree, we should put people like this in the wood chipper, and undoubtedly the world would be a better place.
Agree. AGREE?!
Well, firstly, I want to know: are you a namby-pamby “make it quick, put them head first into the woodchipper” person.
Or a steely-eyed “make the parasites suffer and go feet first and make them watch it happen” type.
If we are going to agree on anything then we need to agree on these fundamentals!!
Why don’t we compromise and put them in arm first?
Head first, but the line up to the top has a really good view.
Dick first
What do you call a private jet full of billionaires crashing into a mountain?
A good start.
They’d never agree to share the jet.
That’s actually a pretty good idea. What if we started putting tech billionaires through the wood chipper? It could be like the American guillotine.
There is nothing too awful to happen to Sundar Pichai.
No horrible fate could befall that man that would not cause me delight.
The thing to remember is that these CEOs have made a whole living out of not knowing what they are doing, but being insufferably confident in whatever vomit of words they spew, whether they know anything or not, while ultimately just saying the most milquetoast, blatantly obvious stuff and pretending it’s very insightful. All this while they believe (and the money proves it, in their minds) that they are the most important people in the world.
So naturally it’s easy for them to believe an LLM can take all the jobs, because it can easily take theirs.
This type of stuff is exactly why I am moving all of my accounts away from Google. Google is now as bad as, or even worse than, Microsoft.

Google CEO fuckwad confirmed
That ascribes far too much agency to AI. It’s people like him who are putting society through the woodchipper.
It actually ascribes too much agency to individuals.
The technology is out there, there’s no shoving this back in the box. No matter what individuals decide to do, only laws will regulate this, and only laws that can be enforced.
Even if the US decides “no AI” a bunch of other countries are going to laugh at them, and then use AI to beat America at pretty much anything that can take advantage of AI to improve it.
It would be no different than a country refusing to use electricity or automobiles.
If AI is going to take our jobs then UBI is absolutely necessary.
AI isn’t going to take your jobs, though. AI is going to take over management of the economy, which is a very different thing.
The jobs will still exist, because manual labor continues to be far cheaper to produce and deploy than machine labor. The conditions of employment will get worse over time, as computer management tools prioritize “efficiency” (aka margin of profit) over quality of life and ecological sustainability.
Eat dick, robber baron.
Doesn’t this situation call for companies to block AI and double down on the human workforce? The companies that do would be rewarded by all of us who hate AI, and they would succeed by the supposed rules of the free market. Why isn’t any company stepping up to compete against AI-run companies? Wouldn’t it be an amazing opening to compete and win?
Also, it wasn’t talked about in the article, but one of the big arguments for why this AI thing has to be so inevitable is that we have to compete with China. They think we have to start this race with China, to try to win at AI.
First of all, I think we might have already lost the race. Second of all, even if you don’t agree that we’ve already lost: what if, by embracing AI, China and all the other countries are destroyed by it? What if it just makes so many mistakes and errors that it destroys their economies and their countries? Then the countries that were cautious about AI would be fine. We’d be the winners, not having succumbed to this ridiculous urge to use AI for everything.
People always forget that anything and everything hooked up to a network is hackable. I’ll say it again: everything hooked up to a network is hackable, including this shitty AI stuff. If we put everything into AI, even if we win, another country could just hack us and screw everything up. The bottom line is: there is a space to say no to AI and succeed.
I know I’m not super articulate about this, but I would love to see somebody else write about these ideas with more finesse than I have, so that we could all start talking about this more and stop letting this supposedly inevitable push toward AI keep going without pushback.
Palantir CEO Says a Surveillance State Is Preferable to China Winning the AI Race
Not to mention, we sold China their surveillance tech which has given them the upper hand in this “race” we never agreed to participate in.
The data collecting capabilities of an authoritarian surveillance state (which we created and sold to China) are allegedly what we will have to accept because it’s necessary to win this imaginary race with China…
I know not everybody is a “crazy conspiracy theorist,” but does that logic not seem the slightest bit obviously fucked? How much of a “conspiracy” is it really to just acknowledge there are wealthy people in positions of power who don’t have our best interests in mind when they talk about making America great and beating China?
US tech companies enabled the surveillance and detention of hundreds of thousands in China
US government allowed and even helped US firms sell tech used for surveillance in China
Detailed findings from AP investigation into how US tech firms enabled China’s digital police state
American tech companies to a large degree designed and built China’s surveillance state, playing a far greater role in enabling human rights abuses than previously known, an Associated Press investigation found. They sold billions of dollars of technology to the Chinese police, government and surveillance companies, despite repeated warnings from the U.S. Congress and in the media that such tools were being used to quash dissent, persecute religious sects and target minorities.
For me, this is the most depressing part.
So many of the technically competent individuals I know are just gleefully throwing their competency and credibility into this ‘AI’ grift chipper with utter abandon.
I can’t see a viable path through it, and whenever they’ve articulated what they see on the other side, it is beyond repugnant and I truly don’t see any benefit in existing in that world if it ever manifests.
We’re cooked. Gotta fight back
I just don’t get how so many people swear by it. Every time I lower my expectations for what it can be useful at, it proceeds to fail at exactly that the moment I actually have a use case I think one of the LLMs could tackle. Every step of the way. I keep being told the LLMs are amazing, and that I only had a bad experience because I hadn’t used the very specific model and version they love, and every time I try to verify that feedback (my work is so die-hard they pay for access to every popular model and tool), it does roughly the same stuff, ever so slightly shuffling what it gets right and wrong.
I feel gaslit as it keeps on being uselessly unreliable for any task that I would conceivably find it theoretically useful for.
I’ve had similar experiences. Try to do something semi-difficult and it fails, sometimes in an entertainingly shit way at least. Try something simple where I already know the answer? Good chance there’s at least one fundamental issue with the output.
So what are people who use this tech actually getting out of it? Do they just make it regurgitate things from StackOverflow? Do they have a larger tolerance for cleaning up trash? Or do they just not check the output?
“No job is safe … but I’ll still be a billionaire”
He’s a massive piece of shit.
I’ll suffer if he goes through the woodchipper
I’d love for him to have to suffer through… everything, really, if I’m being honest, given what he’s helping to do to humanity.