

Next healthcare?
Nowadays, whether it’s free or paid doesn’t actually change anything.
Folks, we have the most INCREDIBLE announcement today. Maybe the best in the history of announcements, many people are saying it. We’re launching the beautiful Maxest app — and by the way, it was my idea, nobody else could have thought of this. It will come pre-installed on every phone. The fake news media will try to tell you it’s some kind of “spy app” - WRONG! Totally wrong!
The Democrats don’t want you to have this amazing technology. They want you to stay uninformed! But we’re giving you the most beautiful, most secure app ever created. Our cyber experts — the best in the world, probably the best who have ever lived — they built walls around this app.
Other countries will be so jealous of our app, they’ll probably try to steal it. But they can’t, because it’s AMERICAN MADE!
This is spot on. The issue with any system is that people don’t pay attention to the incentives.
When a surgeon earns more the more surgeries he performs, with no downside, most surgeons in that system will obviously push for surgeries that aren’t necessary. Balancing incentives should be the main focus of any system we’re part of.
You can pretty much understand someone else’s behavior by looking at what they’re gaining or what problem they’re avoiding by doing what they’re doing.
What are they supposed to do? Pull away from the UK?
Which is why OpenAI listed relationships with real people as a competitor to ChatGPT.
They might, once it becomes too flooded with AI slop.
Algorithms optimized for engagement with no ethics were the point where the world started going downhill.
I’ll wait until they kill it two months from now.
That’s a bit too dismissive. I’ve had a lot of interesting chats with LLMs that led me to figure out what I didn’t understand about something. For example, I’m reading a book explaining some practices of Structured Concurrency in Swift, and many times I’ve asked ChatGPT whether the author is correct about some phrasing that seemed wrong to me. ChatGPT was able to explain why it was right in that context.
Not when companies force them on you as well.
My current company forces me to use it and measures how many prompts I’m making as “productivity”.
I’m a sample of one confirming that’s exactly the reason at my brother’s company.
And in my company we’re pressured to make X prompts every week to the company’s own ChatGPT wrapper to show we’re being productive. Even our profit shares have a KPI attached to that now. So many people just type “Hello there” every morning to count as another interaction with the AI.
In reality, this doesn’t affect the existing batteries we have, it’s just for future battery technology.
No paywalled link: https://archive.is/1QR8H
At this point, I think it’s necessary to have a sort of alternate identity online and to keep anything private (photos of yourself and other personal information) offline. Except for government stuff, which requires your real identity.
Brace yourselves, because this is only going to get worse with the current “vibe coding” trend.
Only those that criticize the government, somehow. “Oops, because of some complicated algorithm, it only affected people who posted the word ‘orange’ on social media recently.”
I mean, you can argue that if you ask the LLM something multiple times and it gives that answer the majority of those times, it is being trained to make that association.
But a lot of these “Wow! The AI wrote this” moments might just as well be something random it produced by chance.
I remember listening to a podcast about scientific explanations. The host is very knowledgeable about the subject, does his research, and talks to experts when the topic involves something he isn’t an expert in himself.
There was this episode where he got into the topic of how technology only evolves with science (because you need to understand what you’re doing, and you need a theory of how it works before you can make new assumptions and test them). He gave the example of the Apple Vision Pro: even though the machine is new (its hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven correct in other applications.
So his point in the episode is that real innovation just can’t be rushed by throwing money or more people at a problem, because real innovation takes real scientists having novel insights and running experiments that expand the knowledge we have. Sometimes those insights are completely random, often you need a whole career in the field, and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).
Even the current wave of LLMs is simply a product of the Google paper that showed we could parallelize language models, leading to the creation of “larger language models”. That was Google doing science. But you can’t control when a new breakthrough is discovered, and LLMs are subject to this constraint.
In fact, the only practices we know that actually accelerate science are collaboration among scientists around the world, the publishing of reproducible papers so that others can build upon them and have insights you didn’t even think of, and so on.
This seems very useful, I just wonder whether it can interface with other digital components easily.