• 6 Posts
  • 672 Comments
Joined 2 years ago
Cake day: March 22nd, 2024

  • On a technical level, that makes zero sense.

    AI “agents” are basically just fancy prompts with a tool-calling harness. They are infinitely replicable, at zero cost, with no intrinsic value; the cost comes from the generic CPU host and the API calls to GPU servers, databases, or whatever else, all of which are centralized anyway.
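
    To make the “fancy prompt plus tool-calling harness” claim concrete, here is a minimal sketch of an agent loop. The `fake_model` function is a stand-in assumption for a real LLM API call (which would hit a centralized GPU server); everything else is just plain control flow, which is why the “agent” itself has no intrinsic value to sell.

    ```python
    # Minimal sketch of an "agent": a loop that feeds messages to a model
    # and dispatches whatever tool calls the model requests.
    # `fake_model` is a hypothetical stand-in for a real LLM API call.

    def get_time(_args):
        return "12:00"

    TOOLS = {"get_time": get_time}  # the "harness" is just a dict of functions

    def fake_model(messages):
        # A real implementation would POST `messages` to a hosted model API.
        # Here we hard-code one tool call, then a final answer.
        if not any(m["role"] == "tool" for m in messages):
            return {"tool": "get_time", "args": {}}
        return {"answer": "The time is " + messages[-1]["content"]}

    def run_agent(prompt):
        messages = [{"role": "user", "content": prompt}]
        while True:
            reply = fake_model(messages)
            if "answer" in reply:
                return reply["answer"]
            result = TOOLS[reply["tool"]](reply["args"])
            messages.append({"role": "tool", "content": result})

    print(run_agent("What time is it?"))  # → The time is 12:00
    ```

    Everything proprietary here lives behind `fake_model`; the loop itself is trivially replicable.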


    Wanna hear a dirty secret?

    “AI” cost is going to zero.

    Model capabilities aren’t scaling, but inference efficiency is exploding, thanks to resource-constrained labs and a steady stream of published breakthroughs. The endgame of the current bubble is mediocre but useful tools anyone can host themselves, dirt cheap. Maybe a bit more reliable and refined than what we have now, but about as “intelligent.”

    And guess what?

    Microsoft can’t profit off that. None of Big Tech can.

    Point being, this exec is either delusional, or jawboning to “maintain the curtain” so the world doesn’t realize that.


  • To illustrate what I mean more clearly, look at the top comments/replies for the NASA Artemis posts, as an example.

    …It’s basically all conspiracy theorists, and government skeptics.

    Twitter is surfacing the Artemis posts to them because that’s what they want to see, and what’s most engaging for them.

    In the EFF’s case, I’m not just talking about Musk’s influence. The algorithm will only show the EFF to users who would be highly engaged by it. E.g., angry skeptics who wouldn’t be swayed by the EFF anyway, or fans who already agree with the EFF. It’s literally not going to show the EFF to people who need to see it, as Twitter’s metrics would show it as unengaging.


    This is the “false image” I keep trying to dispel. Twitter is less and less the “even spread” of exposure people think it is (and that it sort of used to be), and more and more a hyper-focused bubble of what you want to hear, and only what you want to hear. All the changes Musk is making amplify that. Maybe that’s fine for some orgs, but there’s no point in the EFF staying in that kind of environment, regardless of ethics.



  • From my perspective in the local LLM scene:

    They’re getting better at being dumb tools doing mundane things: formatting for MCP, tool use, and the like is all getting trained in now.

    …They aren’t getting much smarter or more reliable, though.

    This is especially true of the big US AI houses. All sorts of incredible papers come out weekly addressing things like errors from sampling, power use, and the one-way serial autoregressive architecture, all the fundamental caps on capability, and… they aren’t even testing any of it? Contrary to what you hear, LLM development has been very, very, very conservative, and even a single failed experiment can destroy a whole division.


    The Chinese devs are at the forefront IMO, and are really pushing efficiency, but are unfortunately falling into a similar trap elsewhere.


    So I don’t know how to answer your question.

    There is TONS of low-hanging fruit to be picked, on both the model and implementation side, but it doesn’t seem like anyone is picking it efficiently? It’s largely following the corporate enshittification model of “don’t improve, scale.”

    Labs doing anything interesting/not scammy are either bought out and smothered (mostly in the US), fall into a “herd mentality” (my observation looking into China), or are crushed by corpo nonsense (Korea/Middle East/US) or messed-up laws (Europe).


  • If you’re on a browser, I’d recommend:

    https://github.com/amitbl/blocktube

    Other YouTube client apps may have a similar channel-blocking feature.

    I find that, for a given topic, there are a few common channels spamming hundreds and hundreds of junk videos. Block them as you find them, and it cleans up the feed immensely.

    It’s absolutely mind-boggling that YT doesn’t include this as a default feature.
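
    Conceptually, the channel blocklist is dead simple, which is what makes YT’s omission so baffling. Here is a hedged sketch of the idea (BlockTube itself runs in the browser against YouTube’s page data; the channel names and feed structure below are made up for illustration):

    ```python
    # Sketch of what a channel blocklist does: drop any video whose
    # channel is on the list. Channel/feed names here are hypothetical.

    BLOCKED = {"SpamChannelA", "SpamChannelB"}

    def clean_feed(feed):
        """Return the feed with all blocked-channel videos removed."""
        return [video for video in feed if video["channel"] not in BLOCKED]

    feed = [
        {"title": "Good long-form video", "channel": "NicheCreator"},
        {"title": "Junk clip #481", "channel": "SpamChannelA"},
    ]
    print(clean_feed(feed))  # only the NicheCreator video remains
    ```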


    Also, respectfully I would not get too invested in YT.

    The other day, I found my TV (with the stock app) auto-skipping sponsors. That’s just one of a bazillion ways Google is intentionally crushing creators who make anything but attention slop, so the kind of long-form content you like may not last.