• TropicalDingdong@lemmy.world · 2 days ago

    288 GB HBM4 memory

    jfc…

    Looking at the specs… fucking hell, these things probably cost over $100k.

    I wonder if we’ll see a generational performance leap with LLMs scaling to this much memory.

    • AliasAKA@lemmy.world · edited · 2 days ago

      Current models are speculated at 700 billion parameters plus. At 32-bit (full) precision that’s 4 bytes per parameter, so 2.8 TB of RAM per model, or about 10 of these units just to hold the weights (rough numbers sketched below). There are ways to lower that, but if you’re training at full precision you’d use over 2x this, maybe 4x depending on how you store gradients and optimizer updates. It’s possible they actually train at 32-bit, but I’d be kind of surprised.

      Edit: Also, they don’t release parameter counts anymore, but some folks think newer models are around 1.5 trillion parameters, so figure roughly 2–3x the numbers above. The only real strategy for these guys is bigger. I think it’s dumb, and the returns are diminishing rapidly, but you’ve got to sell the investors. If reciting nearly whole works verbatim is easy now, it’s going to be exact if they keep going: they’re approaching parameter counts large enough to just straight up memorize whole works.
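      To make the arithmetic concrete, here’s a minimal sketch in Python. All the model figures are the speculated numbers from above, not published specs, and the 288 GB per unit comes from the article:

      ```python
      # Back-of-envelope LLM memory math. Parameter counts are speculation,
      # not official figures; 288 GB HBM4 per unit is from the article.
      BYTES_PER_PARAM = {"fp32": 4, "fp16/bf16": 2, "int8": 1}
      HBM_PER_UNIT_GB = 288

      def weights_tb(params_billions: float, bytes_per_param: int) -> float:
          """Terabytes needed just to hold the weights."""
          return params_billions * 1e9 * bytes_per_param / 1e12

      for name, params_b in [("~700B model", 700), ("~1.5T model", 1500)]:
          for prec, nbytes in BYTES_PER_PARAM.items():
              tb = weights_tb(params_b, nbytes)
              units = tb * 1000 / HBM_PER_UNIT_GB
              print(f"{name} @ {prec}: {tb:.1f} TB (~{units:.0f} units)")

      # Training overhead: gradients plus optimizer state (Adam keeps two
      # extra moments per parameter) commonly push totals to ~2-4x weights.
      print(f"700B @ fp32 with 4x training overhead: "
            f"{4 * weights_tb(700, 4):.1f} TB")
      ```

      At fp32 the 700B case lands on the 2.8 TB / ~10-unit figure above; the 1.5T case needs about 6 TB, or roughly 21 units, before any training overhead.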

    • panda_abyss@lemmy.ca · 2 days ago

      Yeah they’re going to cost as much as a house.

      I think we’ll see much larger active portions of larger MoEs, and larger context windows, which would be useful.
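      As a toy illustration of why more memory helps here (all numbers made up, not any real model’s architecture): in a mixture-of-experts model only the routed “active” experts run per token, so the active-parameter footprint is a fraction of the total.

      ```python
      # Hypothetical MoE sizing -- illustrative numbers only.
      def moe_params(total_experts: int, active_experts: int,
                     params_per_expert_b: float, shared_b: float):
          """Return (total, active) parameter counts in billions."""
          total = shared_b + total_experts * params_per_expert_b
          active = shared_b + active_experts * params_per_expert_b
          return total, active

      total_b, active_b = moe_params(total_experts=64, active_experts=4,
                                     params_per_expert_b=20, shared_b=30)
      print(f"total: {total_b:.0f}B params, active per token: {active_b:.0f}B")
      # -> total: 1310B params, active per token: 110B
      # At 2 bytes/param (fp16), the 110B active set is ~220 GB --
      # inside one 288 GB unit, while the full model would not fit.
      ```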

      The non-LLM models I run would benefit a lot from this, but I don’t know if I’ll ever be able to justify the cost.