The ARC Prize organization designs benchmarks specifically crafted around tasks that humans complete easily but that are difficult for AI systems such as LLMs, "reasoning" models, and agentic frameworks.

ARC-AGI-3 is the first fully interactive benchmark in the ARC-AGI series. ARC-AGI-3 represents hundreds of original turn-based environments, each handcrafted by a team of human game designers. There are no instructions, no rules, and no stated goals. To succeed, an AI agent must explore each environment on its own, figure out how it works, discover what winning looks like, and carry what it learns forward across increasingly difficult levels.
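
In agent terms, the setup implies a loop like the sketch below. This is a toy, hypothetical stand-in (one environment with a single hidden goal), not the actual ARC-AGI-3 API: the agent can only act, observe, and infer for itself what winning means.

```python
# Hypothetical sketch of the agent loop an interactive benchmark implies.
# ToyEnvironment is a made-up stand-in, NOT the real ARC-AGI-3 API.
import random

class ToyEnvironment:
    def reset(self):
        self.pos = 0
        return self.pos                      # the first observation ("frame")

    def step(self, action):
        self.pos += 1 if action == "right" else -1
        done = (self.pos == 5)               # hidden, unstated win condition
        return self.pos, done

def explore(env, actions, max_steps=10_000):
    """Baseline agent: random actions. Action count is the efficiency
    metric the benchmark scores, so fewer steps is better."""
    env.reset()
    for step in range(1, max_steps + 1):
        _, done = env.step(random.choice(actions))
        if done:
            return step                      # solved this level
    return None                              # failed within the budget

print(explore(ToyEnvironment(), ["left", "right"]))
```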

Previous ARC-AGI benchmarks predicted and tracked major AI breakthroughs, from reasoning models to coding agents. ARC-AGI-3 points to what’s next: the gap between AI that can follow instructions and AI that can genuinely explore, learn, and adapt in unfamiliar situations.

You can try the tasks yourself here: https://arcprize.org/arc-agi/3

Here is the current leaderboard for ARC-AGI-3, using state-of-the-art models:

  • OpenAI GPT-5.4 High - 0.3% success rate at $5.2K
  • Google Gemini 3.1 Pro - 0.2% success rate at $2.2K
  • Anthropic Opus 4.6 Max - 0.2% success rate at $8.9K
  • xAI Grok 4.20 Reasoning - 0.0% success rate at $3.8K

ARC-AGI-3 Leaderboard
(Logarithmic cost on the horizontal axis. Note that the vertical scale runs from 0% to 3% in this graph. If human scores were included, they would sit at 100%, at a cost of approximately $250.)

https://arcprize.org/leaderboard

Technical report: https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf

In order for an environment to be included in ARC-AGI-3, it needs to pass a minimum "easy for humans" threshold. Each environment was attempted by 10 people, and only environments that could be fully solved by at least two human participants (independently) were considered for inclusion in the public, semi-private, and fully-private sets. Many environments were solved by six or more people. As a reminder, an environment is considered solved only if the test taker was able to complete all levels upon seeing the environment for the very first time. As such, all ARC-AGI-3 environments are verified to be 100% solvable by humans with no prior task-specific training.
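
A minimal sketch of that inclusion rule (the data layout here is illustrative, not from the report):

```python
# Inclusion rule sketch: an environment qualifies for ARC-AGI-3 only if at
# least 2 of its 10 first-time human testers completed every level.
def qualifying_environments(solver_counts, min_solvers=2):
    # solver_counts: {environment_name: number of humans who fully solved it}
    return {env: n for env, n in solver_counts.items() if n >= min_solvers}

print(qualifying_environments({"env_a": 6, "env_b": 1, "env_c": 2}))
# env_b is dropped; env_a and env_c are eligible for inclusion
```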

        • PhoenixDog@lemmy.world · 4 days ago (edited)

          Someone else in the comments said it perfectly. AI is just data regurgitation. It’s like calling me highly intelligent because I read you a paragraph from Wikipedia. I didn’t know anything. I just read a thing and said it out loud.

          • mechoman444@lemmy.world · 4 days ago

            No. You’re not just wrong, you’re aggressively uninformed.

            Repeating the same tired "AI is just regurgitating data" line makes it clear you don't understand what you're criticizing. Calling large language models "AI" the way you are doing it just exposes that you do not know what you are talking about. It is like a creationist smugly saying "orangutang" instead of "orangutan" and thinking they sound informed. You are not demonstrating insight. You are advertising ignorance.

            What you’re describing, reading a paragraph off Wikipedia, is literal retrieval. That is not how modern language models operate. They are not databases with a search bar attached. They are probabilistic systems trained to model patterns, structure, and relationships across massive datasets. When they generate a response, they are not pulling a stored paragraph. They are constructing output token by token based on learned representations.

            If it were just regurgitation, you would constantly see verbatim copies of training data. You do not. What you see instead is synthesis. Concepts are recombined, abstracted, and adapted to context. The system can explain the same idea multiple ways, shift tone, handle novel prompts, and connect ideas that were never explicitly paired in the source material. That is fundamentally different from reading something out loud.

            Your analogy fails because it assumes nothing is being transformed. In reality, transformation is the entire mechanism. Information is compressed into weights and then expanded into new outputs.
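
            To make "constructing output token by token" concrete, here is a toy sketch. The dummy scorer stands in for real learned weights; it is illustrative only, not any actual model:

```python
# Toy sketch of token-by-token generation. The dummy scorer stands in for
# learned transformer weights; nothing here retrieves stored paragraphs.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "."]

def next_token_logits(context):
    # Stand-in for a trained model: deterministic pseudo-scores per token.
    rng = np.random.default_rng(abs(hash(tuple(context))) % 2**32)
    return rng.normal(size=len(vocab))

def generate(prompt, n_tokens=5):
    tokens = list(prompt)
    for _ in range(n_tokens):
        logits = next_token_logits(tokens)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                        # softmax -> distribution
        tokens.append(str(np.random.choice(vocab, p=probs)))  # sample a token
    return " ".join(tokens)

print(generate(["the"]))   # output is constructed one token at a time
```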

            Is it human intelligence? No. Is it perfect? No. But reducing it to "just reading Wikipedia out loud" is not skepticism. It is a basic failure to understand how the technology works.

            If you are going to criticize something, at least learn what it is first.

            • lordbritishbusiness@lemmy.world · 4 days ago

              Counterpoint: Why should they learn about it?

              It is a good thing to reduce ignorance, but there is more to learn in the world than there is time to learn or space in the brain. People must specialise.

              You must accept that not everyone will understand everything, and this is okay.

              The nature of a Large Language Model is very specialist knowledge; "data regurgitation" is apt from a distance, especially when most publicly available models are primarily used for search.

              Criticism must be accepted, even from those who do not understand, so long as it's in good faith. It is, after all, an opportunity to reduce ignorance for someone with the time and interest to learn.

              Don't rudely lord your intelligence over someone else; it might not end well, and it invalidates the delivery of your entire argument.

              • mechoman444@lemmy.world · 4 days ago

                The reason he should learn about it is that he's talking about it as though he's informed, and he is not.

                I don't have to be an LLM programmer working at OpenAI to have a working knowledge of how these machines function. It's literally just a Google search.

                He made an unreasonable, ignorant comment and I called him out. He should feel ashamed, and I have absolutely no reason to water down what I'm saying under the guise of being nice.

            • PhoenixDog@lemmy.world · 4 days ago (edited)

              This might be the most comprehensive comment I've ever read of someone announcing to the world how utterly stupid they are. It's incredibly impressive how articulately you described your absolute lack of critical thinking.

              It's almost like intentionally shooting yourself in the nuts, then openly releasing the video of it and saying you promote gun safety.

              • mechoman444@lemmy.world · 4 days ago

                Calling an LLM a Wikipedia regurgitator is factually and objectively incorrect.

                Is there anything that you can say to refute the facts that I presented in my above comment?

                (I rolled my eyes so hard at your comment that I pulled my back out.)

  • fox2263@lemmy.world · 5 days ago

    I can't see AI actually being intelligent until it no longer needs to send a built-up prompt of guides, skills, and the chat history on every submission.

    It’s no different from Alexa 15 years ago with skills. Just a better protocol and interface and ability to parse the current user prompt.

    In my opinion of course.
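
    For example, a typical chat setup is stateless and re-sends everything per turn. A minimal sketch, with call_model() as a hypothetical stand-in for any provider API:

```python
# Sketch of why chat "memory" lives client-side: the full transcript (plus
# any system guide/skills) is re-sent on every call. call_model() is a
# hypothetical stand-in, not any real provider's API.
SYSTEM_GUIDE = "You are a helpful assistant. Available skills: ..."

def call_model(messages):
    # A real implementation would POST `messages` and return the reply.
    return f"(reply based on {len(messages)} messages of context)"

history = [{"role": "system", "content": SYSTEM_GUIDE}]

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)        # the ENTIRE history goes up each turn
    history.append({"role": "assistant", "content": reply})
    return reply                       # the model itself retains nothing

print(chat("hello"))                   # 2 messages sent
print(chat("what did I just say?"))    # now 4 messages sent
```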

    • NotMyOldRedditName@lemmy.world · 4 days ago (edited)

      Ya, I agree. The whole infrastructure of how these work is flawed for a true AI/AGI.

      It might be able to do a lot of cool things, but it's fundamentally flawed at its core.

      Someone will need to figure out something completely different for a true AI.

      • NotMyOldRedditName@lemmy.world · 4 days ago (edited)

        Oh also, I remember Elon once talked about how the upcoming cars would get bored when they weren't doing anything with all that compute while parked, so owners could put that compute to use and get paid for it.

        Paying for the compute isn't a terrible idea in the future, but becoming bored? LOL. Fucking crazy talk.

        Like even if it were a true AI that could be bored, you're now going to enslave it to do what you want in its free time?

        • lordbritishbusiness@lemmy.world · 4 days ago

          Yeah, if it's got the capacity to be bored, it's not going to stick around waiting for you. Pets act out when bored, as will AI; better to let the ghost in the machine go have fun in an arcade or something.

          Current models can pretend to be bored when directed to, but they’re only facsimiles of thought at the moment, and the current approach probably won’t change that.

    • PhoenixDog@lemmy.world · 5 days ago

      Right? I have a Google Home Mini in our kitchen and if we ask it a question it just pulls a source from a website and tells us. That’s it. Nothing intelligent about it.

      AI now is no different. It's just pulling more complex wording from more sources (albeit often illegally obtained) to give a better (albeit sometimes incorrect) answer to the question asked.

      AI is just as stupid as Alexa is/was 15 years ago. It just has more information to pull from and still fucks it up.

  • Great Blue Heron@lemmy.ca · 6 days ago

    It’s fun to point at the crappy performance of current technology. But all I can think about is the amount of power and hardware the AI bros are going to burn through trying to improve their results.

  • Bubbaonthebeach@lemmy.ca · 5 days ago

    I tend to be anti-AI because it doesn't seem to me to be anything other than a super fast regurgitator of data. If a database can be searched for an answer, AI can do that faster than a human. However, it doesn't seem to be able to take some portion of that database, understand it, and then use that information to solve a novel problem.

    • cmhe@lemmy.world · 5 days ago

      Well… It cannot even search databases without errors.

      LLMs just produce plausible replies in natural language very quickly, and this is useful in certain situations. Sometimes it helps humans get started with a task, but as it is now, it cannot replace them, however much the capital class wants it to and sinks our money into it.

      • fruitycoder@sh.itjust.works · 5 days ago

        The better setups generate "semantic embeddings" that try to map how stored data relate to each other (by mapping how each item relates to the rest within the model's own weights and biases). That, and knowledge-graph lookups, in which the links between different articles of data are evaluated in the same way.

        In that setup, the very expensive LLM portion really does just give a rough natural-language approximation of the information.
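
        A rough sketch of that kind of embedding lookup, assuming a toy embed() in place of a trained model:

```python
# Sketch of a semantic-embedding lookup. embed() is a toy stand-in; a real
# system uses a trained model that places related texts near each other.
import numpy as np

def embed(text):
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.normal(size=16)

def top_k(query, docs, k=2):
    """Return the k docs whose embeddings are most cosine-similar to query."""
    M = np.stack([embed(d) for d in docs])
    q = embed(query)
    sims = (M @ q) / (np.linalg.norm(M, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(-sims)[:k]]

docs = ["cats are mammals", "rust is a language", "dogs are mammals"]
print(top_k("which animals are mammals?", docs))
```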

      • jj4211@lemmy.world · 5 days ago

        Yes, the key thing is that it might have extracted useful info from otherwise confusing data, it might have mixed up info from the data, or it might have just made it up.

        So it can be useful if you can then validate the info through more traditional means, but it's dubious as a first pass, and sometimes surprisingly bad in a scenario you thought it would handle well.

  • UnrepentantAlgebra@lemmy.world · 6 days ago

    If human scores were included, they would be at 100%, at the cost of approximately $250

    Wait, why did it cost real humans $250 to pass the test?

    • aesopjah@sh.itjust.works · 6 days ago

      It's also an odd metric since only 20-60% of the humans completed it. Very "60% of the time, it works every time" energy.

      Ideally they'd run the bots through multiple times (with no context or training from previous runs), but I guess that is cost-prohibitive?

      • monotremata@lemmy.ca · 6 days ago

        Yeah, this is what I was going to call out. Calling it “100% solvable by humans” and saying “if human scores were included, they would be at 100%” when 20-60% of humans solved each task seems kinda misleading. The AI scores are so low that I don’t think this kind of hyperbole is necessary; I assume there are some humans that scored 100%, but I would find it a lot more useful if they said something like “the worst-performing human in our sample was able to solve 45% of the tasks” or whatever. Given that the AIs are still scoring below 1%, that’s still pretty dark.

    • brianpeiris@lemmy.caOP · 5 days ago (edited)

      This is my rough upper-bound estimate based on the Technical Report. Human participants were paid to complete and evaluate the tasks at an average fixed fee of $128, plus $5 per solved task. So if a panel of humans were tasked with solving the 25 tasks in the public test set, it would be an average of about $250 per person. Although, looking at it again, the costs listed for the LLMs are per task, so it would actually be more like $10 per human per task. In any case, it's one or two orders of magnitude less than the LLMs.

      Participants received a fixed participation fee of $115–$140 for completing the session, along with a $5 performance-based incentive for each environment successfully solved

      https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf
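
      A back-of-the-envelope version of that estimate, using the report's figures:

```python
# Back-of-the-envelope check of the human-cost estimate above.
fixed_fee = 128            # average of the $115-$140 participation fee
bonus_per_solve = 5        # per environment solved
environments = 25          # public test set

per_person = fixed_fee + bonus_per_solve * environments
per_task = per_person / environments

print(per_person)          # 253 -> roughly $250 per person
print(round(per_task, 2))  # 10.12 -> roughly $10 per human per task
```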

        • brianpeiris@lemmy.caOP · 5 days ago (edited)

          The goal of the ARC organization is to continually measure progress towards AGI, not come up with some predictive threshold for when AGI is achieved.

          As long as they can continue to measure a gap between “easy for humans” and “hard for AI”, they will continue releasing new iterations of this ARC-AGI challenge series. Currently they do that about once a year.

          More detail about the mission here: https://arcprize.org/arc-agi

    • brianpeiris@lemmy.caOP · 5 days ago (edited)

      You can really only judge the fairness of the score if you understand the scoring criteria. It is a relative score where the baseline is 100% for humans, i.e., a task was only included in the challenge if at least two people in the panel of humans were able to solve it completely, and their action count is a measure of efficiency. This is the baseline used as the point of comparison.

      From the Technical Report:

      The procedure can be summarized as follows:
      • “Score the AI test taker by its per-level action efficiency” - For each level that the test taker completes, count the number of actions that it took.
      • “As compared to human baseline” - For each level that is counted, compare the AI agent’s action count to a human baseline, which we define as the second-best human action count. Ex: If the second-best human completed a level in only 10 actions, but the AI agent took 100 to complete it, then the AI agent scores (10/100)^2 for that level, which gets reported as 1%. Note that level scoring is calculated using the square of efficiency.
      • “Normalized per environment” - Each level is scored in isolation. Each individual level will get a score between 0% (very inefficient) and 100% (matches or surpasses human-level efficiency). The environment score will be a weighted average of level scores across all levels of that environment.
      • “Across all environments” - The total score will be the sum of individual environment scores divided by the total number of environments. This will be a score between 0% and 100%.

      So the humans "scored 100%" because that is the baseline by definition, and the AIs are evaluated on how close they get to human correctness and efficiency. A score of 0.26% is therefore 1/0.0026 ≈ 385 times less efficient (and less correct) than the human baseline.
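
      The same scoring rule as a small sketch (one simplification: a plain average per environment, where the report specifies a weighted average):

```python
# Sketch of the scoring rule quoted above: per-level efficiency vs. the
# second-best human, squared, averaged per environment, then across all
# environments. Numbers are made up for illustration.
def level_score(human_actions, ai_actions):
    if ai_actions is None:                      # level never completed
        return 0.0
    eff = min(1.0, human_actions / ai_actions)  # vs. second-best human
    return eff ** 2                             # efficiency is squared

def environment_score(levels):
    # levels: list of (second_best_human_count, ai_count_or_None)
    return sum(level_score(h, a) for h, a in levels) / len(levels)

def total_score(environments):
    return sum(environment_score(e) for e in environments) / len(environments)

# The example from above: human needs 10 actions, AI needs 100.
print(level_score(10, 100))                     # 0.01, reported as 1%
```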

  • Sam_Bass@lemmy.world · 6 days ago

    AI code is prewritten, and the AI is unable to edit it. Humans edit their "code" every second.

    • Lumisal@lemmy.world · 5 days ago

      It's funny because that means something like freaking Neurosama, made by a YouTuber, could probably do better at AGI than these multi-billion-dollar companies, because it was designed so it can modify its own code depending on the task given (and at one point it did so while not directly prompted).

      Of course, this makes Neurosama completely useless at work focused tasks outside of coding, because it can and does refuse to do things on purpose.

      And that’s exactly why you won’t see AGI coming from any huge business corporation - because they’re trying to make something that replaces workers, rather than something that has no direct purpose.

      (Disclaimer - this is not to say Neurosama is AGI in any way, just that it could probably do the tasks much better than the mainstream AIs can, because it was built with flexibility and adaptability in mind.)

  • SuspciousCarrot78@lemmy.world · 6 days ago

    “…specifically crafted to demonstrate tasks that humans complete easily”

    Motherfucker, I can’t work out Minesweeper. I got zero fucking chance with your mystery box bloop game.

  • tatterdemalion@programming.dev · 6 days ago (edited)

    LLMs might suck at this game, but I'm pretty sure DeepMind's deep reinforcement learning AI could solve these easily.

    EDIT: I know you guys hate AI around here, but you need to at least be aware of what the technology is capable of.

    From 11 years ago:

    https://youtu.be/V1eYniJ0Rnk

    • yogurt@lemmy.world · 5 days ago

      No, because it's designed around all the things AI can't do. Breakout is a quick, repetitive loop of pass/fail linear progression. AI melts down when it has to backtrack, keep track of multiple pieces of context, and figure out how to do something but not do it yet.

      • tatterdemalion@programming.dev · 6 days ago

        Wdym? It’s existed for at least a decade. Plenty of papers about it. It mastered Atari and Mario. It became the best Go player.

            • 33550336@lemmy.world · 5 days ago

              Yes – this game has a fixed, relatively small set of rules, so the RL could learn to play by playing millions of games at random while following the rules of the game. Contrast this with the daunting (infinite) number of situations one may encounter in daily life.

    • kshade@lemmy.world · 5 days ago

      I guess the idea is that yes, machine learning algorithms could be used to solve these, but that's essentially brute forcing. You can make a simple algorithm learn how to complete Super Mario Bros or how to make a virtual robot walk; it just takes millions of iterations. The promised actual artificial intelligence wouldn't need that.
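
      For a sense of what that brute forcing looks like, here is a minimal tabular Q-learning sketch (the tabular ancestor of the deep RL that mastered Atari) on a made-up toy task; it only improves through sheer repetition on that one fixed task:

```python
# Minimal tabular Q-learning sketch. ChainEnv is a made-up stand-in task;
# the agent only gets good through massive trial and error on it.
import random
from collections import defaultdict

class ChainEnv:
    """Walk right 10 cells to reach a goal the agent is never told about."""
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):
        self.pos = max(0, self.pos + (1 if action == "right" else -1))
        done = (self.pos == 10)
        return self.pos, (1.0 if done else 0.0), done

def q_learn(env, actions, episodes=10_000, alpha=0.1, gamma=0.9, eps=0.1):
    Q = defaultdict(float)                   # (state, action) -> value
    for _ in range(episodes):                # brute force: huge repetition
        state, done = env.reset(), False
        while not done:
            if random.random() < eps:
                action = random.choice(actions)              # explore
            else:                                            # exploit
                action = max(actions,
                             key=lambda a: (Q[(state, a)], random.random()))
            nxt, reward, done = env.step(action)
            best = max(Q[(nxt, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best
                                           - Q[(state, action)])
            state = nxt
    return Q

Q = q_learn(ChainEnv(), ["left", "right"])
print(Q[(0, "right")] > Q[(0, "left")])      # usually True after training
```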

  • lath@lemmy.world · 6 days ago

    Biased study. Take any average person off the streets and shove this thing in their face. That 100% notion will go down fast.

    • tomalley8342@lemmy.world · 6 days ago

      They didn’t say “100% of humans can solve this benchmark”, they said “humans can solve 100% of this benchmark”.

      • lath@lemmy.world · 6 days ago

        “Humans score 100%. Frontier AI scores 0.26%.”

        The title deals in absolutes.

          • lath@lemmy.world · 6 days ago

            🤔 So this is a visual comparison between peak performance of some humans and peak performance of current LLMs in a controlled environment?

      • lath@lemmy.world · 6 days ago

        If it studies something, it's a study. If you feel defensive, you perceive aggression. If you feel bias one way, someone else can feel bias another way. If there's an action, there's a reaction.

        • pulsewidth@lemmy.world · 5 days ago

          If there’s an action, there’s a reaction.

          Sort of like how when people outsource all their critical thinking to AI, their ability for critical thinking atrophies?

    • brianpeiris@lemmy.caOP · 6 days ago (edited)

      ARC-AGI-3 Launch event - Shared publicly live on March 25 in San Francisco at Y Combinator HQ, featuring a fireside conversation between François Chollet (creator, ARC-AGI) and Sam Altman (CEO, OpenAI) on measuring intelligence on the path to AGI.

      François Chollet is a software engineer, artificial intelligence researcher, and former Senior Staff Engineer at Google. Chollet is the creator of the Keras deep-learning library released in 2015.