
Note that this uses a harness so it doesn't qualify for the official ARC-AGI-3 leaderboard

According to the authors, the harness isn't ARC-AGI specific though: https://x.com/agenticasdk/status/2037335806264971461



It is 100% ARC-AGI-3 specific though, just read through the prompts https://github.com/symbolica-ai/ARC-AGI-3-Agents/blob/symbol...


What a dick move. Open-sourcing that prompt will probably mean it gets scraped into training data, so even models that don't want to cheat will end up accidentally cheating in their next versions.


(Disclaimer: I worked on early versions of agentica_sdk, but wasn't involved in recent developments or the ARC solver.)

As other comments point out, this is about harness development and harness efficiency. Agentica SDK is a sort of meta harness that makes things easy: plug any "internal API" (as defined natively in your codebase) directly into your agent. Agentica SDK itself is not application specific; but the APIs of your application are... application specific.
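The "plug any internal API into your agent" idea can be sketched as automatically exposing an object's public methods as tools. To be clear, this is a toy illustration, not the actual Agentica SDK API; `OrderService` and `as_tools` are invented names:

```python
import inspect

# Hypothetical "internal API" from an application's codebase.
class OrderService:
    def lookup(self, order_id: int) -> str:
        """Return the current status of an order."""
        return f"order {order_id}: shipped"

def as_tools(obj):
    """Meta-harness step: turn an object's public methods into a
    name -> (callable, description) table an agent loop can dispatch on."""
    return {
        name: (fn, inspect.getdoc(fn) or "")
        for name, fn in inspect.getmembers(obj, callable)
        if not name.startswith("_")
    }

tools = as_tools(OrderService())
fn, desc = tools["lookup"]
print(fn(7))  # -> order 7: shipped
```

The point of the sketch: the wiring stays generic, and only the object you hand it is application specific.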

Re: the linked prompt. A harness is a set of tools, descriptions of how best to use those tools, and sometimes some external control flow based on the outcome of using those tools. How to "best use the tools" should always be part of the prompt (like in this case).
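A minimal sketch of that definition, with all tool names invented for illustration: the tools are plain functions, and their docstrings become the "how to best use them" text that goes into the prompt.

```python
# Sketch of a harness: tools plus descriptions of how to use them.
# The tools here (move/reset) are hypothetical examples.
def move(direction: str) -> str:
    """Move the player one cell: 'up', 'down', 'left', or 'right'."""
    return f"moved {direction}"

def reset() -> str:
    """Restart the current level from its initial state."""
    return "level reset"

TOOLS = {"move": move, "reset": reset}

def build_tool_prompt(tools: dict) -> str:
    """The 'descriptions of how best to use the tools' half of the harness."""
    lines = ["You can call these tools:"]
    for name, fn in tools.items():
        lines.append(f"- {name}: {fn.__doc__}")
    return "\n".join(lines)

print(build_tool_prompt(TOOLS))
```

The third ingredient, external control flow, is whatever loop runs the model against these tools and reacts to the results.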

So this work tries to answer: "short of telling the agent any solutions, make a simple but efficient API to play the games, hand it to the agent, and see how it does". In the world of harness development I think that's an interesting question to answer!


>In the world of harness development I think that's an interesting question to answer!

The challenge isn't about harness development though, and a sufficiently complex harness can solve these tasks rather easily.

And presenting it as if you've made a novel development for solving ARC-AGI-3 leads me to believe you're willing to waste all of our time for your benefit at every step in the future.


> a sufficiently complex harness can solve these tasks rather easily.

I claim this is not so easily done, and earlier iterations of ARC-AGI did not have the constraint in the first place. You want something that generalizes across all puzzles (hopefully even the private ones), and these puzzles are extremely diverse ... and hard; telling the model the controls and some basic guidelines for the game is the only "obvious" thing you can do.

The other point of my reply was efficiency, both in terms of creating and using the harness; the discussed solution is something that anyone (in fact, likely even an LLM itself) can cook up in a few minutes. It's not much more than a game control wrapper, so the agent can play around with the game in live Python, plus some generalities as laid out in the prompt.
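For a sense of how little code such a wrapper is, here's a toy version; the class and its methods are invented stand-ins, not the real ARC-AGI-3 client:

```python
# Toy game-control wrapper: the thin shim that lets an agent poke at
# the game from a live Python session. All names are hypothetical.
class GameSession:
    ACTIONS = {"up", "down", "left", "right", "click"}

    def __init__(self):
        self.frame = [[0] * 8 for _ in range(8)]  # stand-in grid state
        self.score = 0
        self.actions_taken = []

    def act(self, action: str) -> dict:
        """Apply one allowed action and return the new observation."""
        if action not in self.ACTIONS:
            raise ValueError(f"unknown action: {action}")
        self.actions_taken.append(action)
        return self.observe()

    def observe(self) -> dict:
        """Everything the agent gets to reason over between actions."""
        return {"frame": self.frame, "score": self.score,
                "n_actions": len(self.actions_taken)}

game = GameSession()
obs = game.act("up")
```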

(But I'm always happy to be proven wrong. What harnesses did you have in mind?)


This is so disingenuous on Symbolica's part. These insincere announcements just make it harder for genuine attempts and novel ideas.


Um, yes, this is an extremely specific benchmark harness. It has a ton of knowledge about the tasks at hand encoded into it. The tweet is dishonest even in the best light.

The hard part of these tests isn't purely reasoning ability ffs.


> this uses a harness

This seems like an arbitrary restriction. Tool-use requires a harness, and their whitepaper never defines exactly what counts as valid.


Right, fair, but look at the prompt. For the purpose of testing general intelligence, this seems kind of pointless.


It isn't arbitrary. They want to measure the capability of the general LLM.


So if I say "I want to measure your capability as a mechanic" but then also "to ensure an accurate score you're forbidden to use any tools", how are you, the human mechanic, planning to diagnose and fix the engine problem without wrenches and jack stands and the like? It makes no sense.

That said, their harness isn't generic. It includes a ridiculously detailed prompt for how to play this specific game. Forbidding tool use is arbitrary and above all pointless hoop jumping, but that doesn't make the linked "achievement" any less fraudulent.


It is more like restricting the mechanic to only using commercially available tools and not allowing them to create CUSTOM tools.


No, that would be analogous to disallowing customized harnesses, i.e. tooling specially crafted by someone else for the specific task at hand. Insisting that an LLM solve something without the ability to make use of any external tooling whatsoever is almost perfectly analogous to insisting that a human mechanic work on a car with nothing but his own bare hands.

The wrench is to the mechanic as the stock python repl is to the LLM.


They want the LLM that does the ARC-AGI-3 to be the same LLM that everyone uses.


Rephrase that in terms of the human mechanic and hopefully you can see the error of that reasoning. LLMs that perform tasks (as opposed to merely holding conversations) use tools just like we do. That's literally how we design them to operate.

In fact the LLMs that everyone uses today typically have access to specialized task specific tooling. Obviously specialized tools aren't appropriate for a test that measures the ability to generalize but generic tools are par for the course. Writing a bot to play a game for you would certainly serve to demonstrate an understanding of the task.


I'm pretty sure the LLM can use tools while doing ARC-AGI-3, but it has to be the same tools it has available all the time, not an incredibly elaborate custom harness.


To quote someone else from upthread, tool use requires a harness. Without one an LLM as commonly understood is a bare model that receives inputs and directly produces outputs the same as talking to an unaided person.


Then the LLM has to write the harness.


I'd like to suggest that prior to expressing disagreement you really ought to reread the comment you're replying to and make sure your understanding is correct.

Quoting this for the second time now: tool use requires a harness.

Without a harness the LLM has no ability to interact with the world. It has no agency. It's just spitting out text (or whatever else) into the void. There are no programming tools, no filesystem, no shell, nothing.


And by the rules of ARC-AGI-3 the LLM will have to write any harness it needs. I'm not sure what we're even arguing about at this point.


Don't the chat versions of ChatGPT or Gemini also have interleaved tool calls? Do those also count as having harnesses?


Harnesses are fine. I think people here are arguing that what was provided here to take the test isn't just a harness.


We're calling agents harnesses now?


The point of this test is to check if an AI system can figure out the game. This isn't what happened here. A human figured out the game, wrote in their prompts exactly how the game works and THEN put the AI on the problem. This is 100% cheating and imo quite stupid.


The harness would be fine if the agent coded its own harness in a controlled environment while observing the game.

Not sure if the specific rules of this prize allow that, but I would accept that


ELI5 what is a harness?

EDIT from https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf:

> We seek to fight two forms of overfitting that would muddy public sensefinding:

> Task-specific overfitting. This includes any agent that is created with knowledge of public ARC-AGI-3 environments, subsequently being evaluated on the same environments. It could be either directly trained on these environments, or using a harness that is handcrafted or specifically configured by someone with knowledge of the public environments.


I think people generally regard a harness as the system instructions + tools made available to the LLM (and probably the thing that runs the LLM conversation in a loop). An agent is, collectively, the LLM plus the harness.
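That split can be sketched concretely. In this toy example `fake_llm` is a stand-in for a real model API call; the message log, tool table, and loop around it are the harness, and the whole thing together is the agent:

```python
# Sketch of "agent = LLM + harness". fake_llm stands in for a real model
# API call; everything else (loop, tools, message log) is the harness.
def fake_llm(messages):
    # Pretend model: first request a tool call, then answer from its result.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": [2, 3]}
    return {"answer": messages[-1]["content"]}

TOOLS = {"add": lambda a, b: a + b}

def run_agent(user_msg: str) -> str:
    messages = [{"role": "user", "content": user_msg}]
    while True:
        out = fake_llm(messages)
        if "tool" in out:
            # Harness executes the tool call and feeds the result back.
            result = TOOLS[out["tool"]](*out["args"])
            messages.append({"role": "tool", "content": str(result)})
        else:
            return out["answer"]

print(run_agent("what is 2 + 3?"))  # -> 5
```

Swap `fake_llm` for a real model endpoint and you have the usual agent loop; the model never touches the world except through what the harness exposes.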


I for one think that harness development is perhaps the most interesting part at the moment and would love to have an alternative leaderboard with harnesses.


There is. The official leaderboard is without a harness, and the community leaderboard is with a harness. Read the ARC-AGI-3 technical report for details.


I went through the technical paper again, and while they explain why they decided against the harness, I disagree with them: my take is that if harnesses are overfitting, then they should be penalized on the hidden test set.

Anyway, searching both in ARC-AGI's paper and website and directly on kaggle, I failed to find a with-harness leaderboard; can you please give the link?



Ah, it's based on this repo [0] and there's only 1 non-example submission there [1], from 2 weeks ago (so it only covers the preview games). Their schema doesn't have a field to show that it's only the preview, nor does the table properly parse the score or cost. And the biggest thing is that apparently there's no validation whatsoever: submissions are never run on the hidden test games, so it's essentially useless as a comparison.

[0] https://github.com/arcprize/ARC-AGI-Community-Leaderboard [1] https://github.com/arcprize/ARC-AGI-Community-Leaderboard/bl...


I'm so into harness development right now. Once it clicked that harnesses can bring more safety and determinism to LLMs, I started to wonder where I'd need that and why (vs MCP or just throwing Claude Code at everything), and my brain gears have been turning endlessly since then. I'd love to see more of what people do with them. My use cases are admittedly lame and boring, but it's such a fun paradigm to think and develop around.


Could you point me to some resources to learn about harnesses? I’d love to hear an example of a use case you’re thinking of.



