Since 2024, Anthropic’s performance optimization team has given job applicants a take-home test to make sure they know their stuff. But as AI coding tools have improved, the test has required repeated redesigns to stay ahead of AI-assisted cheating.
Team lead Tristan Hume described the history of the challenge in a blog post on Wednesday. “Each new Claude model has forced us to redesign the test,” Hume writes. “When given the same time limit, Claude Opus 4 outperformed most human applicants. That still allowed us to distinguish the strongest candidates — but then, Claude Opus 4.5 matched even those.”
The result is a serious candidate-assessment problem. Without in-person proctoring, there’s no way to ensure someone isn’t using AI to cheat on the test — and if they do, they’ll quickly rise to the top. “Under the constraints of the take-home test, we no longer had a way to distinguish between the output of our top candidates and our most capable model,” Hume writes.
AI cheating is already wreaking havoc at schools and universities around the world, so there’s some irony in AI labs having to contend with it too. But Anthropic is also uniquely well-equipped to tackle the problem.
In the end, Hume designed a new test that focuses less on hardware optimization, making it novel enough to stump contemporary AI tools. But as part of the post, he shared the original test, inviting readers to see if they could come up with a better solution.
“If you can best Opus 4.5,” the post reads, “we’d love to hear from you.”