Anthropic wants to fund a new, more comprehensive generation of AI benchmarks


Anthropic is launching a program to fund the development of new types of benchmarks capable of evaluating the performance and impact of AI models, including generative models such as its own Claude.

Anthropic’s program, unveiled Monday, will pay third-party organizations that, as the company put it in a blog post, “can effectively measure advanced capabilities in AI models.” Interested parties can submit applications, which will be evaluated on a rolling basis.

“Our investment in these assessments is meant to elevate the entire field of AI safety, providing valuable tools that benefit the entire ecosystem,” Anthropic wrote on its official blog. “Developing high-quality, safety-related assessments remains challenging, and demand is outpacing supply.”

As we’ve mentioned before, AI has a benchmarking problem. The most frequently cited benchmarks today do a poor job of capturing how the average person actually uses the systems being tested. There are also questions about whether some benchmarks, particularly those released before the rise of modern generative AI, even measure what they claim to measure, given their age.

Anthropic’s proposed solution, a far more ambitious and difficult one, is to create challenging benchmarks focused on AI safety and societal implications, built with new tools, infrastructure, and methods.

The company specifically calls for tests that assess a model’s ability to accomplish tasks such as conducting cyberattacks, “enhancing” weapons of mass destruction (such as nuclear weapons), and manipulating or deceiving people (through deepfakes or misinformation, for example). As for AI risks related to national security and defense, Anthropic says it is committed to developing a kind of “early warning system” to identify and assess risks, though it does not explain in the blog post what such a system might involve.

Anthropic also says its new program aims to support benchmarks and research into “end-to-end” tasks, probing AI’s potential to assist scientific study, converse in multiple languages, mitigate ingrained biases, and self-censor toxicity.

To achieve all this, Anthropic envisions a new platform that allows subject-matter experts to develop their own evaluations, along with large-scale trials of models involving “thousands” of users. The company says it has hired a full-time coordinator for the program and may purchase or expand projects it believes have the potential to scale.

“We offer a variety of funding options tailored to the needs and stage of each project,” Anthropic wrote in the post, though an Anthropic spokesperson declined to provide further details about those options. “Teams will have the opportunity to interact directly with Anthropic’s domain experts from the Frontier Red Team, Fine-Tuning, Trust and Safety, and other relevant teams.”

Anthropic’s effort to support new AI benchmarks is admirable – assuming, of course, that it puts enough cash and manpower behind it. But given the company’s commercial ambitions in the AI race, it may be hard to trust it completely.

In the blog post, Anthropic is quite transparent about the fact that it wants some of the evaluations it funds to align with AI safety classifications it developed (with some input from third parties such as the nonprofit AI research organization METR). That is within the company’s prerogative. But it may force program applicants to accept definitions of “safe” or “risky” AI that they do not agree with.

Part of the AI community may also object to Anthropic’s references to “catastrophic” and “deceptive” AI risks, such as nuclear weapons risks. Many experts say there is little evidence to suggest that AI as we know it will ever achieve world-ending, human-surpassing capabilities. These experts argue that claims of imminent “superintelligence” serve only to distract from pressing regulatory issues, such as AI’s deceptive tendencies.

In its post, Anthropic writes that it hopes the program will “serve as a catalyst for progress toward a future where comprehensive AI evaluation is an industry standard.” It’s a mission that many open, corporate-unaffiliated efforts to build better AI benchmarks can get behind. But it remains to be seen whether those efforts will be willing to join forces with an AI vendor whose loyalty ultimately lies with its shareholders.

