This week in AI: With Chevron’s collapse, AI regulation appears dead in the water


Hello friends, welcome to TechCrunch’s regular AI newsletter.

This week in AI, the US Supreme Court struck down the “Chevron deference,” a 40-year-old decision on the power of federal agencies that required courts to defer to agencies’ interpretations of congressional laws.

When Congress left aspects of its statutes vague, the Chevron deference allowed agencies to create their own rules. Now courts will be expected to exercise their own legal judgment — and the ramifications could be wide-ranging. Congress — which is hardly the most functional body these days — must now effectively try to predict the future with its law, since agencies can no longer apply basic rules to new enforcement circumstances, writes Axios’ Scott Rosenberg.

And that could doom nationwide AI regulation efforts forever.

Already, Congress was struggling to pass a basic AI policy framework — to the extent that state regulators from both parties were forced to intervene. Now, if it is to avoid legal challenges, any regulation it writes will have to be highly specific — a seemingly difficult task given the speed and unpredictability with which the AI industry moves.

Justice Elena Kagan specifically raised the issue of AI during oral arguments:

Let’s imagine that Congress passes an artificial intelligence bill and there are all kinds of delegations in it. By the nature of things and particularly the nature of the subject matter, there will be all kinds of places where, although there is no explicit delegation, Congress has in fact left a gap. … [D]o we want the courts to fill that gap, or do we want an agency to fill that gap?

Now the courts will fill the gap. Or federal lawmakers will see this exercise as futile and shelve their AI bills. Whatever the outcome, regulating AI in the US has become harder than ever.


Google’s environmental AI costs: Google has released its 2024 Environmental Report, an 80-plus page document describing the company’s efforts to apply technology to environmental issues and to reduce its own negative environmental impact. But it dodges the question of how much energy Google’s AI is using, Devin writes. (AI is notoriously energy hungry.)

Figma disables its design feature: Figma CEO Dylan Field says the company will temporarily disable its “Make Design” AI feature, which allegedly copied the design of Apple’s Weather app.

Meta changes its AI label: After Meta began tagging photos with a “Made with AI” label in May, photographers complained that the company was mistakenly labeling real photos. As Evans reports, Meta is now changing the tag to “AI info” across all of its apps in an attempt to placate critics.

Robot cats, dogs and birds: Brian writes about how New York State is distributing thousands of robot animals to the elderly amid an “epidemic of loneliness.”

Apple is bringing AI to the Vision Pro: Apple’s plans follow on from the previously announced Apple Intelligence launch on iPhone, iPad, and Mac. According to Bloomberg’s Mark Gurman, the company is also working to bring these features to its Vision Pro mixed-reality headset.

Research Paper of the Week

Text-generating models like OpenAI’s GPT-4o have become table stakes in technology. Nowadays people use them for tasks ranging from completing emails to writing code.

But despite the models’ popularity, how these models “understand” and generate human-sounding text is not a settled science. In an effort to strip away the layers, researchers at Northeastern University looked at tokenization, the process of breaking text down into units called tokens so that models can work with it more easily.

Today’s text-generating models process text as a series of tokens drawn from a fixed “token vocabulary,” where a token may correspond to a single word (“fish”) or a fragment of a larger word (“sal” and “mon” in “salmon”). A model’s token vocabulary is usually determined before training, based on the characteristics of the training data. But the researchers found evidence that models also develop an implicit vocabulary that maps groups of tokens — for example, multi-token words like “Northeastern” and phrases like “break a leg” — to semantically meaningful “units.”
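To make the “fish” versus “salmon” distinction concrete, here is a toy sketch of subword tokenization: a fixed token vocabulary is matched greedily against the input, so a word present in the vocabulary stays whole while a rarer word splits into fragments. The vocabulary and the greedy longest-match scheme here are illustrative simplifications, not the tokenizer any particular model actually uses.

```python
# Hypothetical toy vocabulary: common words and fragments plus
# single-character fallbacks so every input can be covered.
VOCAB = {"fish", "sal", "mon", "s", "a", "l", "m", "o", "n", "f", "i", "h"}

def tokenize(word: str) -> list[str]:
    """Greedy longest-match tokenization against VOCAB."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest remaining substring first, shrinking until a match.
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            raise ValueError(f"no token covers position {i} of {word!r}")
    return tokens

print(tokenize("fish"))    # ['fish']
print(tokenize("salmon"))  # ['sal', 'mon']
```

Production tokenizers (e.g., byte-pair encoding) build their vocabularies statistically from training data rather than by hand, but the end result is the same kind of split the researchers study: the model never sees “salmon” as one unit, yet apparently learns to treat the token group as one.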

Based on this evidence, the researchers developed a technique to “probe” the implicit vocabulary of any open model. From Meta’s Llama 2, they extracted phrases such as “Lancaster,” “World Cup player” and “Royal Navy,” as well as more obscure terms such as “Bundesliga player.”

This work has not been peer-reviewed, but the researchers believe it could be a first step toward understanding how lexical representations are formed in models — and could serve as a useful tool to help us understand what a given model “knows.”

Model of the Week

The Meta research team has trained several models to create 3D assets (i.e., 3D shapes with textures) from text descriptions, suitable for use in projects like apps and video games. While there are plenty of shape-generating models out there, Meta claims its models are “state-of-the-art” and support physically based rendering, which lets developers “relight” objects to give them the appearance of one or more light sources.

The researchers combined two models, AssetGen and TextureGen, inspired by Meta’s Emu image generator, into a pipeline called 3DGen to create the shapes. AssetGen turns a text prompt (e.g., “T-Rex wearing a green woolen sweater”) into a 3D mesh, while TextureGen enhances the “quality” of the mesh and adds a texture to create the final shape.
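The two-stage structure described above can be sketched as a simple pipeline: one stage turns a prompt into a rough mesh, a second refines it and attaches a texture. Everything below — the `Mesh` class and both stage functions — is an invented placeholder to show the composition, not Meta’s actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Mesh:
    prompt: str
    vertices: int = 0
    texture: Optional[str] = None

def asset_stage(prompt: str) -> Mesh:
    # Stand-in for AssetGen: text prompt -> untextured 3D mesh.
    return Mesh(prompt=prompt, vertices=10_000)

def texture_stage(mesh: Mesh) -> Mesh:
    # Stand-in for TextureGen: refine the mesh and add a PBR texture.
    mesh.texture = f"PBR texture for {mesh.prompt!r}"
    return mesh

def generate_3d(prompt: str) -> Mesh:
    # The 3DGen-style pipeline: chain the two stages.
    return texture_stage(asset_stage(prompt))

result = generate_3d("T-Rex wearing a green wool sweater")
print(result.texture)
```

The design point is that each stage is independently useful: the paper notes the texturing stage can also be run on its own over existing meshes.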

Image Credit: Meta

3DGen, which can also be used to retexture existing shapes, takes about 50 seconds from start to finish to create a new shape.

“By combining the strengths of (these models), 3DGen achieves very high-quality 3D object synthesis from textual prompts in less than a minute,” the researchers wrote in a technical paper. “When evaluated by professional 3D artists, 3DGen’s output is superior to industry alternatives in most cases, especially for complex prompts.”

Meta is set to incorporate tools like 3DGen into its metaverse game development efforts. According to a job listing, the company is looking to research and prototype VR, AR, and mixed-reality games created with the help of generative AI technology — possibly including custom shape generators.

Grab Bag

As a result of the partnership the two companies announced last month, Apple could get an observer seat on OpenAI’s board.

Phil Schiller, the Apple executive in charge of the App Store and Apple events, will join OpenAI’s board of directors as its second observer, after Microsoft’s Dee Templeton, Bloomberg reports.

If the move comes to fruition, it would be a remarkable show of strength from Apple, which plans to integrate OpenAI’s AI-powered chatbot platform ChatGPT with several of its devices this year, as part of a broader set of AI features.

Apple reportedly won’t pay OpenAI for the ChatGPT integration, arguing that the PR exposure is as valuable as cash, or more so. In fact, OpenAI might end up paying Apple: Apple is said to be considering a deal under which it would get a share of the revenue from any premium ChatGPT features OpenAI brings to Apple platforms.

So, as my colleague Devin Coldewey points out, this puts OpenAI’s close ally and major investor Microsoft in the awkward position of effectively subsidizing Apple’s ChatGPT integration — with little to show for it. What Apple wants, it gets, apparently — even if that means its partners have to foot the bill.

