Pocket-sized AI models could usher in a new era of computing

When ChatGPT was released in November 2022, it was only accessible via the cloud because the model behind it was enormous.

Today I’m running a similarly capable AI program on a MacBook Air, and the machine doesn’t even run hot. The shrinkage shows how quickly researchers are refining AI models to make them leaner and more efficient. It also shows that doing things at ever-greater scale isn’t the only way to make machines smarter.

The model now filling my laptop with ChatGPT-like intelligence is called Phi-3-mini. It is part of a family of small AI models recently released by Microsoft researchers. Although it is compact enough to run on a smartphone, I tested it on a laptop and accessed it from an iPhone through an app called Enchanted, which provides a chat interface similar to the official ChatGPT app.
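For readers curious how a setup like this fits together, here is a minimal sketch. It assumes the model is served locally through Ollama, the open-source backend that apps like Enchanted connect to, and that the model has been pulled under Ollama's "phi3" tag; the port and model name are assumptions, not details from the article.

```python
# Minimal sketch: chatting with a locally served Phi-3-mini model.
# Assumes Ollama is running on this machine (default port 11434) and that
# `ollama pull phi3` has already downloaded the model. Adjust the model tag
# or host if your setup differs.

import json
import urllib.request

def ask_local_model(prompt: str, model: str = "phi3") -> str:
    """Send one prompt to the local Ollama server and return its reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # request a single JSON response instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_model("Why can small language models run offline?"))
```

Because everything happens on the local machine, no prompt or response ever leaves the laptop; a phone app like Enchanted simply points at the same local server over the home network.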

In a paper describing the Phi-3 family of models, Microsoft researchers say the model I used is comparable to GPT-3.5, the OpenAI model behind the first release of ChatGPT. This claim is based on its performance on a number of standard AI benchmarks designed to gauge general knowledge and reasoning. In my own testing, it certainly seems just as capable.

Microsoft announced a new “multimodal” Phi-3 model capable of handling audio, video, and text at its annual developer conference, Build, this week. That announcement came just days after OpenAI and Google both unveiled new AI assistants built on top of multimodal models accessed via the cloud.

Microsoft’s Lilliputian family of AI models suggests it’s becoming possible to build all kinds of useful AI apps that don’t depend on the cloud. That could open up new use cases by allowing apps to be more responsive and more personalized. (Offline algorithms are a key part of Recall, the feature Microsoft announced that uses AI to make everything you do on your PC searchable.)

But the Phi family also reveals something about the nature of modern AI, and perhaps how it can be improved. Sébastien Bubeck, a Microsoft researcher involved with the project, told me the models were built to test whether being more selective about what an AI system is trained on could provide a way to improve its abilities.

The large language models that power chatbots and other services, such as OpenAI’s GPT-4 or Google’s Gemini, are typically trained on huge quantities of text extracted from books, websites, and almost any other accessible source. Although this has raised legal questions, OpenAI and others have found that increasing the amount of text fed to these models, and the amount of computer power used to train them, can unlock new capabilities.
