Apple Releases Open Source On-Device AI Models

Apple has made a significant move in artificial intelligence (AI) by releasing several open source large language models (LLMs) designed to run on-device rather than relying on cloud servers. Known as OpenELM (Open-source Efficient Language Models), the models are now available on the Hugging Face Hub, a platform for sharing AI models and code.
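Because the checkpoints live in ordinary Hugging Face repositories, they can be loaded with the transformers library. The sketch below is a minimal example; the repo id apple/OpenELM-270M-Instruct and the use of the Llama 2 tokenizer are assumptions drawn from the public model cards rather than from this article, so adjust them to the checkpoint you actually want.

```python
# Minimal sketch: loading an OpenELM checkpoint from the Hugging Face Hub.
# The repo ids below are assumptions; swap in the variant you want to try.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M-Instruct"   # assumed id of the smallest instruction-tuned variant
tokenizer_id = "meta-llama/Llama-2-7b-hf"  # OpenELM is documented as reusing the Llama 2 tokenizer (gated repo)

tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # the repo ships its own modeling code
)

inputs = tokenizer("Apple released OpenELM because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```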

According to a white paper released by Apple, there are eight OpenELM models in total: four pre-trained with the CoreNet library and four instruction-tuned variants. Apple employs a layer-wise scaling strategy that allocates parameters non-uniformly across the transformer's layers, with the aim of improving both accuracy and efficiency.
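The idea behind layer-wise scaling is that not every transformer layer needs the same width, so the attention-head count and feed-forward dimension vary from layer to layer. The sketch below illustrates one way such a schedule could look; the parameter names, the linear interpolation, and the toy numbers are assumptions for illustration, not Apple's actual configuration.

```python
# Illustrative sketch of layer-wise scaling: layer widths grow from the first
# layer to the last instead of staying uniform. The linear schedule and the
# example values are assumptions, not the ones used in OpenELM.

def layerwise_config(num_layers, d_model, d_head,
                     head_min, head_max, ffn_min, ffn_max):
    """Return a per-layer plan of (num_heads, ffn_dim)."""
    plan = []
    for i in range(num_layers):
        t = i / max(1, num_layers - 1)                # 0.0 at the first layer, 1.0 at the last
        head_scale = head_min + t * (head_max - head_min)
        ffn_scale = ffn_min + t * (ffn_max - ffn_min)
        num_heads = max(1, round(head_scale * d_model / d_head))
        ffn_dim = round(ffn_scale * d_model)
        plan.append((num_heads, ffn_dim))
    return plan

# Toy example: an 8-layer model whose later layers are wider than its earlier ones.
for layer, (heads, ffn) in enumerate(layerwise_config(8, 768, 64, 0.5, 1.0, 2.0, 4.0)):
    print(f"layer {layer}: {heads} attention heads, FFN dim {ffn}")
```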

Unlike previous releases that provided only model weights and inference code, Apple's release includes the complete framework for training and evaluating the models on publicly available datasets. This encompasses training logs, multiple checkpoints, and pre-training configurations.

Apple’s decision to release the OpenELM models is motivated by a desire to empower and enrich the open research community with state-of-the-art language models. Because the models are open source, researchers gain access to tools for investigating risks and addressing data and model biases, while developers and companies can use them as-is or modify them to suit their needs.

Furthermore, Apple’s move toward openly sharing information serves as a strategic tool for recruiting top AI talent, creating opportunities to publish research that would not have been possible under Apple’s traditionally secretive policies.

While Apple has not yet shipped these AI capabilities on its devices, iOS 18 is anticipated to introduce a range of new AI features. Rumors also suggest that Apple plans to run its large language models on-device to preserve user privacy.