Friday, December 2, 2022

Apple slices its AI image synthesis times in half with new Stable Diffusion fix


Two examples of Stable Diffusion-generated artwork provided by Apple. (credit: Apple)

On Wednesday, Apple released optimizations that allow the Stable Diffusion AI image generator to run on Apple Silicon using Core ML, Apple's proprietary framework for machine learning models. The optimizations will allow app developers to use Apple Neural Engine hardware to run Stable Diffusion about twice as fast as previous Mac-based methods.

Stable Diffusion (SD), which launched in August, is an open source AI image synthesis model that generates novel images using text input. For example, typing "astronaut on a dragon" into SD will typically create an image of exactly that.

By releasing the new SD optimizations (available as conversion scripts on GitHub), Apple hopes to unlock the full potential of image synthesis on its devices, as it notes on its machine learning research announcement page: "With the growing number of applications of Stable Diffusion, ensuring that developers can leverage this technology effectively is important for creating apps that creatives everywhere will be able to use."
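As a rough sketch of how a developer might use the released scripts: the workflow is to clone Apple's repository, convert the Stable Diffusion PyTorch checkpoints into Core ML models, then run inference against them. The commands below follow the apple/ml-stable-diffusion repository's documented entry points; treat the exact flags and paths as assumptions, and note that conversion requires macOS, a Hugging Face account for the model weights, and several gigabytes of downloads.

```shell
# Clone Apple's Core ML Stable Diffusion repo and install it (assumes Python 3 and git).
git clone https://github.com/apple/ml-stable-diffusion.git
cd ml-stable-diffusion
pip install -e .

# Convert the Stable Diffusion components to Core ML models.
# SPLIT_EINSUM is the attention implementation aimed at the Apple Neural Engine.
python -m python_coreml_stable_diffusion.torch2coreml \
    --convert-unet --convert-text-encoder --convert-vae-decoder \
    --attention-implementation SPLIT_EINSUM \
    -o ./coreml-models

# Generate an image from a text prompt using the converted models.
python -m python_coreml_stable_diffusion.pipeline \
    --prompt "astronaut on a dragon" \
    -i ./coreml-models -o ./output \
    --compute-unit ALL --seed 93
```

The `--compute-unit ALL` option lets Core ML schedule work across the CPU, GPU, and Neural Engine; the Neural Engine path is where the roughly 2x speedup the article describes comes from.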


Reference : https://arstechnica.com/?p=1901694

