Posts tagged with "machine learning"

Stable Diffusion Optimizations Are Coming to iOS and iPadOS 16.2 and macOS 13.1 Via Core ML

Today, Apple announced on its Machine Learning Research website that iOS and iPadOS 16.2 and macOS 13.1 will gain optimizations to its Core ML framework for Stable Diffusion, the model that powers a wide variety of tools for generating images from text prompts and other tasks. The post explains the advantages of running Stable Diffusion locally on Apple silicon devices:

One of the key questions for Stable Diffusion in any app is where the model is running. There are a number of reasons why on-device deployment of Stable Diffusion in an app is preferable to a server-based approach. First, the privacy of the end user is protected because any data the user provided as input to the model stays on the user’s device. Second, after initial download, users don’t require an internet connection to use the model. Finally, locally deploying this model enables developers to reduce or eliminate their server-related costs.

The optimizations to the Core ML framework are designed to simplify the process of incorporating Stable Diffusion into developers’ apps:

Optimizing Core ML for Stable Diffusion and simplifying model conversion makes it easier for developers to incorporate this technology in their apps in a privacy-preserving and economically feasible way, while getting the best performance on Apple Silicon.

Stable Diffusion’s development has been rapid since the model became publicly available in August. I expect the Core ML optimizations will only accelerate that trend in the Apple community, with the added benefit to Apple of enticing more developers to try Core ML.

If you want to take a look at the Core ML optimizations, they’re available on GitHub here and include “a Python package for converting Stable Diffusion models from PyTorch to Core ML using diffusers and coremltools, as well as a Swift package to deploy the models.”
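For a sense of what deploying the models looks like in practice, here is a minimal sketch in Swift. It assumes the repository’s Swift package exposes a StableDiffusionPipeline type that loads the converted Core ML resources and a generateImages method, as described in the project’s README; the exact parameter names and defaults may differ between releases, so treat this as an illustration rather than a definitive API reference.

```swift
import Foundation
import CoreGraphics
import CoreML
import StableDiffusion // Apple's ml-stable-diffusion Swift package

// Sketch only: type and parameter names follow the project's README and may change.
// `resourceURL` should point at a folder of models converted with the repo's Python package.
func generateOnDevice(prompt: String, resourceURL: URL) throws -> [CGImage?] {
    // The pipeline loads the converted Core ML models (text encoder, U-Net, VAE decoder) from disk.
    let pipeline = try StableDiffusionPipeline(resourcesAt: resourceURL)

    // Generation runs entirely on-device; after the initial model download, no network is needed.
    return try pipeline.generateImages(
        prompt: prompt,
        imageCount: 1,
        stepCount: 50,
        seed: 42
    )
}
```

Loading the pipeline is the expensive, one-time step; repeated prompts reuse the same models, which is part of what makes the privacy and server-cost arguments in Apple’s post hold up.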

Permalink

Pixelmator Pro Updated with Background Removal, Subject Selection, and Select and Mask Tools

Mac image editor Pixelmator Pro continues its streak of releasing machine learning-based tools that feel like magic, with a release the Pixelmator team has, appropriately enough, dubbed Abracadabra. Version 2.3 adds a tool that removes the background of an image, another that selects just the subject of a photo, and a new Select and Mask feature for making fine-tuned selections.

I started with these images.

When I first saw a demo of what Pixelmator Pro 2.3 could do, I was a little skeptical that the features would work as well with my photos as with the ones picked to show off the new tools. However, Pixelmator Pro’s new suite of related features is the real deal. With virtually no work on my part, I grabbed a photo of Federico and me from my trip to Rome and selected us; after making a few refinements to the selection to pick up more of Federico’s hair (mine was perfect), I cut out the background and replaced it with a photo I took in Dublin days before. After compositing the photos on separate layers, I color-matched them using ML Match Colors so they’d fit together better.

The final composed image.

The results aren’t perfect – the lighting and perspective are a little off – but those are issues with the photos I chose, not the tools I used. The photo of Federico and me was taken after the sun had set and was artificially lit, while the Dublin Canal was shot on a sunny morning, yet the composite image works incredibly well. What’s remarkable is how much I was able to accomplish in just a few minutes. I also removed the background from one of the photos I took recently for my Stream Deck story; it worked perfectly with no extra effort on my part, which has interesting implications for product photography.

Remove Background takes advantage of Apple’s Core ML framework and works in just a few seconds. Select Subject works similarly but selects the subject of an image instead of erasing the background behind it. If you look closely at the masked selection below, you can see how well Pixelmator Pro did at picking up the edges to capture selection details like hair without any additional work by me. However, if an image needs a little selection touch-up, the Refine Edge Brush and Smart Refine feature make that sort of work easy too.
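For a sense of how this kind of on-device masking works, here is a small, hypothetical sketch using Apple’s Vision framework, whose person-segmentation request runs a Core ML-backed model locally. To be clear, this is not Pixelmator Pro’s implementation (the app uses its own models and handles far more than people); it is only an illustration of the general technique of generating a mask on-device and using it to cut out a subject.

```swift
import Vision
import CoreVideo
import CoreGraphics
import CoreImage
import CoreImage.CIFilterBuiltins

// Illustration only: Apple's built-in person-segmentation request,
// not Pixelmator Pro's own background-removal model.
func cutOutSubject(from image: CGImage) throws -> CIImage? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .accurate                 // slower, but finer edges (hair, etc.)
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    // The segmentation model runs locally; no image data leaves the device.
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    guard let maskBuffer = request.results?.first?.pixelBuffer else { return nil }

    // Scale the mask up to the source image, then use it to keep only the subject.
    let source = CIImage(cgImage: image)
    var mask = CIImage(cvPixelBuffer: maskBuffer)
    mask = mask.transformed(by: CGAffineTransform(
        scaleX: source.extent.width / mask.extent.width,
        y: source.extent.height / mask.extent.height))

    let blend = CIFilter.blendWithMask()
    blend.inputImage = source
    blend.backgroundImage = CIImage(color: .clear).cropped(to: source.extent)
    blend.maskImage = mask
    return blend.outputImage
}
```

In practice, an automatically generated mask like this gets you most of the way there; the refinement tools exist to clean up whatever a model’s first pass misses.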

Pixelmator Pro’s new tools are available elsewhere in macOS, too, as Finder Quick Actions, Shortcuts actions, and AppleScript commands. I covered Pixelmator Pro’s Shortcuts actions earlier this fall, and they are some of the best available among Mac-only apps, so it’s fantastic to see those automation options continue to expand.

Pixelmator Pro has long been one of my must-have Mac apps. I don’t spend a lot of time editing images, but when I do, I appreciate that Pixelmator Pro makes the process easy and produces excellent results regardless of your experience with image editors.


John Giannandrea on the Broad Reach of Machine Learning in Apple’s Products

Today Samuel Axon at Ars Technica published a new interview with two Apple executives: SVP of Machine Learning and AI Strategy John Giannandrea and VP of Product Marketing Bob Borchers. The interview is lengthy yet well worth reading, especially since it’s the most we’ve heard from Apple’s head of ML and AI since he departed Google to join the company in 2018.

Based on some of the things Giannandrea says in the interview, it sounds like he’s had a very busy two years. For example, when asked to list ways Apple has used machine learning in its recent software and products, Giannandrea lists a variety of things before ultimately indicating that it’s harder to name things that don’t use machine learning than ones that do.

There’s a whole bunch of new experiences that are powered by machine learning. And these are things like language translation, or on-device dictation, or our new features around health, like sleep and hand washing, and stuff we’ve released in the past around heart health and things like this. I think there are increasingly fewer and fewer places in iOS where we’re not using machine learning. It’s hard to find a part of the experience where you’re not doing some predictive [work].

One interesting tidbit mentioned by both Giannandrea and Borchers is that Apple’s increased dependence on machine learning hasn’t led to the company talking about ML non-stop. I’ve noticed this too – whereas a few years ago the company might have thrown out ‘machine learning’ countless times during a keynote presentation, these days it’s intentionally more careful and calculated about using the term, and I think for good reason. As Giannandrea puts it, “I think that this is the future of the computing devices that we have, is that they be smart, and that, that smart sort of disappear.” Borchers expounds on that idea:

This is clearly our approach, with everything that we do, which is, ‘Let’s focus on what the benefit is, not how you got there.’ And in the best cases, it becomes automagic. It disappears… and you just focus on what happened, as opposed to how it happened.

The full interview covers subjects like Apple’s Neural Engine, Apple Silicon for Macs, the benefits of handling ML tasks on-device, and much more, including a fun story from Giannandrea’s early days at Apple. You can read it here.

Permalink