Posts in Linked

The Trouble with Mixing Realities

Mark Gurman recently reported that Apple’s much-rumored headset will combine AR and VR technologies, which Brendon Bigley argues could be the wrong approach:

… I don’t think the road to mass adoption of virtual reality actually starts with virtual reality, it starts instead with augmented reality — a technology that can quickly prove its function if presented in a frictionless way. While even the best VR headsets demand isolation and escapism, a hypothetical product focused first and foremost on augmented reality would be all about enhancing the world around you rather than hiding from it.

Brendon’s story nails something that has been nagging me about recent headset rumors. The iPhone was a hit because it took things we already did at a desk with a computer and put them on a device we could take with us everywhere we go, expanding the contexts where those activities could be done. As Brendon observes, the Apple Watch did something similar with notifications. AR feels like something that fits in the same category – an enhancement of things we already do – while VR is inherently limiting, shutting you off from the physical world.

Like Brendon, I’m excited about the prospect of an Apple headset and the long-term potential of virtual reality as a technology, but given where the technology stands today, it does seem as though jumping into VR alongside AR could muddy the waters for both. Of course, we’re all still working off speculation and rumors. I still have so many questions and can’t wait to see what Apple has in store for us, hopefully later this year.

Permalink

MKBHD on Apple’s Processing Techniques for iPhone Photos

In his latest video, MKBHD eloquently summarized and explained something that I’ve personally felt for the past few years: pictures taken on modern iPhones often look sort of washed out and samey, like much of the contrast and highlights from real life were lost somewhere along the way during HDR processing, Deep Fusion, or whatever Apple is calling its photography engine these days. From the video (which I’m embedding below), here’s the part where Marques notes how the iPhone completely ignored a light source pointing at one side of his face:

Look at how they completely removed the shadow from half of my face. I am clearly being lit from a source that’s to the side of me, and that’s part of reality. But in the iPhone’s reality you cannot tell, at least from my face, where the light is coming from. Every once in a while you get weird stuff like this, and it all comes back to the fact that it’s software making choices.

That’s precisely the issue here. The iPhone’s camera hardware is outstanding, but how iOS interprets and remixes the data it gets fed from the camera often leads to results that I find…boring and uninspired unless I manually touch them up with edits and effects. I like how Brendon Bigley put it:

Over time though, it’s become more and more evident that the software side of iOS has been mangling what should be great images taken with a great sensor and superbly crafted lenses. To be clear: The RAW files produced by this system in apps like Halide are stunning. But there’s something lost in translation when it comes to the stock Camera app and the ways in which it handles images from every day use.

Don’t miss the comparison shots between the Pixel 7 Pro and iPhone 14 Pro in MKBHD’s video. As an experiment for the next few weeks, I’m going to try what Brendon suggested and use the Rich Contrast photographic style on my iPhone 14 Pro Max.

Permalink

Samsung and Dell Take Aim at the Mac Monitor Market

Dell’s upcoming 6K UltraSharp display. Source: Dell.

Dan Seifert, writing for The Verge, explains why this year’s CES has been such an exciting one for Mac users:

Though there have been many monitors marketed toward MacBook owners over the years, with features such as USB-C connectivity, high-wattage charging, and nicer than average designs, they’ve typically all had traditional 4K panels and sub-par pixel densities, as opposed to the higher-resolution displays that Apple puts in its devices. There was always a compromise required with one of those other monitors if you hooked a MacBook up to it.

Other than LG’s UltraFine displays, which had quality-control issues over the years, Mac users had no display options that matched the resolutions found on MacBook Pros or the 5K iMac. That changed with Apple’s Pro Display XDR and the Studio Display, but both arrived with extremely high price tags.

That’s why monitors announced by Samsung and Dell at CES this week are so encouraging. Prices haven’t been set yet, but it’s a safe bet that they will be competitive with Apple’s.

The Samsung ViewFinity S9. Source: Samsung.

Both displays also promise functionality not found in Apple’s offerings. Samsung’s 5K ViewFinity S9 goes toe-to-toe with the Studio Display’s specs and adds a bunch of ports not available on Apple’s display.

Dell seems to be aiming directly at the Pro Display XDR. As Seifert explains:

Perhaps more interesting is the new Dell UltraSharp 32, the first monitor I’m aware of that matches the Pro Display XDR’s 32-inch size and 6K resolution. It doesn’t have the same HDR-capable local dimming display technology as the XDR, instead using an IPS Black panel sourced from LG, but it comes with integrated speakers, microphones, and a beefy 4K webcam, all of which are lacking from Apple’s high-end option. The UltraSharp 32 may be best described as a bigger version of the Studio Display, as it provides all of the necessary desk peripherals most people need but with a larger — just as sharp — panel. The Dell also tops out at 600 nits of brightness (the same as the Studio Display and Samsung’s S9) and comes with a whole litany of ports, including two Thunderbolt 4 (with up to 140W of power delivery), HDMI 2.1, ethernet, and four USB-A ports. It’s basically a complete Thunderbolt dock built into the back of the display.

I’m a big fan of Apple’s Studio Display, but its price was a hard pill to swallow and a factor that I’m sure has limited its appeal significantly. It remains to be seen how Samsung and Dell will price their monitors, but it’s good to see choice and competition come to the high-resolution monitor market that so many Mac users have wanted for so long.

Permalink

Apple Books Begins Offering AI-Based Book Narration

The Guardian reported today that audiobooks ‘Narrated by Apple Books’ have begun showing up in the Apple Books store. The audiobooks are narrated by AI-generated voices that Apple has picked to complement the genre of the books.

As 9to5Mac points out in its coverage, the feature was first announced last month on the Apple Books for Authors website, which offers details about the process for generating an AI-narrated audiobook. The website also explains that Apple is working with two outside publishers to produce the audiobooks. Currently, the program is limited to fiction and romance novels, plus a small number of nonfiction and self-development titles. Samples of the voices available for each genre are linked on the site and are worth trying. Although the voices are clearly artificial, they’re some of the best I’ve heard from any service.

Although the narration used for the new ‘Narrated by Apple Books’ titles is synthesized using artificial intelligence, the production of a book is far from automated, with very specific criteria for eligible books and a one-to-two-month turnaround time. Still, it will be interesting to see how ‘Narrated by Apple Books’ affects the broader audiobook market. Audiobooks are expensive to produce, so I expect Apple’s new program will open up the option to more authors than before. However, as with other AI services, Apple’s could put voice actors out of work as its quality improves.

Permalink

Apple Contributes Magnetic Coupling Tech to the Qi Charging Standard

Sean Hollister of The Verge reports that Apple is contributing aspects of its MagSafe charging technology to the Qi wireless charging standard, which will bring magnetic coupling to Qi2-compatible mobile phones, including Android phones. According to Hollister’s interview with Paul Golden, a spokesperson for the Wireless Power Consortium:

There’s no reason to think a future Qi2 charger wouldn’t work seamlessly and identically with both Android and iPhones, Golden says when I ask. That’s because Apple, a WPC “steering member” (and chair of the board of directors) is contributing essentially the same “magnetic power profile” as MagSafe to the new Qi2 standard.

Hollister also reports that faster charging speeds are next on the Wireless Power Consortium’s to-do list:

That’s not all the WPC is working on, either! While the Qi2.0 release is largely just about adding magnets — it’s still primarily for phones, still tops out at 15 watts, still has the same foreign object detection, etc — the WPC intends to take advantage of guaranteed magnetic coupling to give us faster charging speeds, too. “When we finish with the spec for Qi2, we’ll immediately start working on a significantly higher power profile level for the next version of Qi2,” says Golden.

I’m glad to see Apple contributing to the Qi standard. Very few third-party manufacturers are using the official MagSafe standard, which usually means their accessories charge more slowly. With the underlying magnetic connection standardized and faster charging speeds next on the agenda, we’ll hopefully see broader adoption of fast wireless charging across mobile phone accessories.

Permalink

Dark Sky Predicts Its Last Storm

With the turn of the New Year, Apple closed down Dark Sky for good. Apple acquired the app in 2020 and left it up and running until January 1st as it incorporated the app’s radar and real-time forecast features into its own Weather app. Dark Sky’s API, which is used by many third-party weather apps, is scheduled to be discontinued at the end of March 2023, its role subsumed by Apple’s own WeatherKit API, which debuted last fall.

Over the holidays, Slate took a look at the app’s indie success story, which began with a successful Kickstarter campaign in 2011 that raised $40,000. One thing that I didn’t realize about Dark Sky is that its short-term precipitation forecasts were based solely on analysis of radar images, which didn’t win it fans among meteorologists:

Indeed, Dark Sky’s big innovation wasn’t simply that its map was gorgeous and user-friendly: The radar map was the forecast. Instead of pulling information about air pressure and humidity and temperature and calculating all of the messy variables that contribute to the weather—a multi-hundred-billion-dollars-a-year international enterprise of satellites, weather stations, balloons, buoys, and an army of scientists working in tandem around the world (see Blum’s book)—Dark Sky simply monitored changes to the shape, size, speed, and direction of shapes on a radar map and fast-forwarded those images. “It wasn’t meteorology,” Blum said. “It was just graphics practice.”

I hadn’t used Dark Sky in years when Apple bought it, except as a data source in other weather apps. Its forecasts may not have been as nuanced or accurate as a meteorologist’s, but there’s no denying its cultural impact on the world of apps, which is why I’ll be tucking this story away in my app history archives.
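
To make the ‘fast-forwarding’ idea concrete, here’s a toy Python sketch of that style of radar extrapolation, assuming a single global motion vector estimated from two consecutive radar frames. Every name in it is hypothetical, and Dark Sky’s real system was considerably more elaborate; it’s only meant to illustrate projecting radar imagery forward rather than modeling the atmosphere.

    # Toy sketch of radar extrapolation: estimate how precipitation shapes moved
    # between two radar frames, then "fast-forward" the latest frame along that
    # motion. Illustrative only; not Dark Sky's actual implementation.
    import numpy as np

    def extrapolate_radar(prev_frame, curr_frame, minutes_ahead, minutes_per_frame=5):
        # Estimate a single global shift between frames via FFT cross-correlation.
        corr = np.fft.ifft2(np.fft.fft2(curr_frame) * np.conj(np.fft.fft2(prev_frame))).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = curr_frame.shape
        dy = dy - h if dy > h // 2 else dy  # wrap offsets into signed shifts
        dx = dx - w if dx > w // 2 else dx
        # Scale the per-frame motion to the requested lead time and shift the frame.
        steps = minutes_ahead / minutes_per_frame
        return np.roll(curr_frame, (round(dy * steps), round(dx * steps)), axis=(0, 1))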

Permalink

Apple Has Stopped Development of System to Identify Child Sexual-Abuse Material

Joanna Stern of The Wall Street Journal, who interviewed Craig Federighi, Apple’s Senior Vice President of Software Engineering, in connection with the new security features coming to Apple’s platforms, reports that Apple has abandoned its effort to identify child sexual-abuse material on its devices. According to Stern:

Last year, Apple proposed software for the iPhone that would identify child sexual-abuse material on the iPhone. Apple now says it has stopped development of the system, following criticism from privacy and security researchers who worried that the software could be misused by governments or hackers to gain access to sensitive information on the phone.

Federighi told Stern:

Child sexual abuse can be headed off before it occurs. That’s where we’re putting our energy going forward.

Apple also told The Wall Street Journal that Advanced Data Protection, which allows users to opt into end-to-end encryption of new categories of personal data stored in iCloud, will launch in the US this year and globally in 2023.

For an explanation of the new security protections announced today, be sure to catch Joanna Stern’s full interview with Craig Federighi.

Permalink

AppStories, Episode 308 – Gone but Not Forgotten

This week on AppStories, we explore the Apple apps and features that have disappeared from its platforms in recent years.

Sponsored by:

  • Pillow – Sleeping better, made simple.
  • Kolide – Maintaining endpoint security shouldn’t mean compromising employee privacy. Check out their manifesto: Honest Security.
  • Memberful – Monetize your passion with membership.

On AppStories+, I work on a portable video streaming setup, plus we contend with country-ambient music release announcements and other email.

We deliver AppStories+ to subscribers with bonus content, ad-free, and at a high bitrate early every week.

To learn more about the benefits included with an AppStories+ subscription, visit our Plans page, or read the AppStories+ FAQ.

Permalink

Stable Diffusion Optimizations Are Coming to iOS and iPadOS 16.2 and macOS 13.1 Via Core ML

Today, Apple announced on its Machine Learning Research website that iOS and iPadOS 16.2 and macOS 13.1 will gain Core ML optimizations for Stable Diffusion, the model that powers a wide variety of tools that let users generate images from text prompts. The post explains the advantages of running Stable Diffusion locally on Apple silicon devices:

One of the key questions for Stable Diffusion in any app is where the model is running. There are a number of reasons why on-device deployment of Stable Diffusion in an app is preferable to a server-based approach. First, the privacy of the end user is protected because any data the user provided as input to the model stays on the user’s device. Second, after initial download, users don’t require an internet connection to use the model. Finally, locally deploying this model enables developers to reduce or eliminate their server-related costs.

The optimizations to the Core ML framework are designed to simplify the process of incorporating Stable Diffusion into developers’ apps:

Optimizing Core ML for Stable Diffusion and simplifying model conversion makes it easier for developers to incorporate this technology in their apps in a privacy-preserving and economically feasible way, while getting the best performance on Apple Silicon.

Stable Diffusion’s development has been rapid since it became publicly available in August. I expect the Core ML optimizations will only accelerate that trend in the Apple community, with the added benefit to Apple of enticing more developers to try Core ML.

If you want to take a look at the Core ML optimizations, they’re available on GitHub here and include “a Python package for converting Stable Diffusion models from PyTorch to Core ML using diffusers and coremltools, as well as a Swift package to deploy the models.”
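
For a sense of what that conversion step involves, here’s a minimal, hypothetical coremltools sketch that traces a tiny PyTorch module and converts it to a Core ML package. The module, tensor shapes, and file names are stand-ins made up for illustration; Apple’s package applies the same general pattern to Stable Diffusion’s much larger text encoder, UNet, and VAE models.

    # Minimal, hypothetical PyTorch-to-Core ML conversion with coremltools.
    # The tiny module and names here are illustrative stand-ins, not part of
    # Apple's python_coreml_stable_diffusion package.
    import torch
    import coremltools as ct

    class TinyNet(torch.nn.Module):
        def forward(self, x):
            return torch.nn.functional.relu(x)

    example = torch.rand(1, 3, 64, 64)
    traced = torch.jit.trace(TinyNet().eval(), example)  # TorchScript trace

    mlmodel = ct.convert(
        traced,
        inputs=[ct.TensorType(name="x", shape=example.shape)],
        convert_to="mlprogram",            # modern .mlpackage format
        compute_units=ct.ComputeUnit.ALL,  # allow CPU, GPU, and Neural Engine
    )
    mlmodel.save("TinyNet.mlpackage")

The resulting Core ML package can then be loaded and run on-device, which is what the repository’s Swift package handles for the full Stable Diffusion pipeline.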

Permalink