
Posts in Linked

App Store Vibes

Bryan Irace has an interesting take on the new generation of developer tools that have lowered the barrier to entry for new developers (and sometimes non-developers) when it comes to creating apps:

Recent criticism of Apple’s AI efforts has been juicy to say the least, but this shouldn’t distract us from continuing to criticize one of Apple’s most deserving targets: App Review. Especially now that there’s a perfectly good AI lens through which to do so.

It’s one thing for Apple’s AI product offerings to be non-competitive. Perhaps even worse is that as Apple stands still, software development is moving forward faster than ever before. Like it or not, LLMs—both through general chat interfaces and purpose-built developer tools—have meaningfully increased the rate at which new software can be produced. And they’ve done so both by making skilled developers more productive while also lowering the bar for less-experienced participants.

And:

I recently built a small iOS app for myself. I can install it on my phone directly from Xcode but it expires after seven days because I’m using a free Apple Developer account. I’m not trying to avoid paying Apple, but there’s enough friction involved in switching to a paid account that I simply haven’t been bothered. And I used to wrangle provisioning profiles for a living! I can’t imagine that I’m alone here, or that others with less tribal iOS development knowledge are going to have a higher tolerance for this. A friend asked me to send the app to them but that’d involve creating a TestFlight group, submitting a build to Apple, waiting for them to approve it, etc. Compare this to simply pushing to Cloudflare or Netlify and automatically having a URL you can send to a friend or share via Twitter. Or using tools like v0 or Replit, where hosting/distribution are already baked in.

Again, this isn’t new—but being able to build this much software this fast is new. App distribution friction has stayed constant while friction in all other stages of software development has largely evaporated. It’s the difference between inconvenient and untenable.

Perhaps “vibe coding” is the extreme version of this concept, but I think there’s something here. Creating small, low-stakes apps, whether for personal projects or to share with a small group of people, is objectively getting easier. After reading Bryan’s post – which rightfully focuses on the distribution side of apps – I’m also wondering: what happens when the first big service comes along and figures out a way to bypass the App Store altogether (perhaps via the web?) to allow “anyone” to create apps, completely cutting out Apple and its App Review from the process?

In a way, this reminds me of blogging. Those who wanted an online writing space 30 years ago had to know the basics of hosting and HTML to publish something for other people to read. Then Blogger came along and allowed anyone – regardless of their skill level – to be read. What if the same happened to mobile software? Should Apple and Google be ready for this possibility within the next few years?

I could see Google spin up a “Build with Gemini” initiative to let anyone create Android apps without any coding knowledge. I’m also reminded of this old Vision Pro rumor that claimed Apple’s Vision team was exploring the idea of letting people create “apps” with Siri.

If only the person in charge of that team went somewhere, right?

Permalink

Bloomberg Reports that Apple Is Shaking up Siri Leadership

Less than two weeks ago, Apple announced that it was delaying the launch of a more personalized Siri. Today, Mark Gurman, reporting for Bloomberg, says the company is shuffling leadership of the project, too. According to Gurman:

Chief Executive Officer Tim Cook has lost confidence in the ability of AI head John Giannandrea to execute on product development, so he’s moving over another top executive to help: Vision Pro creator Mike Rockwell. In a new role, Rockwell will be in charge of the Siri virtual assistant, according to the people, who asked not to be identified because the moves haven’t been announced.

Giannandrea isn’t leaving Apple. Instead, Gurman says Giannandrea will continue to oversee “research, testing and technologies related to AI” including a team investigating robotics. Rockwell, who led the development of the Vision Pro, will report to Craig Federighi, Apple’s senior vice president of software.

Rockwell has a long and successful track record at Apple, so hopefully Siri is in good hands going forward. It’s clear that there’s a lot of work to be done, but the promise of a more personalized Siri and a system for apps to communicate with each other via Apple Intelligence is something I’m glad the company isn’t giving up on. Hopefully, we’ll see some progress from Rockwell’s team soon.

Permalink

Pebble’s Inherent Disadvantages on the iPhone

It’s been just shy of one year since the U.S. Department of Justice and 15 states sued Apple for antitrust violations. It’s not clear what will become of that lawsuit given the change of administrations, but as it stands today, it’s still an active case.

One of the things that is hard about a case like the one filed against Apple is cutting through the legal arguments and economic jargon to understand the real-world issues underlying it. Earlier this week, Eric Migicovsky, one of the Pebble smartwatch founders and the person who just resuscitated the device, wrote an excellent post on his blog that explains the real-world issues facing third-party smartwatch makers like Pebble.

Among other things:

It’s impossible for a 3rd party smartwatch to send text messages, or perform actions on notifications (like dismissing, muting, replying)….

It’s worth reading the post in its entirety for the other things third-party smartwatch makers can’t do on iOS, and as Migicovsky explains, things have gotten worse with time, not better. The complaint against Apple adds that, since the Pebble’s time:

  • You must set notifications to display full content previews on your lockscreen for them to also be sent to a 3rd party watch (new restriction added in iOS 13).
  • Apple closed off the ability of smartwatches after Pebble to negotiate with carriers to provide messaging services, and now requires users to turn off iMessage (disabling iOS’s core messaging platform) if they want to take advantage of such contracts between a third-party smartwatch maker and cellular carriers.

The Apple Watch is great. There isn’t another smartwatch that I’ve even been tempted to try in recent years, but is that because no one has been able to make a good alternative, or because the disadvantages third-party wearables face are too great?

I’d like to see Apple focus on finding ways to better integrate other devices with the iPhone. There are undoubtedly security and privacy issues that need to be carefully considered, but figuring those things out should be a priority because choice and competition are better for Apple’s customers in the long run.

Permalink

Choosing Optimism About iOS 19

I loved this post by David Smith on his decision to remain optimistic about Apple’s rumored iOS 19 redesign despite, well, you know, everything:

Optimism isn’t enthusiasm. Enthusiasm is a feeling, optimism is a choice. I have much less of the enthusiastic feelings these days about my relationship to Apple and its technologies (discussed here on Under the Radar 312), but I can still choose to optimistically look for the positives in any situation. Something I’ve learned as I’ve aged is that pessimism feels better in the moment, but then slowly rots you over time. Whereas optimism feels foolish in the moment, but sustains you over time.

I’ve always disliked the word “enthusiast” (talk about a throwback), and I’ve been frequently criticized for choosing the more optimistic approach in covering Apple over the years. But David is right: pessimism feels better in the short term (and performs better if you’re good at writing headlines or designing YouTube thumbnails), but is not a good long-term investment. (Of course, when the optimism is also gone for good…well, that’s a different kind of problem.)

But back to David’s thoughts on the iOS 19 redesign. He lists this potential reason to be optimistic about having to redesign his apps:

It would provide a point of differentiation for my app against other apps who wouldn’t adopt the new design language right away (either large companies which have their own design system or laggards who wouldn’t prioritize it).

He’s correct: the last time Apple rolled out a major redesign of iOS, they launched a dedicated section on the App Store which, on day one, featured indie apps updated for iOS 7 such as OmniFocus, Twitterrific, Reeder 2, Pocket Casts 4, and Perfect Weather. It lasted for months. Twelve years later,[1] I doubt that bigger companies will be as slow as they were in 2013 to adopt Apple’s new design language, but more agile indie developers will undoubtedly have an advantage here.

He also writes:

Something I regularly remind myself as I look at new Apple announcements is that I never have the whole picture of what is to come for the platform, but Apple does. They know if things like foldable iPhones or HomeKit terminals are on the horizon and how a new design would fit in best with them. If you pay attention and try to read between the lines they often will provide the clues necessary to “skate where the puck is going” and be ready when new, exciting things get announced subsequently.

This is the key point for me going into this summer’s review season. Just like when Apple introduced size classes in iOS 8 at WWDC 2014 and launched Slide Over and Split View multitasking for the iPad (alongside the first iPad Pro) the next year, I have to imagine that changes in this year’s design language will pave the way for an iPhone that unfolds into a mini tablet, a convertible Mac laptop, App Intents on a dedicated screen, or more. So while I’m not enthusiastic about Apple’s performance in AI or developer relations, I choose to be optimistic about the idea that this year’s redesign may launch us into an exciting season of new hardware and apps for the next decade.


[1] Think about it this way: when iOS 7 was released, the App Store was only five years old.
Permalink

Lux’s Sebastiaan de With on the iPhone 16e’s Essential Camera Experience

As I read Sebastiaan de With’s review of the iPhone 16e’s camera, I found myself chuckling when I got to this part:

You can speculate what the ‘e’ in ‘16e’ stands for, but in my head it stands for ‘essential’. Some things that I consider particularly essential to the iPhone are all there: fantastic build quality, an OLED screen, iOS and all its apps, and Face ID. It even has satellite connectivity. Some other things I also consider essential are not here: MagSafe is very missed, for instance, but also multiple cameras. It be [sic] reasonable to look at Apple’s Camera app, then, and see what comprises the ‘essential’ iPhone camera experience according to Apple.

What amused me was that I initially planned to call my iPhone 16e review the ‘e’ Is for Essential, but I settled on ‘elemental’ instead. Whether the ‘e’ in iPhone 16e stands for either of our guesses or neither really doesn’t matter. Like Sebastiaan, I find what Apple chose to include and exclude from the 16e fascinating.

When it comes to the iPhone 16e’s camera, there are differences compared to the iPhone 16 Pro, which is the focus of Sebastiaan’s review. The 16e supports fewer features than the Pro, and the photos it takes don’t reproduce quite as much detail, especially in low-light conditions. There are other differences, too, so it’s worth studying the review’s side-by-side shots of the 16e and the 16 Pro.

Overall, though, I think it’s fair to say Sebastiaan came away impressed with the 16e’s camera, which has been my experience, too. So far, I’ve only used it to shoot video for our podcasts, and with good lighting, the results are excellent. Despite some differences, the iPhone 16e, combined with the wealth of photo and video apps like Lux’s Halide and Kino, is a great way to enjoy the essential iPhone photography experience.

Permalink

Where’s Swift Assist?

Last June at WWDC, Apple announced Swift Assist, a way to generate Swift code using natural language prompts. However, as Tim Hardwick writes for MacRumors, Swift Assist hasn’t been heard from since then:

Unlike Apple Intelligence, Swift Assist never appeared in beta. Apple hasn’t announced that it’s been delayed or cancelled. The company has since released Xcode 16.3 beta 2, and as Michael Tsai points out, it’s not even mentioned in the release notes.

Meanwhile, developers have moved on, adopting services like Cursor, which does much of what was promised with Swift Assist, if not more. A similar tool built specifically for Swift projects and Apple’s APIs would be a great addition to Xcode, but it’s been nine months, and developers haven’t heard anything more about Swift Assist. Apple owes them an update.

Permalink

The M3 Ultra Mac Studio for Local LLMs

Speaking of the new Mac Studio and Apple making the best computers for AI: this is a terrific overview by Max Weinbach about the new M3 Ultra chip and its real-world performance with various on-device LLMs:

The Mac I’ve been using for the past few days is the Mac Studio with M3 Ultra SoC, 32-core CPU, 80-core GPU, 256GB Unified Memory (192GB usable for VRAM), and 4TB SSD. It’s the fastest computer I have. It is faster in my workflows for even AI than my gaming PC (which will be used for comparisons below; it has an Intel i9 13900K, RTX 5090, 64GB of DDR5, and a 2TB NVMe SSD).

It’s a very technical read, but the comparison between the M3 Ultra and a vanilla (non-optimized) RTX 5090 is mind-boggling to me. According to Weinbach, it all comes down to Apple’s MLX framework:

I’ll keep it brief; the LLM performance is essentially as good as you’ll get for the majority of models. You’ll be able to run better models faster with larger context windows on a Mac Studio or any Mac with Unified Memory than essentially any PC on the market. This is simply the inherent benefit of not only Apple Silicon but Apple’s MLX framework (the reason we can efficiently run the models without preloading KV Cache into memory, as well as generate tokens faster as context windows grow).

In case you’re not familiar, MLX is Apple’s open-source framework that – I’m simplifying – optimizes training and serving models on Apple Silicon’s unified memory architecture. It is a wonderful project with over 1,600 community models available for download.
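If you’re curious what running one of those community models actually looks like, here’s a minimal sketch using the mlx-lm Python package. The model ID is just an example of a 4-bit conversion from the MLX community hub; any model that fits in your Mac’s memory works the same way:

```python
# Minimal sketch: running a quantized community model with mlx-lm.
# Install with `pip install mlx-lm`; requires an Apple Silicon Mac.
from mlx_lm import load, generate

# Downloads the weights from the MLX community hub on first run.
# (Example model ID; swap in any conversion that fits your RAM.)
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

messages = [{"role": "user", "content": "Explain unified memory in one paragraph."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generation happens entirely on-device, out of the same unified
# memory pool the CPU and GPU share.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

That’s the whole program, which is a good illustration of how little friction MLX adds on top of Apple Silicon.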

As Weinbach concludes:

I see one of the best combos any developer can do as: M3 Ultra Mac Studio with an Nvidia 8xH100 rented rack. Hopper and Blackwell are outstanding for servers, M3 Ultra is outstanding for your desk. Different machines for a different use, while it’s fun to compare these for sport, that’s not the reality.

There really is no competition for an AI workstation today. The reality is, the only option is a Mac Studio.

Don’t miss the benchmarks in the story.

Permalink

Is Apple Shipping the Best AI Computers?

For all the criticism (mine included) surrounding Apple’s delay of various Apple Intelligence features, I found this different perspective by Ben Thompson fascinating and worth considering:

What that means in practical terms is that Apple just shipped the best consumer-grade AI computer ever. A Mac Studio with an M3 Ultra chip and 512GB RAM can run a 4-bit quantized version of DeepSeek R1 — a state-of-the-art open-source reasoning model — right on your desktop. It’s not perfect — quantization reduces precision, and the memory bandwidth is a bottleneck that limits performance — but this is something you simply can’t do with a standalone Nvidia chip, pro or consumer. The former can, of course, be interconnected, giving you superior performance, but that costs hundreds of thousands of dollars all-in; the only real alternative for home use would be a server CPU and gobs of RAM, but that’s even slower, and you have to put it together yourself. Apple didn’t, of course, explicitly design the M3 Ultra for R1; the architectural decisions undergirding this chip were surely made years ago. In fact, if you want to include the critical decision to pursue a unified memory architecture, then your timeline has to extend back to the late 2000s, whenever the key architectural decisions were made for Apple’s first A4 chip, which debuted in the original iPad in 2010. Regardless, the fact of the matter is that you can make a strong case that Apple is the best consumer hardware company in AI, and this week affirmed that reality.
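Thompson’s DeepSeek R1 example is easy to sanity-check with back-of-the-envelope math. R1 has roughly 671 billion parameters, so at 4 bits each the weights alone come out to around 312 GiB; here’s a rough, weights-only sketch that ignores the KV cache and runtime overhead:

```python
# Rough, weights-only estimate for a 4-bit quantized DeepSeek R1.
params = 671e9          # R1's published total parameter count
bits_per_param = 4      # 4-bit quantization, as in Thompson's example
weights_bytes = params * bits_per_param / 8

print(f"~{weights_bytes / 1024**3:.0f} GiB of weights")  # ~312 GiB
# Fits in the Mac Studio's 512GB of unified memory with headroom for
# the KV cache; no standalone consumer GPU's VRAM comes close.
```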

Anecdotally speaking, based on the people who cover AI that I follow these days, it seems there are largely two buckets of folks who are into local, on-device models: those who have set up pricey NVIDIA rigs at home for their CUDA cores (a small minority), and – the undeniable majority – those who run a spectrum of local models on Macs of different shapes and configurations (usually MacBook Pros). If you have to run high-end, performance-intensive local models for academic or scientific workflows on a desktop, the M3 Ultra Mac Studio sounds like an absolute winner.

However, I’d point out that – again, as far as local, on-device models are concerned – Apple is not shipping the best possible hardware on smartphones.

While the entire iPhone 16 lineup is stuck on 8 GB of RAM (and we know how memory-hungry these models can be), Android phones with at least 12 GB or 16 GB of RAM are becoming pretty much the norm now, especially in flagship territory. Better still in Android land, phones advertised as “gaming phones” with a whopping 24 GB of RAM (such as the ASUS ROG Phone 9 Pro or the RedMagic 10 Pro) may actually make for compelling pocket computers for running smaller, distilled versions of DeepSeek, Llama, or Mistral with better performance than current iPhones.

Interestingly, I keep going back to this quote from Mark Gurman’s latest report on Apple’s AI challenges:

There are also concerns internally that fixing Siri will require having more powerful AI models run on Apple’s devices. That could strain the hardware, meaning Apple either has to reduce its set of features or make the models run more slowly on current or older devices. It would also require upping the hardware capabilities of future products to make the features run at full strength.

Given Apple’s struggles, their preference for a hybrid on-device/server-based AI system, and the market’s evolution on Android, I don’t think Apple can afford to ship 8 GB on iPhones for much longer if they’re serious about AI and positioning their hardware as the best consumer-grade AI computers.

Permalink

Apple Delays Siri Personalization

Apple released a statement to John Gruber of Daring Fireball today announcing that it is delaying a “more personalized Siri.” According to Apple’s Jacqueline Roy:

Siri helps our users find what they need and get things done quickly, and in just the past six months, we’ve made Siri more conversational, introduced new features like type to Siri and product knowledge, and added an integration with ChatGPT. We’ve also been working on a more personalized Siri, giving it more awareness of your personal context, as well as the ability to take action for you within and across your apps. It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year.

This isn’t surprising given where things stand with Siri and Apple Intelligence more generally, but it is still disappointing. Of all the features shown off at WWDC last year, the ability to have Siri take actions in multiple apps on your behalf through natural language requests was one of the most eagerly anticipated. But I’d rather get a feature that works than one that’s half-baked.

Still, you have to wonder where the rest of the AI market will be by the time a “more personalized Siri” is released and whether it will look as much like yesterday’s tech as some of today’s Apple Intelligence features do.

Permalink