This Week's Sponsor:

Textastic

The Powerful Code Editor for iPad and iPhone — Now Free to Try


Posts in Linked

How Does This Keep Happening?

Today, Blue Prince, a critically acclaimed videogame, appeared on Apple’s App Store. The trouble was, it wasn’t offered for sale by its developer, Dogubomb, or its publisher, Raw Fury. The real Blue Prince is only available on Xbox, PlayStation, and PC.

What appeared on the App Store, and has since been removed, was an opportunistic scam, as Jay Peters explained for The Verge:

Before it was removed, I easily found one iOS copy of the game just by searching Blue Prince on the App Store – it was the first search result. The icon looked like it would be the icon for a hypothetical mobile version of the game, the listing had screenshots that looked like they were indeed from Blue Prince, and the description for the game matched the description on Steam.

The port was available long enough for Blue Prince’s developer and publisher to post about it on Bluesky and, according to Peters, for the fake to reach #8 in the App Store Entertainment category. I feel for anyone who bought the game assuming it was legit, given Peters’ experience:

I also quickly ran into a major bug: when I tried to walk through one of the doors from the Entrance Hall, I fell through the floor.

This isn’t the first time this sort of thing has happened. As Peters points out, it happened to Palworld and Wordle, too. Other popular games that have appeared on the App Store as janky scam ports include Cuphead, a version of Balatro that appeared before its official release on iOS, and Unpacking.

This seems like the sort of thing that could be fixed through automation. Scammers want users to find these games so they can make a quick buck. As a result, the name of the game is often identical to what you’d find on the Steam, Xbox, or PlayStation stores. It strikes me that automated searches for the top games on each store, combined with an analysis of how quickly a game is climbing the charts, would catch a lot of this sort of thing, flagging it for reviewers who could take a closer look.
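To make that concrete, here’s a minimal sketch of the kind of heuristic I have in mind. Everything in it is hypothetical – the data model, the threshold, and the list of chart-toppers are mine, purely for illustration – but the inputs are all things Apple could scrape trivially at scale:

```swift
import Foundation

// Hypothetical data model; none of this reflects anything Apple actually runs.
struct Listing {
    let name: String
    let developer: String
    let rankHistory: [Int]   // chart positions since launch, most recent last
}

// Chart-topping titles scraped from Steam, Xbox, and PlayStation (assumed input),
// restricted to games with no official iOS release.
let topCrossPlatformTitles: Set<String> = ["blue prince", "palworld", "cuphead"]

// Flag a listing when its name matches a popular game from another store,
// its developer isn't the known rights holder, and it's climbing unusually fast.
func shouldFlagForReview(_ listing: Listing, knownDevelopers: Set<String>) -> Bool {
    let matchesPopularGame = topCrossPlatformTitles.contains(listing.name.lowercased())
    let unknownDeveloper = !knownDevelopers.contains(listing.developer)
    let climb = (listing.rankHistory.first ?? 0) - (listing.rankHistory.last ?? 0)
    return matchesPopularGame && unknownDeveloper && climb > 50
}

let suspect = Listing(name: "Blue Prince",
                      developer: "Unknown Studio",
                      rankHistory: [120, 60, 8])
print(shouldFlagForReview(suspect, knownDevelopers: ["Dogubomb", "Raw Fury"]))
// Prints "true": the name matches a top PC/console title, the developer
// isn't the rights holder, and the listing jumped 112 chart positions.
```

A human reviewer would still make the final call; the point is that none of these signals is hard to compute.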


By the way, if you haven’t tried Blue Prince, you should. It’s an amazing game and an early contender for game of the year. You can learn more about the game and find links to where to buy it here. Also, Brendon Bigley, my NPC co-host, has an excellent written and video review of Blue Prince on Wavelengths.

Permalink

Contabulation

Rumors have been flying for a while about a planned redesign for iOS 19. One of the rumors is that iOS tab bars will support search bars, which led Ben McCarthy, the developer of Obscura, to write a terrific breakdown of how tab bars should be used:

If search is the primary form of navigation, as in Safari, Maps, or Callsheet, it should be at the bottom. If a search bar is just used for filtering content already on screen, then it can make more sense to leave it at the top, as scrolling is probably the more natural way to find what you’re looking for (the Settings app is a good example of this). So I’m delighted at the rumours that iOS 19’s Tab Bars can adapt into Search Bars when needed. I think it’ll be [a] big improvement and allow for more flexible navigation patterns with less code.
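For a concrete sense of the pattern Ben is describing, here’s a minimal SwiftUI sketch using the Tab API Apple shipped in iOS 18, where a tab can already declare a search role. Whether the rumored iOS 19 redesign morphs that tab into a bottom search bar is, of course, speculation; the view names and content here are mine:

```swift
import SwiftUI

// The .search role marks a tab as the app's search entry point – exactly
// the kind of hook a morphing tab-bar-to-search-bar design would need.
struct LibraryView: View {
    @State private var query = ""

    var body: some View {
        TabView {
            Tab("Home", systemImage: "house") {
                Text("Home")
            }
            Tab("Browse", systemImage: "square.grid.2x2") {
                Text("Browse")
            }
            // Declaring the role, rather than hand-rolling a search screen,
            // lets the system decide how and where to present search.
            Tab(role: .search) {
                NavigationStack {
                    List {
                        Text("Results for “\(query)”")
                    }
                    .searchable(text: $query)
                }
            }
        }
    }
}
```

That division of labor – the app declares intent, the system picks the presentation – is what would make the rumored behavior possible “with less code.”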

But Ben didn’t just provide pointers on how tab bars should be used. They also argued that tab bars should:

  • support actions and context menus,
  • accommodate more than five tabs,
  • and allow for user-generated tabs, something that is common on macOS.

It’s a great post, well worth studying as we wait to see whether and how far Apple will go in modifying the tab bar. As Ben notes, the tab bar has been around since the beginning of the iPhone, has changed very little, and is due for a redesign. I agree.

Permalink

Whisky Shuts Down Project That Enabled Windows Gaming on Mac

Not long ago, Isaac Marovitz, the developer behind Whisky, the open-source WINE front-end that made it easy to play Windows games on a Mac, announced that the project had come to an end. Whisky is how Niléane got Cities: Skylines 2 running on an M2 MacBook Air in 2023, and the project was well regarded in the gaming community for its ease of use. In shutting down the project, Marovitz encouraged Whisky users to move to CrossOver, a paid app by CodeWeavers.

In an interview with Ars Technica’s Kevin Purdy, Marovitz said:

“I am 18, yes, and attending Northeastern University, so it’s always a balancing act between my school work and dev work,” Isaac Marovitz wrote to Ars. The Whisky project has “been more or less in this state for a few months, I posted the notice mostly to clarify and formally announce it,” Marovitz said, having received “a lot of questions” about the project status.

As Purdy explained for Ars Technica, Marovitz also became concerned that his free project threatened CrossOver and, by extension, WINE itself. Last week, CodeWeavers’ CEO wrote about the shutdown, acknowledging Marovitz’s work and commending his desire to protect the WINE project.

It’s always a shame to see a project as popular and polished as Whisky discontinued. Some gamers may not like that CrossOver is a paid product, but I’m glad that there’s an alternative for those who want it.

To me, though, the popularity and fragility of projects like Whisky highlight that a better solution would be for Apple to open its Game Porting Toolkit to users. The Game Porting Toolkit is built on CrossOver’s open-source code. However, unlike the CrossOver app sold to gamers, Apple’s Game Porting Toolkit is meant for developers who want to move a game from Windows to Mac. It’s not impossible for gamers to use, but it’s not easy either. I’m not the first to suggest this, and Valve has demonstrated both the technical and commercial viability of such an approach with Proton, but as WWDC approaches, a user-facing Game Porting Toolkit is near the top of my macOS 16 wish list.

Permalink

How Could Apple Use Open-Source AI Models?

Yesterday, Wayne Ma, reporting for The Information, published an outstanding story detailing the internal turmoil at Apple that led to the delay of the highly anticipated Siri AI features last month. From the article:

In November 2022, OpenAI released ChatGPT to a thunderous response from the tech industry and public. Within Giannandrea’s AI team, however, senior leaders didn’t respond with a sense of urgency, according to former engineers who were on the team at the time.

The reaction was different inside Federighi’s software engineering group. Senior leaders of the Intelligent Systems team immediately began sharing papers about LLMs and openly talking about how they could be used to improve the iPhone, said multiple former Apple employees.

Excitement began to build within the software engineering group after members of the Intelligent Systems team presented demos to Federighi showcasing what could be achieved on iPhones with AI. Using OpenAI’s models, the demos showed how AI could understand content on a user’s phone screen and enable more conversational speech for navigating apps and performing other tasks.

Assuming the details in this report are correct, I truly can’t imagine how one could possibly see the debut of ChatGPT two years ago and not feel a sense of urgency. Fortunately, other teams at Apple did, and it sounds like they’re the folks who have now been put in charge of the next generation of Siri and AI.

There are plenty of other details worth reading in the full story (especially the parts about what Rockwell’s team wanted to accomplish with Siri and AI on the Vision Pro), but one tidbit in particular stood out to me: Federighi has now given the green light to rely on third-party, open-source LLMs to build the next wave of AI features.

Federighi has already shaken things up. In a departure from previous policy, he has instructed Siri’s machine-learning engineers to do whatever it takes to build the best AI features, even if it means using open-source models from other companies in its software products as opposed to Apple’s own models, according to a person familiar with the matter.

“Using” open-source models from other companies doesn’t necessarily mean shipping consumer features in iOS powered by external LLMs. I’ve seen some people interpret this paragraph as Apple preparing to release a local Siri powered by Llama 4 or DeepSeek, and I think we should pay more attention to that “build the best AI features” (emphasis mine) line.

My read of this part is that Federighi might have instructed his team to use distillation to better train Apple’s in-house models as a way to accelerate the development of the delayed Siri features and put them back on the company’s roadmap. Given Tim Cook’s public appreciation for DeepSeek and this morning’s New York Times report that the delayed features may come this fall, I wouldn’t be shocked to learn that Federighi told Siri’s ML team to distill DeepSeek R1’s reasoning knowledge into a new variant of their ~3 billion parameter foundation model that runs on-device. Doing that wouldn’t mean that iOS 19’s Apple Intelligence would be “powered by DeepSeek”; it would just be a faster way for Apple to catch up without throwing away the foundation model they unveiled last year (which, supposedly, had a ~30% error rate).

In thinking about this possibility, I got curious and decided to check out the original paper that Apple published last year with details on how they trained the two versions of AFM (Apple Foundation Model): AFM-server and AFM-on-device. The latter is the smaller, ~3 billion parameter model that gets downloaded on-device with Apple Intelligence. I’ll let you guess what Apple did to improve the performance of the smaller model:

For the on-device model, we found that knowledge distillation (Hinton et al., 2015) and structural pruning are effective ways to improve model performance and training efficiency. These two methods are complementary to each other and work in different ways. More specifically, before training AFM-on-device, we initialize it from a pruned 6.4B model (trained from scratch using the same recipe as AFM-server), using pruning masks that are learned through a method similar to what is described in (Wang et al., 2020; Xia et al., 2023).

Or, more simply:

AFM-server core training is conducted from scratch, while AFM-on-device is distilled and pruned from a larger model.

If the distilled version of AFM-on-device that was tested until a few weeks ago produced a wrong output one-third of the time, perhaps it would be a good idea to perform distillation again, this time based on knowledge from smarter, larger models? Say, using 250 Nvidia GB300 NVL72 servers?
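For readers who want a feel for the mechanics, here’s a minimal, self-contained sketch of soft-target distillation in the style of Hinton et al. (2015). It’s illustrative only – the function names, toy logits, and temperature are mine, not Apple’s – but it shows the core idea: the student model is trained to match the teacher’s temperature-softened output distribution.

```swift
import Foundation

// Temperature-softened softmax: a higher temperature flattens the
// distribution, exposing the teacher's "dark knowledge" about
// near-miss classes, not just its top answer.
func softmax(_ logits: [Double], temperature t: Double) -> [Double] {
    let scaled = logits.map { $0 / t }
    let maxLogit = scaled.max() ?? 0
    let exps = scaled.map { exp($0 - maxLogit) }
    let total = exps.reduce(0, +)
    return exps.map { $0 / total }
}

// KL(teacher ‖ student) on the softened distributions; during student
// training this term is added to the usual cross-entropy on hard labels.
func distillationLoss(teacherLogits: [Double],
                      studentLogits: [Double],
                      temperature t: Double) -> Double {
    let p = softmax(teacherLogits, temperature: t)
    let q = softmax(studentLogits, temperature: t)
    var loss = 0.0
    for (pi, qi) in zip(p, q) {
        loss += pi * log(pi / qi)
    }
    // The t² factor keeps gradient magnitudes comparable across temperatures.
    return t * t * loss
}

// Toy example: a large "teacher" guiding a small "student" over three classes.
let teacher = [2.0, 1.0, 0.1]
let student = [1.5, 1.2, 0.3]
print(distillationLoss(teacherLogits: teacher, studentLogits: student, temperature: 2.0))
```

Swap “teacher” for a frontier-scale reasoning model and “student” for a ~3 billion parameter on-device model, and you have the shape of the shortcut I’m describing.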

(One last fun fact: per their paper, Apple trained AFM-server on 8192 TPUv4 chips for 6.3 trillion tokens; that setup still wouldn’t be as powerful as “only” 250 modern Nvidia servers today.)

Permalink

A Peek Into LookUp’s Word of the Day Art and Why It Could Never Be AI-Generated

Yesterday, Vidit Bhargava, developer of the award-winning dictionary app LookUp, wrote on his blog about the way he hand-makes each piece of artwork that accompanies the app’s Word of the Day. Revealing that he has kept up this practice every day for an astonishing 10 years, Vidit talked about how each image is made from scratch, either as an illustration or with photography he shoots specifically for the design:

Each Word of the Day has been illustrated with care, crafting digital illustrations, picking the right typography that conveys the right emotion.

Some words contain images, these images are painstakingly shot, edited and crafted into a Word of the Day graphic by me.

I’d noticed before that each Word of the Day image in LookUp seemed unique, but I assumed Vidit was using stock imagery and illustrations as a starting point each time. The revelation that he is creating almost all of these from scratch every single day was incredible and gave me a whole new level of respect for the developer.

The idea of AI-generated art (specifically, art that is wholly generated from scratch by LLMs) is something that really sticks in my throat – never more so than with the recent rip-off of the beautiful, hand-drawn Studio Ghibli films by OpenAI. By contrast, Vidit’s work shows passion and originality.

To quote Vidit, “Real art takes time, effort and perseverance. The process is what makes it valuable.”

You can read the full blog post here.

Permalink


Is Electron Really That Bad?

I’ve been thinking about this video by Theo Browne for the past few days, especially in the aftermath of my story about working on the iPad and realizing its best apps are actually web apps.

I think Theo did a great job contextualizing the history of Electron and how we got to the point where the majority of desktop apps are built with it. Two sections of the video stood out to me, and I want to highlight them here. First, this observation – which I strongly agree with – regarding the desktop apps we ended up with thanks to Electron and why we often consider them “buggy”:

There wouldn’t be a ChatGPT desktop app if we didn’t have something like Electron. There wouldn’t be a good Spotify player if we didn’t have something like Electron. There wouldn’t be all of these awesome things we use every day. All these apps… Notion could never have existed without Electron. VS Code and now Cursor could never have existed without Electron. Discord absolutely could never have existed without Electron.

All of these apps are able to exist and be multi-platform and ship and theoretically build greater and greater software as a result of using this technology. That has resulted in some painful side effects, like the companies growing way faster than expected because they can be adopted so easily. So they hire a bunch of engineers who don’t know what they’re doing, and the software falls apart. But if they had somehow magically found a way to do that natively, it would have happened the same exact way.

This has nothing to do with Electron causing the software to be bad and everything to do with the software being so successful that the companies hire too aggressively and then kill their own software in the process.

The second section of the video I want to call out is the part where Theo links to an old thread from the developer of BoltAI, a native SwiftUI app for Mac that went through multiple updates – and a lot of work on the developer’s part – to ensure the app wouldn’t hit 100% CPU usage when simply loading a conversation with ChatGPT. As documented in the thread from late 2023, this is a common issue for the majority of AI clients built with SwiftUI, which is often less efficient than Electron when it comes to rendering real-time chat messages. Ironic.
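To illustrate the class of problem – this is a general SwiftUI pattern, not BoltAI’s actual code – here’s a sketch of how per-token streaming updates can invalidate an entire conversation view, along with the usual fix of isolating the streaming state in its own observed subview:

```swift
import SwiftUI

// Illustrative only. A common way chat UIs peg a CPU core: every streamed
// token mutates state that the whole conversation hierarchy observes,
// re-evaluating the full message list token by token.
final class StreamingModel: ObservableObject {
    @Published var text = ""   // appended to many times per second
}

struct ConversationView: View {
    let messages: [String]       // completed messages; these never change mid-stream
    let stream: StreamingModel   // deliberately NOT @ObservedObject here, so
                                 // per-token updates don't invalidate this view

    var body: some View {
        ScrollView {
            LazyVStack(alignment: .leading) {
                ForEach(messages, id: \.self) { Text($0) }
                // Only this subview observes the stream, so each token
                // invalidates a single Text instead of the whole list.
                StreamingMessage(model: stream)
            }
        }
    }
}

struct StreamingMessage: View {
    @ObservedObject var model: StreamingModel
    var body: some View { Text(model.text) }
}
```

None of this is impossible in SwiftUI, obviously, but it’s exactly the kind of invalidation tuning Chromium has already spent two decades doing for you.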

Theo argues:

You guys need to understand something. You are not better at rendering text than the Chromium team is. These people have spent decades making the world’s fastest method for rendering documents across platforms because the goal was to make Chrome as fast as possible regardless of what machine you’re using it on. Electron is cool because we can build on top of all of the efforts that they put in to make Electron and specifically to make Chromium as effective as it is. The results are effective.

The fact that you can swap out the native layer with SwiftUI with even just a web view, which is like Electron but worse, and the performance is this much better, is hilarious. Also notice there’s a couple more Electron apps he has open here, including Spotify, which is only using less than 3% of his CPU. Electron apps don’t have to be slow. In fact, a lot of the time, a well-written Electron app is actually going to perform better than an equivalently well-written native app because you don’t get to build rendering as effectively as Google does.

Even if you think you made up your mind about Electron years ago, I suggest watching the entire video and considering whether this crusade against more accessible, more frequently updated (and often more performant) desktop software still makes sense in 2025.

Permalink

Recording Video and Gaming: A Setup Update

It’s been a couple of months since I updated my desk setup. In that time, I’ve concentrated on two areas: video recording and handheld gaming.

I wasn’t happy with the Elgato Facecam Pro 4K camera, so I switched to the iPhone 16e. The Facecam Pro is a great webcam, but the footage it shot for our podcasts was mediocre. In the few weeks since I moved to the 16e, I’ve been very happy with it. My office is well lit, and the video I’ve shot with the 16e is clear, detailed, and vibrant.

The iPhone 16e sits behind an Elgato Prompter, a desktop teleprompter that can act as a second Mac display. That display can be used to read scripts, which I haven’t done much of yet, or for apps. I typically put my Zoom window on the Prompter’s display, so when I look at my co-hosts on Zoom, I am also looking into the camera.

The final piece of my video setup, added since the beginning of the year, is the Tourbox Elite Plus. It’s a funny-looking contraption with lots of buttons and dials that fits comfortably in your hand. It’s a lot like a Stream Deck or Logitech MX Creative Console, but the many shapes and sizes of its buttons, dials, and knobs set it apart and make it easier to associate each with a certain action. Like similar devices, everything can be tied to keyboard shortcuts, macros, and automations, making it an excellent companion for audio and video editing.

On the gaming side of things, my biggest investment has been in a TP-Link Wi-Fi 7 Mesh System. Living in a three-story condo makes setting up good Wi-Fi coverage hard. With my previous system, I decided to skip putting a router on the third floor, which was fine unless I wanted to play games in bed in the evening. With a new three-router system that supports Wi-Fi 7, I have better coverage and speed, which has already made game streaming noticeably better.

Ayn Odin 2 Portal Pro. Source: Ayn.

The other change is the addition of the Ayn Odin 2 Portal Pro, which we’ve covered on NPC: Next Portable Console. I love its OLED screen and the fact that it runs Android, which makes streaming games and setting up emulators a breeze. It supports Wi-Fi 7, too, so it pairs nicely with my new Wi-Fi setup.

A few weeks ago, I realized that I often sit on my couch with a pillow in my lap to prop up my laptop or iPad Pro. That convinced me to add Mechanism’s Gaming Pillow to my setup, which I use in the evening on my couch or later in bed. Mechanism makes a bunch of brackets and other accessories for connecting various devices to the pillow’s arm, which I plan to explore more in the coming weeks.

The 8BitDo Ultimate 2 Controller. Source: 8BitDo.

There are a handful of other changes that I’ve made to my setup, which you can find along with everything else I’m currently using on our Setups page, but there are two other items I wanted to shout out here. The first is the JSAUX 16” FlipGo Pro Dual Monitor, which I recently reviewed. It’s two stacked 16” matte screens joined by a hinge. It’s a wonderfully weird and incredibly useful way to get a lot of screen real estate in a relatively small package. The second is 8BitDo’s new Ultimate 2 Wireless Controller, which works with Windows and Android. I was a fan of the original version of this controller, and this update preserves the original’s build quality while adding new features like L4 and R4 buttons, TMR joysticks that use less energy than Hall effect joysticks, and both 2.4G (via a USB-C dongle) and Bluetooth connection options.

That’s it for now. In the coming months, I hope to redo parts of my smart home setup, so stay tuned for another update later this summer or in the fall.

Permalink

Opening iOS Is Good News for Smartwatches

Speaking of opening up iOS to more types of applications, I enjoyed this story by Victoria Song, writing at The Verge, about the new EU-mandated interoperability requirements that include, among other things, smartwatches:

This is a big reason why it’s a good thing that the European Commission recently gave Apple marching orders to open up iOS interoperability to other gadget makers. You can read our explainer on the nitty gritty of what this means, but the gist is that it’s going to be harder for Apple to gatekeep iOS features to its own products. Specific to smartwatches, Apple will have to allow third-party smartwatch makers to display and interact with iOS notifications. I’m certain Garmin fans worldwide, who have long complained about the inability to send quick replies on iOS, erupted in cheers.

And this line, which is so true in its simplicity:

Some people just want the ability to choose how they use the products they buy.

Can you imagine if your expensive Mac desktop had, say, added latency whenever you entered text with a non-Apple keyboard? Or if its USB-C port only worked with proprietary Apple accessories? Clearly, those restrictions would be absurd on computers that cost thousands of dollars. And yet, similar restrictions have long existed on iPhones and in the iOS ecosystem, and it’s time to put an end to them.

Permalink

On Apple Allowing Third-Party Assistants on iOS

This is an interesting idea by Parker Ortolani: what if Apple allowed users to change their default assistant from Siri to something else?

I do not want to harp on the Siri situation, but I do have one suggestion that I think Apple should listen to. Because I suspect it is going to take quite some time for the company to get the new Siri out the door properly, they should do what was previously unthinkable. That is, open up iOS to third-party assistants. I do not say this lightly. I am one of those folks who does not want iOS to be torn open like Android, but I am willing to sign on when it makes good common sense. Right now it does.

And:

I do not use Gemini as my primary LLM generally, I prefer to use ChatGPT and Claude most of the time for research, coding, and writing. But Gemini has proved to be the best assistant out of them all. So while we wait for Siri to get good, give us the ability to use custom assistants at the system level. It does not have to be available to everyone, heck create a special intent that Google and these companies need to apply for if you want. But these apps with proper system level overlays would be a massive improvement over the existing version of Siri. I do not want to have to launch the app every single time.

As a fan of the progressive opening up of iOS that’s been happening in Europe thanks to our laws, I can only welcome such a proposal – especially when I consider the fact that long-pressing the side button on my expensive phone defaults to an assistant that can’t even tell which month it is. If Apple truly thinks that Siri helps users “find what they need and get things done quickly”, they should create an Assistant API and allow other companies to compete with them. Let iPhone users decide which assistant they prefer in 2025.

Some people may argue that other assistants, unlike Siri, won’t be able to access key features such as sending messages or integrating with core iOS system frameworks. My reply would be: perhaps having a more prominent placement on iOS would actually push third-party companies to integrate with the iOS APIs that do exist. For instance, there is nothing stopping OpenAI from integrating ChatGPT with the Reminders app; they have done exactly that with MapKit, and if they wanted, they could plug into HomeKit, HealthKit, and the dozens of other frameworks available to developers. And for those iOS features that don’t have an API for other companies to support…well, that’s for Apple to fix.
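To be clear about what I’m imagining, here’s a purely hypothetical sketch of what such an Assistant API could look like. None of these types exist in Apple’s SDKs today; this is just the shape of the extension point I’d like to see:

```swift
import Foundation

// Hypothetical only: no SystemAssistant protocol or AssistantResponse type
// exists in Apple's SDKs. This sketches the extension point I'm proposing.
protocol SystemAssistant {
    /// Invoked when the user long-presses the side button
    /// (or uses whatever trigger they've configured).
    func handle(utterance: String) async -> AssistantResponse
}

struct AssistantResponse {
    let spokenText: String      // what the system reads aloud
    let followUpAllowed: Bool   // whether the mic stays open for a reply
}

// A third-party assistant – say, one backed by Claude or ChatGPT – would
// adopt the protocol, forward the utterance to its own service, and be
// selectable as the system default in Settings.
struct ClaudeAssistant: SystemAssistant {
    func handle(utterance: String) async -> AssistantResponse {
        // A real implementation would call the provider's API here.
        AssistantResponse(spokenText: "It's April.", followUpAllowed: true)
    }
}
```

Apple could gate adoption behind an entitlement, exactly like Ortolani’s “special intent” suggestion, so only vetted providers could register as system assistants.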

From my perspective, it always goes back to the same idea: I should be able to freely swap out software on my Apple pocket computer, just as I can – thanks to a safe, established system – on my Apple desktop computer. (Arguably, that is also the perspective of, you know, the law in Europe.) Even Google – a company that would have every reason not to let people swap the Gemini assistant for anything else – lets folks decide which assistant they want to use on Android. And, as you can imagine, competition there is producing some really interesting results.

I’m convinced that, at this point, a lot of people despise Siri and would simply prefer pressing their assistant button to talk to ChatGPT or Claude – even if that meant losing access to reminders, timers, and whatever it is that Siri can reliably accomplish these days. (I certainly wouldn’t mind putting Claude on my iPhone and leaving Siri on the Watch for timers and HomeKit.) Whether it’s because of superior world knowledge, proper multilingual abilities (something that Siri still doesn’t support!), or longer contextual conversations, hundreds of millions of people have clearly expressed their preference for new types of digital assistance and conversations that go beyond the antiquated skillset of Siri.

If a new version of Siri isn’t going to be ready for some time, and if Apple does indeed want to make the best computers for AI, maybe it’s time to open up that part of iOS in a way that goes beyond the (buggy) ChatGPT integration with Siri.

Permalink