John Voorhees

3214 posts on MacStories since November 2015

John is MacStories’ Managing Editor, has been writing about Apple and apps since joining the team in 2015, and today, runs the site alongside Federico.

John also co-hosts four MacStories podcasts: AppStories, which covers the world of apps, MacStories Unwind, which explores the fun differences between American and Italian culture and recommends media to listeners, Ruminate, a show about the weird web and unusual snacks, and NPC: Next Portable Console, a show about the games we take with us.


The Iconfactory Launches Kickstarter to Expand Ollie’s Arcade with Frenzic

Ollie’s Arcade was launched in 2023 by The Iconfactory with three mini-games reminiscent of beloved classic video games. Today, the company launched a Kickstarter campaign with the goal of raising $20,000. If the goal is reached, The Iconfactory will make Ollie’s Arcade free and add its game Frenzic to the bundle. Frenzic was one of the earliest iOS games and a fan favorite; it was later adapted for Apple Arcade but has since left Apple’s subscription gaming service.

The Ollie’s Arcade campaign is about more than just its games though. As The Iconfactory explains:

From a bigger perspective the Kickstarter is yet another way we’re trying to sustain ourselves and keep the Iconfactory up and running. We all know times are tough, especially for small, non-investor driven businesses like ours. We’ve struggled to pay our salaries, keep up with the rising cost of health care and to compete against the onslaught of AI driven design solutions. The new KS won’t be enough to solve all our revenue problems, but it will help give us runway to keep the lights on while we find new ways to stick around and serve you. The more we raise now, the longer and safer that runway gets.

The Iconfactory has had a long and storied history since its founding in 1996. In that time, the company has produced some of the most thoughtfully designed apps and icons around. The Iconfactory isn’t the only business that has been hurt by generative AI, but I sincerely hope it doesn’t become one of its casualties. Check out the Kickstarter campaign and chip in to help the team and make some great games free for everyone. And while you’re at it, don’t miss all the other great apps they have on the App Store, like Tapestry, Tot, Triode, xScope, and many others.


How I Used Claude to Build a Transcription Bot that Learns From Its Mistakes

Step 1: Transcribe with parakeet-mlx.

[Update: Due to the way parakeet-mlx handles transcript timeline synchronization, which can result in caption timing issues, this workflow has been reverted to use the Apple Speech framework. Otherwise, the workflow remains the same as described below.]

I had wanted to transcribe AppStories and MacStories Unwind for years before I finally started three years ago, but the tools at the time were either too inaccurate or too expensive. That changed with OpenAI’s Whisper, an open-source speech-to-text model that blew away the other readily available options.

Still, the results weren’t good enough to publish those transcripts anywhere. Instead, I kept them as text-searchable archives to make it easier to find and link to old episodes.

Since then, a cottage industry of apps has arisen around Whisper transcription. Some of those tools do a very good job with what is now an aging model, but I have never been satisfied with their accuracy or speed. However, when we began publishing our podcasts as videos, I knew it was finally time to start generating transcripts because as inaccurate as Whisper is, YouTube’s automatically generated transcripts are far worse.

VidCap in action.

My first stab at video transcription was to use apps like VidCap and MacWhisper. After a transcript was generated, I’d run it through MassReplaceIt, a Mac app that lets you create and apply a huge dictionary of spelling corrections using a bulk find-and-replace operation. As I found errors in AI transcriptions by manually skimming them, I’d add those corrections to my dictionary. As a result, the transcriptions improved over time, but it was a cumbersome process that relied on me spotting errors, and I didn’t have time to do more than scan through each transcript quickly.
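The dictionary-based cleanup step can be sketched in a few lines of Python. This is a hypothetical reconstruction of the bulk find-and-replace idea, not MassReplaceIt’s actual implementation, and the dictionary entries shown are invented examples:

```python
import re

# Corrections dictionary: known misspellings -> the intended words.
# These entries are illustrative, not the real dictionary.
CORRECTIONS = {
    "App Stories": "AppStories",
    "Mac Stories": "MacStories",
    "Federica": "Federico",
}

def clean_transcript(text: str, corrections: dict[str, str]) -> str:
    """Apply every known correction as a whole-word find-and-replace."""
    for wrong, right in corrections.items():
        # \b anchors avoid touching words that merely contain the error.
        text = re.sub(rf"\b{re.escape(wrong)}\b", right, text)
    return text

print(clean_transcript("Welcome to App Stories on Mac Stories.", CORRECTIONS))
# -> Welcome to AppStories on MacStories.
```

The approach is purely mechanical: it only fixes errors someone has already spotted and added to the dictionary, which is exactly the limitation described above.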

That’s why I was so enthusiastic about the speech APIs that Apple introduced last year at WWDC. The accuracy wasn’t any better than Whisper’s, and in some circumstances it was worse, but they were fast, which I appreciated given the many steps needed to get a YouTube video published.

The process was sped up considerably when Claude Skills were released. A skill can combine a script with instructions to create a hybrid automation with both the deterministic outcome of scripting and the fuzzy analysis of LLMs.

Transcribing with yap.

First, I’d run yap, a command-line tool that transcribes videos with Apple’s speech-to-text framework. Next, I’d open the Claude app, attach the resulting transcript, and run a skill that would execute a script to replace known spelling errors. Then, Claude would analyze the text against its knowledge base, looking for other likely misspellings. When it found one, Claude would reply with some textual context, asking if the proposed change should be made. After I responded, Claude would further improve my transcript, and I’d tell Claude which of its suggestions to add to the script’s dictionary, helping improve the results a little each time I used the skill.

Over the holidays, I refined my skill further and moved it from the Claude app to the Terminal. The first change was to move to parakeet-mlx, an Apple silicon-optimized version of NVIDIA’s Parakeet model that was released last summer. Parakeet isn’t as fast as Apple’s speech APIs, but it’s more accurate, and crucially, its mistakes are closer to the right answers phonetically than the ones made by Apple’s tools. Consequently, Claude is more likely to find mistakes that aren’t in my dictionary of misspellings in its final review.

Managing the built-in corrections dictionary.

With Claude Opus 4.5’s assistance, I rebuilt the Python script at the heart of my Claude skill to run videos through parakeet-mlx, saving the results as either a .srt or .txt file (or both) in the same location as the original file but prepended with “CLEANED TRANSCRIPT.” Because Claude Code can run scripts and access local files from Terminal, the transition to the final fuzzy pass for errors is seamless. Claude asks permission to access the cleaned transcript file that the script creates and then generates a report with suggested changes.
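The output-naming behavior described above is easy to sketch. This is my assumed reconstruction of that part of the script (same folder as the source video, “CLEANED TRANSCRIPT” prefix, .srt and/or .txt output), not the actual code:

```python
from pathlib import Path

def cleaned_output_paths(video: str, srt: bool = True, txt: bool = True) -> list[Path]:
    """Return output paths next to the source video, prefixed with 'CLEANED TRANSCRIPT'."""
    source = Path(video)
    prefix = f"CLEANED TRANSCRIPT {source.stem}"
    extensions = ([".srt"] if srt else []) + ([".txt"] if txt else [])
    # with_name keeps the parent directory, so the cleaned files land
    # in the same location as the original video.
    return [source.with_name(prefix + ext) for ext in extensions]

paths = cleaned_output_paths("/Videos/appstories-episode.mp4")
# -> .../CLEANED TRANSCRIPT appstories-episode.srt and .txt, both in /Videos
```

Keeping the cleaned files beside the source video is what makes the handoff to Claude Code seamless: the skill only needs to know one folder.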

A list of obscure words Claude suggested changing. Every one was correct.

The last step is for me to confirm which suggested changes should be made and which should be added to the dictionary of corrections. The whole process takes just a couple of minutes, and it’s worth the effort. For the last episode of AppStories, the script found and corrected 27 errors, many of which were misspellings of our names, our podcasts, and MacStories. The final pass by Claude managed to catch seven more issues, including everything from a misspelling of the band name Deftones to Susvara, a model of headphones, and Bazzite, an open-source SteamOS project. Those are far from everyday words, but now, their misspellings are not only fixed in the latest episode of AppStories, they’re in the dictionary where those words will always be corrected whether Claude’s analysis catches them or not.

Claude even figured out “goti” was a reference to GOTY (Game of the Year).

I’ve used this same pattern over and over again. I have Claude build me a reliable, deterministic script that helps me work more efficiently; then, I layer in a bit of generative analysis to improve the script in ways that would be impossible or incredibly complex to code deterministically. Here, that generative “extra” looks for spelling errors. Elsewhere, I use it to do things like rank items in a database based on a natural language prompt. It’s an additional pass that elevates the performance of the workflow beyond what was possible when I was using a find-and-replace app and later a simple dictionary check that I manually added items to. The idea behind my transcription cleanup workflow has been the same since the beginning, but boy, have the tools improved the results since I first used Whisper three years ago.


Apple Confirms AI Partnership with Google

Apple has confirmed to CNBC that it has entered into a multi-year partnership with Google to use the search giant’s models and cloud technology for its own AI efforts. According to an unnamed Apple spokesperson:

After careful evaluation, we determined that Google’s technology provides the most capable foundation for Apple Foundation Models and we’re excited about the innovative new experiences it will unlock for our users.

The report still leaves many questions unanswered, including how Gemini fits in with Apple’s own Foundation Models and whether and to what extent Apple will rely on Google hardware. However, after months of speculation and reports from Mark Gurman at Bloomberg that Apple and Google were negotiating, it looks like we’re on the cusp of Apple’s AI strategy coming into better focus.


UPDATE:

Subsequent to the statement made by Apple to CNBC, Apple and Google released a slightly more detailed joint statement that Google published on X:

Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google’s Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year.

After careful evaluation, Apple determined that Google’s AI technology provides the most capable foundation for Apple Foundation Models and is excited about the innovative new experiences it will unlock for Apple users. Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple’s industry-leading privacy standards.

So, while the Apple Foundation Models that power Apple Intelligence will be based on Gemini and unspecified cloud technology, Apple Intelligence features themselves, including more personalized Siri, will continue to run locally on Apple devices and on Apple’s Private Cloud Compute to maintain user privacy.


My Favorite Gear From CES 2026 – and Some Weird and Wonderful Gadgets, Too

It’s CES time again, which means another edition of our annual roundup of the most eye-catching gadgets seasoned with a helping of weird and wonderful tech. I’m sure it will come as no surprise that robots, AI, and TVs are some of the most prominent themes at CES in 2026, but there’s a lot more, so buckle in for a tour of what to expect from the gadget world in the coming months.

AR Glasses

Viture encourages customers to both unleash and embrace The Beast. Source: Viture.

I first tried Xreal AR glasses shortly before the Vision Pro was released. The experience at the time wasn’t great, but you could see the potential for what has turned out to be one of the Vision Pro’s greatest strengths: working on a huge virtual display. There’s also a lot of potential for gaming.

It looks like the tech behind AR glasses is finally getting to a point where I may dip in again this year. Xreal updated and reduced the price of its entry-level 1S glasses, which will make the category accessible to more people.

The company also introduced the Neo dock, a 10,000 mAh battery that also serves as a hub for connecting a game console or other device to its AR glasses. Notably, the Neo is compatible with the Nintendo Switch 2, which caught my eye immediately.

Read more


Coming Soon: What’s Next on Apple TV and Apple Arcade in January 2026

To ring in the new year, Apple has a great lineup of returning Apple TV shows and brand-new Apple Arcade games on the way, with everything kicking off later this week. Here are the highlights.

Apple Arcade Games (January 8)

On January 8th, Apple will release four new games:

  • True Skate+: a skateboarding simulator with realistic physics that’s set in more than 20 real-world locations. The game has been available on the App Store for years, but now Arcade subscribers can play it on their iPhone and iPad, or even stream it via AirPlay to an Apple TV, without any In-App Purchases.
  • Sago Mini Jinja’s Garden: a family game that lets preschool-aged kids explore gardens, cook, and harvest ingredients in three distinct 3D areas.
  • Cozy Caravan: another family-friendly game where kids can create and play as an animal character, make meals, and enjoy activities like fishing and games, all while preparing for the Whizz Bang Fair.
  • Potion Punch 2+: a restaurant management simulator where you run a variety of shops like the Potion Café or Enchantment Shop. The app features a fantasy theme filled with magic and monsters and an amusing storyline.

Then starting this Friday, a host of new Apple TV shows will begin to appear:

Tehran, Season 3 (January 9)

The International Emmy Award-winning Israeli spy thriller returns for its third season with Hugh Laurie, who joins the cast as a South African nuclear inspector. Created by Moshe Zonder, Dana Eden, and Maor Kohn, the series follows Mossad agent Tamar Rabinyan (Niv Sultan) as she takes on dangerous missions deep inside Iran. After going rogue at the end of season two, Tamar fights to win back the Mossad’s support and survive, with the eight-episode season directed by Daniel Syrkin.

Add to your Calendar:

Hijack, Season 2 (January 14)

I loved season one of this series starring Idris Elba, who returns as corporate negotiator Sam Nelson in this real-time thriller. After saving a hijacked flight last season, Sam finds himself at the center of a new crisis on a Berlin underground train. The eight-episode season includes new cast members Toby Jones, Lisa Vicari, and Clare-Hope Ashitey, with returning stars Christine Adams, Max Beesley, and Archie Panjabi.

Add to your Calendar:

Read more


macOS Tahoe’s Messy Menus

Nikita Prokopov writing on tonsky.me about macOS Tahoe’s menu icons:

In my opinion, Apple took on an impossible task: to add an icon to every menu item. There are just not enough good metaphors to do something like that.

But even if there were, the premise itself is questionable: if everything has an icon, it doesn’t mean users will find what they are looking for faster.

And even if the premise was solid, I still wish I could say: they did the best they could, given the goal. But that’s not true either: they did a poor job consistently applying the metaphors and designing the icons themselves.

It’s a brutal assessment of the sprinkling of iconography throughout Tahoe’s menu system that had me nodding along in agreement as I read it.

There’s no denying the inconsistencies in icon choices, their lack of legibility, and the overall clutter added to menus. Yet at the same time, I can’t say I’ve been terribly bothered by them either. That’s probably because I use keyboard shortcuts and launchers so much, rarely relying on the Mac’s menu system. At the same time, though, part of me wonders whether those tiny icons are at least partially what drove me to buy a bigger monitor recently. I don’t think so, but maybe?

In any event, if you care about design, Prokopov’s detailed and well-illustrated analysis of Tahoe’s menu icons is well worth your time.

Permalink

How I Revived My Decade-Old App with Claude Code

Blink from 2017 (left) and 2026 (right).

Every holiday season, Federico and I spend our downtime on nerd projects. This year, both of us spent a lot of that time building tools for ourselves with Claude Code in what developed into a bit of a competition as we each tried to one-up the other’s creations. We’ll have more on what we’ve been up to on AppStories, MacStories, and for Club members soon, but today, I wanted to share an experiment I ran last night that I think captures a very personal and potentially far-reaching slice of what tools like Claude Code can enable.

Blink from 2017 running on a modern iPhone.

Before I wrote at MacStories, I made a few apps, including Blink, which generated affiliate links for Apple’s media services. The app had a good run from 2015-2017, but I pulled it from the App Store when Apple ended its affiliate program for apps because that was the part of the app that was used the most. Since then, the project has sat in a private GitHub repo untouched.

Last night, I was sitting on the couch working on a Safari web extension when I opened GitHub and saw that old Blink code, which sparked a thought. I wondered whether Claude Code could update Blink to use Swift and SwiftUI with minimal effort on my part. I don’t have any intention of re-releasing Blink, but I couldn’t shake the “what if” rattling in my head, so I cloned the repo and put Claude to work.

Read more


App Marketing: My Extended Q&A for Paul Hudson’s Everything but the Code

Earlier this year, Paul Hudson asked me to answer a few questions about app marketing for a book he was writing called Everything But the Code.

The book is finished now, and it’s full of great advice from Paul and a long list of indie developers whose apps are some of MacStories’ favorites. Paul covers the entire process of making apps, from validating an idea to selling your app and beyond. The only thing he doesn’t cover, as the book’s title makes clear, is building apps, which is the subject of other books and courses he’s created.

Paul was kind enough to ask me to share some insights on marketing apps to the press. You’ll find my contributions in the Prelaunch and Publicity and Aftermath and Evolution chapters, and now that the book is final, I thought I’d share extended versions of my responses with readers. Although the focus is on apps, I expect there are a few lessons here for anyone pitching their creative work to the world. So, here you go.

Paul Hudson: What common mistakes do developers make when pitching their app to the press?

Me: Most developers do a great job thinking through what they’re pitching but don’t spend enough time thinking about who they’re pitching to. I’d love to be able to tell developers, “do these five things, and you’ll have a pitch you can send to anyone,” but it doesn’t work that way. Developers need to think about things like who at a publication typically covers certain types of apps.

For example, if you know a publication has a musician on staff who has covered music apps before, that person should be at the top of your list if you’ve built a guitar tab app. However, that doesn’t mean you shouldn’t contact anyone else at the publication. People get busy, so don’t limit yourself. However, focus your efforts on the people who are most likely to be receptive to your app.

It also pays to make things easy for the person you’re pitching to. Keep your pitch short and to the point, link to a press kit, beta, and other materials, and follow up closer to launch.

A few other pitch pointers:

  • Don’t wait to send your pitches until the last minute. Personally, I prefer getting pitches at least a couple of weeks in advance of a launch, so I can make the time for testing and writing about them.
  • Don’t send pitches during WWDC, on Apple event days, or major holidays. Your pitch is much more likely to get lost in the shuffle on those days.
  • You don’t need to ask if it’s okay to send a TestFlight link. If the person you’re pitching to isn’t interested, they won’t use it.
  • It’s okay to copy multiple people at a publication if you’re unsure who to contact.
  • Try to understand where a writer likes to be contacted. Email is probably the safest bet, but social media DMs might be better for some people.
  • It’s okay to send follow-up reminders about your app launch. I personally appreciate them.
  • Don’t expect app feedback from most press contacts. I let developers know when I find the kind of bug I’d mention in a review, but unfortunately, I usually don’t have time for much more than that.
  • Don’t take it personally if you don’t get a response to a pitch. Remember, the people you contact are getting a lot of pitches.
  • Don’t close down your TestFlight beta immediately after you launch your app. If a publication can’t get a story out to coincide with your launch, closing down your beta immediately so it can no longer be downloaded makes it less likely they’ll cover it post-launch.
  • Don’t forget to include the name of your app in your pitch – yes, that happens.

Read more


OpenAI Opens Up ChatGPT App Submissions to Developers

As announced at OpenAI’s DevDay earlier this year, developers may now submit ChatGPT apps for review and publication. OpenAI’s blog post explains that:

Apps extend ChatGPT conversations by bringing in new context and letting users take actions like order groceries, turn an outline into a slide deck, or search for an apartment.

Under the hood, OpenAI is using MCP, the Model Context Protocol, which was pioneered by Anthropic late last year and donated to the Agentic AI Foundation last week.
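MCP itself is a JSON-RPC 2.0 protocol: the host application invokes a tool exposed by an MCP server with a `tools/call` request. A minimal sketch of what such a message looks like, where the tool name and arguments are invented for illustration rather than taken from any real ChatGPT app:

```python
import json

def tool_call_request(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# A hypothetical playlist-style tool invocation:
msg = tool_call_request(1, "create_playlist", {"title": "Replacements 1985", "limit": 25})
```

Because the wire format is this generic, any server that speaks MCP can, in principle, plug into ChatGPT, Claude, or other hosts that support the protocol.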

Apps are currently available in the web version of ChatGPT from the sidebar or tools menu and, once connected, can be accessed by @mentioning them. Early participants include Adobe, which preannounced its apps last week, Apple Music, Spotify, Zillow, OpenTable, Figma, Canva, Expedia, Target, AllTrails, Instacart, and others.

I was hoping the Apple Music app would allow me to query my music library directly, but that’s not possible. Instead, it allows ChatGPT to do things like search Apple Music’s full catalog and generate playlists, which is useful but limited.

ChatGPT’s Apple Music app lets you create playlists.

Currently, there’s no way for developers to complete transactions inside ChatGPT. Instead, sales can be kicked to another app or the web, although OpenAI says it is exploring ways to offer transactions inside ChatGPT. Developers who want to submit an app must follow OpenAI’s app submission guidelines (sound familiar?) and can learn more from a variety of resources that OpenAI has made available.

A playlist generated by ChatGPT from a 40-year-old setlist.

I haven’t spent a lot of time with the apps that are available, but despite the lack of access to your library, the Apple Music integration can be useful when combined with ChatGPT’s world knowledge. I asked it to create a playlist of the songs that The Replacements played at a show I saw in 1985, and while I don’t recall the exact setlist, ChatGPT matched what’s on Setlist.fm, a user-maintained wiki of live shows. I could have made this playlist myself, but it was convenient to have ChatGPT do it instead, even if the Apple Music integration is limited to 25-song playlists, which meant that The Replacements’ setlist was split into two playlists.

We’re still in the early days of MCP, and participation by companies will depend on whether they can make incremental sales to users via ChatGPT. Still, there’s clearly potential for apps embedded in chatbots to take off.