This was a long time in the making – and I hinted as much in my iOS 26 review back in September – but after a lot of back and forth and a year of daily usage, I’ve decided to switch back from Spotify to Apple Music. Put simply: despite the CEO’s replacement, I continue...
The Great Digital Declutter
This week, Federico and John clean house, deleting old apps, screenshots, half-built shortcuts, huge downloads, and more. A look at the workflows and apps we use to stay organized and clean up our digital messes.
On AppStories+, Federico’s TypingMind experiments continue, while John shares his experience using Claude Code to build tools for running MacStories.
We deliver AppStories+ to subscribers with bonus content, ad-free, and at a high bitrate early every week.
To learn more about an AppStories+ subscription, visit our Plans page, or read the AppStories+ FAQ.
AppStories Episode 460 - The Great Digital Declutter (31:12)
Trying to Make Sense of the Rumored, Gemini-Powered Siri Overhaul
Quite the scoop from Mark Gurman yesterday on what Apple is planning for major Siri improvements in 2026:
Apple Inc. is planning to pay about $1 billion a year for an ultrapowerful 1.2 trillion parameter artificial intelligence model developed by Alphabet Inc.’s Google that would help run its long-promised overhaul of the Siri voice assistant, according to people with knowledge of the matter.
There is a lot to unpack here and I have a lot of questions.
Exploring AI Browsers
This week, Federico and John look at the hype surrounding AI browsers to see if there’s any there there.
Then, on AppStories+, Federico explains his experiments with lightning-fast alternative AI models in TypingMind.
We deliver AppStories+ to subscribers with bonus content, ad-free, and at a high bitrate early every week.
To learn more about an AppStories+ subscription, visit our Plans page, or read the AppStories+ FAQ.
AppStories Episode 459 - Exploring AI Browsers (38:23)
On MiniMax M2 and LLMs with Interleaved Thinking Steps
In addition to Kimi K2 (which I recently wrote about here) and GLM-4.6 (which will become an option on Cerebras in a few days, at which point I’ll play around with it), one of the more interesting open-source LLM releases out of China lately is MiniMax M2. This MoE model (230B parameters, 10B activated at any given time) claims to reach 90% of the performance of Sonnet 4.5…at 8% of the cost. You can read more about the model here; Simon Willison blogged about it here; you can also test it with MLX on an Apple silicon Mac.
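If you want to poke at M2 locally, a run with the mlx-lm Python package looks roughly like the sketch below. This assumes a quantized community conversion of the model exists on the Hugging Face Hub (the repo name is a placeholder) and that your Mac has enough unified memory to load it.

```python
# Minimal sketch: trying a quantized MiniMax M2 conversion with mlx-lm.
# The repo name below is a placeholder; check the Hugging Face Hub for an
# actual conversion and its memory requirements before running this.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/MiniMax-M2-4bit")  # placeholder repo
prompt = "Explain mixture-of-experts routing in two sentences."
print(generate(model, tokenizer, prompt=prompt, max_tokens=200))
```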
What I find especially interesting about M2 is that it’s among the first open-weight models to support interleaved thinking steps between responses and tool calls, a capability that Anthropic pioneered with Claude Sonnet 4 back in May. Here’s Skyler Miao, head of engineering at MiniMax, in a post on X (unfortunately, most of the open-source AI community is only active there):
As we work more closely with partners, we’ve been surprised [by] how poorly [the] community support[s] interleaved thinking, which is crucial for long, complex agentic tasks. Sonnet 4 introduced it 5 months ago, but adoption is still limited.
We think it’s one of the most important features for agentic models: it makes great use of test-time compute.
The model can reason after each tool call, especially when tool outputs are unexpected. That’s often the hardest part of agentic jobs: you can’t predict what the env returns. With interleaved thinking, the model could reason after [getting] tool outputs, and try to find out a better solution.
We’re now working with partners to enable interleaved thinking in M2 — and hopefully across all capable models.
I’ve been using Claude as my main “production” LLM for the past few months and, as I’ve shared before, I consider the fact that both Sonnet and Haiku think between steps an essential aspect of their agentic nature and integration with third-party apps.
That being said, I have been testing MiniMax M2 on TypingMind in addition to Kimi K2 for the past week and it is, indeed, impressive. I plugged MiniMax M2 into TypingMind using their Anthropic-compatible endpoint; out of the box, the model worked with interleaved thinking and the several plugins I’ve built for myself in TypingMind using Claude. I haven’t used M2 for any vibe-coding tasks yet, but for other research or tool-based queries (like adding notes to Notion and tasks to Todoist), M2 effectively felt like a version of Sonnet not made by Anthropic.
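To give a sense of what an Anthropic-compatible integration like this involves, here’s a minimal sketch with the official Anthropic Python SDK pointed at a different base URL. The endpoint, model name, and tool definition are placeholders, not MiniMax’s or TypingMind’s actual values.

```python
# Minimal sketch: reusing the Anthropic SDK with a third-party,
# Anthropic-compatible endpoint. The base URL, model ID, and the tool
# below are placeholders for illustration only.
import os
import anthropic

client = anthropic.Anthropic(
    api_key=os.environ["MINIMAX_API_KEY"],
    base_url="https://api.minimax.example/anthropic",  # placeholder endpoint
)

tools = [{
    "name": "add_todoist_task",  # hypothetical tool, standing in for a TypingMind plugin
    "description": "Add a task to Todoist.",
    "input_schema": {
        "type": "object",
        "properties": {"content": {"type": "string"}},
        "required": ["content"],
    },
}]

response = client.messages.create(
    model="MiniMax-M2",  # placeholder model identifier
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "Add 'edit the M2 draft' to Todoist."}],
)

# With interleaved thinking, the returned content blocks can alternate
# between "thinking", "text", and "tool_use" types, so the model's
# reasoning shows up between tool calls instead of only before them.
for block in response.content:
    print(block.type)
```

The appeal of this kind of compatibility is that a client built around Claude’s tool-calling format can, in principle, swap in another model by changing only the base URL and model name.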
Right now, MiniMax M2 isn’t hosted on any of the fast inference providers; I’ve accessed it via the official MiniMax API endpoint, whose inference speed isn’t that different from Anthropic’s cloud. The possibility of MiniMax M2 on Cerebras or Groq is extremely fascinating, and I hope it’s in the cards for the near future.
Using URL Auto Redirector to Send Email Links to Superhuman in Chrome
Due to a series of circumstances involving various web apps that didn’t work in Safari, I recently found myself having to use Google Chrome as my default browser on my Mac again. Let me be clear: I don’t like this, but I had to pick Chrome for a variety of reasons. Specifically, I...
AI Experiments: Fast Inference with Groq and Third-Party Tools with Kimi K2 in TypingMind
I’ll talk about this more in depth in Monday’s episode of AppStories (if you’re a Plus subscriber, it’ll be out on Sunday), but I wanted to post a quick note on the site to show off what I’ve been experimenting with this week. I started playing around with TypingMind, a web-based wrapper for all kinds of LLMs (from any provider you want to use), and, in the process, I’ve ended up recreating parts of my Claude setup with third-party apps…at a much, much higher speed. Here, let me show you with a video:
Kimi K2 hosted on Groq on the left.
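For the Groq half of this setup, the request itself is just a standard OpenAI-compatible chat completion pointed at Groq’s endpoint. The sketch below assumes Kimi K2 is exposed under the model ID shown; treat it as a placeholder and verify the exact string against Groq’s model list.

```python
# Minimal sketch: a chat completion against Groq's OpenAI-compatible API.
# The model ID is an assumption; confirm the exact string in Groq's docs.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["GROQ_API_KEY"],
    base_url="https://api.groq.com/openai/v1",
)

response = client.chat.completions.create(
    model="moonshotai/kimi-k2-instruct",  # placeholder model ID
    messages=[{"role": "user", "content": "Outline a quick research plan for AI browsers."}],
)
print(response.choices[0].message.content)
```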
The Potential and Limits of the M5 iPad Pro
This week, Federico and John consider whether Apple made the case for running local LLMs and gaming on the M5 iPad Pro and discuss who should consider buying it.
On AppStories+, we explain how Claude Skills work and why they are one of Anthropic’s most exciting features in a while.
Also available on YouTube here.
We deliver AppStories+ to subscribers with bonus content, ad-free, and at a high bitrate early every week.
To learn more about an AppStories+ subscription, visit our Plans page, or read the AppStories+ FAQ.
AppStories Episode 458 - The Potential and Limits of the M5 iPad Pro (33:32)
Automatically Moving Reminders to Todoist with Shortcuts Automations
Ever since I decided – once again – to use Todoist as my primary task manager (there are many reasons behind this, chief among them being its excellent Ramble feature and how nice it is to use it with Claude Haiku 4.5), I’ve been running into an age-old limitation of iOS: I wish I could use...