Federico Viticci

9620 posts on MacStories since April 2009

Federico is the founder and Editor-in-Chief of MacStories, where he writes about Apple with a focus on apps, developers, iPad, and iOS productivity. He founded MacStories in April 2009 and has been writing about Apple ever since. Federico is also the co-host of AppStories, a weekly podcast exploring the world of apps; Unwind, a fun exploration of media and more; and NPC: Next Portable Console, a show about portable gaming and the handheld revolution.


Trying to Make Sense of the Rumored, Gemini-Powered Siri Overhaul

Quite the scoop from Mark Gurman yesterday on what Apple is planning for major Siri improvements in 2026:

Apple Inc. is planning to pay about $1 billion a year for an ultrapowerful 1.2 trillion parameter artificial intelligence model developed by Alphabet Inc.’s Google that would help run its long-promised overhaul of the Siri voice assistant, according to people with knowledge of the matter.

There is a lot to unpack here, and I have plenty of questions.

Read more


On MiniMax M2 and LLMs with Interleaved Thinking Steps

MiniMax M2 with interleaved thinking steps and tools in TypingMind.

In addition to Kimi K2 (which I recently wrote about here) and GLM-4.6 (which will become an option on Cerebras in a few days, at which point I’ll play around with it), one of the more interesting open-source LLM releases out of China lately is MiniMax M2. This mixture-of-experts (MoE) model (230B parameters, 10B activated at any given time) claims to reach 90% of the performance of Sonnet 4.5…at 8% of the cost. You can read more about the model here; Simon Willison blogged about it here; you can also test it with MLX on an Apple silicon Mac.

What I find especially interesting about M2 is that it’s the first open-source model to support interleaved thinking steps between responses and tool calls, something Anthropic pioneered with Claude Sonnet 4 back in May. Here’s Skyler Miao, head of engineering at MiniMax, in a post on X (unfortunately, most of the open-source AI community is only active there):

As we work more closely with partners, we’ve been surprised by how poorly the community supports interleaved thinking, which is crucial for long, complex agentic tasks. Sonnet 4 introduced it 5 months ago, but adoption is still limited.

We think it’s one of the most important features for agentic models: it makes great use of test-time compute.

The model can reason after each tool call, especially when tool outputs are unexpected. That’s often the hardest part of agentic jobs: you can’t predict what the env returns. With interleaved thinking, the model can reason after getting tool outputs and try to find a better solution.

We’re now working with partners to enable interleaved thinking in M2 — and hopefully across all capable models.

I’ve been using Claude as my main “production” LLM for the past few months and, as I’ve shared before, I consider the fact that both Sonnet and Haiku think between steps an essential aspect of their agentic nature and integration with third-party apps.

That being said, I have been testing MiniMax M2 on TypingMind in addition to Kimi K2 for the past week and it is, indeed, impressive. I plugged MiniMax M2 into TypingMind using their Anthropic-compatible endpoint; out of the box, the model worked with interleaved thinking and the several plugins I’ve built for myself in TypingMind using Claude. I haven’t used M2 for any vibe-coding tasks yet, but for other research or tool-based queries (like adding notes to Notion and tasks to Todoist), M2 effectively felt like a version of Sonnet not made by Anthropic.
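
If you’re curious what that setup looks like in practice, here’s a minimal sketch of pointing the Anthropic Python SDK at an Anthropic-compatible endpoint. The base URL, model ID, and tool definition are placeholders (check MiniMax’s API documentation for the real values), and whether the thinking parameters carry over to M2 exactly as they do for Claude is an assumption on my part:

```python
from anthropic import Anthropic

# Placeholder endpoint and key; consult MiniMax's docs for the actual values.
client = Anthropic(
    base_url="https://api.minimax.example/anthropic",
    api_key="YOUR_MINIMAX_API_KEY",
)

# A hypothetical tool, standing in for one of my TypingMind plugins.
tools = [{
    "name": "append_daily_note",
    "description": "Append a line of text to today's daily note in Notion.",
    "input_schema": {
        "type": "object",
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    },
}]

response = client.messages.create(
    model="MiniMax-M2",  # assumed model ID
    max_tokens=4096,
    thinking={"type": "enabled", "budget_tokens": 2048},  # Anthropic-style extended thinking
    tools=tools,
    messages=[{"role": "user", "content": "Add 'Test' to my daily note."}],
)

# With interleaved thinking, the response alternates thinking, tool_use, and text blocks.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking)
    elif block.type == "tool_use":
        print("[tool call]", block.name, block.input)
    elif block.type == "text":
        print(block.text)
```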

Right now, MiniMax M2 isn’t hosted on any of the fast inference providers; I’ve accessed it via the official MiniMax API endpoint, whose inference speed isn’t that different from Anthropic’s cloud. The possibility of MiniMax M2 on Cerebras or Groq is extremely fascinating, and I hope it’s in the cards for the near future.


AI Experiments: Fast Inference with Groq and Third-Party Tools with Kimi K2 in TypingMind

Kimi K2, hosted on Groq, running in TypingMind with a custom plugin I made.

I’ll talk about this more in depth in Monday’s episode of AppStories (if you’re a Plus subscriber, it’ll be out on Sunday), but I wanted to post a quick note on the site to show off what I’ve been experimenting with this week. I started playing around with TypingMind, a web-based wrapper for all kinds of LLMs (from any provider you want to use), and, in the process, I’ve ended up recreating parts of my Claude setup with third-party apps…at a much, much higher speed. Here, let me show you with a video:

Kimi K2 hosted on Groq on the left.
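
If you’d rather skip TypingMind and hit Groq directly, the same idea works with any OpenAI-compatible client. Here’s a minimal sketch using the OpenAI Python SDK against Groq’s OpenAI-compatible endpoint; the Kimi K2 model ID and the tool definition are assumptions, standing in for my custom plugin:

```python
from openai import OpenAI

# Groq exposes an OpenAI-compatible API at this base URL.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key="YOUR_GROQ_API_KEY",
)

# Hypothetical tool, roughly equivalent to a TypingMind plugin.
tools = [{
    "type": "function",
    "function": {
        "name": "add_todoist_task",
        "description": "Create a task in Todoist.",
        "parameters": {
            "type": "object",
            "properties": {
                "content": {"type": "string"},
                "due": {"type": "string"},
            },
            "required": ["content"],
        },
    },
}]

response = client.chat.completions.create(
    model="moonshotai/kimi-k2-instruct",  # assumed model ID; check Groq's model list
    messages=[{"role": "user", "content": "Remind me to prep the AppStories notes tomorrow."}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    for call in message.tool_calls:
        print("tool:", call.function.name, call.function.arguments)
else:
    print(message.content)
```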

Read more


Max Weinbach on the M5’s Neural Accelerators

In addition to the M5 iPad Pro, which I reviewed earlier today, I also received an M5 MacBook Pro review unit from Apple last week. I really wanted to write a companion piece to my iPad Pro story about MLX and the M5’s Neural Accelerators; sadly, I couldn’t get the latest MLX branch to work on the MacBook Pro either.

However, Max Weinbach at Creative Strategies did, and shared some impressive results with the M5 and its GPU’s Neural Accelerators:

These dedicated neural accelerators in each core lead to that 4x speedup of compute! In compute heavy parts of LLMs, like the pre-fill stage (the processing that happens during the time to first token) this should lead to massive speed-ups in performance! The decode, generating each token, should be accelerated by the memory bandwidth improvements of the SoC.

Now, I would have loved to show this off! Unfortunately, full support for the Neural Accelerators isn’t in MLX yet. There is preliminary support, though! There will be an update later this year with full support, but that doesn’t mean we can’t test now! Unfortunately, I don’t have an M4 Mac on me (traveling at the moment) but what I was able to do was compare M5 performance before and after tensor core optimization! We’re seeing between a 3x and 4x speedup in prefill performance!

Looking at Max’s benchmarks with Qwen3 8B and a ~20,000-token prompt, there is indeed a 3.65x speedup in tokens/sec in the prefill stage – jumping from 158.2 tok/s to a remarkable 578.7 tok/s. This is why I’m very excited about the future of MLX for local inference on M5, and why I’m also looking forward to M5 Pro/M5 Max chipsets in future Mac models.
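
If you want to run a similar before/after comparison yourself once full Neural Accelerator support ships in MLX, the mlx-lm package makes prefill speed easy to eyeball: feed it a long prompt and read the tokens-per-second it reports. A rough sketch (the Qwen3 8B repo name is an assumption; any MLX-converted build should work):

```python
# pip install mlx-lm  (Apple silicon only)
from mlx_lm import load, generate

# Assumed community conversion of Qwen3 8B; swap in any MLX model repo.
model, tokenizer = load("mlx-community/Qwen3-8B-4bit")

# A long prompt exercises the prefill stage, which is where the M5's
# Neural Accelerators are reported to make the biggest difference.
long_prompt = "Summarize the following text.\n\n" + ("MacStories loves the iPad. " * 2500)

# verbose=True prints prompt (prefill) and generation tokens-per-second.
generate(model, tokenizer, prompt=long_prompt, max_tokens=128, verbose=True)
```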

Permalink

M5 iPad Pro Review: An AI and Gaming Upgrade for AI and Games That Aren’t There Yet

The M5 iPad Pro.

How do you review an iPad Pro that’s visually identical to its predecessor and marginally improves upon its performance with a spec bump and some new wireless radios?

Let me try:

I’ve been testing the new M5 iPad Pro since last Thursday. If you’re a happy owner of an M4 iPad Pro that you purchased last year, hold onto it; there is virtually no reason for you to sell your old model and get an M5-upgraded edition. That’s especially true if you purchased a high-end configuration of the M4 iPad Pro last year with 16 GB of RAM, since upgrading to another high-end M5 iPad Pro model will get you…16 GB of RAM again.

The story is slightly different for users coming from older iPad Pro models and those on lower-end configurations, but barely. Starting this year, the two base-storage models of the iPad Pro are jumping from 8 GB of RAM to 12 GB, which helps make iPadOS 26 multitasking smoother, but it’s not a dramatic improvement, either.

Apple pitches the M5 chip as a “leap” for local AI tasks and gaming, and to an extent, that is true. However, it is mostly true on the Mac, where – for a variety of reasons I’ll cover below – there are more ways to take advantage of what the M5 can offer.

In many ways, the M5 iPad Pro is reminiscent of the M2 iPad Pro, which I reviewed in October 2022: it’s a minor revision to an excellent iPad Pro redesign that launched the previous year, which set a new bar for what we should expect from a modern tablet and hybrid computer – the kind that only Apple makes these days.

For all these reasons, the M5 iPad Pro is not a very exciting iPad Pro to review, and I would only recommend this upgrade to heavy iPad Pro users who don’t already have the (still remarkable) M4 iPad Pro. But there are a couple of narratives worth exploring about the M5 chip on the iPad Pro, which is what I’m going to focus on for this review.

Read more


Anthropic Releases Haiku 4.5: Sonnet 4 Performance, Twice as Fast

Earlier today, Anthropic released Haiku 4.5, a new version of their “small and fast” model that matches Sonnet 4 performance from five months ago at a fraction of the cost and twice the speed. From their announcement:

What was recently at the frontier is now cheaper and faster. Five months ago, Claude Sonnet 4 was a state-of-the-art model. Today, Claude Haiku 4.5 gives you similar levels of coding performance but at one-third the cost and more than twice the speed.

And:

Claude Sonnet 4.5, released two weeks ago, remains our frontier model and the best coding model in the world. Claude Haiku 4.5 gives users a new option for when they want near-frontier performance with much greater cost-efficiency. It also opens up new ways of using our models together. For example, Sonnet 4.5 can break down a complex problem into multi-step plans, then orchestrate a team of multiple Haiku 4.5s to complete subtasks in parallel.

I’m not a programmer, so I’m not particularly interested in benchmarks for coding tasks and Claude Code integrations. However, as I explained in this Plus segment of AppStories for members, I’m very keen to play around with fast models that considerably reduce inference times and allow for a quicker back-and-forth in conversations. As I detailed on AppStories, I’ve had a solid experience using Cerebras and Bolt for Mac to generate responses at over 1,000 tokens per second.

I have a personal test that I like to try with all modern LLMs that support MCP: how quickly they can append the word “Test” to my daily note in Notion. Based on a few experiments I ran earlier today, Haiku 4.5 seems to be the new state of the art for both following instructions and speed in this simple test.

I ran my tests with LLMs that support MCP-based connectors: Claude and Mistral. Both were given system-level instructions on how to access my daily notes: Claude had the details in its profile personalization screen; in Mistral, I created a dedicated agent with Notion instructions. So, all things being equal, here’s how long it took three different, non-thinking models to run my command:

  • Mistral: 37 seconds
  • Claude Sonnet 4.5: 47 seconds
  • Claude Haiku 4.5: 18 seconds

That is a drastic latency reduction compared to Sonnet 4.5, and it’s especially impressive when you consider that Mistral is using Flash Answers, which is fast inference powered by Cerebras. As I shared on AppStories, it seems to confirm that it’s possible to have speed and reliability for agentic tool-calling without having to use a large model.
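
My test runs through each app’s MCP connector, so the numbers above include the Notion round trip, but you can get a feel for the raw latency gap between the two Claude models with a few lines of the Anthropic SDK. A simplified sketch (the model aliases are assumptions; check Anthropic’s current model list):

```python
import time
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Stand-in prompt; the real test goes through the Notion MCP connector.
PROMPT = "Append the word 'Test' to my daily note."

def time_model(model_id: str) -> float:
    start = time.perf_counter()
    client.messages.create(
        model=model_id,
        max_tokens=512,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return time.perf_counter() - start

# Assumed model aliases for Sonnet 4.5 and Haiku 4.5.
for model_id in ("claude-sonnet-4-5", "claude-haiku-4-5"):
    print(model_id, f"{time_model(model_id):.1f}s")
```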

I ran other tests with Haiku 4.5 and the Todoist MCP and, similarly, I was able to mark tasks as completed and reschedule them in seconds, with none of the latency I previously observed in Sonnet 4.5 and Opus 4.1. As it stands now, if you’re interested in using LLMs with apps and connectors without having to wait around too long for responses and actions, Haiku 4.5 is the model to try.


LLMs As Conduits for Data Portability Between Apps

One of the unsung benefits of modern LLMs – especially those with MCP support or proprietary app integrations – is their inherent ability to facilitate data transfer between apps and services that use different data formats.

This is something I’ve been pondering for the past few months, and the latest episode of Cortex – where Myke wished it were possible to move between task managers like you can with email clients – was the push I needed to write something up. I’ve personally tried multiple versions of this concept with different LLMs, and the end result was always the same: I didn’t have to write a single line of code to create import/export functionality that two services I wanted to use didn’t support out of the box.

Read more


Reports of Slide Over’s Death Were Greatly Exaggerated

Well, that didn’t take long.

In yesterday’s second developer beta of iPadOS 26.1, Apple restored the Slide Over functionality that was removed with the debut of the new windowing system in iPadOS 26.0 last month. Well…they sort of restored Slide Over, at least.

In my review of iPadOS 26, I wrote:

So in iPadOS 26, Apple decided to scrap Split View and Slide Over altogether, leaving users the choice between full-screen apps, a revamped Stage Manager, and the brand new windowed mode. At some level, I get it. Apple probably thinks that the functionality of Split View can be replicated with new windowing controls (as we’ll see, there are actual tiling options to split the screen into halves) and that most people who were using these two modes would be better served by the new multitasking system the company designed for iPadOS 26.

At the same time, though, I can’t help but feel that the removal of Slide Over is a misstep on Apple’s part. There’s really no great way to replicate the versatility of Slide Over with the iPad’s new windowing. Making a bunch of windows extra small and stacked on the side of the screen would require a lot of manual resizing and repositioning; at that point, you’re just using a worse version of classic windowing. I don’t know what Apple’s solution could have been here – particularly because, like I said above, the iPad did end up with too many multitasking systems to pick from. But the Mac also has several multitasking features, and people love the Mac, so maybe that’s fine, too?

Slide Over will be missed, but perhaps there’ll be a way for Apple to make it come back.

The unceremonious removal of Slide Over from iPadOS 26 was the most common comment I received from MacStories readers over the past month. I also saw a lot of posts on different subreddits from people who claimed they weren’t updating to iPadOS 26 so they wouldn’t lose Slide Over functionality. Perhaps Apple underestimated how much people loved and used Slide Over, or maybe – like I argued – they thought that multitasking and window resizing could replace it. In any case, Slide Over is back, but it’s slightly different from what it used to be.

The bad news first: the new Slide Over doesn’t support multiple apps in the Slide Over stack with their own dedicated app switcher. (This option was introduced in iPadOS 13.) So far, the new Slide Over is single-window only, and it works alongside iPadOS windowing to put one specific window in Slide Over mode. Any window can be moved into Slide Over, but only one Slide Over entity can exist at a time. From this perspective, Slide Over is different from full-screen: that mode also works alongside windowing, but multiple windows can be in their full-screen “spaces” at the same time.

On one hand, I hope that Apple can find a way to restore Slide Over’s former support for multiple apps. On the other, I feel like the “good news” part is exactly what will prevent the company from doing so. What I like about the new Slide Over implementation is that the window can be resized: you’re no longer constrained to using Slide Over in a “tall iPhone” layout, which is great. I like having the option to stretch out Music (which I’ve always used in Slide Over on iPad), and I also appreciate the glassy border displayed around the Slide Over window to easily differentiate it from regular windows. I feel, however, that since you can now resize the Slide Over window, also supporting multiple apps in Slide Over might get too confusing or complex to manage. Personally, now that I’ve tested it, I’d take a resizable single Slide Over window over multiple non-resizable apps in Slide Over.

Between improvements to local capture and even more keyboard shortcuts, it’s great (and reassuring) to see Apple iterate on iPadOS so quickly after last month’s major update. Remember when we used to wait two years for minor changes?

Permalink

Apps in ChatGPT

OpenAI announced a lot of developer-related features at yesterday’s DevDay event, and as you can imagine, the most interesting one for me is the introduction of apps in ChatGPT. From the OpenAI blog:

Today we’re introducing a new generation of apps you can chat with, right inside ChatGPT. Developers can start building them today with the new Apps SDK, available in preview.

Apps in ChatGPT fit naturally into conversation. You can discover them when ChatGPT suggests one at the right time, or by calling them by name. Apps respond to natural language and include interactive interfaces you can use right in the chat.

And:

Developers can start building and testing apps today with the new Apps SDK preview, which we’re releasing as an open standard built on the Model Context Protocol (MCP). To start building, visit our documentation for guidelines and example apps, and then test your apps using Developer Mode in ChatGPT.

Also:

Later this year, we’ll launch apps to ChatGPT Business, Enterprise and Edu. We’ll also open submissions so developers can publish their apps in ChatGPT, and launch a dedicated directory where users can browse and search for them. Apps that meet the standards provided in our developer guidelines will be eligible to be listed, and those that meet higher design and functionality standards may be featured more prominently—both in the directory and in conversations.

Looks like we got the timing right with this week’s episode of AppStories about demystifying MCP and what it means to connect apps to LLMs. In the episode, I expressed my optimism for the potential of MCP and the idea of augmenting your favorite apps with the capabilities of LLMs. However, I also lamented how fragmented the MCP ecosystem is and how confusing it can be for users to wrap their heads around MCP “servers” and other obscure, developer-adjacent terminology.
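
For what it’s worth, the “server” part is less intimidating than the name suggests. With the official MCP Python SDK, a server is basically a script that exposes a few typed functions; here’s a minimal, hypothetical sketch (the tool and its behavior are made up for illustration):

```python
# pip install mcp  (the official Model Context Protocol Python SDK)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("daily-notes")

@mcp.tool()
def append_to_daily_note(text: str) -> str:
    """Append a line of text to today's daily note."""
    # A real server would call the notes app's API here.
    return f"Appended: {text}"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, which is what most MCP clients expect
```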

In classic OpenAI fashion, their announcement of apps in ChatGPT aims to (almost) completely abstract the complexity of MCP from users. In one announcement, OpenAI addressed my two top complaints about MCP that I shared on AppStories: they revealed their own upcoming ecosystem of apps, and they’re going to make it simple to use.

Does that ring a bell? It’s impossible to tell right now if OpenAI’s bet to become a platform will be successful, but early signs are encouraging, and the company has the leverage of 800 million active users to convince third-party developers to jump on board. Just this morning, I asked ChatGPT to put together a custom Spotify playlist with bands that had a similar vibe to Moving Mountains in their Pneuma era, and after thinking for a few minutes, it worked. I did it from the ChatGPT web app and didn’t have to involve the App Store at all.

If I were Apple, I’d start growing increasingly concerned at the prospect of another company controlling the interactions between users and their favorite apps. As I argued on AppStories, my hope is that the rumored MCP framework allegedly being worked on by Apple is exactly that – a bridge (powered by App Intents) between App Store apps and LLMs that can serve as a stopgap until Apple gets their LLM act together. But that’s a story for another time.

Permalink