Federico Viticci

10468 posts on MacStories since April 2009

Federico is the founder and Editor-in-Chief of MacStories, where he writes about Apple with a focus on apps, developers, iPad, and iOS productivity. He founded MacStories in April 2009 and has been writing about Apple ever since. Federico is also the co-host of AppStories, a weekly podcast exploring the world of apps; Unwind, a fun exploration of media and more; and NPC: Next Portable Console, a show about portable gaming and the handheld revolution.



Max Weinbach on the M5’s Neural Accelerators

In addition to the M5 iPad Pro, which I reviewed earlier today, I also received an M5 MacBook Pro review unit from Apple last week. I really wanted to write a companion piece to my iPad Pro story about MLX and the M5’s Neural Accelerators; sadly, I couldn’t get the latest MLX branch to work on the MacBook Pro either.

However, Max Weinbach at Creative Strategies did, and shared some impressive results with the M5 and its GPU’s Neural Accelerators:

These dedicated neural accelerators in each core lead to that 4x speedup of compute! In compute heavy parts of LLMs, like the pre-fill stage (the processing that happens during the time to first token) this should lead to massive speed-ups in performance! The decode, generating each token, should be accelerated by the memory bandwidth improvements of the SoC.

Now, I would have loved to show this off! Unfortunately, full support for the Neural Accelerators isn’t in MLX yet. There is preliminary support, though! There will be an update later this year with full support, but that doesn’t mean we can’t test now! Unfortunately, I don’t have an M4 Mac on me (traveling at the moment) but what I was able to do was compare M5 performance before and after tensor core optimization! We’re seeing between a 3x and 4x speedup in prefill performance!

Looking at Max’s benchmarks with Qwen3 8B and a ~20,000-token prompt, there is indeed a 3.65x speedup in tokens/sec in the prefill stage – jumping from 158.2 tok/s to a remarkable 578.7 tok/s. This is why I’m very excited about the future of MLX for local inference on M5, and why I’m also looking forward to M5 Pro/M5 Max chipsets in future Mac models.
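
If you want to try a similar before/after measurement yourself once the updated MLX lands, here’s a minimal sketch using the mlx-lm package. The model build and prompt size are illustrative assumptions, not Max’s exact setup; mlx-lm reports prompt (prefill) and generation tokens/sec on its own when run verbosely.

```python
# Minimal prefill-throughput probe with mlx-lm (pip install mlx-lm).
# Model ID and prompt size are illustrative, not Max's exact setup.
import time
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-8B-4bit")  # assumed quantized build

# Build a long prompt (roughly 20k tokens) so prefill dominates the run.
prompt = "Summarize the following text.\n" + ("lorem ipsum " * 10_000)

start = time.perf_counter()
# verbose=True makes mlx-lm report prompt (prefill) and generation tok/s.
generate(model, tokenizer, prompt=prompt, max_tokens=32, verbose=True)
print(f"End-to-end: {time.perf_counter() - start:.2f}s")
```

Run the same script before and after the MLX update and compare the reported prompt tokens/sec to see what the Neural Accelerators are adding.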


M5 iPad Pro Review: An AI and Gaming Upgrade for AI and Games That Aren’t There Yet

The M5 iPad Pro.

How do you review an iPad Pro that’s visually identical to its predecessor and marginally improves upon its performance with a spec bump and some new wireless radios?

Let me try:

I’ve been testing the new M5 iPad Pro since last Thursday. If you’re a happy owner of an M4 iPad Pro that you purchased last year, stay put; there is virtually no reason to sell your old model for an M5-upgraded edition. That’s especially true if your M4 iPad Pro was a high-end configuration with 16 GB of RAM, since upgrading to a high-end M5 iPad Pro model will get you…16 GB of RAM again.

The story is slightly different for users coming from older iPad Pro models and those on lower-end configurations, but barely. Starting this year, the two base-storage models of the iPad Pro are jumping from 8 GB of RAM to 12 GB, which helps make iPadOS 26 multitasking smoother, but it’s not a dramatic improvement, either.

Apple pitches the M5 chip as a “leap” for local AI tasks and gaming, and to an extent, that is true. However, it is mostly true on the Mac, where – for a variety of reasons I’ll cover below – there are more ways to take advantage of what the M5 can offer.

In many ways, the M5 iPad Pro is reminiscent of the M2 iPad Pro, which I reviewed in October 2022: it’s a minor revision to an excellent iPad Pro redesign that launched the previous year, which set a new bar for what we should expect from a modern tablet and hybrid computer – the kind that only Apple makes these days.

For all these reasons, the M5 iPad Pro is not a very exciting iPad Pro to review, and I would only recommend this upgrade to heavy iPad Pro users who don’t already have the (still remarkable) M4 iPad Pro. But there are a couple of narratives worth exploring about the M5 chip on the iPad Pro, which is what I’m going to focus on for this review.



Apple’s Intelligence Quest: Beyond Smart Siri

This week, Federico and John discuss what might be next for Apple Intelligence and how it fits into the broader AI market.

On AppStories+, Federico and John cover the fallout from the Sora app and why AI can’t replace human creativity.


We deliver AppStories+ to subscribers with bonus content, ad-free, and at a high bitrate early every week.

To learn more about an AppStories+ subscription, visit our Plans page, or read the AppStories+ FAQ.


AppStories Episode 457 - Apple’s Intelligence Quest: Beyond Smart Siri (36:30)

This episode is sponsored by:

  • Claude: Get 50% off Claude Pro, including access to Claude Code.



Anthropic Releases Haiku 4.5: Sonnet 4 Performance, Twice as Fast

Earlier today, Anthropic released Haiku 4.5, a new version of their “small and fast” model that matches Sonnet 4 performance from five months ago at a fraction of the cost and twice the speed. From their announcement:

What was recently at the frontier is now cheaper and faster. Five months ago, Claude Sonnet 4 was a state-of-the-art model. Today, Claude Haiku 4.5 gives you similar levels of coding performance but at one-third the cost and more than twice the speed.

And:

Claude Sonnet 4.5, released two weeks ago, remains our frontier model and the best coding model in the world. Claude Haiku 4.5 gives users a new option for when they want near-frontier performance with much greater cost-efficiency. It also opens up new ways of using our models together. For example, Sonnet 4.5 can break down a complex problem into multi-step plans, then orchestrate a team of multiple Haiku 4.5s to complete subtasks in parallel.
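
That last idea – a frontier model planning while cheaper models fan out subtasks in parallel – maps onto the API in a straightforward way. Here’s a minimal sketch of the pattern using Anthropic’s Python SDK; the model aliases and the toy task are my assumptions, not code from Anthropic’s announcement.

```python
# Hedged sketch of the orchestrator pattern described above: Sonnet drafts
# a plan, then several Haiku "workers" run the subtasks concurrently.
# Model aliases are assumptions; check Anthropic's model list for current IDs.
import asyncio
from anthropic import AsyncAnthropic

client = AsyncAnthropic()  # reads ANTHROPIC_API_KEY from the environment

async def ask(model: str, prompt: str) -> str:
    msg = await client.messages.create(
        model=model, max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

async def main() -> None:
    plan = await ask("claude-sonnet-4-5",
                     "Break 'summarize this changelog' into 3 independent "
                     "subtasks, one per line.")
    subtasks = [line for line in plan.splitlines() if line.strip()][:3]
    # Fan the subtasks out to Haiku workers in parallel.
    results = await asyncio.gather(*(ask("claude-haiku-4-5", t) for t in subtasks))
    for task, result in zip(subtasks, results):
        print(task, "->", result[:80])

asyncio.run(main())
```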

I’m not a programmer, so I’m not particularly interested in benchmarks for coding tasks and Claude Code integrations. However, as I explained in this Plus segment of AppStories for members, I’m very keen to play around with fast models that considerably reduce inference times to allow for quicker back and forth in conversations. As I detailed on AppStories, I’ve had a solid experience with Cerebras and Bolt for Mac to generate responses at over 1,000 tokens per second.

I have a personal test that I like to try with all modern LLMs that support MCP: how quickly they can append the word “Test” to my daily note in Notion. Based on a few experiments I ran earlier today, Haiku 4.5 seems to be the new state of the art for both following instructions and speed in this simple test.

I ran my tests with LLMs that support MCP-based connectors: Claude and Mistral. Both were given system-level instructions on how to access my daily notes: Claude had the details in its profile personalization screen, while in Mistral, I created a dedicated agent with Notion instructions. So, all things being equal, here’s how long it took three different, non-thinking models to run my command (a rough timing sketch follows the list):

  • Mistral: 37 seconds
  • Claude Sonnet 4.5: 47 seconds
  • Claude Haiku 4.5: 18 seconds
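
For anyone who wants to run a comparable measurement, here’s the rough sketch referenced above. Note that my actual tests ran inside the Claude and Mistral apps with their MCP connectors configured; an API-only harness like this won’t reproduce my numbers, the model aliases are assumptions, and the prompt is just a stand-in.

```python
# Rough latency harness in the spirit of the test above. This times the
# Anthropic API directly; it does NOT go through MCP connectors, so the
# absolute numbers won't match app-based tests. Model aliases are assumed.
import time
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
PROMPT = "Append the word 'Test' to my daily note."  # stand-in instruction

for model in ("claude-sonnet-4-5", "claude-haiku-4-5"):
    start = time.perf_counter()
    client.messages.create(
        model=model,
        max_tokens=256,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"{model}: {time.perf_counter() - start:.1f}s")
```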

That is a drastic latency reduction compared to Sonnet 4.5, and it’s especially impressive when you consider that Mistral uses Flash Answers, fast inference powered by Cerebras. As I shared on AppStories, this seems to confirm that it’s possible to have both speed and reliability for agentic tool-calling without using a large model.

I ran other tests with Haiku 4.5 and the Todoist MCP and, similarly, I was able to mark tasks as completed and reschedule them in seconds, with none of the latency I previously observed in Sonnet 4.5 and Opus 4.1. As it stands now, if you’re interested in using LLMs with apps and connectors without having to wait around too long for responses and actions, Haiku 4.5 is the model to try.


LLMs As Conduits for Data Portability Between Apps

One of the unsung benefits of modern LLMs – especially those with MCP support or proprietary app integrations – is their inherent ability to facilitate data transfer between apps and services that use different data formats.

This is something I’ve been pondering for the past few months, and the latest episode of Cortex – where Myke wished it was possible to move between task managers the way you can with email clients – was the push I needed to write something up. I’ve personally tried multiple versions of this concept with different LLMs, and the end result was always the same: I didn’t have to write a single line of code to create import/export functionality that the two services I wanted to use didn’t support out of the box.
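
To make the idea concrete, here’s a hedged sketch of what one of these zero-code conversions can look like. Everything in it is illustrative: the export snippet, the simplified CSV columns, and the model alias are made up for the example, not taken from any app’s real schema.

```python
# Illustrative sketch: using an LLM as a one-off format converter between
# two task managers. The JSON export, CSV columns, and model alias are
# hypothetical; the point is that the model writes the mapping for you.
from anthropic import Anthropic

client = Anthropic()
export = '[{"title": "File taxes", "when": "2025-04-10", "notes": "IRS"}]'

msg = client.messages.create(
    model="claude-haiku-4-5",  # assumed alias
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Convert this task export into a CSV with the columns "
                   f"TYPE,CONTENT,DATE. Output only the CSV:\n{export}",
    }],
)
print(msg.content[0].text)  # paste into the destination app's importer
```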



What Else Do We Want from Apple in 2025?

This week, with rumors of more products in the pipeline, Federico and John share what they want most from Apple before the end of the year.

On AppStories+, Federico experiments with lightning-fast inference and iteration using Cerebras for scripting.




AppStories Episode 456 - What Else Do We Want from Apple in 2025? (31:07)




Testing the Limits of the New Spotify Integration in ChatGPT

I wrote about the potential of apps in ChatGPT earlier this week on MacStories. Today, I want to share more details on how one of the first apps that support this new integration – Spotify – works inside ChatGPT, and how I’ve been using it to “vibe-playlist” my way into…well, AI-generated playlists based on real...


Reports of Slide Over’s Death Were Greatly Exaggerated

Well, that didn’t take long.

In yesterday’s second developer beta of iPadOS 26.1, Apple restored the Slide Over functionality that was removed with the debut of the new windowing system in iPadOS 26.0 last month. Well…they sort of restored Slide Over, at least.

In my review of iPadOS 26, I wrote:

So in iPadOS 26, Apple decided to scrap Split View and Slide Over altogether, leaving users the choice between full-screen apps, a revamped Stage Manager, and the brand new windowed mode. At some level, I get it. Apple probably thinks that the functionality of Split View can be replicated with new windowing controls (as we’ll see, there are actual tiling options to split the screen into halves) and that most people who were using these two modes would be better served by the new multitasking system the company designed for iPadOS 26.

At the same time, though, I can’t help but feel that the removal of Slide Over is a misstep on Apple’s part. There’s really no great way to replicate the versatility of Slide Over with the iPad’s new windowing. Making a bunch of windows extra small and stacked on the side of the screen would require a lot of manual resizing and repositioning; at that point, you’re just using a worse version of classic windowing. I don’t know what Apple’s solution could have been here – particularly because, like I said above, the iPad did end up with too many multitasking systems to pick from. But the Mac also has several multitasking features, and people love the Mac, so maybe that’s fine, too?

Slide Over will be missed, but perhaps there’ll be a way for Apple to make it come back.

The unceremonious removal of Slide Over from iPadOS 26 was the most common comment I received from MacStories readers over the past month. I also saw a lot of posts on different subreddits from people who claimed they weren’t updating to iPadOS 26 so they wouldn’t lose Slide Over functionality. Perhaps Apple underestimated how much people loved and used Slide Over, or maybe – like I argued – they thought that multitasking and window resizing could replace it. In any case, Slide Over is back, but it’s slightly different from what it used to be.

The bad news first: the new Slide Over doesn’t support multiple apps in the Slide Over stack with their own dedicated app switcher. (This option was introduced in iPadOS 13.) So far, the new Slide Over is single-window only, and it works alongside iPadOS windowing to put one specific window in Slide Over mode. Any window can be moved into Slide Over, but only one Slide Over entity can exist at a time. From this perspective, Slide Over is different from full-screen: that mode also works alongside windowing, but multiple windows can be in their full-screen “spaces” at the same time.

On one hand, I hope that Apple can find a way to restore Slide Over’s former support for multiple apps. On the other, I suspect the “good news” part is precisely what will prevent the company from doing so. What I like about the new Slide Over implementation is that the window can be resized: you’re no longer constrained to a “tall iPhone” layout, which is great. I like having the option to stretch out Music (which I’ve always used in Slide Over on iPad), and I also appreciate the glassy border displayed around the Slide Over window to easily differentiate it from regular windows. However, now that the Slide Over window is resizable, also supporting multiple apps in Slide Over might become too confusing or complex to manage. Personally, having tested it, I’d take a resizable single Slide Over window over multiple non-resizable apps in Slide Over.

Between improvements to local capture and even more keyboard shortcuts, it’s great (and reassuring) to see Apple iterate on iPadOS so quickly after last month’s major update. Remember when we used to wait two years for minor changes?
