Federico Viticci

9609 posts on MacStories since April 2009

Federico is the founder and Editor-in-Chief of MacStories, where he writes about Apple with a focus on apps, developers, iPad, and iOS productivity. He founded MacStories in April 2009 and has been writing about Apple ever since. Federico is also the co-host of AppStories, a weekly podcast exploring the world of apps; Unwind, a fun exploration of media and more; and NPC: Next Portable Console, a show about portable gaming and the handheld revolution.


Oasis Just Glitched the Algorithm

Beautiful, poignant story by Steven Zeitchik, writing for The Hollywood Reporter, on the magic of going to an Oasis concert in 2025.

It would have been weird back in Oasis’ heyday to talk about a big stadium-rock show being uniquely “human” — what the hell else could it be? But after decades of music chosen by algorithm, of the spirit of listen-together radio fracturing into a million personalized streams, of social media and the politics that fuel it ordering acts into groups of the allowed and prohibited, of autotuning and overdubbing washing out raw instruments, of our current cultural era’s spell of phone-zombification, of the communal spaces of record stores disbanded as a mainstream notion of gathering, well, it’s not such a given anymore. Thousands of people convening under the sky to hear a few talented fellow humans break their backs with a bunch of instruments, that oldest of entertainment constructs, now also feels like a radical one.

And:

The Gallaghers seemed to be coming just in time, to remind us of what it was like before — to issue a gentle caveat, by the power of positive suggestion, that we should think twice before plunging further into the abyss. To warn that human-made art is fragile and too easily undone — in fact in their case for 16 years it was undone — by its embodiments acting too much like petty, well, humans. And the true feat, the band was saying triumphantly Sunday, is that there is a way to hold it together.

I make no secret of the fact that Oasis are my favorite band of all time – the band that, very simply, defined my teenage years. They’re responsible for some of my most cherished memories of enjoying music together with my friends.

I was lucky enough to see Oasis in London this summer. To be honest with you, we didn’t have great seats. But what I’ll remember from that night won’t necessarily be the view (eh) or the audio quality at Wembley (surprisingly great). I’ll remember the sheer joy of shouting “Live Forever” with Silvia next to me. I’ll remember doing the Poznan with Jeremy and two guys next to us who just went for it because Liam asked everyone to hug the stranger next to them. I’ll remember the thrill of witnessing Oasis walk back on stage after 16 years with 80,000 other people feeling the same thing as me, right there and then.

This story by Zeitchik hit me not only because it’s Oasis, but because I’ve always believed in the power of music recommendations that come from other humans – not algorithms – people who want you to enjoy something too, and to enjoy it together.

If only for two hours one summer night in a stadium, there’s beauty to losing your voice to music not delivered by an algorithm.

Permalink

Claude’s Chat History and App Integrations as a Form of Lock-In

Earlier today, Anthropic announced that, similar to ChatGPT, Claude will be able to search and reference your previous chats with it. From their support document:

You can now prompt Claude to search through your previous conversations to find and reference relevant information in new chats. This feature helps you continue discussions seamlessly and retrieve context from past interactions without re-explaining everything.

If you’re wondering what Claude can actually search:

You can prompt Claude to search conversations within these boundaries:

  • All chats outside of projects.
  • Individual project conversations (searches are limited to within each specific project).

Conversation history is a powerful feature of modern LLMs, and although Anthropic hasn’t announced personalized context based on memory yet (a feature that not everybody likes), it seems like that’s the next shoe to drop. Chat search, memory with personalized context, larger context windows, and performance are the four key areas where I’ve preferred ChatGPT; Anthropic just addressed one of them, and a second may be launching soon.

As I’ve shared on Mastodon, despite the power and speed of GPT-5, I find myself gravitating more and more toward Claude (and specifically Opus 4.1) because of MCP and connectors. Claude works with the apps I already use and lets me easily turn conversations into actions performed in Notion, Todoist, Spotify, or other apps with an API that can talk to Claude. This is changing my workflow in two notable ways: I’m only using ChatGPT for “regular” web search queries (mostly via the Safari extension) and less for work, because it doesn’t match Claude’s extensive support for MCP tools; and I’m prioritizing web apps with well-supported web APIs that work with LLMs over local apps that don’t (Spotify vs. Apple Music, Todoist vs. Reminders, Notion vs. Notes, etc.). Chat search (and, I hope, personalized context based on memory soon) further adds to this change in the apps I use.

Let me offer an example. I like combining Claude’s web search abilities with Zapier tools that integrate with Spotify to have Claude create playlists for me based on album reviews or music roundups. A few weeks ago, I started the process of converting this Chorus article into a playlist, but I never finished the task because I kept running into Zapier rate limits. This evening, I asked Claude if we had ever worked on any playlists; it found the old chats and pointed out that one of them still needed to be completed. From there, it got to work again, picked up where it left off in Chorus’ article, and finished filling the playlist with the most popular songs that best represent the albums picked by Jason Tate and team. Not only could Claude find the chat, but it also got back to work with tools based on the state of the old conversation.

Resuming a chat that was about creating a Spotify playlist (right). Sadly, Apple Music doesn’t integrate with LLMs like this.

Even more impressively, after Claude finished the playlist from the old chat, I asked it to take all the playlists created so far and append their links to my daily note in Notion; that worked, too. All of this from my phone, in a conversation that started as a search test for old chats and later grew into an agentic workflow that called tools for web search, Spotify, and Notion.
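To make the tool side of this a little less abstract, here’s a minimal sketch of the kind of Spotify Web API requests a connector like Zapier’s presumably issues on Claude’s behalf when it “creates a playlist”: one call to create the playlist on your account, and one to append tracks to it. The OAuth token, user ID, playlist name, and track URI below are all hypothetical placeholders, not anything Claude or Zapier actually exposes.

```swift
import Foundation

// Hypothetical values: a user-authorized token with the playlist-modify scope,
// a Spotify user ID, and a track URI pulled from the article being converted.
let token = "YOUR_SPOTIFY_OAUTH_TOKEN"
let userID = "your_user_id"
let trackURIs = ["spotify:track:EXAMPLE_TRACK_ID"]

// Small helper that POSTs a JSON body to the Spotify Web API and decodes the JSON response.
func post(_ url: URL, json: [String: Any]) async throws -> [String: Any] {
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.addValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
    request.addValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONSerialization.data(withJSONObject: json)
    let (data, _) = try await URLSession.shared.data(for: request)
    return (try JSONSerialization.jsonObject(with: data) as? [String: Any]) ?? [:]
}

// 1. Create the playlist on the user's account.
let playlist = try await post(
    URL(string: "https://api.spotify.com/v1/users/\(userID)/playlists")!,
    json: ["name": "Chorus Essentials", "public": false]
)

// 2. Add the tracks to the newly created playlist.
if let playlistID = playlist["id"] as? String {
    _ = try await post(
        URL(string: "https://api.spotify.com/v1/playlists/\(playlistID)/tracks")!,
        json: ["uris": trackURIs]
    )
}
```

That two-request shape is exactly the kind of thing an LLM with tools can chain together on its own – which is why apps with a clean web API fit this workflow so much better than apps without one.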

I find these use cases very interesting, and they’re the reason I struggle to incorporate ChatGPT into my everyday workflow beyond web searches. They’re also why I hesitate to use Apple apps right now, and I’m not sure Liquid Glass will be enough to win me back over.

Permalink

Thoughts on iPadOS 26: Hello, It’s Good to Be Back

iPadOS 26.

Apple released the first public betas of iOS and iPadOS 26 last week, and I’m going to cut to the chase with this story: although I’m still wrapping my head around Liquid Glass and trying to understand where this new design language will land, iPadOS 26 has fundamentally revolutionized my workflow in just a little over a month. While talking to Craig Federighi at WWDC, I did get the sense that Apple was approaching the iPad platform from a different – perhaps more humble – perspective, with a newfound willingness to listen to power users and find a better balance between the simplicity of the iPad and its flexibility. Actually using iPadOS 26, however, has far exceeded my expectations, pushing me to completely rethink my desk setup (again) and the apps I use around the iPad Pro and iPadOS 26.

Conversely, I’ve been struggling to understand iOS 26 and the role of Liquid Glass. I’ve documented my issues with Apple’s new design with a variety of examples recently, but the truth is that at this point in the beta cycle, I don’t know what to write about Liquid Glass yet. For this reason, despite my many attempts to write this story over the past few weeks, I’ve decided to take a different approach.

Today, I only feel comfortable sharing my opinion about iPadOS 26, and I’ve chosen to delay my analysis of iOS 26 until later this year. I’ve found it incredibly challenging to form an opinion on Liquid Glass and iOS 26 when everything is still very much in flux and being adjusted on a beta-by-beta basis. I feel like sharing what I think about Liquid Glass right now would be a fruitless, or perhaps shortsighted, exercise one way or another. Instead, since I find iPadOS 26 to be more of a known entity at the moment, I’ve decided to focus on that and on how this software update is changing the way I work. The time will come for me to write about Liquid Glass and Apple’s vision for the future of its software design. Today, though, I’m all about the iPad.

It’s been an interesting month since WWDC. This year more than ever, I have a feeling that Apple isn’t done tweaking its OSes and that much will continue to change between now and September. But for now, as always, let’s dive in.

Read more



Testing AirPods 4’s Beta Update and Improved Recording Quality for Voice Notes

Earlier today, I updated my AirPods 4’s firmware to the beta version, which Apple released yesterday. I was curious to play around with the software update for two reasons:

  1. AirPods are getting support for automatically pausing media playback when you fall asleep, and
  2. Apple is advertising improved “studio quality” recording on AirPods 4 and AirPods Pro 2 with this update.

I’ll cut to the chase: while I haven’t been able to test sleep detection yet since I don’t take naps during the day, I think Apple delivered on its promise of improved voice recordings with AirPods.

Read more


The Curious Case of Apple and Perplexity

Good post by Parker Ortolani, analyzing the pros and cons of a potential Perplexity acquisition by Apple:

According to Mark Gurman, Apple executives are in the early stages of mulling an acquisition of Perplexity. My initial reaction was “that wouldn’t work.” But I’ve taken some time to think through what it could look like if it were to come to fruition.

He gets to the core of the issue with this acquisition:

At the end of the day, Apple needs a technology company, not another product company. Perplexity is really good at, for lack of a better word, forking models. But their true speciality is in making great products, they’re amazing at packaging this technology. The reality is though, that Apple already knows how to do that. Of course, only if they can get out of their own way. That very issue is why I’m unsure the two companies would fit together. A company like Anthropic, a foundational AI lab that develops models from scratch is what Apple could stand to benefit from. That’s something that doesn’t just put them on more equal footing with Google, it’s something that also puts them on equal footing with OpenAI which is arguably the real threat.

While I’m not the biggest fan of Perplexity’s web scraping policies and its CEO’s remarks, it’s undeniable that the company has built a series of good consumer products, is fast at integrating the latest models from major AI vendors, and has even dipped its toes in the custom model waters (with Sonar, an in-house model based on Llama). At first glance, I would agree with Ortolani and say that Apple would need Perplexity’s search engine and LLM integration talent more than the Perplexity app itself. So far, Apple has only integrated ChatGPT into its operating systems; Perplexity supports all the major LLMs currently in existence. If Apple wants to make the best computers for AI rather than being a bleeding-edge AI provider itself…well, that’s pretty much aligned with Perplexity’s software-focused goals.

However, I wonder if Perplexity’s work on its iOS voice assistant may have also played a role in these rumors. As I wrote a few months ago, Perplexity shipped a solid demo of what a deep LLM integration with core iOS services and frameworks could look like. What could Perplexity’s tech do when integrated with Siri, Spotlight, Safari, Music, or even third-party app entities in Shortcuts?

Or, look at it this way: if you’re Apple, would you spend $14 billion to buy an app and rebrand it as “Siri That Works” next year?

Permalink

I Have Many Questions About Apple’s Updated Foundation Models and the (Great) ‘Use Model’ Action in Shortcuts

Apple’s ‘Use Model’ action in Shortcuts.

I mentioned this on AppStories during the week of WWDC: I think Apple’s new ‘Use Model’ action in Shortcuts for iOS/iPadOS/macOS 26, which lets you prompt either the local or cloud-based Apple Foundation models, is Apple Intelligence’s best and most exciting new feature for power users this year. This blog post is a way for me to better explain why, as well as to publicly investigate some aspects of the updated Foundation models that I don’t fully understand yet.
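For context, these are the same models developers can now reach directly through the Foundation Models framework. As a rough sketch of what a single prompt to the on-device model looks like in Swift – based on the API Apple previewed at WWDC, with purely illustrative prompt text – it boils down to creating a session and asking it to respond:

```swift
import FoundationModels

// Create a session with the on-device Apple foundation model.
// The instructions and prompt here are illustrative, not from the article.
let session = LanguageModelSession(instructions: "You are a concise assistant.")
let response = try await session.respond(to: "Summarize today's tasks in one sentence.")
print(response.content)
```

The ‘Use Model’ action essentially wraps this kind of call in a no-code interface, with the choice between the local and cloud-based model exposed as a parameter of the action.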

Read more


Initial Notes on iPadOS 26’s Local Capture Mode

Now this is what I call follow-up: six years after I linked to Jason Snell’s first experiments with podcasting on the iPad Pro (which later became part of a chapter of my Beyond the Tablet story from 2019), I get to link to Snell’s first impressions of iPadOS 26’s brand new local capture mode, which lets iPad users record their own audio and video during a call.

First, some context:

To ensure that the very best audio and video is used in the final product, we tend to use a technique called a “multi-ender.” In addition to the lower-quality call that’s going on, we all record ourselves on our local device at full quality, and upload those files when we’re done. The result is a final product that isn’t plagued by the dropouts and other quirks of the call itself. I’ve had podcasts where one of my panelists was connected to us via a plain old phone line—but they recorded themselves locally and the finished product sounded completely pristine.

This is how I’ve been recording podcasts since 2013. We used to be on a call on Skype and record audio with QuickTime; now we use Zoom, Audio Hijack, and OBS for video, but the concept is the same. Here’s Snell on how the new iPadOS feature, which lives in Control Center, works:

The file it saves is marked as an mp4 file, but it’s really a container featuring two separate content streams: full-quality video saved in HEVC (H.265) format, and lossless audio in the FLAC compression format. Regardless, I haven’t run into a single format conversion issue. My audio-sync automations on my Mac accept the file just fine, and Ferrite had no problem importing it, either. (The only quirk was that it captured audio at a 48KHz sample rate and I generally work at 24-bit, 44.1KHz. I have no idea if that’s because of my microphone or because of the iPad, but it doesn’t really matter since converting sample rates and dithering bit depths is easy.)

I tested this today with a FaceTime call. Everything worked as advertised, and the call’s MP4 file was successfully saved in my Downloads folder in iCloud Drive (I wish there were a way to change this). I was initially confused by the fact that recording automatically begins as soon as a call starts: if you press the Local Capture button in Control Center before getting on a call, you’ll be recording as soon as it connects. It’s kind of an odd choice to make this feature just a…Control Center toggle, but I’ll take it! My MixPre-3 II audio interface and microphone worked right away, and I think there’s a very good chance I’ll be able to record AppStories and my other shows from my iPad Pro – with no more workarounds – this summer.
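If you’re curious about what’s actually inside one of these recordings, here’s a minimal sketch (assuming a hypothetical file path) that uses AVFoundation to print each track’s codec identifier – which, per Snell’s findings, should come back as HEVC video and FLAC audio:

```swift
import AVFoundation

// Hypothetical path to a Local Capture recording saved in iCloud Drive's Downloads folder.
let url = URL(fileURLWithPath: "Local Capture Recording.mp4")
let asset = AVURLAsset(url: url)

// Load the tracks and print every track's media type and codec FourCC
// (expected: "hvc1" for the HEVC video stream, "flac" for the audio stream).
for track in try await asset.load(.tracks) {
    for description in try await track.load(.formatDescriptions) {
        let subtype = CMFormatDescriptionGetMediaSubType(description)
        let fourCC = String(bytes: (0..<4).reversed().map { UInt8((subtype >> ($0 * 8)) & 0xFF) },
                            encoding: .ascii) ?? "????"
        print("\(track.mediaType.rawValue) track: \(fourCC)")
    }
}
```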

Permalink

Interview: Craig Federighi Opens Up About iPadOS, Its Multitasking Journey, and the iPad’s Essence

iPadOS 26. Source: Apple.

It’s a cool, sunny morning at Apple Park as I make my way along the iconic glass ring to meet with Apple’s SVP of Software Engineering, Craig Federighi, for a conversation about the iPad.

It’s the Wednesday after WWDC, and although there are still some developers and members of the press around Apple’s campus, it seems like employees have returned to their regular routines. Peek through the glass, and you’ll see engineers working at their stations, half-erased whiteboards, and an infinite supply of Studio Displays on wooden desks with rounded corners. Some guests are still taking pictures by the WWDC sign. There are fewer security dogs, but they’re obviously all good.

Despite the list of elaborate questions on my mind about iPadOS 26 and its new multitasking, the long history of iPad criticisms (including mine) over the years, and what makes an iPad different from a Mac, I can’t stop thinking about the simplest, most obvious question I could ask – one that harkens back to an old commercial about the company’s modular tablet:

In 2025, what even is an iPad according to Federighi?

Read more