Federico Viticci

9613 posts on MacStories since April 2009

Federico is the founder and Editor-in-Chief of MacStories, where he writes about Apple with a focus on apps, developers, iPad, and iOS productivity. He founded MacStories in April 2009 and has been writing about Apple ever since. Federico is also the co-host of AppStories, a weekly podcast exploring the world of apps; Unwind, a fun exploration of media and more; and NPC: Next Portable Console, a show about portable gaming and the handheld revolution.


Reports of Slide Over’s Death Were Greatly Exaggerated

Well, that didn’t take long.

In yesterday’s second developer beta of iPadOS 26.1, Apple restored the Slide Over functionality that was removed with the debut of the new windowing system in iPadOS 26.0 last month. Well…they sort of restored Slide Over, at least.

In my review of iPadOS 26, I wrote:

So in iPadOS 26, Apple decided to scrap Split View and Slide Over altogether, leaving users the choice between full-screen apps, a revamped Stage Manager, and the brand new windowed mode. At some level, I get it. Apple probably thinks that the functionality of Split View can be replicated with new windowing controls (as we’ll see, there are actual tiling options to split the screen into halves) and that most people who were using these two modes would be better served by the new multitasking system the company designed for iPadOS 26.

At the same time, though, I can’t help but feel that the removal of Slide Over is a misstep on Apple’s part. There’s really no great way to replicate the versatility of Slide Over with the iPad’s new windowing. Making a bunch of windows extra small and stacked on the side of the screen would require a lot of manual resizing and repositioning; at that point, you’re just using a worse version of classic windowing. I don’t know what Apple’s solution could have been here – particularly because, like I said above, the iPad did end up with too many multitasking systems to pick from. But the Mac also has several multitasking features, and people love the Mac, so maybe that’s fine, too?

Slide Over will be missed, but perhaps there’ll be a way for Apple to make it come back.

The unceremonious removal of Slide Over from iPadOS 26 was the most common complaint I received from MacStories readers over the past month. I also saw a lot of posts on different subreddits from people who said they weren’t updating to iPadOS 26 so they wouldn’t lose Slide Over functionality. Perhaps Apple underestimated how much people loved and used Slide Over, or maybe – as I argued – they thought that multitasking and window resizing could replace it. In any case, Slide Over is back, but it’s slightly different from what it used to be.

The bad news first: the new Slide Over doesn’t support multiple apps in the Slide Over stack with their own dedicated app switcher. (This option was introduced in iPadOS 13.) So far, the new Slide Over is single-window only, and it works alongside iPadOS windowing to put one specific window in Slide Over mode. Any window can be moved into Slide Over, but only one Slide Over entity can exist at a time. From this perspective, Slide Over is different from full-screen: that mode also works alongside windowing, but multiple windows can be in their full-screen “spaces” at the same time.

On one hand, I hope Apple can find a way to restore Slide Over’s former support for multiple apps. On the other, I suspect the “good news” part is exactly what will prevent the company from doing so. What I like about the new Slide Over implementation is that the window can be resized: you’re no longer constrained to using Slide Over in a “tall iPhone” layout, which is great. I like having the option to stretch out Music (which I’ve always used in Slide Over on iPad), and I also appreciate the glassy border displayed around the Slide Over window to easily differentiate it from regular windows. However, now that the Slide Over window is resizable, I feel that also supporting multiple apps in Slide Over might get too confusing and complex to manage. Personally, now that I’ve tested it, I’d take a resizable single Slide Over window over multiple non-resizable apps in Slide Over.

Between improvements to local capture and even more keyboard shortcuts, it’s great (and reassuring) to see Apple iterate on iPadOS so quickly after last month’s major update. Remember when we used to wait two years for minor changes?

Permalink

Apps in ChatGPT

OpenAI announced a lot of developer-related features at yesterday’s DevDay event, and as you can imagine, the most interesting one for me is the introduction of apps in ChatGPT. From the OpenAI blog:

Today we’re introducing a new generation of apps you can chat with, right inside ChatGPT. Developers can start building them today with the new Apps SDK, available in preview.

Apps in ChatGPT fit naturally into conversation. You can discover them when ChatGPT suggests one at the right time, or by calling them by name. Apps respond to natural language and include interactive interfaces you can use right in the chat.

And:

Developers can start building and testing apps today with the new Apps SDK preview, which we’re releasing as an open standard built on the Model Context Protocol (MCP). To start building, visit our documentation for guidelines and example apps, and then test your apps using Developer Mode in ChatGPT.

Also:

Later this year, we’ll launch apps to ChatGPT Business, Enterprise and Edu. We’ll also open submissions so developers can publish their apps in ChatGPT, and launch a dedicated directory where users can browse and search for them. Apps that meet the standards provided in our developer guidelines will be eligible to be listed, and those that meet higher design and functionality standards may be featured more prominently—both in the directory and in conversations.

Looks like we got the timing right with this week’s episode of AppStories about demystifying MCP and what it means to connect apps to LLMs. In the episode, I expressed my optimism for the potential of MCP and the idea of augmenting your favorite apps with the capabilities of LLMs. However, I also lamented how fragmented the MCP ecosystem is and how confusing it can be for users to wrap their heads around MCP “servers” and other obscure, developer-adjacent terminology.
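For what it’s worth, the core of MCP is less arcane than the terminology suggests: a “server” mostly advertises a list of tools described in plain JSON, and the model decides when to call them. Here’s a minimal Swift sketch of that tool-definition shape as returned by an MCP server’s tools/list response (the create_playlist tool is a hypothetical example, not something from OpenAI’s Apps SDK):

```swift
import Foundation

// A rough Codable model of the tool definition an MCP server advertises
// in its tools/list response. The create_playlist tool is hypothetical.
struct MCPTool: Codable {
    struct Schema: Codable {
        struct Property: Codable {
            let type: String
            let description: String
        }
        let type: String
        let properties: [String: Property]
        let required: [String]
    }
    let name: String
    let description: String
    let inputSchema: Schema
}

let json = """
{
  "name": "create_playlist",
  "description": "Create a playlist from a list of track names.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "title": { "type": "string", "description": "Playlist title" },
      "tracks": { "type": "array", "description": "Track names to add" }
    },
    "required": ["title", "tracks"]
  }
}
"""

do {
    let tool = try JSONDecoder().decode(MCPTool.self, from: Data(json.utf8))
    print("The model can now call:", tool.name)
} catch {
    print("Decoding failed: \(error)")
}
```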

In classic OpenAI fashion, their announcement of apps in ChatGPT aims to (almost) completely abstract the complexity of MCP from users. In one announcement, OpenAI addressed my two top complaints about MCP that I shared on AppStories: they revealed their own upcoming ecosystem of apps, and they’re going to make it simple to use.

Does that ring a bell? It’s impossible to tell right now if OpenAI’s bet to become a platform will be successful, but early signs are encouraging, and the company has the leverage of 800 million active users to convince third-party developers to jump on board. Just this morning, I asked ChatGPT to put together a custom Spotify playlist with bands that had a similar vibe to Moving Mountains in their Pneuma era, and after thinking for a few minutes, it worked. I did it from the ChatGPT web app and didn’t have to involve the App Store at all.

If I were Apple, I’d start growing increasingly concerned at the prospect of another company controlling the interactions between users and their favorite apps. As I argued on AppStories, my hope is that the rumored MCP framework allegedly being worked on by Apple is exactly that – a bridge (powered by App Intents) between App Store apps and LLMs that can serve as a stopgap until Apple gets their LLM act together. But that’s a story for another time.

Permalink

iOS and iPadOS 26: The MacStories Review

Old and new through the liquid glass.

My first job, I was in-house at a fur company with this old pro copywriter, Greek, named Teddy. And Teddy told me the most important idea in advertising is “new”. Creates an itch. You simply put your product in there as a kind of calamine lotion. But he also talked about a deeper bond with the product: nostalgia. It’s delicate, but potent.

– Don Draper (Mad Men Season 1, Episode 13 – “The Wheel”)

I was reminded of this Don Draper quote from one of my all-time favorite TV scenes – the Kodak Carousel pitch – when reflecting upon my contrasting feelings about iOS and iPadOS 26 a few weeks ago. Some of you may be wondering what I’m doing here, starting my annual review of an operating system with a Mad Men reference. But here we are today, with an eye-catching iOS update that, given the circumstances, is betting it all on the glittering allure of a new visual design, and a tablet operating system that comes full circle with old, almost nostalgic functionalities repurposed for the modern age.

I’ve spent the past three months using and working with iOS and iPadOS 26, and there’s this idea I keep coming back to: the old and new coexist in Apple’s software strategy this year, and they paint a hyperrealistic picture of a company that’s stuck in a transition phase of its own making.

Read more


Testing Claude’s Native Integration with Reminders and Calendar on iOS and iPadOS

Reminders created by Claude for iOS after a series of web searches.

A few months ago, when Perplexity unveiled their voice assistant integrated with native iOS frameworks, I wrote that I was surprised no other major AI lab had shipped a similar feature in its iOS apps:

The most important point about this feature is the fact that, in hindsight, this is so obvious and I’m surprised that OpenAI still hasn’t shipped the same feature for their incredibly popular ChatGPT voice mode. Perplexity’s iOS voice assistant isn’t using any “secret” tricks or hidden APIs: they’re simply integrating with existing frameworks and APIs that any third-party iOS developer can already work with. They’re leveraging EventKit for reminder/calendar event retrieval and creation; they’re using MapKit to load inline snippets of Apple Maps locations; they’re using Mail’s native compose sheet and Safari View Controller to let users send pre-filled emails or browse webpages manually; they’re integrating with MusicKit to play songs from Apple Music, provided that you have the Music app installed and an active subscription. Theoretically, there is nothing stopping Perplexity from rolling additional frameworks such as ShazamKit, Image Playground, WeatherKit, the clipboard, or even photo library access into their voice assistant. Perplexity hasn’t found a “loophole” to replicate Siri functionalities; they were just the first major AI company to do so.
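To underscore how public these APIs are, this is roughly all it takes to create a reminder with EventKit – a minimal sketch of the kind of call any third-party app can make, simplified for illustration (a real app also needs the reminders usage-description key in its Info.plist):

```swift
import EventKit

let store = EKEventStore()

// Ask for full Reminders access (iOS 17+ API), then save a new reminder
// to the user's default list. Error handling is simplified for brevity.
store.requestFullAccessToReminders { granted, _ in
    guard granted else { return }

    let reminder = EKReminder(eventStore: store)
    reminder.title = "Publish the iPadOS review"
    reminder.calendar = store.defaultCalendarForNewReminders()

    do {
        try store.save(reminder, commit: true)
    } catch {
        print("Could not save reminder: \(error)")
    }
}
```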

It’s been a few months since Perplexity rolled out their iOS assistant, and, so far, the company has chosen to keep the iOS integrations exclusive to voice mode; you can’t have text conversations with Perplexity on iPhone and iPad and ask it to look at your reminders or calendar events.

Anthropic, however, has now done just that, becoming – to the best of my knowledge – the second major AI lab to plug directly into Apple’s native iOS and iPadOS frameworks, with an important twist: in the latest version of Claude, you can have text conversations and tell the model to look into your Reminders database or Calendar app without having to use voice mode.
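I don’t know what Anthropic’s implementation looks like under the hood, but the retrieval side presumably boils down to something like this EventKit sketch – my assumption, not Claude’s actual code – which fetches the user’s incomplete reminders and hands their titles to the model as context:

```swift
import EventKit

let store = EKEventStore()

// Fetch every incomplete reminder, regardless of due date or list,
// and collect the titles to pass along to the model as context.
store.requestFullAccessToReminders { granted, _ in
    guard granted else { return }

    let predicate = store.predicateForIncompleteReminders(
        withDueDateStarting: nil,
        ending: nil,
        calendars: nil
    )
    store.fetchReminders(matching: predicate) { reminders in
        let titles = reminders?.map { $0.title ?? "(untitled)" } ?? []
        print("Context for the model:", titles)
    }
}
```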

Read more


Oasis Just Glitched the Algorithm

Beautiful, poignant story by Steven Zeitchik, writing for The Hollywood Reporter, on the magic of going to an Oasis concert in 2025.

It would have been weird back in Oasis’ heyday to talk about a big stadium-rock show being uniquely “human” — what the hell else could it be? But after decades of music chosen by algorithm, of the spirit of listen-together radio fracturing into a million personalized streams, of social media and the politics that fuel it ordering acts into groups of the allowed and prohibited, of autotuning and overdubbing washing out raw instruments, of our current cultural era’s spell of phone-zombification, of the communal spaces of record stores disbanded as a mainstream notion of gathering, well, it’s not such a given anymore. Thousands of people convening under the sky to hear a few talented fellow humans break their backs with a bunch of instruments, that oldest of entertainment constructs, now also feels like a radical one.

And:

The Gallaghers seemed to be coming just in time, to remind us of what it was like before — to issue a gentle caveat, by the power of positive suggestion, that we should think twice before plunging further into the abyss. To warn that human-made art is fragile and too easily undone — in fact in their case for 16 years it was undone — by its embodiments acting too much like petty, well, humans. And the true feat, the band was saying triumphantly Sunday, is that there is a way to hold it together.

I make no secret of the fact that Oasis are my favorite band of all time – the band that, very simply, defined my teenage years. They’re responsible for some of my most cherished memories of enjoying music together with my friends.

I was lucky enough to see Oasis in London this summer. To be honest with you, we didn’t have great seats. But what I’ll remember from that night won’t necessarily be the view (eh) or the audio quality at Wembley (surprisingly great). I’ll remember the sheer joy of shouting Live Forever with Silvia next to me. I’ll remember doing the Poznan with Jeremy and two guys next to us who just went for it because Liam asked everyone to hug the stranger next to them. I’ll remember the thrill of witnessing Oasis walk back on stage after 16 years, with 80,000 other people feeling the same thing as me, right there and then.

This story by Zeitchik hit me not only because it’s Oasis, but because I’ve always believed in the power of music recommendations that come from other humans – not algorithms – people who want you to enjoy something too, and to enjoy it together.

If only for two hours one summer night in a stadium, there’s beauty to losing your voice to music not delivered by an algorithm.

Permalink

Claude’s Chat History and App Integrations as a Form of Lock-In

Earlier today, Anthropic announced that, similar to ChatGPT, Claude will be able to search and reference your previous chats with it. From their support document:

You can now prompt Claude to search through your previous conversations to find and reference relevant information in new chats. This feature helps you continue discussions seamlessly and retrieve context from past interactions without re-explaining everything.

If you’re wondering what Claude can actually search:

You can prompt Claude to search conversations within these boundaries:

  • All chats outside of projects.
  • Individual project conversations (searches are limited to within each specific project).

Conversation history is a powerful feature of modern LLMs, and although Anthropic hasn’t announced personalized context based on memory yet (a feature that not everybody likes), it seems like that’s the next shoe to drop. Chat search, memory with personalized context, larger context windows, and performance are the four key areas where I preferred ChatGPT; Anthropic just addressed one of them, and a second may be launching soon.

As I’ve shared on Mastodon, despite the power and speed of GPT-5, I find myself gravitating more and more toward Claude (and specifically Opus 4.1) because of MCP and connectors. Claude works with the apps I already use and lets me easily turn conversations into actions performed in Notion, Todoist, Spotify, or other apps with an API that can talk to Claude. This is changing my workflow in two notable ways: I’m only using ChatGPT for “regular” web search queries (mostly via the Safari extension) and less for work, because it doesn’t match Claude’s extensive support for MCP tools; and I’m prioritizing web apps with well-supported APIs that work with LLMs over local apps that don’t (Spotify vs. Apple Music, Todoist vs. Reminders, Notion vs. Notes, etc.). Chat search (and, I hope, personalized context based on memory soon) further adds to this change in the apps I use.

Let me offer an example. I like combining Claude’s web search abilities with Zapier tools that integrate with Spotify to make Claude create playlists for me based on album reviews or music roundups. A few weeks ago, I started the process of converting this Chorus article into a playlist, but I never finished the task since I was running into Zapier rate limits. This evening, I asked Claude if we had ever worked on any playlists; it found the old chats and pointed out that one of them still needed to be completed. From there, it got back to work, picked up where it left off in Chorus’ article, and finished filling the playlist with the most popular songs that best represent the albums picked by Jason Tate and team. So not only could Claude find the chat, it also resumed working with tools based on the state of the old conversation.

Resuming a chat that was about creating a Spotify playlist (right). Sadly, Apple Music doesn’t integrate with LLMs like this.

Even more impressively, after Claude finished the playlist from the old chat, I asked it to take all the playlists created so far and append their links to my daily note in Notion; that also worked – all from my phone, in a conversation that started as a search test for old chats and later grew into an agentic workflow that called tools for web search, Spotify, and Notion.

I find these use cases very interesting, and they’re the reason I struggle to incorporate ChatGPT into my everyday workflow beyond web searches. They’re also why I hesitate to use Apple apps right now, and I’m not sure Liquid Glass will be enough to win me back over.

Permalink

Thoughts on iPadOS 26: Hello, It’s Good to Be Back

iPadOS 26.

Apple released the first public betas of iOS and iPadOS 26 last week, and I’m going to cut to the chase with this story: although I’m still wrapping my head around Liquid Glass and trying to understand where this new design language will land, iPadOS 26 has revolutionized my workflow in just a little over a month. While talking to Craig Federighi at WWDC, I got the sense that Apple was approaching the iPad platform from a different – perhaps more humble – perspective, with a newfound willingness to listen to power users and find a better balance between the iPad’s simplicity and its flexibility. Actually using iPadOS 26, however, has far exceeded my expectations, pushing me to completely rethink my desk setup (again) and the apps I use around the iPad Pro and iPadOS 26.

Conversely, I’ve been struggling to understand iOS 26 and the role of Liquid Glass. I’ve recently documented my issues with Apple’s new design through a variety of examples, but the truth is that at this point in the beta cycle, I don’t know what to write about Liquid Glass yet. For this reason, despite my many attempts to write this story over the past few weeks, I’ve decided to take a different approach.

Today, I only feel comfortable sharing my opinion about iPadOS 26, and I’ve chosen to delay my analysis of iOS 26 until later this year. I’ve found it incredibly challenging to form an opinion on Liquid Glass and iOS 26 when so much is still in flux and being adjusted on a beta-by-beta basis. I feel like sharing what I think about Liquid Glass right now would be a fruitless exercise, or perhaps a shortsighted one, one way or another. Instead, since I find iPadOS 26 to be more of a known entity at the moment, I’ve decided to focus on that and on how this software update is changing the way I work. The time will come for me to write about Liquid Glass and Apple’s vision for the future of its software design. Today, though, I’m all about the iPad.

It’s been an interesting month since WWDC. This year more than ever, I have a feeling that Apple isn’t done tweaking its OSes and much will continue to change between now and September. But for now, as always, let’s dive in.

Read more



Testing AirPods 4’s Beta Update and Improved Recording Quality for Voice Notes

Earlier today, I updated my AirPods 4’s firmware to the beta version, which Apple released yesterday. I was curious to play around with the software update for two reasons:

  1. AirPods are getting support for automatically pausing media playback when you fall asleep, and
  2. Apple is advertising improved “studio quality” recording on AirPods 4 and AirPods Pro 2 with this update.

I’ll cut to the chase: while I haven’t been able to test sleep detection yet since I don’t take naps during the day, I think Apple delivered on its promise of improved voice recordings with AirPods.

Read more