This Week's Sponsor:

Inoreader

Boost Productivity and Gain Insights with AI-Powered Intelligence Tools


Podcast Rewind: A Dock-Free Experiment, WWDC Reunion Plans, and an International Treasure Hunt

Enjoy the latest episodes from MacStories’ family of podcasts:

Comfort Zone

Chris brings some new apps, Matt defends himself for another hardware purchase, and Niléane’s no-dock challenge has brutal results. Then, Matt and Niléane really let Chris down with his end-of-show question. Like really, really let him down.

This episode is sponsored by:

  • Inoreader – Boost Productivity and Gain Insights with AI-Powered Intelligence Tools

MacStories Unwind

This week, Federico and John share the news that they’ll be at WWDC together for the first time since 2023. Then, Federico recommends Clair Obscur: Expedition 33, an RPG that has taken a lot of gamers by surprise, and John is into Duster, a new J.J. Abrams TV show set in the ‘70s. All that, plus a Raiders of the Lost Ark deal.

This episode is sponsored by:

  • Inoreader – Boost Productivity and Gain Insights with AI-Powered Intelligence Tools

Magic Rays of Light

Sigmund and Devon highlight Apple Original film Fountain of Youth from Guy Ritchie, break down the new features of CarPlay Ultra, and revisit feature documentary Deaf President Now! upon its streaming release.

This episode is sponsored by:

  • Inoreader – Boost Productivity and Gain Insights with AI-Powered Intelligence Tools

Read more


Mozilla Is Shutting Down Pocket

Today, Mozilla announced in a support document that it will soon end development of Pocket, its read-later app that’s been around since the early days of the App Store:

We’ve made the difficult decision to shut down Pocket on July 8, 2025. Thank you for being part of our journey over the years—we’re proud of the impact Pocket has had for our users and communities.

I never like to see an app that people rely on go away, but I’m not surprised that Mozilla has pulled its support for Pocket either. The app evolved rapidly in its early days, when it was called Read It Later and competed fiercely with Instapaper. But that rivalry burned itself out years ago, and after Mozilla purchased Pocket, the app seemed adrift.

My Pocket queue is a read-later time capsule.

Recently, Mozilla laid off 30% of its workforce, and Pocket faced new competition from the likes of Matter and Readwise Reader, which entered the fray with new ideas about what a read-later app could be. As I wrote in my first review of Matter:

Apps like Instapaper and Read It Later, which became Pocket, pioneered saving web articles for later. The original iPhone ran on AT&T’s EDGE mobile network in the U.S. and coverage was spotty. Read-later apps saved stripped-down versions of articles from the web that could be downloaded quickly and read offline when EDGE was unavailable. The need to save content offline because of slow and unreliable mobile networks is far less pressing today, but collecting links and time-shifting reading remains popular.

Today, read-later apps like Readwise are more focused on research, integrating with note-taking systems, and leveraging AI. There’s still a place for simpler solutions such as GoodLinks, which is one of my personal favorites, but given the existential threat Mozilla currently faces, ending Pocket was probably the right choice.


Early Impressions of Claude Opus 4 and Using Tools with Extended Thinking

Claude Opus 4 and extended thinking with tools.

For the past two days, I’ve been testing an early access version of Claude Opus 4, the latest model by Anthropic that was just announced today. You can read more about the model in the official blog post and find additional documentation here. What follows is a series of initial thoughts and notes based on the 48 hours I spent with Claude Opus 4, which I tested in both the Claude app and Claude Code.

For starters, Anthropic describes Opus 4 as its most capable hybrid model with improvements in coding, writing, and reasoning. I don’t use AI for creative writing, but I have dabbled with “vibe coding” for a collection of personal Obsidian plugins (created and managed with Claude Code, following these tips by Harper Reed), and I’m especially interested in Claude’s integrations with Google Workspace and MCP servers. (My favorite solution for MCP at the moment is Zapier, which I’ve been using for a long time for web automations.) So I decided to focus my tests on reasoning with integrations and some light experiments with the upgraded Claude Code in the macOS Terminal.
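For context on what that combination looks like at the API level, here’s a minimal Python sketch of a request that enables extended thinking alongside a tool definition. The model ID and the toy weather tool are illustrative assumptions on my part, not code from Anthropic’s announcement:

```python
# A minimal sketch of enabling extended thinking alongside a tool
# definition with the Anthropic Python SDK. The model ID and the toy
# weather tool are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-20250514",  # assumed Opus 4 model ID
    max_tokens=4096,  # must exceed the thinking budget below
    thinking={"type": "enabled", "budget_tokens": 2048},
    tools=[{
        "name": "get_weather",
        "description": "Return current weather conditions for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
    messages=[{"role": "user", "content": "Do I need an umbrella in Rome today?"}],
)

# The response interleaves thinking, text, and tool_use content blocks.
for block in response.content:
    print(block.type)
```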

Read more


Notes on Mercury Weather’s New Radar Maps Feature

Since covering Mercury Weather 2.0 and its launch on the Vision Pro here on MacStories, I’ve been keeping up with the weather app on Club MacStories. It’s one of my favorite Mac menu bar apps, it has held a spot on my default Apple Watch face since its launch, and last fall, it added severe weather notifications.

I love the app’s design and focus as much today as I did when I wrote about its debut in 2023. Today, though, Mercury Weather is a more well-rounded app than ever before. Through regular updates, the app has filled in a lot of the holes in its feature set that may have turned off some users two years ago.

Today, Mercury Weather adds weather radar maps, one of the features I’ve missed most from other weather apps, alongside the severe weather notifications that were added late last year. It’s a welcome addition that means the next time a storm is bearing down on my neighborhood, I won’t have to switch to a different app to see what’s coming my way.

Zooming out to navigate the globe.

Radar maps are available on the iPhone, iPad, and Mac versions of Mercury Weather; they offer a couple of different map styles and a legend that explains what each color on the map means. If you zoom out, you can get a global view of Earth with your favorite locations noted on the map. Tap one, and you’ll get the current conditions for that spot. Mercury Weather already had an extensive set of widgets for the iPhone, iPad, and Mac, but this update adds small, medium, and large widgets for the radar map, too.

A Mercury Weather radar map on the Mac.

With a long list of updates since launch, Mercury Weather is worth another look if you passed on it before because it was missing features you wanted. The app is available on the App Store as a free download. Certain features require a subscription or lifetime purchase via an in-app purchase.


Microsoft Eyes Xbox Web Store after Epic Court Decision

In the wake of U.S. District Judge Yvonne Gonzalez Rogers’ decision in Epic Games’ litigation against Apple, I commented on NPC: Next Portable Console that I expected Microsoft to enter the fray with its own web store soon. As reported by Tom Warren at The Verge, it looks like that’s exactly what Microsoft intends to do. Commenting on Judge Gonzalez Rogers’ contempt order in the context of Epic’s recent motion to return Fortnite to the App Store, Warren notes:

It’s a key ruling that has already allowed Fortnite to return to the App Store in the US, complete with the ability for Epic Games to link out to its own payment system inside the game. Microsoft has wanted to offer a similar experience for its Xbox mobile store prior to the ruling, but it says its solution “has been stymied by Apple.”

Ultimately, Microsoft wants its customers to be able to purchase and play its games from inside the Xbox app:

Microsoft started rolling out the ability to purchase games and DLC inside the Xbox mobile app last month, but it had to remove the remote play option to adhere to Apple’s App Store policies. You can’t currently buy an Xbox game in the Xbox mobile app on iOS and then stream it inside that same app. You have to manually navigate to the Xbox Cloud Gaming mobile website on a browser to get access to cloud gaming.

Developers continue to add options to link out to the web to purchase content, but as Microsoft’s court filing shows, the biggest players on the App Store are weighing the cost of setting up their own storefronts against the risk that Judge Gonzalez Rogers’ decision will be reversed on appeal.

Permalink

OpenAI to Buy Jony Ive’s Stealth Startup for $6.5 Billion

Jony Ive’s stealth AI company, known as io, is being acquired by OpenAI for $6.5 billion in a deal that is expected to close this summer, subject to regulatory approvals. According to reporting by Mark Gurman and Shirin Ghaffary of Bloomberg:

The purchase — the largest in OpenAI’s history — will provide the company with a dedicated unit for developing AI-powered devices. Acquiring the secretive startup, named io, also will secure the services of Ive and other former Apple designers who were behind iconic products such as the iPhone.

The partnership builds on a 23% stake in io that OpenAI purchased at the end of last year and comes with what Bloomberg describes as 55 hardware engineers, software developers, and manufacturing experts, plus a cast of accomplished designers.

Ive had this to say about the purportedly novel products he and OpenAI CEO Sam Altman are planning:

“People have an appetite for something new, which is a reflection on a sort of an unease with where we currently are,” Ive said, referring to products available today. Ive and Altman’s first devices are slated to debut in 2026.

Bloomberg also notes that Ive and his team of designers will be taking over all design at OpenAI, including the design of software like ChatGPT.

For now, the products OpenAI is working on remain a mystery, but given the purchase price and io’s willingness to take its first steps into the spotlight, I expect we’ll be hearing more about this historic collaboration in the months to come.

Permalink

Podcast Rewind: Handheld Rumors and Airbnb Executives on the App’s Redesign

Enjoy the latest episodes from MacStories’ family of podcasts:

AppStories

This week, Federico and John interview Airbnb Vice President of Product Marketing, Jud Coplan, and Vice President of Design, Teo Connor.

On AppStories+, Federico explores running LLMs locally on an M3 Ultra Mac Studio.

This episode is sponsored by:

  • Inoreader – Boost Productivity and Gain Insights with AI-Powered Intelligence Tools
  • TRMNL – Clarity, at a glance. Get $15 off for 1 week only.

NPC: Next Portable Console

This week, Federico and John round up the many new Switch 2 details that have emerged as launch day draws near. Plus, they share two new and interesting handheld rumors from Anbernic and Miyoo, and more.

This episode is sponsored by:

  • Inoreader – Boost Productivity and Gain Insights with AI-Powered Intelligence Tools

On NPC XL, Federico shares tablet gaming recommendations with John, reevaluates whether he’s hung up on specs, and looks at what Lenovo is doing to integrate its tablets with two all-new controllers.

Read more


Notes on Early Mac Studio AI Benchmarks with Qwen3-235B-A22B and Qwen2.5-VL-72B

I received a top-of-the-line Mac Studio (M3 Ultra, 512 GB of RAM, 8 TB of storage) on loan from Apple last week, and I thought I’d use this opportunity to revive something I’ve been mulling over for some time: more short-form blogging on MacStories in the form of brief “notes” with a dedicated Notes category on the site. Expect more of these “low-pressure”, quick posts in the future.

I’ve been sent this Mac Studio as part of my ongoing experiments with assistive AI and automation, and one of the things I plan to do over the coming weeks and months is to play around with local LLMs that tap into the power of Apple Silicon and the incredible performance headroom afforded by the M3 Ultra and this computer’s specs. I have a lot to learn when it comes to local AI (my shortcuts and experiments so far have focused on cloud models and the Shortcuts app combined with the LLM CLI), but since I had to start somewhere, I downloaded LM Studio and Ollama, installed the llm-ollama plugin, and began experimenting with open-weights models (served from Hugging Face as well as the Ollama library) in both the GGUF format and Apple’s own MLX framework.

LM Studio.
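Getting a first prompt through this stack takes only a few lines. Here’s a minimal sketch using the LLM library’s Python API with the llm-ollama plugin installed; the model tag below is an assumption, and any model already pulled into Ollama should work:

```python
# Minimal sketch: prompting an Ollama-served model through the LLM
# library's Python API (requires `llm install llm-ollama` and a model
# already pulled into Ollama; the tag below is an assumption).
import llm

model = llm.get_model("qwen3:235b")  # any Ollama model tag works here
response = model.prompt("Briefly explain the difference between GGUF and MLX.")
print(response.text())
```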

I posted some of these early tests on Bluesky. I ran the massive Qwen3-235B-A22B model (a Mixture-of-Experts model with 235 billion parameters, 22 billion of which are active at a time) with both GGUF and MLX using the beta version of the LM Studio app, and these were the results:

  • GGUF: 16 tokens/sec, ~133 GB of RAM used
  • MLX: 24 tokens/sec, ~124 GB of RAM used

As you can see from these first benchmarks (both based on the 4-bit quant of Qwen3-235B-A22B), the Apple Silicon-optimized version of the model performed better in both token generation and memory usage. Regardless of the version, the Mac Studio absolutely didn’t care, and I could barely hear the fans going.

I also wanted to play around with the new generation of vision language models (VLMs) to test their modern OCR capabilities. One of the tasks that has become something of a personal AI eval for me lately is taking a long screenshot of a shortcut from the Shortcuts app (using CleanShot’s scrolling captures) and feeding it either as a full-res PNG or a PDF to an LLM. As I’ve shared before, due to image compression, the vast majority of cloud LLMs either fail to accept the image as input or compress it so much that graphical artifacts lead to severe hallucinations in the text analysis of the image. Only o4-mini-high – thanks to its more agentic capabilities and tool-calling – was able to produce a decent output; even then, that was only possible because o4-mini-high decided to slice the image into multiple parts and iterate through each one with discrete pytesseract calls. The task took almost seven minutes to run in ChatGPT.

This morning, I installed the 72-billion-parameter version of Qwen2.5-VL, gave it a full-resolution screenshot of a 40-action shortcut, and let it run with Ollama and llm-ollama. After 3.5 minutes and around 100 GB of RAM used, I got a really good, Markdown-formatted analysis of my shortcut back from the model.
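If you want to reproduce this kind of test, the LLM library’s attachment support makes it a few lines of Python; the model tag and file name below are illustrative assumptions:

```python
# Sketch: feeding a full-resolution screenshot to a local vision model
# via llm + llm-ollama. The model tag and file name are illustrative.
import llm

model = llm.get_model("qwen2.5vl:72b")
response = model.prompt(
    "Analyze this screenshot of a Shortcuts workflow and describe "
    "each action in order, formatted as Markdown.",
    attachments=[llm.Attachment(path="shortcut.png")],
)
print(response.text())
```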

To make the experience nicer, I even built a small local-scanning utility that lets me pick an image from Shortcuts and runs it through Qwen2.5-VL (72B) using the ‘Run Shell Script’ action on macOS. It worked beautifully on my first try. Amusingly, the smaller version of Qwen2.5-VL (32B) thought my photo of ergonomic mice was a “collection of seashells”. Fair enough: there’s a reason bigger models are heavier and costlier to run.
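I won’t reproduce my exact utility here, but the shape of it is simple: Shortcuts hands the selected image’s path to a script via the ‘Run Shell Script’ action, and whatever the script prints comes back as the action’s output. A hypothetical sketch:

```python
#!/usr/bin/env python3
# Hypothetical sketch of a script a 'Run Shell Script' action could
# invoke: Shortcuts passes the image path as an argument, and whatever
# the script prints flows back into the shortcut as output.
import sys
import llm

image_path = sys.argv[1]  # supplied by the 'Run Shell Script' action
model = llm.get_model("qwen2.5vl:72b")  # assumed Ollama tag
response = model.prompt(
    "Transcribe and analyze the contents of this image.",
    attachments=[llm.Attachment(path=image_path)],
)
print(response.text())
```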

Given my struggles with OCR and document analysis with cloud-hosted models, I’m very excited about the potential of local VLMs that bypass memory constraints thanks to the M3 Ultra and provide accurate results in just a few minutes without having to upload private images or PDFs anywhere. I’ve been writing a lot about this idea of “hybrid automation” that combines traditional Mac scripting tools, Shortcuts, and LLMs to unlock workflows that just weren’t possible before; I feel like the power of this Mac Studio is going to be an amazing accelerator for that.

Next up on my list: understanding how to run MLX models with mlx-lm, investigating long-context models with dual-chunk attention support (looking at you, Qwen 2.5), and experimenting with Gemma 3. Fun times ahead!
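As a starting point for that first item, the basic mlx-lm flow appears to be just a few lines; the model repo below is an assumption, and any MLX-converted model from the mlx-community organization on Hugging Face should work:

```python
# Sketch of the basic mlx-lm flow: load an MLX-converted model from
# Hugging Face and generate text. The repo name is an assumption.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-235B-A22B-4bit")
text = generate(
    model,
    tokenizer,
    prompt="Summarize what dual-chunk attention is for.",
    verbose=True,  # streams tokens and reports generation speed
)
```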


Is Apple’s AI Predicament Fixable?

On Sunday, Bloomberg’s Mark Gurman published a comprehensive recap of Apple’s AI troubles. There wasn’t much new in Gurman’s story, except quotes from unnamed sources that added to the sense of conflict playing out inside the company. That said, it’s perfect if you haven’t been paying close attention since Apple Intelligence was first announced last June.

What’s troubling about Apple’s predicament isn’t that Apple’s super mom and other AI illustrations look like they were generated in 2022, a lifetime ago in the world of AI. The trouble is what the company’s struggles mean for next-generation interactions with devices and productivity apps. The promise of natural language requests made to Siri that combine personal context with App Intents is exciting, but it’s mired in multiple layers of technical issues that need to be solved starting, as Gurman reported, with Siri.

The mess is so profound that it raises the question of whether Apple has the institutional capabilities to fix it. As M.G. Siegler wrote yesterday on Spyglass:

Apple, as an organization, simply doesn’t seem built correctly to operate in the age of AI. This technology, even more so than the web, moves insanely fast and is all about iteration. Apple likes to move slowly, measuring a million times and cutting once. Shipping polished jewels. That’s just not going to cut it with AI.

Having studied the fierce competition among AI companies for months, I agree with Siegler. This isn’t like hardware where Apple has successfully entered a category late and dominated it. Hardware plays to Apple’s design and supply chain strengths. In contrast, the rapid iteration of AI models and apps is the antithesis of Apple’s annual OS cycle. It’s a fundamentally different approach driven by intense competition and fueled by billions of dollars of cash.

I tend to agree with Siegler that, given where things stand, Apple should replace a lot of Siri’s capabilities with a third-party chatbot and, in the longer term, make an acquisition to shake up how it approaches AI. However, I also think either of those things is unlikely to happen given Apple’s historical focus on internally developed solutions.

Permalink