Notes on Mercury Weather’s New Radar Maps Feature

Since covering Mercury Weather 2.0 and its launch on the Vision Pro here on MacStories, I’ve been keeping up with the weather app on Club MacStories. It’s one of my favorite Mac menu bar apps, it has held a spot on my default Apple Watch face since its launch, and last fall, it added severe weather notifications.

I love the app’s design and focus as much today as I did when I wrote about its debut in 2023. Today, though, Mercury Weather is a more well-rounded app than ever before. Through regular updates, the app has filled in a lot of the holes in its feature set that may have turned off some users two years ago.

Today, Mercury Weather adds weather radar maps, which, along with the severe weather notifications added late last year, was one of the features I missed most from other weather apps. It’s a welcome addition that means the next time a storm is bearing down on my neighborhood, I won’t have to switch to a different app to see what’s coming my way.

Zooming out to navigate the globe.

Radar maps are available on the iPhone, iPad, and Mac versions of Mercury Weather; they offer a couple of different map styles and a legend that explains what each color on the map means. If you zoom out, you can get a global view of Earth with your favorite locations noted on the map. Tap one, and you’ll get the current conditions for that spot. Mercury Weather already had an extensive set of widgets for the iPhone, iPad, and Mac, but this update adds small, medium, and large widgets for the radar map, too.

A Mercury Weather radar map on the Mac.

With a long list of updates since launch, Mercury Weather is worth another look if you passed on it before because it was missing features you wanted. The app is available on the App Store as a free download. Certain features require a subscription or a lifetime in-app purchase.


Microsoft Eyes Xbox Web Store after Epic Court Decision

In the wake of U.S. District Judge Yvonne Gonzalez Rogers’ decision in Epic Games’ litigation against Apple, I commented on NPC: Next Portable Console that I expected Microsoft to enter the fray with its own web store soon. As reported by Tom Warren at The Verge, it looks like that’s exactly what Microsoft intends to do. Commenting on Judge Gonzalez Rogers’ contempt order in the context of Epic’s recent motion to return Fortnite to the App Store, Warren notes:

It’s a key ruling that has already allowed Fortnite to return to the App Store in the US, complete with the ability for Epic Games to link out to its own payment system inside the game. Microsoft has wanted to offer a similar experience for its Xbox mobile store prior to the ruling, but it says its solution “has been stymied by Apple.”

Ultimately, Microsoft wants its customers to be able to purchase and play its games from inside the Xbox app:

Microsoft started rolling out the ability to purchase games and DLC inside the Xbox mobile app last month, but it had to remove the remote play option to adhere to Apple’s App Store policies. You can’t currently buy an Xbox game in the Xbox mobile app on iOS and then stream it inside that same app. You have to manually navigate to the Xbox Cloud Gaming mobile website on a browser to get access to cloud gaming.

Developers continue to add options to link out to the web to purchase content, but as Microsoft’s court filing shows, the biggest players on the App Store are weighing the cost of setting up their own storefronts against the risk that Judge Gonzalez Rogers’ decision will be reversed on appeal.


OpenAI to Buy Jony Ive’s Stealth Startup for $6.5 Billion

Jony Ive’s stealth AI company, known as io, is being acquired by OpenAI for $6.5 billion in a deal that is expected to close this summer, subject to regulatory approvals. According to reporting by Mark Gurman and Shirin Ghaffary of Bloomberg:

The purchase — the largest in OpenAI’s history — will provide the company with a dedicated unit for developing AI-powered devices. Acquiring the secretive startup, named io, also will secure the services of Ive and other former Apple designers who were behind iconic products such as the iPhone.

The partnership builds on a 23% stake in io that OpenAI purchased at the end of last year and comes with what Bloomberg describes as 55 hardware engineers, software developers, and manufacturing experts, plus a cast of accomplished designers.

Ive had this to say about the purportedly novel products he and OpenAI CEO Sam Altman are planning:

“People have an appetite for something new, which is a reflection on a sort of an unease with where we currently are,” Ive said, referring to products available today. Ive and Altman’s first devices are slated to debut in 2026.

Bloomberg also notes that Ive and his team of designers will be taking over all design at OpenAI, including software like ChatGPT.

For now, the products OpenAI is working on remain a mystery, but given the purchase price and io’s willingness to take its first steps into the spotlight, I expect we’ll be hearing more about this historic collaboration in the months to come.


Podcast Rewind: Handheld Rumors and Airbnb Executives on the App’s Redesign

Enjoy the latest episodes from MacStories’ family of podcasts:

AppStories

This week, Federico and John interview Jud Coplan, Airbnb’s Vice President of Product Marketing, and Teo Connor, Airbnb’s Vice President of Design.

On AppStories+, Federico explores running LLMs locally on an M3 Ultra Mac Studio.

This episode is sponsored by:

  • Inoreader – Boost Productivity and Gain Insights with AI-Powered Intelligence Tools
  • TRMNL – Clarity, at a glance. Get $15 off for 1 week only.

NPC: Next Portable Console

This week, Federico and John round up the many new Switch 2 details that have emerged as launch day draws near. Plus, they share two new and interesting handheld rumors from Anbernic and Miyoo, and more.

This episode is sponsored by:

  • Inoreader – Boost Productivity and Gain Insights with AI-Powered Intelligence Tools

On NPC XL, Federico shares tablet gaming recommendations with John, reevaluates whether he’s hung up on specs, and looks at what Lenovo is doing to integrate its tablets with two all-new controllers.



Notes on Early Mac Studio AI Benchmarks with Qwen3-235B-A22B and Qwen2.5-VL-72B

I received a top-of-the-line Mac Studio (M3 Ultra, 512 GB of RAM, 8 TB of storage) on loan from Apple last week, and I thought I’d use this opportunity to revive something I’ve been mulling over for some time: more short-form blogging on MacStories in the form of brief “notes” with a dedicated Notes category on the site. Expect more of these “low-pressure,” quick posts in the future.

I’ve been sent this Mac Studio as part of my ongoing experiments with assistive AI and automation, and one of the things I plan to do over the coming weeks and months is play around with local LLMs that tap into the power of Apple Silicon and the incredible performance headroom afforded by the M3 Ultra and this computer’s specs. I have a lot to learn when it comes to local AI (my shortcuts and experiments so far have focused on cloud models and the Shortcuts app combined with the LLM CLI), but since I had to start somewhere, I downloaded LM Studio and Ollama, installed the llm-ollama plugin, and began experimenting with open-weights models (served from Hugging Face as well as the Ollama library) in both the GGUF format and Apple’s own MLX framework.
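For anyone who wants to follow along, here’s a minimal sketch of what that setup looks like in the terminal. The model tag below is illustrative, not a recommendation; check the Ollama library for current names.

```bash
# Install Simon Willison's LLM CLI and the llm-ollama plugin.
pip install llm
llm install llm-ollama

# Install Ollama (also available as a standalone Mac app), then pull an
# open-weights model from the Ollama library. The tag is illustrative.
brew install ollama
ollama pull qwen3:235b

# Confirm the LLM CLI can see the models served by Ollama.
llm models
```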

LM Studio.

I posted some of these early tests on Bluesky. I ran the massive Qwen3-235B-A22B model (a Mixture-of-Experts model with 235 billion parameters, 22 billion of which are active at once) with both GGUF and MLX using the beta version of the LM Studio app, and these were the results:

  • GGUF: 16 tokens/second, ~133 GB of RAM used
  • MLX: 24 tokens/second, ~124 GB of RAM used

As you can see from these first benchmarks (both based on the 4-bit quant of Qwen3-235B-A22B), the Apple Silicon-optimized version of the model performed better in both token generation and memory usage. Regardless of the version, the Mac Studio absolutely didn’t care, and I could barely hear the fans going.

I also wanted to play around with the new generation of vision models (VLMs) to test their modern OCR capabilities. One of the tasks that has become kind of a personal AI eval for me lately is taking a long screenshot of a shortcut from the Shortcuts app (using CleanShot’s scrolling captures) and feeding it either as a full-res PNG or PDF to an LLM. As I shared before, due to image compression, the vast majority of cloud LLMs either fail to accept the image as input or compress the image so much that graphical artifacts lead to severe hallucinations in the text analysis of the image. Only o4-mini-high – thanks to its more agentic capabilities and tool-calling – was able to produce a decent output; even then, that was only possible because o4-mini-high decided to slice the image into multiple parts and iterate through each one with discrete pytesseract calls. The task took almost seven minutes to run in ChatGPT.
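As a rough illustration of that slice-and-iterate approach, here’s what it could look like in the shell, using ImageMagick and the tesseract CLI rather than the pytesseract calls ChatGPT made. The file names and the number of slices are made up.

```bash
# Cut the tall screenshot into 8 horizontal strips using ImageMagick's
# tile-crop syntax (1 column x 8 rows), discarding virtual canvas offsets.
magick shortcut.png -crop 1x8@ +repage slice_%d.png

# OCR each strip in order and concatenate the recognized text.
for f in slice_*.png; do
  tesseract "$f" stdout >> shortcut.txt
done
```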

This morning, I installed the 72-billion-parameter version of Qwen2.5-VL, gave it a full-resolution screenshot of a 40-action shortcut, and let it run with Ollama and llm-ollama. After 3.5 minutes and around 100 GB of RAM used, I got a really good, Markdown-formatted analysis of my shortcut back from the model.
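The command pattern is short enough to sketch here. The model tag, file name, and prompt are illustrative, and this assumes llm-ollama passes image attachments (llm’s -a flag) through to vision models.

```bash
# Pull the vision model from the Ollama library (tag is illustrative),
# then attach the full-resolution screenshot to the prompt with -a.
ollama pull qwen2.5vl:72b
llm -m qwen2.5vl:72b \
  'Analyze this screenshot of a Shortcuts workflow and describe each action in Markdown.' \
  -a shortcut.png
```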

To make the experience nicer, I even built a small local-scanning utility that lets me pick an image from Shortcuts and run it through Qwen2.5-VL (72B) using the ‘Run Shell Script’ action on macOS. It worked beautifully on my first try. Amusingly, the smaller version of Qwen2.5-VL (32B) thought my photo of ergonomic mice was a “collection of seashells”. Fair enough: there’s a reason bigger models are heavier and costlier to run.
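A minimal sketch of how a ‘Run Shell Script’ action like that could look, assuming Shortcuts is set to pass input as arguments and llm was installed via Homebrew (the path and model tag are assumptions):

```bash
# 'Run Shell Script' body (shell: zsh, pass input: as arguments).
# "$1" is the image path handed over by Shortcuts.
/opt/homebrew/bin/llm -m qwen2.5vl:72b \
  'Extract the text from this image and summarize it as Markdown.' \
  -a "$1"
```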

Given my struggles with OCR and document analysis with cloud-hosted models, I’m very excited about the potential of local VLMs that bypass memory constraints thanks to the M3 Ultra and provide accurate results in just a few minutes without having to upload private images or PDFs anywhere. I’ve been writing a lot about this idea of “hybrid automation” that combines traditional Mac scripting tools, Shortcuts, and LLMs to unlock workflows that just weren’t possible before; I feel like the power of this Mac Studio is going to be an amazing accelerator for that.

Next up on my list: understanding how to run MLX models with mlx-lm, investigating long-context models with dual-chunk attention support (looking at you, Qwen 2.5), and experimenting with Gemma 3. Fun times ahead!
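If you want to experiment with mlx-lm too, a plausible first step looks like this; the model repo below is an assumption, so browse the mlx-community organization on Hugging Face for real options.

```bash
# Install mlx-lm and run a quick generation test with an MLX-converted model.
pip install mlx-lm
python -m mlx_lm.generate \
  --model mlx-community/Qwen2.5-7B-Instruct-4bit \
  --prompt 'Summarize what dual-chunk attention is for in two sentences.'
```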


Is Apple’s AI Predicament Fixable?

On Sunday, Bloomberg’s Mark Gurman published a comprehensive recap of Apple’s AI troubles. There wasn’t much new in Gurman’s story, except quotes from unnamed sources that added to the sense of conflict playing out inside the company. That said, it’s perfect if you haven’t been paying close attention since Apple Intelligence was first announced last June.

What’s troubling about Apple’s predicament isn’t that Apple’s super mom and other AI illustrations look like they were generated in 2022, a lifetime ago in the world of AI. The trouble is what the company’s struggles mean for next-generation interactions with devices and productivity apps. The promise of natural language requests made to Siri that combine personal context with App Intents is exciting, but it’s mired in multiple layers of technical issues that need to be solved, starting, as Gurman reported, with Siri.

The mess is so profound that it raises the question of whether Apple has the institutional capabilities to fix it. As M.G. Siegler wrote yesterday on Spyglass:

Apple, as an organization, simply doesn’t seem built correctly to operate in the age of AI. This technology, even more so than the web, moves insanely fast and is all about iteration. Apple likes to move slowly, measuring a million times and cutting once. Shipping polished jewels. That’s just not going to cut it with AI.

Having studied the fierce competition among AI companies for months, I agree with Siegler. This isn’t like hardware, where Apple has successfully entered a category late and dominated it. Hardware plays to Apple’s design and supply chain strengths. In contrast, the rapid iteration of AI models and apps is the antithesis of Apple’s annual OS cycle. It’s a fundamentally different approach driven by intense competition and fueled by billions of dollars of cash.

I tend to agree with Siegler that, given where things stand, Apple should replace a lot of Siri’s capabilities with a third-party chatbot and, in the longer term, make an acquisition to shake up how it approaches AI. However, I also think either of those things is unlikely to happen given Apple’s historical focus on internally developed solutions.


Hands-On with Sound Therapy on Apple Music

I’ve always been envious of people who can listen to music while they work. For whatever reason, music-listening activates a part of my brain that pulls me away from the task at hand. My mind really wants to focus on the lyrics, the style, the mix – all distractions from whatever it is I’m currently trying to do. It just doesn’t work for me.

But under the right circumstances and with the right kind of music, you can create an environment that is conducive to focus. At least, that’s the idea behind Apple’s recent collaboration with Universal Music Group. It’s called Sound Therapy, a research-based collection of songs meant to promote not only focus, but also relaxation and even healthy sleep.

The effort comes out of UMG’s Sollos venture, a group of scientists and music professionals focused on the relationship between music and wellness. Founded in 2023, the London-based incubator has used its findings to put together a library of music that, as Apple says, “harnesses the power of sound waves, psychoacoustics, and cognitive science to help listeners relax or focus the mind.”



Google Brings Its NotebookLM Research Tool to iPhone and iPad

Google’s AI research tool NotebookLM dropped on the App Store for iOS and iPadOS a day earlier than expected. If you haven’t used NotebookLM before, the premise is simple: you feed it source materials like PDFs, text files, MP3s, and more. Once your sources are uploaded, you can use Google’s AI to query them, asking questions and creating materials that draw on your sources.

Of all the AI tools I’ve tried, NotebookLM’s web app is one of the best I’ve used, which is why I was excited to try it on the iPhone and iPad. I’ve only played with it for a short time, but so far, I like it a lot.

Just like the web app, you can create, edit, and delete notebooks; add new sources using the native file picker; view existing sources; chat with your sources; create summaries and timelines; and use the Studio tab to generate a faux podcast of the materials you’ve added to the app. Notebooks can also be filtered and sorted by Recent, Shared, Title, and Downloaded. Unlike the web app, you won’t see predefined prompts for things like a study guide, a briefing document, or FAQs, but you can still generate those materials by asking for them from the Chat tab.

NotebookLM’s native iOS and iPadOS app is primarily focused on audio. The app lets you generate audio overviews from the Chat tab: ‘deep dive’ podcast-style conversations that draw from your sources. The generated audio can also be downloaded locally, allowing you to listen later whether or not you have an Internet connection. Playback controls are basic and include buttons to play and pause, skip forward and back by 10 seconds at a time, control playback speed, and share the audio with others.

Generating an audio overview of sources.

What you won’t find is any integration with features tied to App Intents. That means notebooks don’t show up in Spotlight Search, and there are no widgets, Control Center controls, or Shortcuts actions. Still, for a 1.0, NotebookLM is an excellent addition to Google’s AI tools for the iPhone and iPad.

NotebookLM is available to download from the App Store for free. Some NotebookLM features are free, while others require a subscription that can be purchased as an In-App Purchase in the App Store or from Google directly. You can learn more about the differences between the free and paid versions of NotebookLM on Google’s blog.


Inside Airbnb’s App Redesign: An AppStories Interview with Marketing and Design Leads

Last week, I was in LA for Airbnb’s 2025 Summer Release. As part of the day’s events, Federico and I interviewed Jud Coplan, Airbnb’s Vice President of Product Marketing, and Teo Connor, Airbnb’s Vice President of Design, for AppStories to talk about the new features and redesigned app the company launched. It was a great conversation that you can watch on YouTube:

or listen to the episode here:

Last week’s launch was a big one for Airbnb. The company debuted Services and reimagined and expanded Experiences. Services are the sort of things hotels and resorts offer that you used to give up when booking an Airbnb stay. Now, however, you can book a chef, personal trainer, hair stylist, manicurist, photographer, and more. Better yet, you don’t have to book a stay with an Airbnb host to take advantage of services. You can schedule services in your hometown or wherever you happen to be.

Experiences aren’t entirely new to Airbnb, but they’ve been expanded and integrated into the Airbnb app in a way that’s similar to Services. Experiences let you get the most out of a trip with the help of locals who know their cities best, whether that’s a cultural tour, dining experience, outdoor adventure, or something else.

Chef Grace explaining how to serve sadza.

While I was in LA, I prepared a meal alongside several other media folks from around the world. Our instructor was Chef Kuda Grace from Zimbabwe at Flavors from Afar. We made sadza with peanut butter and mustard greens and then sat down together to compare notes from the day’s events, tell stories about our dining experiences, and get to know each other better.

The evening was a lot of fun, but what struck me most about it was something we touched upon in this week’s episode of AppStories. The goal of Airbnb’s redesigned app is to get you to leave it and go out into the world to try new things. It reduces the friction and anxiety of taking the plunge into something new and emphasizes social interactions in the real world instead of on a screen. In 2025, that’s unusual for an app from a big company, and it was fascinating to talk to Teo and Jud about how they and their teams set out to accomplish that goal.

I like Airbnb’s redesigned app a lot. It’s playful, welcoming, and easy to use. What remains to be seen is whether Airbnb can pull off what it’s set out to accomplish. It isn’t the first company to try to pair customers with local services and experiences. Nor is it Airbnb’s first attempt at experiences. However, the app is a solid foundation, and if my experience at dinner in LA was any indication, I suspect Airbnb may be onto something with Services and Experiences.

Disclosure: The trip to LA to conduct my half of this interview was paid for by Airbnb.
