
LLMs As Conduits for Data Portability Between Apps

One of the unsung benefits of modern LLMs – especially those with MCP support or proprietary app integrations – is their inherent ability to facilitate data transfer between apps and services that use different data formats.

This is something I’ve been pondering for the past few months, and the latest episode of Cortex – where Myke wished it were possible to move between task managers the way you can with email clients – was the push I needed to write something up. I’ve personally tried multiple variations of this concept with different LLMs, and the end result was always the same: I didn’t have to write a single line of code to create import/export functionality that two services I wanted to use didn’t support out of the box.
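
To make the idea concrete, here’s roughly what that kind of conversion looks like if you do decide to script it with the llm Python library; in practice, you can get the same result by simply pasting an export into a chat. Everything in this sketch – the model name, file names, and target schema – is a placeholder for illustration, not a format either service actually uses.

```python
import llm

# Hypothetical sketch: convert a CSV export from one task manager into the
# JSON format another app expects. Model name and file paths are placeholders;
# any capable model configured for the llm library (API key set up) will do.
model = llm.get_model("gpt-4o-mini")

with open("todoist-export.csv", "r", encoding="utf-8") as f:
    source_data = f.read()

prompt = (
    "Convert the following task manager CSV export into a JSON array of "
    "objects with 'title', 'notes', 'due_date', and 'project' keys. "
    "Return only valid JSON.\n\n" + source_data
)

response = model.prompt(prompt)

with open("converted-tasks.json", "w", encoding="utf-8") as f:
    f.write(response.text())
```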



Testing Claude’s Native Integration with Reminders and Calendar on iOS and iPadOS

Reminders created by Claude for iOS after a series of web searches.

A few months ago, when Perplexity unveiled their voice assistant integrated with native iOS frameworks, I wrote that I was surprised no other major AI lab had shipped a similar feature in its iOS apps:

The most important point about this feature is the fact that, in hindsight, this is so obvious and I’m surprised that OpenAI still hasn’t shipped the same feature for their incredibly popular ChatGPT voice mode. Perplexity’s iOS voice assistant isn’t using any “secret” tricks or hidden APIs: they’re simply integrating with existing frameworks and APIs that any third-party iOS developer can already work with. They’re leveraging EventKit for reminder/calendar event retrieval and creation; they’re using MapKit to load inline snippets of Apple Maps locations; they’re using Mail’s native compose sheet and Safari View Controller to let users send pre-filled emails or browse webpages manually; they’re integrating with MusicKit to play songs from Apple Music, provided that you have the Music app installed and an active subscription. Theoretically, there is nothing stopping Perplexity from rolling additional frameworks such as ShazamKit, Image Playground, WeatherKit, the clipboard, or even photo library access into their voice assistant. Perplexity hasn’t found a “loophole” to replicate Siri functionalities; they were just the first major AI company to do so.

It’s been a few months since Perplexity rolled out their iOS assistant, and, so far, the company has chosen to keep the iOS integrations exclusive to voice mode; you can’t have text conversations with Perplexity on iPhone and iPad and ask it to look at your reminders or calendar events.

Anthropic, however, has done it and has become – to the best of my knowledge – the second major AI lab to plug directly into Apple’s native iOS and iPadOS frameworks, with an important twist: in the latest version of Claude, you can have text conversations and tell the model to look into your Reminders database or Calendar app without having to use voice mode.




I Have Many Questions About Apple’s Updated Foundation Models and the (Great) ‘Use Model’ Action in Shortcuts

Apple’s ‘Use Model’ action in Shortcuts.

I mentioned this on AppStories during the week of WWDC: I think Apple’s new ‘Use Model’ action in Shortcuts for iOS/iPadOS/macOS 26, which lets you prompt either the local or cloud-based Apple Foundation models, is Apple Intelligence’s best and most exciting new feature for power users this year. This blog post is a way for me to better explain why as well as publicly investigate some aspects of the updated Foundation models that I don’t fully understand yet.



Testing DeepSeek R1-0528 on the M3 Ultra Mac Studio and Installing Local GGUF Models with Ollama on macOS

DeepSeek released an updated version of their popular R1 reasoning model (version 0528) with – according to the company – increased benchmark performance, reduced hallucinations, and native support for function calling and JSON output. Early tests from Artificial Analysis report a nice bump in performance, putting it behind OpenAI’s o3 and o4-mini-high in their Intelligence Index benchmarks. The model is available in the official DeepSeek API, and open weights have been distributed on Hugging Face. I downloaded different quantized versions of the full model on my M3 Ultra Mac Studio, and here are some notes on how it went.
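
For reference, this is roughly what pulling and querying one of those quantized builds looks like from Python with the ollama package. It’s a minimal sketch: the model tag below is an assumption – check the Ollama library for the exact quantization you want – and the prompt is arbitrary.

```python
import ollama

# Hypothetical sketch: download a quantized DeepSeek R1 build from the
# Ollama library and send it a prompt. The tag is a placeholder; run
# `ollama list` or browse the Ollama library for the quant you actually want.
MODEL = "deepseek-r1:671b"

ollama.pull(MODEL)  # downloads the weights if they aren't already cached

response = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "Briefly explain what a Mixture-of-Experts model is."}],
)

print(response["message"]["content"])
```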



Shareshot 1.3: Greater Image Flexibility, New Backgrounds, and Extended Shortcuts Support

If you have a screenshot you need to frame, Shareshot is one of your best bets. That’s because it makes it so hard to create an image that looks bad. The app, which is available for the iPhone, iPad, Mac, and Vision Pro, has a lot of options for tweaking the appearance of your framed screenshot, so your final image won’t have a cookie-cutter look. However, there are also just enough constraints to prevent you from creating something truly awful.

You can check out my original review and coverage on Club MacStories for the details on version 1.0 and subsequent releases, but today’s focus is on version 1.3, which covers three areas:

  • Increased image size flexibility
  • New backgrounds
  • Updated and extended Shortcuts actions

Adjusting sizes.

With version 1.3, Shareshot now lets you pick any output size you’d like. The app then frames your screenshot and fits it in the image size you specify. If you’re doing design work, getting the exact-size image you want out of the app is a big win because it means you won’t need to make adjustments later that could impair its fidelity.

A related change is the ability to specify a fixed width for the image that Shareshot outputs. That means you can pick the aspect ratio you want, such as square or 16:9, then specify a fixed width, and Shareshot will take care of automatically adjusting the height of the image to preserve the aspect ratio you chose. This feature is perfect if you publish to the web and the tools you use are optimized for a certain image width. Using anything wider just means you’re hosting a file that’s bigger than necessary, potentially slowing down your website and resulting in unnecessary bandwidth costs.

Shareshot is stripey now.

Shareshot has two new categories of backgrounds, too: Solidarity and Stripes. Solidarity has two options styled after the Ukrainian and Palestinian flags, and Stripes includes designs based on LGBTQ+ colors and other color combinations in a variety of styles. Both new categories allow you to adjust several parameters, including the angle, color, saturation, brightness, and blur of the stripes.

Examples of angles.

Finally, Shareshot has revamped its Shortcuts actions to take advantage of App Intents, giving users control over more parameters of images generated using Shortcuts and preparing the app for Apple’s promised Smart Siri in the future. The changes add:

  • Support for outputting custom-sized images,
  • A scale option for fixed-width and custom-sized images, and
  • New parameters for angling and blurring backgrounds.

The progress Shareshot has made since version 1.0 is impressive. The app has grown substantially to offer a much wider set of backgrounds, options, and flexibility without compromising its excellent design, which garnered it a MacStories Selects Award last year. I’m still eager to see multiple screenshot support added, a feature I know is on the roadmap, but that’s more a wish than a complaint; Shareshot is a fantastic app that just keeps getting better.

Shareshot 1.3 is free to download on the App Store. Some of its features require a $1.99/month or $14.99/year subscription.


Notes on Mercury Weather’s New Radar Maps Feature

Since covering Mercury Weather 2.0 and its launch on the Vision Pro here on MacStories, I’ve been keeping up with the weather app on Club MacStories. It’s one of my favorite Mac menu bar apps, it has held a spot on my default Apple Watch face since its launch, and last fall, it added severe weather notifications.

I love the app’s design and focus as much today as I did when I wrote about its debut in 2023. Today, though, Mercury Weather is a more well-rounded app than ever before. Through regular updates, the app has filled in a lot of the holes in its feature set that may have turned off some users two years ago.

Today, Mercury Weather adds weather radar maps, which, along with the severe weather notifications introduced late last year, were among the features I missed most from other weather apps. It’s a welcome addition that means the next time a storm is bearing down on my neighborhood, I won’t have to switch to a different app to see what’s coming my way.

Zooming out to navigate the globe.

Radar maps are available on the iPhone, iPad, and Mac versions of Mercury Weather; they offer a couple of different map styles and a legend that explains what each color on the map means. If you zoom out, you can get a global view of Earth with your favorite locations noted on the map. Tap one, and you’ll get the current conditions for that spot. Mercury Weather already had an extensive set of widgets for the iPhone, iPad, and Mac, but this update adds small, medium, and large widgets for the radar map, too.

A Mercury Weather radar map on the Mac.

With a long list of updates since launch, Mercury Weather is worth another look if you passed on it before because it was missing features you wanted. The app is available on the App Store as a free download. Certain features require a subscription or lifetime purchase via an in-app purchase.


Notes on Early Mac Studio AI Benchmarks with Qwen3-235B-A22B and Qwen2.5-VL-72B

I received a top-of-the-line Mac Studio (M3 Ultra, 512 GB of RAM, 8 TB of storage) on loan from Apple last week, and I thought I’d use this opportunity to revive something I’ve been mulling over for some time: more short-form blogging on MacStories in the form of brief “notes” with a dedicated Notes category on the site. Expect more of these “low-pressure”, quick posts in the future.

I’ve been sent this Mac Studio as part of my ongoing experiments with assistive AI and automation, and one of the things I plan to do over the coming weeks and months is playing around with local LLMs that tap into the power of Apple Silicon and the incredible performance headroom afforded by the M3 Ultra and this computer’s specs. I have a lot to learn when it comes to local AI (my shortcuts and experiments so far have focused on cloud models and the Shortcuts app combined with the LLM CLI), but since I had to start somewhere, I downloaded LM Studio and Ollama, installed the llm-ollama plugin, and began experimenting with open-weights models (served from Hugging Face as well as the Ollama library) both in the GGUF format and Apple’s own MLX framework.
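
As a rough sketch of how those pieces fit together: once a model has been pulled with Ollama and the llm-ollama plugin is installed (`llm install llm-ollama`), the llm Python API and CLI can address it by its Ollama tag. The tag below is an assumption for illustration; use whatever `ollama list` reports on your machine.

```python
import llm

# Minimal sketch, assuming `ollama pull <model>` has already been run and
# the llm-ollama plugin is installed. The model tag is a placeholder for
# whatever Ollama reports locally.
model = llm.get_model("qwen3:235b-a22b")

response = model.prompt("In two sentences, what's the difference between GGUF and MLX?")
print(response.text())
```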

LM Studio.

I posted some of these early tests on Bluesky. I ran the massive Qwen3-235B-A22B model (a Mixture-of-Experts model with 235 billion total parameters, 22 billion of which are active at a time) with both GGUF and MLX using the beta version of the LM Studio app, and these were the results:

  • GGUF: 16 tokens/second, ~133 GB of RAM used
  • MLX: 24 tokens/second, ~124 GB of RAM used

As you can see from these first benchmarks (both based on the 4-bit quant of Qwen3-235B-A22B), the Apple Silicon-optimized version of the model resulted in better performance both for token generation and memory usage. Regardless of the version, the Mac Studio absolutely didn’t care and I could barely hear the fans going.
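
If you want to reproduce this kind of measurement yourself, LM Studio exposes an OpenAI-compatible server on localhost, so a few lines of Python are enough to get an approximate tokens-per-second figure. This is a sketch under assumptions: the local server is running on its default port, the model identifier matches whatever LM Studio shows for the loaded model, and the timing includes prompt processing, so the number will read slightly lower than LM Studio’s own stats.

```python
import time
from openai import OpenAI

# Rough tokens/second measurement against LM Studio's local,
# OpenAI-compatible server (default port 1234; the API key is ignored).
# The model name is a placeholder for whatever model you've loaded.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

start = time.perf_counter()
completion = client.chat.completions.create(
    model="qwen3-235b-a22b",
    messages=[{"role": "user", "content": "Write a 300-word overview of Mixture-of-Experts models."}],
)
elapsed = time.perf_counter() - start

tokens = completion.usage.completion_tokens
print(f"{tokens} tokens in {elapsed:.1f}s ≈ {tokens / elapsed:.1f} tok/s")
```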

I also wanted to play around with the new generation of vision language models (VLMs) to test their OCR capabilities. One of the tasks that has become kind of a personal AI eval for me lately is taking a long screenshot of a shortcut from the Shortcuts app (using CleanShot’s scrolling captures) and feeding it, either as a full-res PNG or PDF, to an LLM. As I shared before, due to image compression, the vast majority of cloud LLMs either fail to accept the image as input or compress it so much that graphical artifacts lead to severe hallucinations in the text analysis of the image. Only o4-mini-high – thanks to its more agentic capabilities and tool-calling – was able to produce a decent output; even then, that was only possible because o4-mini-high decided to slice the image into multiple parts and iterate through each one with discrete pytesseract calls. The task took almost seven minutes to run in ChatGPT.

This morning, I installed the 72-billion-parameter version of Qwen2.5-VL, gave it a full-resolution screenshot of a 40-action shortcut, and let it run with Ollama and llm-ollama. After 3.5 minutes and around 100 GB of RAM usage, I got a really good, Markdown-formatted analysis of my shortcut back from the model.
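
The same test can be scripted end to end with the llm Python library, which supports image attachments in recent versions. A minimal sketch, with the model tag and file path as placeholders:

```python
import llm

# Sketch of the OCR eval: hand a full-resolution screenshot to a local
# vision model served by Ollama via the llm-ollama plugin. The model tag
# and image path are placeholders.
model = llm.get_model("qwen2.5vl:72b")

response = model.prompt(
    "Transcribe every action in this Shortcuts screenshot and summarize "
    "what the shortcut does, formatted as Markdown.",
    attachments=[llm.Attachment(path="shortcut-full-res.png")],
)

print(response.text())
```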

To make the experience nicer, I even built a small local-scanning utility that lets me pick an image from Shortcuts and runs it through Qwen2.5-VL (72B) using the ‘Run Shell Script’ action on macOS. It worked beautifully on my first try. Amusingly, the smaller version of Qwen2.5-VL (32B) thought my photo of ergonomic mice was a “collection of seashells”. Fair enough: there’s a reason bigger models are heavier and costlier to run.
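
My utility isn’t published anywhere, but the general shape of a script that the ‘Run Shell Script’ action could call – passing the image path as an argument and capturing whatever the script prints as the action’s output – might look something like this. Everything here (script name, model tag, prompt) is a hypothetical reconstruction, not the actual shortcut.

```python
#!/usr/bin/env python3
# Hypothetical wrapper for Shortcuts' 'Run Shell Script' action: Shortcuts
# passes the image path as an argument, and whatever this script prints
# becomes the action's output. The model tag is a placeholder.
import sys

import ollama


def main() -> None:
    if len(sys.argv) < 2:
        sys.exit("usage: analyze_image.py /path/to/image.png")

    response = ollama.chat(
        model="qwen2.5vl:72b",
        messages=[{
            "role": "user",
            "content": "Describe this image in detail, formatted as Markdown.",
            "images": [sys.argv[1]],
        }],
    )
    print(response["message"]["content"])


if __name__ == "__main__":
    main()
```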

Given my struggles with OCR and document analysis with cloud-hosted models, I’m very excited about the potential of local VLMs that bypass memory constraints thanks to the M3 Ultra and provide accurate results in just a few minutes without having to upload private images or PDFs anywhere. I’ve been writing a lot about this idea of “hybrid automation” that combines traditional Mac scripting tools, Shortcuts, and LLMs to unlock workflows that just weren’t possible before; I feel like the power of this Mac Studio is going to be an amazing accelerator for that.

Next up on my list: understanding how to run MLX models with mlx-lm, investigating long-context models with dual-chunk attention support (looking at you, Qwen 2.5), and experimenting with Gemma 3. Fun times ahead!
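
For the first item on that list, the basic mlx-lm flow is already pretty approachable. A sketch, assuming the mlx-lm package is installed and the repo name points at an MLX-converted build on Hugging Face (the one below is a placeholder):

```python
from mlx_lm import load, generate

# Minimal mlx-lm sketch: download/load an MLX-converted model from
# Hugging Face and generate a completion on-device. The repo name is a
# placeholder; pick any MLX build that fits in memory.
model, tokenizer = load("mlx-community/Qwen3-235B-A22B-4bit")

output = generate(
    model,
    tokenizer,
    prompt="Explain dual-chunk attention in one paragraph.",
    max_tokens=256,
    verbose=True,  # prints generation stats, including tokens/second
)
print(output)
```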


Faking ‘Clamshell Mode’ with External Displays in iPadOS 17

A simple setting can be used as a workaround for clamshell mode in iPadOS 17.

Fernando Silva of 9to5Mac came up with a clever workaround to have ‘clamshell mode’ in iPadOS 17 when an iPad is connected to an external display. The catch: it doesn’t really turn off the iPad’s built-in display.

Now before readers start spamming the comments, this is not true clamshell mode. True clamshell mode kills the screen of the host computer and moves everything from that display to the external monitor. This will not do that. But this workaround will allow you to close your iPad Pro, connect a Bluetooth keyboard and mouse, and still be able to use Stage Manager on an external display.

Essentially, the method involves disabling the ‘Lock / Unlock’ toggle in Settings ⇾ Display & Brightness that controls whether the iPad’s screen should lock when a cover is closed on top of it. This has been the iPad’s default behavior since the iPad 2 and the debut of the Smart Cover, and it still applies to the latest iPad Pro and Magic Keyboard: when the cover is closed, the iPad gets automatically locked. However, this setting can be disabled, and if you do, then sure: you could close an iPad Pro and continue using iPadOS on the external display without seeing the iPad’s built-in display. Except the iPad’s display is always on behind the scenes, which is not ideal.1

Still: if we’re supposed to accept this workaround as the only way to fake ‘clamshell mode’ in iPadOS 17, I would suggest some additions to improve the experience.
