Posts tagged with "AI"


Six Colors’ Apple in 2025 Report Card

Average scores from the 2025 Six Colors report card.

For the past 10 years, Six Colors’ Jason Snell has put together an “Apple report card” – a survey to assess the current state of Apple “as seen through the eyes of writers, editors, developers, podcasters, and other people who spend an awful lot of time thinking about Apple”.

The 2025 edition of the Six Colors Apple Report Card has been published, and you can find a summary of all the submitted comments along with charts featuring average scores for the different categories here.

I’m so grateful that Jason invited me, once again, to participate in the survey and share my thoughts on Apple’s 2025. As you’ll see from my comments – and as you know if you’ve been listening to AppStories or Connected lately – I’ve been focusing on AI agents, hybrid automation, and splitting my work between iPadOS and macOS for the past few months. The LLM takeoff in the productivity space is accelerating on a weekly basis, and modern AI tools are fundamentally changing the way I get work done. Case in point: this article was written before OpenClaw went viral, and the past month alone has seen so many of my habits and automations get upended by this incredible open-source tool. As I noted in my comments, however, one thing is not changing: iPadOS essentially gets no access to any of these modern AI tools, which are increasingly launching as Mac-only apps or features.

I’ve prepared the full text of my responses for the Six Colors report card, which you can find below.



OpenAI Launches Codex, a Mac App for Agentic Coding

Today, OpenAI released Codex, a Mac app for building software. Here’s how OpenAI describes the app in its announcement:

The Codex app changes how software gets built and who can build it—from pairing with a single coding agent on targeted edits to supervising coordinated teams of agents across the full lifecycle of designing, building, shipping, and maintaining software.

On first launch, Codex requests permission to access the file system. I granted it access to a subfolder where I stored all my projects, along with the folder that houses an app I’ve been building in my spare time. Those folders and projects live in the left sidebar, where each can be expanded to reveal chat sessions for that project.

Access to your other development tools.

In the toolbar is an Open button for accessing other development tools installed on your Mac, a Commit button for managing version control, a button that reveals a terminal view that expands up from the bottom of the window, and a diff panel for reviewing code changes. In settings, you’ll find additional customization options, along with tools to hook up MCP servers and integrate skills.

Some of Codex’s customization options.

Codex is not your traditional IDE. Agents are front and center, which makes the app far more natural to use if you’re new to agentic coding, but the underlying model is similar. While I’ve been writing this article, Codex has been grinding away in the background performing a code review of my app. After spending time reviewing all the files, Codex asked permission to run commands to do things it can’t accomplish inside its sandboxed environment.

Automations.

The capabilities of Codex are enhanced by skills. OpenAI is kicking off the launch of Codex with a bunch of skills that you can access via its open-source GitHub repo. The app includes a selection of pre-built Automations for repetitive tasks, too.

All in all, Codex looks excellent, but it will take me some time to get a sense of its full capabilities. If you’re interested in trying Codex, you can download it from OpenAI here. For a limited time, the company is making the tool available to Free and Go subscribers, for whom rate limits have been temporarily doubled, as well as Plus, Pro, Business, Enterprise, and Edu users.


OpenClaw Showed Me What the Future of Personal AI Assistants Looks Like

Using OpenClaw via Telegram.

Update, February 6: I’ve published an in-depth guide with advanced tips for secure credentials, memory management, automations, and proactive work with OpenClaw for our Club members here.

For the past week or so, I’ve been working with a digital assistant that knows my name, my morning routine preferences, and how I like to use Notion and Todoist, but that also knows how to control Spotify, my Sonos speaker, my Philips Hue lights, and my Gmail. It runs on Anthropic’s Claude Opus 4.5 model, but I chat with it using Telegram. I called the assistant Navi (inspired by the fairy companion of Ocarina of Time, not the besieged alien race in James Cameron’s sci-fi film saga), and Navi can even receive audio messages from me and respond with audio messages of its own, generated with the latest ElevenLabs text-to-speech model. Oh, and did I mention that Navi can improve itself with new features and that it’s running on my own M4 Mac mini server?

If this intro just gave you whiplash, imagine my reaction when I first started playing around with OpenClaw, the incredible open-source project by Peter Steinberger (a name that should be familiar to longtime MacStories readers) that’s become very popular in certain AI communities over the past few weeks. I kept seeing OpenClaw being mentioned by people I follow; eventually, I gave in to peer pressure, followed the instructions provided by the funny crustacean mascot on the app’s website, installed OpenClaw on my new M4 Mac mini (which is not my main production machine), and connected it to Telegram.

To say that OpenClaw has fundamentally altered my perspective on what it means to have an intelligent, personal AI assistant in 2026 would be an understatement. I’ve been playing around with OpenClaw so much that I’ve burned through 180 million tokens on the Anthropic API (yikes), and I’ve had fewer and fewer conversations with the “regular” Claude and ChatGPT apps in the process. Don’t get me wrong: OpenClaw is a nerdy project, a tinkerer’s laboratory that is not poised to overtake the popularity of consumer LLMs any time soon. Still, OpenClaw points at a fascinating future for digital assistants, and it’s exactly the kind of bleeding-edge project that MacStories readers will appreciate.
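Under the hood, the shape of the system is easy to picture: a process on the Mac mini listens for incoming Telegram messages and relays them to Claude. Here’s a minimal sketch of that relay pattern in Python, using Anthropic’s official SDK and the python-telegram-bot library. To be clear, this is not OpenClaw’s actual code; the bot token, model ID, and single-turn structure are all placeholder simplifications.

```python
# Minimal sketch of a Telegram-to-Claude relay (not OpenClaw's actual code).
# Assumes ANTHROPIC_API_KEY is set in the environment and BOT_TOKEN is a
# Telegram bot token obtained from @BotFather.
import anthropic
from telegram import Update
from telegram.ext import Application, ContextTypes, MessageHandler, filters

claude = anthropic.AsyncAnthropic()

async def on_message(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # Forward the incoming Telegram message to Claude, then reply in the chat.
    response = await claude.messages.create(
        model="claude-opus-4-5",  # placeholder ID for Claude Opus 4.5
        max_tokens=1024,
        messages=[{"role": "user", "content": update.message.text}],
    )
    await update.message.reply_text(response.content[0].text)

app = Application.builder().token("BOT_TOKEN").build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, on_message))
app.run_polling()  # block and poll Telegram for new messages
```

Everything that makes OpenClaw interesting – memory, tool access, self-improvement – layers on top of a loop like this one.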



LLMs Have Made Simple Software Trivial

I enjoyed this thought-provoking piece by (award-winning developer) Matt Birchler, writing at Birchtree about how he’s been making so-called “micro apps” with AI coding agents:

I was out for a run today and I had an idea for an app. I busted out my own app, Quick Notes, and dictated what I wanted this app to do in detail. When I got home, I created a new project in Xcode, I committed it to GitHub, and then I gave Claude Code on the web those dictated notes and asked it to build that app.

About two minutes later, it was done…and it had a build error.

And:

As a simple example, it’s possible the app that I thought of could already be achieved in some piece of software someone’s released on the App Store. Truth be told, I didn’t even look, I just knew exactly what I wanted, and I made it happen. This is a quite niche thing to do in 2026, but what if Apple builds something that replicates this workflow and ships it on the iPhone in a couple of years? What if instead of going to the App Store, they tell you to just ask Siri to make you the app that you need?

John and I are going to discuss this on the next episode of AppStories, which covers the second part of the experiments we did over our holiday break. As I’ll mention in the episode, I ended up building 12 web apps for things I have to do every day, such as appending text to Notion just how I like it or controlling my TV and Hue sync box. I didn’t even think to search the App Store to see if new utilities existed: I “built” (or, rather, steered the building of) my own progressive web apps, and I’m using them every day. As Matt argues, this is a very niche thing to do right now, one that requires a terminal, lots of scaffolding around each project, and deeper technical knowledge than the average person who would just prompt “make me a beautiful todo app” is likely to have. But the direction seems clear, and the timeline is accelerating.
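To give you a sense of how small these micro apps really are, the core of my “append text to Notion” tool boils down to a single call to Notion’s public API. Here’s a hedged sketch of that call in Python – not my actual app, which wraps this in a progressive web app UI – with a placeholder integration token and page ID:

```python
# Sketch of the core of an "append text to Notion" micro app.
# NOTION_TOKEN and PAGE_ID are placeholders for a real integration token
# and the ID of the page you want to append to.
import requests

NOTION_TOKEN = "secret_placeholder"
PAGE_ID = "your-page-id"

def append_paragraph(text: str) -> None:
    # Notion's "append block children" endpoint adds blocks to the end of a page.
    response = requests.patch(
        f"https://api.notion.com/v1/blocks/{PAGE_ID}/children",
        headers={
            "Authorization": f"Bearer {NOTION_TOKEN}",
            "Notion-Version": "2022-06-28",
            "Content-Type": "application/json",
        },
        json={
            "children": [{
                "object": "block",
                "type": "paragraph",
                "paragraph": {
                    "rich_text": [{"type": "text", "text": {"content": text}}]
                },
            }]
        },
    )
    response.raise_for_status()

append_paragraph("Logged from my micro app.")
```

The point isn’t the code itself; it’s that an agent can produce and wire up a dozen of these in an afternoon.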

I also can’t help but remember this old rumor from 2023 about Apple exploring the idea of letting users rely on Siri to create apps on the fly for the then-unreleased Vision Pro. If only the guy in charge of the Vision Pro had gone anywhere and Apple had gotten its hands on a pretty good model for vibe-coding, right?


Apple Confirms AI Partnership with Google

Apple has confirmed to CNBC that it has entered into a multi-year partnership with Google to use the search giant’s models and cloud technology for its own AI efforts. According to an unnamed Apple spokesperson:

After careful evaluation, we determined that Google’s technology provides the most capable foundation for Apple Foundation Models and we’re excited about the innovative new experiences it will unlock for our users.

The report still leaves many questions unanswered, including how Gemini fits in with Apple’s own Foundation Models and whether and to what extent Apple will rely on Google hardware. However, after months of speculation and reports from Mark Gurman at Bloomberg that Apple and Google were negotiating, it looks like we’re on the cusp of Apple’s AI strategy coming into better focus.


UPDATE:

Following Apple’s statement to CNBC, the two companies released a slightly more detailed joint statement, which Google published on X:

Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google’s Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year.

After careful evaluation, Apple determined that Google’s AI technology provides the most capable foundation for Apple Foundation Models and is excited about the innovative new experiences it will unlock for Apple users. Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple’s industry-leading privacy standards.

So, while the Apple Foundation Models that power Apple Intelligence will be based on Gemini and unspecified cloud technology, Apple Intelligence features themselves, including a more personalized Siri, will continue to run locally on Apple devices and on Apple’s Private Cloud Compute to maintain user privacy.


How I Revived My Decade-Old App with Claude Code

Blink from 2017 (left) and 2026 (right).

Every holiday season, Federico and I spend our downtime on nerd projects. This year, both of us spent a lot of that time building tools for ourselves with Claude Code in what developed into a bit of a competition as we each tried to one-up the other’s creations. We’ll have more on what we’ve been up to on AppStories, MacStories, and for Club members soon, but today, I wanted to share an experiment I ran last night that I think captures a very personal and potentially far-reaching slice of what tools like Claude Code can enable.

Blink from 2017 running on a modern iPhone.

Before I wrote at MacStories, I made a few apps, including Blink, which generated affiliate links for Apple’s media services. The app had a good run from 2015 to 2017, but I pulled it from the App Store when Apple ended its affiliate program for apps, because affiliate linking was the part of Blink that got the most use. Since then, the project has sat untouched in a private GitHub repo.
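For context, the heart of Blink was never complicated: Apple’s old affiliate program worked by tacking a token onto store URLs as a query parameter. A back-of-the-napkin version of that transformation – in Python rather than the app’s original code, and with a placeholder token – looks something like this:

```python
# Rough sketch of what an affiliate-link generator like Blink did: append an
# affiliate token to an Apple media store URL. AFFILIATE_TOKEN is a placeholder.
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

AFFILIATE_TOKEN = "1000abcd"  # Apple assigned a token like this per affiliate

def affiliate_link(url: str, campaign: str = "blog") -> str:
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["at"] = AFFILIATE_TOKEN  # "at" was Apple's affiliate token parameter
    query["ct"] = campaign         # optional campaign label for reporting
    return urlunparse(parts._replace(query=urlencode(query)))

print(affiliate_link("https://itunes.apple.com/us/app/example/id123456789"))
```

The interesting question wasn’t this logic; it was whether Claude Code could drag years-old UI code into the Swift and SwiftUI era.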

Last night, I was sitting on the couch working on a Safari web extension when I opened GitHub and saw that old Blink code, which sparked a thought. I wondered whether Claude Code could update Blink to use Swift and SwiftUI with minimal effort on my part. I don’t have any intention of re-releasing Blink, but I couldn’t shake the “what if” rattling in my head, so I cloned the repo and put Claude to work.



OpenAI Opens Up ChatGPT App Submissions to Developers

As announced earlier this year at OpenAI’s DevDay, developers may now submit ChatGPT apps for review and publication. OpenAI’s blog post explains:

Apps extend ChatGPT conversations by bringing in new context and letting users take actions like order groceries, turn an outline into a slide deck, or search for an apartment.

Under the hood, OpenAI is using MCP, the Model Context Protocol, which was pioneered by Anthropic late last year and donated to the Agentic AI Foundation last week.
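If you’ve never looked at MCP, the developer-facing surface is small: an app is essentially a server that advertises tools a model can call. As a rough illustration – not OpenAI’s actual submission format – here’s a minimal MCP server built with the official Python SDK; the server name and tool are made up:

```python
# A minimal MCP server using the official Python SDK (the "mcp" package).
# The server name and tool are illustrative, not a real ChatGPT app.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("grocery-demo")

@mcp.tool()
def search_products(query: str) -> list[str]:
    """Search the store catalog for products matching a query."""
    # A real app would call the retailer's backend here.
    return [f"Placeholder result for {query!r}"]

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

When a user invokes an app, the model brokers calls to tools like this one over the protocol.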

Apps are currently available in the web version of ChatGPT from the sidebar or tools menu and, once connected, can be accessed by @mentioning them. Early participants include Adobe (which preannounced its apps last week), Apple Music, Spotify, Zillow, OpenTable, Figma, Canva, Expedia, Target, AllTrails, Instacart, and others.

I was hoping the Apple Music app would allow me to query my music library directly, but that’s not possible. Instead, it allows ChatGPT to do things like search Apple Music’s full catalog and generate playlists, which is useful but limited.

ChatGPT’s Apple Music app lets you create playlists.

Currently, there’s no way for developers to complete transactions inside ChatGPT. Instead, sales can be kicked to another app or the web, although OpenAI says it is exploring ways to offer transactions inside ChatGPT. Developers who want to submit an app must follow OpenAI’s app submission guidelines (sound familiar?) and can learn more from a variety of resources that OpenAI has made available.

A playlist generated by ChatGPT from a 40-year-old setlist.

I haven’t spent a lot of time with the apps that are available, but despite the lack of access to your library, the Apple Music integration can be useful when combined with ChatGPT’s world knowledge. I asked it to create a playlist of the songs that The Replacements played at a show I saw in 1985, and while I don’t recall the exact setlist, ChatGPT matched what’s on Setlist.fm, a user-maintained wiki of live shows. I could have made this playlist myself, but it was convenient to have ChatGPT do it instead, even if the Apple Music integration is limited to 25-song playlists, which meant that The Replacements’ setlist was split into two playlists.

We’re still in the early days of MCP, and participation by companies will depend on whether they can make incremental sales to users via ChatGPT. Still, there’s clearly potential for apps embedded in chatbots to take off.


Adobe Announces Image and PDF Integration with ChatGPT

Source: Adobe.

Adobe announced today that it has teamed up with OpenAI to give ChatGPT users access to Photoshop, Express, and Acrobat from inside the chatbot. The new integration is available starting today at no additional cost to ChatGPT users.

Source: Adobe.

In a press release distributed via Business Wire, Adobe explains that its three apps let ChatGPT users:

  • Easily edit and uplevel images with Adobe Photoshop: Adjust a specific part of an image, fine tune image settings like brightness, contrast and exposure, and apply creative effects like Glitch and Glow – all while preserving the quality of the image.
  • Create and personalize designs with Adobe Express: Browse Adobe Express’ extensive library of professional designs to find the best one for any moment, fill in the text, replace images, animate designs and iterate on edits – all directly inside the chat and without needing to switch to another app – to create standout content for any occasion.
  • Transform and organize documents with Adobe Acrobat: Edit PDFs directly in the chat, extract text or tables, organize and merge multiple files, compress files and convert them to PDF while keeping formatting and quality intact. Acrobat for ChatGPT also enables people to easily redact sensitive details.
Source: Adobe.

This strikes me as a savvy move by Adobe. Allowing users to request image edits, PDF edits, and document designs with natural language prompts makes its tools more approachable. That could attract new users who later move to an Adobe subscription for more control over their creations and access to Adobe’s other offerings.

From OpenAI’s standpoint, this is clearly a response to the consumer-facing Gemini features that Google has begun releasing, which include new image and video generation tools and reportedly caused Sam Altman to declare a “code red” inside the company. I understand the OpenAI freakout: Google has a huge user base and has been building consumer products far longer than OpenAI. Still, I can’t say I’ve been very impressed with Gemini 3. Perhaps that’s simply because I don’t care for generative images and video, but these latest moves by Google and OpenAI make it clear that both companies see image and video generation as foundational to consumer-facing AI tools.