Posts tagged with "developer tools"

Remodex Is the Best Codex Remote Client for iOS (Until OpenAI Releases an Official Codex Mobile App)

Remodex for iOS.

Various OpenAI employees and members of the Codex team have been hinting at a native Codex app for iOS lately. While I very much hope that’s in the cards – especially if the project involves connecting to a remote Mac running the full Codex app – I wanted to highlight an indie utility I’ve been using a lot lately to access my Codex setup on my Mac Studio server from my iPhone.

The app is called Remodex, and it was created by Italian indie developer Emanuele Di Pietro. Remodex, as the name suggests, acts as a remote for the Codex CLI installed on a macOS computer, and it lets you operate your existing projects and chats with a UI that is reminiscent of the official Codex app for Mac. Even better, Remodex is not based on some hacky workaround: it’s entirely powered by OpenAI’s official (and open-source) Codex App Server.

Read more


Hands-On with Anthropic Labs’ Claude Design Preview

Last week, Anthropic introduced Claude Design, a new research preview product from the equally new Anthropic Labs. Claude Design, which is currently available to Pro, Max, Team, and Enterprise subscribers through the Claude web app, can prototype apps and websites, design presentation materials, generate marketing materials, and more. As someone who has felt as though Claude’s design skills noticeably lagged behind its coding, I was eager to give it a try. So, over the weekend, I tasked Claude Design with coming up with a brand new progressive web app and helping me design a new feature for an existing project.

I’m always looking for a way to resurface articles, apps, products, and other links I save in a variety of places, so my first test of Claude Design was to build an iPad-first web app that would deliver those things to me automatically using a magazine-style design. Claude Design is organized into a sidebar and canvas, with tabs in the sidebar for creating prototypes, slide decks, template-based designs, and blank designs. To get started, I named my project and picked a “high fidelity” prototype. Then, I dragged some screenshots of a similar AI-assisted reading app I’d seen on social media into Claude Design, described what I wanted, answered some follow-up questions, and let Claude get to work.

Read more


Apple Overhauls App Store Connect

Source: Apple.

Overnight, Apple rolled out a big update to App Store Connect with new sales and analytics tools for developers. App Store Connect is the online portal that developers use to manage everything related to selling apps, from TestFlight betas and App Store listings to sales data and analytics.

It’s that last piece that was overhauled with this release. In fact, Apple’s post on its developer site says there are over 100 new metrics developers can use to measure the performance of their apps, all of which have been designed in a privacy-first way to protect users.

Source: Apple.

The granularity is impressive. For example, developers can track where their sales are coming from, including search, App Store browsing, web sources, and more. Conversion rates are a big part of the analytics, allowing developers to see how many people have seen their apps on the Store and downloaded them, breaking down first-time downloads and re-downloads. Analytics also tie into App Store features like In-App Events, custom product pages, and developer marketing efforts across a multitude of channels using campaign links. There’s a lot more, including metrics that track app pre-orders, user engagement and retention, and good old-fashioned sales data sliced and diced to allow developers to better understand the sources of their income.

And that’s really just the tip of the iceberg of what has changed in App Store Connect. So if you’re a developer, it’s worth spending some time with your app data and reading the new guide Apple published that covers it all.

Some data reported in App Store Connect is being deprecated later this year and next.

Since the changes rolled out, a couple of concerns I’ve seen expressed online are that there will no longer be a single place to view the aggregate performance of multiple apps and that the new default reporting period is three months. Those concerns are well founded. The changes are organized on an app-by-app basis, and as Apple says in a banner on App Store Connect, the Dashboards in the Trends section of Connect and related reports where that data was available are being deprecated later this year and next. So, while the data Apple offers is deep for each app, the aggregate data falls short by not providing a bird’s-eye view of a developer’s entire app catalog.

For what it’s worth, Apple is aware of the feedback regarding cross-app reporting. Also, the shorter sales reporting periods, such as the past 24 hours and seven days, are still available, but they’re less visible because three months is the new default.

This is a big update to App Store Connect that will take developers time to get used to, but it’s also a welcome change that provides meaningful new insights into App Store performance. I expect that there will be more areas where the changes fall short of developers’ expectations. However, it’s also clear to me that Apple has heard the early feedback, so I wouldn’t be surprised if adjustments are made in the future. On balance, though, I think the changes give developers valuable new ways to think about and manage their businesses across an increasingly competitive app landscape.


A Developer’s Month with OpenAI’s Codex

An eye-opening story from Steve Troughton-Smith, who tested Codex for a month and ended up rewriting a bunch of his apps and shipping versions for Windows and Android:

I spent one month battle-testing Codex 5.3, the latest model from OpenAI, since I was already paying for the $20 ChatGPT Plus plan and already had access to it at no additional cost, with task after task. It didn’t just blow away my expectations, it showed me the world has changed: we’ve just undergone a permanent, irreversible abstraction level shift. I think it will be nigh-impossible to convince somebody who grows up with this stuff that they should ever drop down and write code the old way, like we do, akin to trying to convince the average Swift developer to use assembly language.

From his conclusion:

This story is unfinished; this feels like a first foray into what software development will look like for the rest of my life. Transitioning from the instrument player to the conductor of the orchestra. I can acknowledge that this is both incredibly exciting, and deeply terrifying.

I have perused the source code of some of these projects, especially during the first few days. But very quickly I learned there’s simply nothing gained from that. Code is trivial, implementations are ephemeral, and something like Codex can chew through and rewrite a thousand lines of code in a second. Eventually, I just trusted it. Granted, I almost always had a handwritten source of truth, as detailed a spec as any, so it had patterns and structure to follow.

The models are good now. A year ago, none of them could do any of this, certainly not to this quality level. But they don’t do it alone. A ton of work went into everything here, just a different kind of work to before. Above all, what mattered most in all of the above examples was taste. My taste, the human touch. I fear for the companies, oblivious to this, that trade their priceless human resources for OpenClaw nodes in a box.

The entire story is well-documented, rich in screenshots, and full of practical details for developers who may want to attempt a similar experiment.

It’s undeniable that programming is undergoing a massive shift that has possibly already changed the profession forever. Knowing what code is and does is still essential; writing it by hand does not seem to be anymore. And it sounds like the developers who are embracing this shift are happier than ever.

I’ve been thinking about this a lot: why are some of us okay with the concept of AI displacing humans in writing code, but not so much when it comes to, say, writing prose or music? I certainly wouldn’t want AI to replace me writing this, and I absolutely cannot stand the whole concept of “AI music” (here’s a great Rick Beato video on the matter). I don’t think I have a good answer to this, but the closest I can get is: code was always a means to an end – an abstraction layer to get to the actual user experience of a digital artifact. It just so happened that humans created it and had to learn it first. With text and storytelling, the raw material is the art form itself: what you read is the experience itself. But even then, what happens when the human-sourced art form gets augmented by AI in ways that increasingly blur the lines between what is real and artificial? What happens when a videogame gets enhanced by DLSS 5 or an article is a hybrid mesh of human- and AI-generated text? I don’t have answers to these questions.

I find what’s happening to software development so scary and fascinating at the same time: developers are reinventing themselves as “orchestrators” of tools and following new agentic engineering patterns. The results, like with Steve’s story, are out there and speak for themselves. I wish more people in our community were willing to have nuanced and pragmatic conversations about it rather than blindly taking sides.

Permalink


Apple’s Xcode 26.3 Release Candidate Adds Agentic Coding Tools for Developers

Source: Apple.

The Release Candidate of Xcode 26.3 is rolling out as we publish this, with brand new agentic coding capabilities. With the update, developers can connect to either OpenAI’s Codex or Anthropic’s Claude Agent to assist with their development work.

The agent sits in Xcode’s sidebar, where developers can use it to plan new features, implement them, and review the results. As developers work, the agent generates a transcript of its actions, which lets developers follow along and interact with it. For example, code snippets appear in the sidebar that can be clicked to jump directly to the spot in the file where the agent made a change. Code updates can also be previewed simultaneously. Plus, because OpenAI’s and Anthropic’s agents are built into Xcode, they have the benefit of the latest Apple APIs and documentation, which should result in better, more modern code.

Xcode 26.3 also integrates with the Model Context Protocol, the open standard used to connect coding agents to other tools. That will allow any MCP-compatible agent, running inside or outside Xcode, to work with the app, which broadens the possibilities for developers even further. Having watched a short demo of Codex assisting in the development of features in Xcode, version 26.3 looks as though it has a lot of potential to make experimentation easier and iteration on ideas faster.
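For reference, MCP servers are typically declared with a small JSON snippet; the shape below is the convention used by existing MCP clients such as Claude Desktop, with a real filesystem server package as the example. How exactly Xcode surfaces this configuration isn’t detailed in Apple’s release notes, so treat this as illustrative:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```

Each entry names a server and tells the client how to launch it; the client then discovers the tools that server exposes over the protocol.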

I’ve been building a couple of different apps recently, mostly from Terminal, which works but isn’t ideal. There are lots of apps out there that try to solve the agentic coding problem, including Visual Studio Code and Codex, which I covered yesterday. However, having seen a demo of Xcode in action, it’s jumped to the top of my list to try. After all, it’s an app with which I’m already familiar, and with Apple’s documentation and API knowledge baked in, my hope is that it speeds up my workflow even further.

For developers who want to learn more, Apple is holding a code-along session on Thursday covering the update.


OpenAI Launches Codex, a Mac App for Agentic Coding

Today, OpenAI released Codex, a Mac app for building software. Here’s how OpenAI describes the app in its announcement:

The Codex app changes how software gets built and who can build it—from pairing with a single coding agent on targeted edits to supervising coordinated teams of agents across the full lifecycle of designing, building, shipping, and maintaining software.

On first launch, Codex requests permission to access the file system. I granted it access to a subfolder where I stored all my projects, along with the folder that houses an app I’ve been building in my spare time. Those folders and projects live in the left sidebar, where each can be expanded to reveal chat sessions for that project.

Access to your other development tools.

In the toolbar is an Open button for accessing other development tools installed on your Mac, a Commit button for managing version control, a button that reveals a terminal view that expands up from the bottom of the window, and a diff panel for reviewing code changes. In settings, you’ll find additional customization options, along with tools to hook up MCP servers and integrate skills.

Some of Codex’s customization options.

Codex is not your traditional IDE. Agents are front and center, which makes the app far more natural to use if you’re new to agentic coding, but the underlying model is similar. While I write this article, Codex has been grinding away in the background performing a code review of my app. After spending time reviewing all the files, Codex asked permission to run commands to do things that it can’t accomplish inside its sandboxed environment.

Automations.

The capabilities of Codex are enhanced by skills. OpenAI is kicking off the launch of Codex with a bunch of skills that you can access via its open-source GitHub repo. The app includes a selection of pre-built Automations for repetitive tasks, too.

All in all, Codex looks excellent, but it will take me some time to get a sense of its full capabilities. If you’re interested in trying Codex, you can download it from OpenAI here. For a limited time, the company is making the tool available to Free and Go subscribers, for whom rate limits have been temporarily doubled, as well as Plus, Pro, Business, Enterprise, and Edu users.


LLMs Have Made Simple Software Trivial

I enjoyed this thought-provoking piece by (award-winning developer) Matt Birchler, writing for Birchtree on how he’s been making so-called “micro apps” with AI coding agents:

I was out for a run today and I had an idea for an app. I busted out my own app, Quick Notes, and dictated what I wanted this app to do in detail. When I got home, I created a new project in Xcode, I committed it to GitHub, and then I gave Claude Code on the web those dictated notes and asked it to build that app.

About two minutes later, it was done…and it had a build error.

And:

As a simple example, it’s possible the app that I thought of could already be achieved in some piece of software someone’s released on the App Store. Truth be told, I didn’t even look, I just knew exactly what I wanted, and I made it happen. This is a quite niche thing to do in 2026, but what if Apple builds something that replicates this workflow and ships it on the iPhone in a couple of years? What if instead of going to the App Store, they tell you to just ask Siri to make you the app that you need?

John and I are going to discuss this on the next episode of AppStories about the second part of the experiments we did over our holiday break. As I’ll mention in the episode, I ended up building 12 web apps for things I have to do every day, such as appending text to Notion just how I like it or controlling my TV and Hue sync box. I didn’t even think to search the App Store to see if new utilities existed: I “built” (or, rather, steered the building of) my own progressive web apps, and I’m using them every day. As Matt argues, this is a very niche thing to do right now, which requires a terminal, lots of scaffolding around each project, and deeper technical knowledge than the average person who would just prompt “make me a beautiful todo app.” But the direction seems clear, and the timeline is accelerating.
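To give a flavor of what one of those micro apps boils down to under the hood, here’s a minimal sketch of appending a line of text to a Notion page using Notion’s public API. The token and page ID are placeholders, and the code is my illustration of the technique rather than the actual app I built:

```python
import json
import urllib.request

NOTION_TOKEN = "secret_xxx"   # placeholder; create an integration at notion.so
PAGE_ID = "your-page-id"      # placeholder; the page (block) to append to

def build_append_payload(text: str) -> dict:
    """Build the request body for Notion's 'append block children' endpoint."""
    return {
        "children": [
            {
                "object": "block",
                "type": "paragraph",
                "paragraph": {
                    "rich_text": [{"type": "text", "text": {"content": text}}]
                },
            }
        ]
    }

def append_to_notion(text: str) -> int:
    """PATCH the new paragraph onto the page; returns the HTTP status code."""
    req = urllib.request.Request(
        f"https://api.notion.com/v1/blocks/{PAGE_ID}/children",
        data=json.dumps(build_append_payload(text)).encode(),
        headers={
            "Authorization": f"Bearer {NOTION_TOKEN}",
            "Notion-Version": "2022-06-28",
            "Content-Type": "application/json",
        },
        method="PATCH",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Wrap something like this in a tiny web front end, and you have one of those single-purpose utilities that would have been an App Store search a few years ago.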

I also can’t help but remember this old rumor from 2023 about Apple exploring the idea of letting users rely on Siri to create apps on the fly for the then-unreleased Vision Pro. If only the guy in charge of the Vision Pro went anywhere and Apple got their hands on a pretty good model for vibe-coding, right?

Permalink

How I Revived My Decade-Old App with Claude Code

Blink from 2017 (left) and 2026 (right).

Every holiday season, Federico and I spend our downtime on nerd projects. This year, both of us spent a lot of that time building tools for ourselves with Claude Code in what developed into a bit of a competition as we each tried to one-up the other’s creations. We’ll have more on what we’ve been up to on AppStories, MacStories, and for Club members soon, but today, I wanted to share an experiment I ran last night that I think captures a very personal and potentially far-reaching slice of what tools like Claude Code can enable.

Blink from 2017 running on a modern iPhone.

Before I wrote at MacStories, I made a few apps, including Blink, which generated affiliate links for Apple’s media services. The app had a good run from 2015 to 2017, but I pulled it from the App Store when Apple ended its affiliate program for apps, because affiliate linking was the part of the app that was used the most. Since then, the project has sat in a private GitHub repo, untouched.
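For those curious, Apple’s affiliate program worked by tagging store URLs with an `at` query parameter. Here’s a minimal sketch of that core transformation; the token is a placeholder, and this is an illustration of the scheme, not Blink’s actual code:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

AFFILIATE_TOKEN = "1000lAbc"  # placeholder affiliate token

def add_affiliate_token(url: str, token: str = AFFILIATE_TOKEN) -> str:
    """Return the store URL with the 'at' affiliate parameter appended."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))  # preserve any existing parameters
    query["at"] = token
    return urlunparse(parts._replace(query=urlencode(query)))
```

Blink layered search, link formats, and sharing on top, but tagging URLs like this was the feature that kept people using it.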

Last night, I was sitting on the couch working on a Safari web extension when I opened GitHub and saw that old Blink code, which sparked a thought. I wondered whether Claude Code could update Blink to use Swift and SwiftUI with minimal effort on my part. I don’t have any intention of re-releasing Blink, but I couldn’t shake the “what if” rattling in my head, so I cloned the repo and put Claude to work.

Read more