Posts tagged with "UI"

The History of Cover Flow

A few months ago when I was writing about Widgetsmith’s new music widgets in my iOS 17 review, I told my buddy Stephen Hackett I couldn’t believe there was no Cover Flow retrospective on 512 Pixels. Yesterday, Stephen delivered:

Over the last decade or so, Apple has been hard at work in simplifying the user interfaces that power its myriad platforms. I’ve welcomed most of that work, but it’s hard to deny that we’ve all lost some things along the way.

Today, we look at a UI element that started life in iTunes, but spread to the iPod, iPhone and Mac over time: Cover Flow.

I had completely forgotten that Cover Flow eventually found its way to Safari as well. I miss Cover Flow more today than I ever used it at the time; I wonder if a similar 3D interface could be revived for the age of visionOS and Vision Pro.

Permalink

Fiery Feeds for iOS Added an In-App Split View Mode That I Wish More iPhone Apps Offered

Vertical split view in Fiery Feeds.

A few weeks ago on Mastodon, I shared a simple feature request: a split-screen mode for iPhone RSS readers that would allow me to scroll headlines in one half of the screen and preview actual articles in the other.

If this sounds familiar, you’re not alone: back in 2007, Steve Jobs demoed pretty much the same thing for the first version of the Mail app for iPhone OS 1.0. That layout mode never shipped, and probably rightfully so at the time given the limited screen real estate of the first iPhone.

Read more


macOS Isn’t as Small as You Think

Interesting comparison of macOS and iPadOS interface elements by Matt Birchler:

I will say that there are certainly some macOS UI elements that could be tricky to use with touch, but I think they’re the exception, not the rule. Still, Apple will certainly make some UI changes to accommodate touch as an officially-supported input method on the platform.

And:

There’s a narrative out there that touch is just so incompatible with macOS and that in order to make it work, the macOS UI would have to get blown up to comical proportions, but I don’t think that’s the case. Changes will be made, but I think macOS is more touch-friendly today than many people give it credit for.

I don’t disagree, and count me among those who think Apple should consider bringing touch support to the Mac.

I’ve seen this concern about “blowing up” the macOS UI in recent years too, and I think it’s shortsighted. Look no further than the iPad Pro: in a single device, Apple was able to let touch, pointer, and now even hover interactions coexist. Even without display scaling, I don’t think iPadOS has a comically large interface, as some believe.

There is a lot of work to be done to achieve a similar kind of input balance on macOS (think of all the elements that haven’t been redesigned in recent years, like drag controls for windows; the list is long), but it is possible, and I hope Apple gets there in the near future.

Permalink

‘Command-K Bars’ as a Modern Interface Pattern

Maggie Appleton (via Michael Tsai) has written about one of the UI trends I’ve seen pop up more and more lately, and which we mentioned on AppStories several times over the past year: the so-called ‘Command-K’ bars inside apps.

Command bars are command-line bars that pop up in the middle of the screen when you hit a certain keyboard shortcut. They’re also known as ‘command palettes’, ‘command launchers’, or ‘omniboxes’. Traditionally CMD + K, hence the moniker “Command K bars.” But CMD + E and CMD + / have also been strong shortcut contenders.

[…]

They don’t even have to remember its exact name. Fuzzy search can help them find it by simply typing in similar names or related keywords. For example, if I type “make” into a command bar, it’s likely to show me any actions related to creating new items. Even if “make” isn’t part of the action name.

[…]

These bars also do double duty as universal search bars. You’re not only searching through the available actions in an app. You can also search through content like documents, file names, and tasks.

You’ve probably seen these command bars in apps like Obsidian, Craft, Todoist, Arc, Cron, Notion, and lots of others. (On Apple platforms, Things did something similar all the way back in 2018 with a feature called ‘Type Travel’.) It feels like every modern productivity app – especially on desktop – has its own flavor of this interface element nowadays. In a way, this visual trend reminds me of pull-to-refresh before it was standardized by Apple and became a native iOS UI component.

I’m intrigued by Command-K bars as a feature that speeds up keyboard-driven interactions on iPad and Mac while at the same time serving as a search box for an app’s own commands. Think of the typical Command-K bar as a mix of Spotlight, the macOS menu bar, and iPadOS’ keyboard shortcut menu, but as an element that can be invoked from anywhere in an app and dismissed with just a keystroke. As the examples in Maggie’s article show, Command-K bars can be genuinely useful to surface hidden commands and allow power users to save time when using complex apps.
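To make the pattern concrete, here’s a minimal Swift sketch of the fuzzy, keyword-aware matching these palettes typically rely on. The Command type, the sample actions, and their keywords are made up for illustration – they aren’t taken from any of the apps mentioned above.

```swift
import Foundation

// A minimal sketch of the fuzzy, keyword-aware matching a Command-K bar
// might use. Command names and keywords are hypothetical; real palettes
// also rank results by usage frequency and search app content.
struct Command {
    let name: String
    let keywords: [String]
}

// True if every character of `query` appears in `candidate` in order
// (a simple case-insensitive subsequence match).
func fuzzyMatches(_ query: String, _ candidate: String) -> Bool {
    var queryIterator = query.lowercased().makeIterator()
    var nextChar = queryIterator.next()
    for char in candidate.lowercased() where char == nextChar {
        nextChar = queryIterator.next()
    }
    return nextChar == nil
}

// Filters the palette's actions against what the user has typed so far.
func filterCommands(_ query: String, in commands: [Command]) -> [Command] {
    guard !query.isEmpty else { return commands }
    return commands.filter { command in
        fuzzyMatches(query, command.name) ||
            command.keywords.contains { fuzzyMatches(query, $0) }
    }
}

let commands = [
    Command(name: "New Document", keywords: ["make", "create"]),
    Command(name: "Toggle Sidebar", keywords: ["hide", "show"]),
]

// Typing "make" surfaces "New Document" through its keywords,
// even though "make" isn't part of the action name.
print(filterCommands("make", in: commands).map(\.name)) // ["New Document"]
```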

There are plenty of cases where Apple’s apps could benefit from this kind of in-app search makeover. Here’s Notes, for instance, when you activate the ‘Note List Search’ command:

Search inside Notes.

And here’s the rather complex list of keyboard shortcuts supported by Safari:

Keyboard shortcuts in Safari for iPad.

I said this on AppStories and I’ll say it again: I think Apple should consider an in-app version of Spotlight that replicates the functionality of Command-K bars and is optimized for keyboard usage on iPadOS and macOS. Modern productivity software is clearly moving in this direction on desktop and the web; I’d like to see Apple apps offer faster keyboard navigation and command discoverability too.

Permalink

Spatial Interfaces

I believe the best software is an extension of the human brain. It lets us think naturally, and conforms to us, not the other way around. Translation of information should be the computer’s job, not ours. It’s what we built these digital slaves for. A great Spatial Interface meets our expectations of a physical model. Designed for human beings, it supports a mind, living in the dimensions of space and time. They are Interfaces that are sensible about where things lay. Like a well designed building, they’re easy to traverse through. One space flows into the other, without surprise.

Great article by Pasquale D’Silva on the role of space in software interfaces, with a special focus on some popular iPhone apps. Make sure to check out the GIFs and videos.

Permalink

Why Pull-To-Refresh Isn’t Such A Bad Guy

Nick Arnott:

Carr would like to see more developers experiment with new interactions for swipe down gestures, and I cautiously agree. Pull-to-refresh is one of those ideas that seems so obvious in hindsight, but took a talented engineer to think of it. The brilliance of pull-to-refresh is just how well it fit into the existing design. When users try scrolling past the top of a table cell view, they’re trying to view newer content. Pull-to-refresh intelligently extends that scrolling to have an app refresh the content to load any new data. A perfectly logical and intuitive extension of the existing functionality.

This, in response to an article by Austin Carr from December. I especially agree with Nick when he says that fast, reliable data connections that never fail aren’t a reality yet, though they have gotten better over the years.

Pull-to-refresh may seem simple and obvious today, but it was a great design challenge for its inventor back in 2010. Nick makes a lot of valid points about its existence and evolution.
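For readers who haven’t wired it up themselves: what once required a custom table view hack is now built into UIKit as UIRefreshControl. A minimal sketch, with FeedViewController and the simulated network delay as placeholder details:

```swift
import UIKit

// A minimal sketch of pull-to-refresh with UIKit's built-in UIRefreshControl.
// FeedViewController and the simulated delay are placeholders, not code from
// any app mentioned above.
class FeedViewController: UITableViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        let refresh = UIRefreshControl()
        refresh.addTarget(self, action: #selector(handleRefresh), for: .valueChanged)
        tableView.refreshControl = refresh
    }

    // Called when the user scrolls past the top of the list and lets go.
    @objc private func handleRefresh() {
        DispatchQueue.main.asyncAfter(deadline: .now() + 1) { [weak self] in
            // Fetch and reload newer content here, then stop the spinner.
            self?.tableView.reloadData()
            self?.tableView.refreshControl?.endRefreshing()
        }
    }
}
```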

Permalink

Shortcat: Spotlight For The User Interface

In my daily workflow, I rely on Alfred for Mac to find files, folders, and apps for me. Since July 2012, I have used Alfred 4,326 times – an average of 46.5 times a day. I use Alfred for a variety of tasks, which include (but are not limited to) accessing favorite folders, launching Google searches, acting on multiple files through the Buffer, and executing AppleScripts. Alfred is one of my favorite pieces of Mac software, ever.

I like launchers. They simplify my workflow while allowing me to save time and be more efficient. This is why I’ll keep an eye on the development of Shortcat, a new Mac app – currently in public beta – that aims to become a launcher for interface elements.

The developers describe Shortcat as “Spotlight for the user interface”, and that’s a fairly accurate description. Essentially, Shortcat relies on support for Assistive Devices (an Accessibility feature of OS X) to be able to “see” the labels of buttons and menus and “click” on them. So, for instance, instead of moving your cursor on the trackpad, you’ll be typing “back” to make Shortcat click the Back button in Safari.
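Shortcat’s internals aren’t public, but the plumbing it builds on is macOS’s standard Accessibility API. Here’s a rough Swift sketch – not Shortcat’s actual code – of how a process that has been granted Accessibility access can walk the frontmost app’s UI tree, collect its buttons, and “press” the one whose title matches a query; the “back” query and the plain substring match are only for illustration.

```swift
import AppKit
import ApplicationServices

// Rough sketch (not Shortcat's code) of finding and "pressing" a button in
// the frontmost app via the Accessibility API. Requires the Accessibility
// permission in System Settings > Privacy & Security.
guard AXIsProcessTrusted() else {
    fatalError("Grant this process Accessibility access first.")
}
guard let frontApp = NSWorkspace.shared.frontmostApplication else {
    fatalError("No frontmost application.")
}
let appElement = AXUIElementCreateApplication(frontApp.processIdentifier)

// Recursively collects every element whose role is AXButton.
func collectButtons(in element: AXUIElement, into buttons: inout [AXUIElement]) {
    var role: CFTypeRef?
    AXUIElementCopyAttributeValue(element, kAXRoleAttribute as CFString, &role)
    if (role as? String) == kAXButtonRole {
        buttons.append(element)
    }
    var children: CFTypeRef?
    AXUIElementCopyAttributeValue(element, kAXChildrenAttribute as CFString, &children)
    for child in (children as? [AXUIElement]) ?? [] {
        collectButtons(in: child, into: &buttons)
    }
}

var buttons: [AXUIElement] = []
collectButtons(in: appElement, into: &buttons)

// "Click" the first button whose title matches what the user typed.
let query = "back"
for button in buttons {
    var title: CFTypeRef?
    AXUIElementCopyAttributeValue(button, kAXTitleAttribute as CFString, &title)
    if let title = title as? String, title.lowercased().contains(query) {
        AXUIElementPerformAction(button, kAXPressAction as CFString)
        break
    }
}
```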

Shortcat works better with apps that leverage Accessibility features and have properly labeled buttons and interface elements. In the app’s Help menu there’s already a list of apps that don’t work properly with Shortcat as it’s unable to “read” (and thus let you find) their interface elements. I am no Accessibility expert, but my guess is that these apps don’t support VoiceOver either.

In my tests, Shortcat was a pleasant surprise. If you don’t know about Accessibility, the app will look like a fantastic trick – how can it click for me when I’m just typing? In actual usage, there are some things to consider. When you invoke the app with a shortcut (customizable from the Terminal in this version) and start typing, it will highlight the areas of an app that match the letters you typed. The best match is highlighted in green; other possible “destinations” for the mouse are yellow. You can click a button or menu by typing its full name or an abbreviation, such as “Add Action” or “AD” for the toolbar button in OmniFocus.

You can also tell Shortcat to show “hidden” results. By preceding your query with a dot, you’ll be able to reach interface elements that, at first glance, don’t have a label. I tested this with several apps, and, for instance, I was able to type “.1p” to click on the unlabeled 1Password extension in Safari, or “.n” to open the compose box in Tweetbot. Speaking of Safari, you can also use Shortcat to click on website navigation elements such as buttons or text. “Clicking” on hyperlinks with Shortcat will, just like a regular click, open them in a new tab.

Shortcat is an interesting experiment, but it needs more work before it’s ready for primetime. Its text matching algorithm is good but still not perfect: sometimes it matches things like “SYNZ” to “sync”, which isn’t particularly nice to see. I would also like to see more polished graphical representations of highlights and selected regions of the UI, as right now the highlighting process feels more like a “hack” than a consumer product. Also, I’m still not completely sure how Shortcat would come in handy in everyday usage. Is it a utility to navigate large documents without reaching for the mouse? Or is it an app navigator? In a world of buttons associated with keyboard shortcuts, are virtual clicks really that necessary? Shortcat makes for a cool demo, but it needs to find a stronger message to make people “get” what it’s all about.

You can check out the Shortcat beta for free here.


Spotify for iPad: A UI Analysis with A Rdio Perspective

Spotify for iPad, released this morning, is a beautiful app. It’s solid, providing fast and reliable access to Spotify’s huge catalogue of songs and artists, and it presents content inside a great-looking package that, as I’ve already written, reminds me of Reeder’s sepia tones.

What follows is a brief examination of the app’s interface and functionality. I have been using Rdio to fulfill all of my music listening needs for the past six months, but I was looking forward to trying Spotify for iPad and giving it a fair shot.

Read more


The (Semi)Skeuomorphism

Last night’s release of Paper, a new drawing and sketching app for iPad (The Verge has a good review and interview with its developers), got me thinking about a trend I’m seeing lately in several high-profile iOS apps from third-party developers and, to an extent, Apple itself: a drift away from the forced skeuomorphism of user interfaces toward a more balanced approach between imitating real-life objects to achieve familiarity and investing in all-digital designs and interfaces that benefit from the natural, intuitive interactions iOS devices have made possible.

John Gruber says that the tension “between simplicity and obviousness” can be seen in developers getting rid of UI chrome (buttons, toolbars) to make simpler apps, and Apple, which has adopted UI chrome – often, in the form of skeuomorphic elements – to bring obviousness and familiarity to its applications. iCal’s bits of torn paper and Address Book’s pages are obvious, but are they simple?

The subject is complex, and the scope of the discussion is too broad to not consider both ends of the spectrum, and what lies in between. Ultimately, simplicity vs. obviousness brings us to another issue with user interfaces: discovery vs. frustration.

Apple’s (and many others’, with Apple being the prominent example) approach is clearly visible: familiar interfaces are obvious. Everybody knows how a calendar looks. Or how to flip pages in a book. People are accustomed to the physical objects Apple is trying to imitate in the digital world. But are they aware of the limitations these objects carry over when they are translated to pixels? As we’ve seen, this can lead to frustration: why can’t I rip those bits of torn paper apart? Why can’t I grab multiple pages at once, as I would do with a physical book? And so forth. Interfaces that resemble real-life objects should be familiar; it is because of that very familiarity, however, that constraints become utterly visible when pixels can’t uphold the metaphor.

On the other hand, a number of applications are trying to dismantle the paradigm of “skeuomorphism mixed with buttons” by leveraging the inner strength of the iOS platform, and of the iPad in particular: the device’s screen. Impending’s Clear, for instance, famously avoided buttons and toolbars to focus its interaction exclusively on gestures. Paper, as much as its name implies a real-life feeling of actual paper, is the least real-life-looking (and behaving) sketching app of all: sure, it’s got paper and a tool palette, but there are no buttons or navigation elements while you’re drawing. In Paper, you pinch to go back one level (like Clear); you rotate two fingers on screen to undo and redo your actions. I assume the developers resorted to standard sharing and “+” buttons only because they couldn’t come up with equally intuitive gestures for those commands.
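As an aside, the building blocks for this kind of interaction are standard UIKit gesture recognizers. A rough sketch of pinch-to-go-back and rotate-to-undo, with CanvasViewController, the thresholds, and the undo wiring all hypothetical rather than Paper’s actual implementation:

```swift
import UIKit

// Rough sketch of Paper-style gestures built on standard UIKit recognizers.
// CanvasViewController, the thresholds, and the undo wiring are hypothetical.
class CanvasViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        let pinch = UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:)))
        let rotation = UIRotationGestureRecognizer(target: self, action: #selector(handleRotation(_:)))
        view.addGestureRecognizer(pinch)
        view.addGestureRecognizer(rotation)
    }

    // Pinching in past a threshold "closes" the canvas and goes back one level.
    @objc private func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        if gesture.state == .ended && gesture.scale < 0.7 {
            navigationController?.popViewController(animated: true)
        }
    }

    // Rotating two fingers counterclockwise undoes; clockwise redoes.
    @objc private func handleRotation(_ gesture: UIRotationGestureRecognizer) {
        guard gesture.state == .ended else { return }
        if gesture.rotation < -0.5 {
            undoManager?.undo()
        } else if gesture.rotation > 0.5 {
            undoManager?.redo()
        }
    }
}
```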

Which brings me to the downside of simplicity: discovery. Pinch to close and rotate to undo make for a pretty demo and elegant implementations for iOS nerds like us, but are they discoverable enough by “normal people”? Would my dad know he can pinch open pages and rotate an undo dial? Are these gestures obvious enough to avoid confusion and another form of frustration? Intuitive software shouldn’t need a manual.

There are several ways to look at this debate. For one, we could argue that Apple was “forced” to use skeuomorphic elements to get us “comfortable” with these new devices, easing the transition from computers by imitating other objects and interfaces we already knew how to use. With time, Apple has realized people are now familiar with the previously unfamiliar, and it is slowly introducing elements that subtly drift away from real-life interactions, like the full-screen mode in iBooks or the sidebar in Mountain Lion’s Contacts app. But there are still some graphical elements decorating Apple’s interfaces that don’t have a clear functional purpose: the leather in Find My Friends, the green table in Game Center, the iPad’s Music app. I think there is also a tension between functionality and appearance, and I believe Apple sees some skeuomorphic UI designs as simply “cool” rather than as necessary means to convey interaction: it’s branding.

The “simple and elegant” interfaces, though, reside in a much wider gray area that’s still largely unexplored. Clear and Instapaper, by forgoing real-life resemblances of any sort, have dodged the bullet of frustration by creating their own standards. You can be mad at Clear’s use of gestures, but you can’t be frustrated because its paper doesn’t act like paper. There is no faux paper in there. The “frustration” this new breed of iOS apps generates can be traced back to the novelty of their interfaces and interactions, not to their legacy. But then there’s a certain selection of apps, like the aforementioned Paper, that are still somewhat bound to their real-life counterparts and, partly because of technological limitations and established UI patterns, aren’t completely distancing themselves from the familiarity of real objects.

We’re at a point in software history where balance is key. Balance between simplicity and obviousness, discovery and frustration, innovation and familiarity. We’re using software that wants to remember where it came from, but that also strives to touch the emotional chords of a natural extension it didn’t know was reachable: us.

As iOS devices and the ecosystem of apps and developers around them mature and evolve, these dichotomies will increasingly define the interactions of today, and the software of tomorrow.