The reason why I use it is that I’m used to it. I used Final Cut Pro from the very beginning of my career. For now — regarding sound mix and the workflow — it’s really simple to use. However, I haven’t been able to update my OS for four years.
BushelScript is open-source and community-driven, meaning it can undergo necessary changes and gain useful features rather than remain stagnant as a side project on life support.
The real tragedy of AppleScript is not its becoming obsolete or irrelevant; tons of Apple-supported macOS apps still have healthy scripting interfaces. No, the tragedy is that the language through which such functionality is presented, with all its quirks and weak points and even utter failures, is extremely unlikely to receive any badly needed improvements in the future, if any changes at all. It is stuck in maintenance (read: bugfix and security hole-filling) mode and will be for years to come if we, the users, don’t replace it with something better whose fate we can control.
On the WebKit blog, Apple announces they’re putting a “7-Day Cap on All Script-Writeable Storage” for privacy reasons:
Back in February 2019, we announced that ITP would cap the expiry of client-side cookies to seven days. That change curbed third-party scripts’ use of first-party cookies for the purposes of cross-site tracking.
However, as many anticipated, third-party scripts moved to other means of first-party storage such as LocalStorage. If you have a look at what’s stored in the first-party space on many websites today, it’s littered with data keyed as various forms of “tracker brand user ID.” To make matters worse, APIs like LocalStorage have no expiry function at all, i.e. websites cannot even ask browsers to put a limit on how long such storage should stay around.
In iPadOS 13.4 and Safari 13.1, all forms of offline storage for web apps will be purged after seven days. When I wrote about how iPadOS’s new mouse support will benefit web apps, there was an additional observation that I neglected to mention: There are only two ways apps can target the iPad, either as a native app available through the App Store, or as a web app accessed through a browser. Since App Store rule 2.5.6 prevents browsers from using third-party rendering engines on iOS, this effectively means Apple has complete control over all methods of software distribution on iOS. And since native apps are more of a differentiating feature for iPadOS than web apps, as the coming mouse support starts to make desktop-class web apps usable on the iPad for the first time, we can expect Apple to correspondingly start hindering web apps in order to give native apps an advantage on their platforms.
Alex Russell, a software engineer on the Google Chrome team, has a devastating presentation about how little people are using the mobile web. What struck me while I was watching was that, while Russell gives some optimistic suggestions for increasing the mobile web’s relevance, it’s clear that this battle is already over and the mobile web lost. The web simply isn’t an important platform for most of the world on mobile, and it’s especially unusable for anyone who doesn’t own a high-end phone.
Russell provides some stats to support this, including these 2014 numbers from Flurry showing that only 14% of time on mobile is spent on the web, while the other 86% of time is spent in apps. He calls out 10% as a meaningful threshold: if a platform ever drops below that number, it doesn’t get the investment it needs in terms of tooling and libraries, and you enter what he calls a “doom loop”. According to Russell, Google has internal metrics that show mobile web usage has now dropped below 7%1.
Russell talks about the reasons this is the case; the argument boils down to two issues. The first is performance: you just can’t get around the fact that native apps are more performant and require less bandwidth. The other is that Apple leverages their control over iOS to deemphasize the web; Russell cites two App Store Review Guidelines in particular:
This is the rule that blocks third-party rendering engines like Firefox’s Gecko and Google Chrome’s Blink.
The second piece that Russell references is the introduction to section 4.2 Minimum Functionality, which starts out by saying “Your app should include features, content, and UI that elevate it beyond a repackaged website.” In other words, Apple comes right out and says that a website alone isn’t good enough for the App Store.
The losing battle for the mobile web brought about some personal reflection, because I’m also fighting a battle that’s already been lost: The battle for a macOS desktop of Cocoa2 apps. From 2001–2010, it felt like we were headed in that direction. Mac exclusive utilities and productivity software took off right away from the release of OS X, with companies like Panic and The Omni Group leading the charge, and TextMate and Apple’s iWork being important milestones along the way. Then, in the latter part of the decade, the real holy grails began to emerge: The groundwork Apple had been laying for visual tools at the framework level started to bear fruit first with Acorn and Pixelmator, both released in 2007 and built on Core Image, and Sketch released in 2010 and built on Core Graphics. The vision of an entire desktop of “Mac-like” apps seemed within reach, finally offering an alternative that doesn’t include cross-platform behemoths like Adobe’s Creative Suite and Microsoft Office.
Affinity Designer comes in at 2% for user-interface design in Uxtools.com’s poll. It’s worth noting that Affinity Designer is about 1/20th the cost of its most similar competitor, Adobe Illustrator, which is also about nine times as popular in the same poll. Even sandboxed applications competing with Adobe’s widely disliked subscription model have difficulty gaining significant market share.
With a single gesture, Apple assured that a desktop of all Cocoa apps would never be a reality, at least not for users of professional creative apps. The creative apps that abide by Apple’s sandboxing rules have been marginalized, unable to compete with the more popular and powerful apps outside of the sandbox.
Apps can still use Cocoa without being sandboxed, by choosing not to be in the Mac App Store, but by doing so they lose access to Apple’s marketing clout3, which removes one of the factors that helped apps like Pixelmator and Sketch get off the ground in the first place. Both benefited from heavy promotion by Apple early on. All of these factors change the tradeoffs of using Cocoa, and there’s little upside at this point.
This isn’t just about sandboxing and the Mac App Store; the frameworks Apple releases today for developers to build on, like ARKit and Core ML, also just aren’t as useful for Mac apps as the pre-2010 frameworks like Core Graphics and Core Image were. ARKit is obviously intended to make apps for iOS, due to its camera and form factor requirements. While Core ML does have some desktop uses, it’s still more useful for iOS because it’s harder to accomplish things manually on that platform4.
Finally, Apple’s own fleet of pro apps has been pared down to just:
Final Cut Pro X (Acquired in 1999)
Logic Pro X (Acquired in 2002)
Motion (first released in 2004)
While the following have been shuttered:
Aperture (first released in 2005, discontinued 2014)
Soundtrack Pro (first released as part of Final Cut Pro in 2003, discontinued in 2011)
The last time Apple introduced a new pro app was in 2004, and the last time they acquired one was in 2002, despite acquisition being a strategy that had worked well for Apple in the past, resulting in both of their pro crown jewels, Final Cut and Logic. Aperture, Shake, and Soundtrack Pro were all canceled, ceding their market share back to the cross-platform behemoths. In addition, Final Cut Pro X has gone the prosumer route. While this isn’t necessarily a bad thing, it is more evidence of a pattern of Apple not supporting their pro users. Here’s Adam Lisagor of Sandwich comparing the release of Final Cut Pro X to previous versions:
When Apple pushed FCP to the industry pros five or six years ago, they did some hardcore outreach. They brought out Walter Murch, for God’s sake. The man cut Cold Mountain on it for God’s sake. They evangelized by showing what had been done, not by what could be done. But this time out, there is no evangelizing. No Murch. They do a dog and pony with vapid car footage or a Pixar trailer or something. This is meaningless to industry pros who need to know one thing, and it’s a very simple thing: can I edit a _____ on it? You know what I want to know? Can Louie CK edit his show on FCP X? Would he? Would he be happy to do it? Would he speak to a crowd of people about the experience? Would he plan on the product getting better? At what point does Apple ever even hint at admitting that they’ve released a product that will improve with age? Do they owe it (or anything) to their pro user base to acknowledge even a transition period? I want to be emailed a questionnaire and I want my Apple rep to write to me and invite me to a seminar called “Let’s cut a commercial”.
You know how many licenses of FCP Murch and Cold Mountain sold? Millions. Know how many licenses the most beautifully-crafted, tastefully-shot home movie of your family trip to Lake Havasu will sell? @#$%& all. Nobody wants to make the best home movie ever. It’s just not an aspirational thing anymore, the way it was in the early days of hub computing, when the Mac was aspirationally this centered hub of creation. We don’t want to do that anymore, our eyes are bigger. We all want to think we can make The Social Network now. So show me Fincher cutting The Social Network on FCP X and you’ll have me on board.
Across the board Apple’s support for creative apps is fading, whether it’s their own creative apps, new frameworks for pro apps, or supporting third-party apps investing in their platform. The dream of a desktop of consistent Cocoa apps is farther away than ever.
I’m using the term “Cocoa”, instead of the more precise “AppKit”, or the more understandable “Mac-like”, for historical reasons. If you followed Mac software during the 2000s, then you heard a lot of discussion about Cocoa and its benefits, in particular in contrast to Carbon. A Cocoa Mac app connotes a certain set of characteristics: sharing UI components with the rest of the OS (with consistent text editing in particular), support for system-wide features like Services, a customizable toolbar, and, if you’re really lucky, AppleScript support. ↩︎
Apple appears to market two types of Mac apps: Mac App Store apps, and apps that aren’t in the store, but are important enough to move Macs. For example, their macOS page currently lists Adobe Illustrator, Cinema 4D, Maya, and Zbrush. What Apple seems to never do is market new apps that aren’t in the Mac App Store, which are exactly the most important apps for the future of the Mac as a platform. ↩︎
To illustrate how Core ML is more useful for iOS apps than macOS apps, consider the most natural way you’d edit photos on each platform: On the Mac, you’d use a complex app like Adobe Photoshop or Adobe Lightroom to edit photos entirely manually, whereas on iOS, it’s more common to just select from a few preset options that edit your photo automatically. This whole approach of letting the machine make the decisions for you, leveraging tools like machine learning, stems from input being more limited on iOS and iPadOS. ↩︎
Once upon a time desktop apps reigned supreme, they were the only game in town. When the calendaring web app 30 Boxes was released in 2006, a couple of months before Google Calendar, the idea of a web-based calendar was still novel. Back then, your calendar was managed by a native desktop app like Microsoft Outlook or iCal. Now Google Calendar is probably1 the most popular calendar app there is, and desktop and laptop sales are declining overall. People are using their mobile devices for tasks they once would have used a desktop or a laptop for.
The post-PC era means the desktop, and to a lesser extent, the web, are in decline, and mobile is on the rise. But what do we make of the fact that some areas of native desktop software seemingly have a gigantic lead that doesn’t seem to be budging?
We now have three separate, distinct platforms: web, mobile, and desktop. Desktop was here first, so it’s inevitably losing marketshare as the other two gain it. The web came next, so it’s also losing it as mobile gains it. But things aren’t as simple as they seem when you look closely: Desktop software hasn’t really changed much since the web and mobile came along. The top 10 free apps in the Mac App Store include both the iWork and Microsoft Office suites, and the top paid apps include both Logic Pro X and Final Cut Pro X2. These are the same apps that would have been the most popular 20 years ago, long before mobile, and before the web really took off as an app platform. The iOS App Store, on the other hand, is dominated by games and social media apps (although both Google Docs and Gmail make the top ten).
The Types of Apps in Transition
If the desktop app market hasn’t changed that much, then where is the transition to mobile coming from? The simplest answer is that many of the new use cases that arose with the web, the biggest example being social networking, have transitioned from the web to mobile3.
This categorization isn’t perfect; for example, it doesn’t account for chat apps like AIM and ICQ, which were native desktop apps, not web apps, and have now been replaced by mobile apps like Messages, Slack, and WhatsApp. “Network-enabled apps” is probably a more precise term, but web apps is fine for shorthand. The collaborative nature of the web still captures the spirit of the native desktop chat apps.
The contrast between the decline of native desktop chat apps and social media on the web, and the continued relevance of traditional desktop use cases like Logic Pro X and Final Cut Pro X, where it’s mobile that’s struggling for relevance, highlights a flaw in the post-PC narrative of a declining desktop. If that narrative were accurate, you’d expect every desktop use case to decline the way desktop chat apps did, but we don’t see that.
The Platform Advantage Matrix
What’s happening isn’t a transition, it’s a migration. Apps are migrating to the platform whose advantages best fit their use case. I’ve tried to summarize the advantage of each platform in a single word4:
The desktop is for apps with long lists of features; the defining characteristic of powerful apps is that they support an ecosystem of third-party plugins. The web has the best features for allowing people to view and edit the same content; the URL is the easiest way to share anything. Mobile is the gold standard of making apps easy to install, easy to run, and easy to use, and it has convenience features like prefetching.
Simplicity is desirable in all apps, except for those used for creation, where it runs contrary to the flexibility necessary for expression. So mobile apps are the baseline, and the best platform for an app, unless one of the advantages of the other two platforms is more important: if its main purpose is to be powerful or to facilitate collaboration. The reason so many apps support both mobile and web without having a native desktop app is that collaboration and simplicity complement each other, while power is at odds with both6.
Apps & Their Platforms
Here are some examples of apps categorized by the platform advantage that’s their highest priority:
Facebook, Instagram, and Twitter all end up in simplicity because, while social media is inherently collaborative, the space is so competitive that making zero compromises in the all-important simplicity category ends up being the highest priority. Trello and Slack are the inverse: with less competition in their categories, their essence is reflected in collaborative software. Of course, Slack and Trello also have mobile apps, but their native apps feel more like web apps than truly native-first apps, because collaboration is their highest priority. Almost all of the collaboration and simplicity apps have both mobile and web apps (the only exception is Figma), while none of the power apps7 have either mobile or web apps, because while simplicity and collaboration are in alignment, power conflicts with both.
Figma is actually quite powerful, but it ends up in the collaboration category because that’s its defining trait. You could say that Figma’s marketplace bet is that the interface design industry will sacrifice some power in order to collaborate more effectively. Notably, an app like Figma still leaves room for other more powerful desktop apps in the same category, because there will always be some people who want more of the power that Figma sacrifices in order to be more effective at collaboration.
There’s also a special category for apps that don’t fit anywhere else. This mostly illustrates flaws in this exercise. The software landscape is messy. Every app is made up of many small decisions, and they reflect their creators, just as much as they reflect the marketplace. Any attempt to pigeonhole them is bound to run into some problems. But the idea is, if you zoom out far enough, some patterns emerge that can help us better understand the platform migration that’s underway today.
The goal of this piece is to predict where software is going. It’s common today to predict that mobile, and to a lesser extent, the web, are replacing the desktop. Steve Jobs famously said the desktop is going to be like trucks8, but I think a more direct comparison is that the desktop is going to be like the command line. The command line was once the only interface computers had, then the GUI came along, and now that’s the main way the vast majority of users interact with their computers. But that wasn’t the end for the command line. It’s continued to be an indispensable tool for developers, to the degree that software development is almost impossible without it9.
The future of the desktop might be like the command line. That may sound like faint praise, but that depends on the prism you’re looking through. If the desktop continues to be the best place to do the most exciting things you can do with a computer, things like 3D, audio, motion graphics, programming, and video—all of which haven’t budged since the introduction of mobile—then it will be the most important platform to do the things I care about the most. Sure, I’ll still have an iPhone and an iPad, and I’m sure there will be some great creative tools on those platforms, just like there are some great GUI and web tools for programming that aren’t on the command line today. But if the desktop continues to be the heart and soul of where creative work is done, then that’s the platform that will have won to me.
Presumably more apps like Final Cut Pro and Logic Pro X would be in the Mac App Store if Apple eased up on the sandboxing restrictions for Mac App Store apps. ↩︎
There are some new categories of app that the mobile form factor, and economic model, have enabled to emerge. Some examples are the wonderful Procreate, and the explosion of mobile gaming. ↩︎
Summarizing the advantages of each platform with a single word is inherently flawed, because it doesn’t account for a bunch of secondary characteristics that each platform has. For example, the web is also the easiest way to make an app available on any device, and mobile grants access to sensors and data that aren’t available anywhere else. But the goal here is just to distill the essence of a platform into a single word, in order to create a framework that we can use to look for broad patterns. ↩︎
Collaboration is a much broader category than it at first appears, encompassing not just obvious examples like Google Docs, but also, content management systems, and any kind of employee portal. The majority of software used to run businesses is collaboration software. ↩︎
Power is at odds with collaboration because the more powerful an app is, the harder it is to use, and the harder it is to use, the fewer people who can use it, which means fewer people to collaborate with. ↩︎
This kind of comparison has been done before with Final Cut Pro and Adobe Premiere with similar results (Final Cut renders video at well over twice the speed of Premiere), but I’ve never seen it done with Motion and After Effects. I don’t think there are nearly as many people who know both of the motion graphics programs, relative to the video editors; in particular, Motion doesn’t even show up when polling motion graphics professionals.
Jason Snell shared a bunch of charts that trace Apple’s growth in various product lines from 2009 to 2019. Per John Gruber, that last one’s a doozy. If you’re trying to figure out why Apple is neglecting their products for creative users, then there’s your reason. This also means that any argument for why Apple should give more attention to the Mac that’s based on Mac revenue isn’t going to fly, because that’s just not enough revenue. A better argument is that the entire iOS ecosystem, from the OS itself, to all of the apps, to the majority of the content in the apps, is created on Macs. Could all this be moved to other platforms, without hurting the iPhone cash cow? Maybe, but I sure wouldn’t want to risk it.
Jeff Han’s multi-touch interface demo for his 2006 TED Talk, ‘The Radical Promise of the Multi-Touch Interface’, includes many innovations that are often attributed to the iPhone, announced a year later, including pinch to zoom. He also refers to the software as “apps”, and “the interface disappears” is a direct quote. This is a great example of what I’m getting at with my previous post about how the best paradigms for a new platform are usually discovered early.
I think that in every era there has to be some kind of simplified programming environment for the quiet majority of developers who don’t need fancy administration features for their code, like git branches or multistep deployment processes; they just want to write code and have it run. Glitch is aimed at those developers.
Advocates of simpler creative software usually brush off the fact that a whole bunch of peripheral technology would need to be re-written to support their vision, usually by saying that that stuff wasn’t written right anyway. Spolsky takes a more measured approach, which I appreciate, by instead saying most people just don’t need those features1.
Glitch is far from the only product that simplifies software by making it impossible to do complex things. Apple themselves are advocates of this approach, GarageBand and iMovie being prime examples; and the entire design of iOS is based on the premise that software, and especially operating systems, are too complicated2. But it’s difficult to determine how popular these simpler versions of apps are relative to their more complicated counterparts. On the Mac App Store, Final Cut Pro and iMovie are the #1 paid and free apps in the video category respectively. In music, it’s Logic Pro X and GarageBand. Logic Pro X is the most popular paid app overall in the Mac App Store, and iMovie is the most popular free app. While this does indicate that both approaches are popular, it comes with the caveat that the free apps are subsidized by other parts of Apple’s business3.
The professional offerings like Final Cut Pro and Logic Pro X show up much more online: Their forums are more active, they get reviewed more thoroughly by the press, and they show up more prominently in online surveys. In some ways this is to be expected; online communities attract the most passionate users, who will also tend to be the users who want the most features. But it gives the impression that the simpler apps don’t have fans, and that they’re used because they’re free, not because they’re good.
The argument for simple creative software feels very hand-wavy to me. It’s based on the premise that there’s another group of active users, who don’t show up in online communities and surveys, who have the ability to do complex creative work like making movies, music, or software, while not being able to learn the complex software to perform those tasks. The only way to know for sure would be to see the usage statistics for those apps, which aren’t available. Based on the information that is available, I’m not convinced that simple creative software has an audience of anything more than people looking for a free alternative to expensive software4, and there aren’t enough free users to build a business around. The only company that should be making these types of apps is Apple, in the interest of commoditizing the complements of their hardware products. So why are other companies making these kinds of apps?
Apple approached ManvsMachine as one of six renowned artists challenged with pushing the iMac Pro to its limits.
We created a base algorithmic structure that served as a blueprint for each form and movement in the film. The CGI architectural structures were covered in high‑resolution photographic textures and embellished using a custom‑designed system that controls the foliage, architectural details, and other nuances. The result is a monolithic structure of massive scale and incredible detail.
There’s a short behind-the-scenes video that explains a bit about how it was created on the iMac Pro with Houdini.
Matt Derbyshire, ex-Head of Product & Marketing at Ampify
Fecher also talked to a few indie iOS audio app developers who were willing to share sales numbers about their apps. This section starts at 25:36, I’ve summarized the sales numbers below.
Indie Developer 1: Audio Damage
Most of the developers Fecher spoke to chose to stay anonymous, but not Chris Randall from Audio Damage. Audio Damage has a suite of plugins, most of which are available on both iOS and desktop. Here’s how their revenue breaks down:
There’s a 50/50 split of revenue between desktop and iOS.
The mobile apps are priced at 1/8 the price of their desktop counterparts.
So that means there are eight times the unit sales on iOS at 1/8 the price.
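The unit-sales arithmetic above can be sanity-checked with a quick sketch. The dollar figures here are hypothetical, chosen only to match the stated 50/50 revenue split and 1/8 price ratio:

```python
# Hypothetical numbers illustrating the stated ratios: equal revenue on each
# platform, with the iOS price at 1/8 of the desktop price.
desktop_price = 40.0           # assumed desktop plugin price (illustrative)
ios_price = desktop_price / 8  # iOS priced at 1/8 of desktop

desktop_revenue = 10_000.0     # 50/50 revenue split (illustrative)
ios_revenue = 10_000.0

desktop_units = desktop_revenue / desktop_price
ios_units = ios_revenue / ios_price

# Equal revenue at 1/8 the price implies 8x the unit sales on iOS.
print(ios_units / desktop_units)  # → 8.0
```

The ratio is independent of the actual prices and revenue: whenever revenue is split evenly and one price is 1/8 of the other, units must differ by a factor of eight.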
Indie Developer 2: New Synth
The second developer Fecher spoke to had just released a new synth on both iOS and desktop. Here’s how revenue broke down for the first two months after launch:
33% of revenue came from iOS.
Desktop revenue was $13,000, and iOS was $6,922.
The iOS app version was priced at 1/10 the cost of the desktop version.
Indie Developer 3: iPad Synth
The third developer had a synth that’s only available on iPad; their revenue broke down as follows:
$125,000 in total revenue.
$80,500 in proceeds, so that’s after factoring in Apple’s 30% cut and other App Store costs.
$6,000 per month in average sales for the last three months.
This app was not featured by Apple.
Indie Developer 4: Adding AUv3
The fourth developer shared their data after adding AUv3 to their app, which resulted in a boost to sales.
$4,000 in revenue for the previous month before adding AUv3.
$15,000 in revenue the month after adding AUv3.
They calculated that adding AUv3 resulted in ~$33,000 in extra revenue over the three months after the update.
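The ~$33,000 figure is consistent with taking the pre-AUv3 month as a baseline and assuming the post-AUv3 rate held for three months (an assumption on my part; the developer didn’t share their exact method):

```python
baseline = 4_000  # revenue in the month before adding AUv3
after = 15_000    # revenue in the month after adding AUv3

# If the baseline would otherwise have held, the extra revenue
# attributable to AUv3 over three months is:
extra_over_three_months = (after - baseline) * 3
print(extra_over_three_months)  # → 33000
```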
Indie Developer 5: iOS to Mac
The final developer had one music app that was initially released for iPad and then expanded to iPhone and Mac. Their sales breakdown on each platform is as follows:
25% of sales come from iPhone, 35% from iPad, and 40% from Mac.
$500,000 in total sales over the last three years.
The iOS version is priced at 1/2 the price of the Mac version.
There’s Something Happening Here
There’s undoubtedly something happening with iOS audio apps; it’s one of the most active creative app markets. The Audiobus Forum, for example, is not just the most active community for iOS music making, but possibly the most active community for music making, period, fueled by a continuous stream of new apps being released.
There are some unique characteristics of this market relative to other creative app markets. For example, no clear winner has emerged the way Procreate has for illustration apps (Procreate is the #1 paid iPad app in the App Store). The closest to an overall winner so far is probably BeatMaker 3, which is currently the #3 paid Music app on iPad and #19 overall.
It’s fascinating to see an app like this come out on top for iOS, because the platform itself goes so far to emphasize the single app experience. That instead we have many small apps working together is likely the result of the App Store’s many restrictions making it difficult for a big winner to emerge1. Big winners thrive on creating their own ecosystem, and that’s hard to do when Apple has so many restrictions about what and how items can be sold, and how apps can be extended.
The variety of the iOS audio app market reminds me of the golden age of OS X indie development before the iPhone. It was a time when lots of new and innovative apps were being released by small companies, often consisting of just one person. The problem with an ecosystem like that is that it’s incredibly fragile, as the release of the iPhone, and Apple’s corresponding change in priorities, has shown. Many of the apps from the 2000s indie development scene are gone now, precious few developers from that era were able to transition to healthy businesses to maintain their apps over time. The true legacy of the OS X indie app movement is the iOS App Store itself, where more developers are competing over a larger market, but it’s a market where it’s notoriously difficult to make a living.
While the iOS audio market has found a way to thrive by embracing variety, it simultaneously has some of the same problems as other creative app markets on the iPhone and Mac app stores: There aren’t enough steady businesses to serve as anchors in the industry. What I’d like to see are more successes like Adobe or Ableton on the app stores.
I’m happy to introduce FS Bookmarks, a shortcut that lets you create direct launchers for files and folders stored in the Files app. FS Bookmarks is a hybrid Shortcuts-Scriptable tool that takes advantage of a native Files API (which I will call “bookmarks”) to expose the filesystem path of any file or folder stored in the Files app.
At this point I stopped reading and started scanning for how he did this because I’ve never been able to figure it out myself. Here’s the trick:
Under the hood on both iOS and iPadOS, files and folders stored in the Files app have paths such as this one:
That’s one ugly file path, but it’s how the system points to an app’s folder. In this case, the file path above is pointing to a folder called ‘Image Assets’ located under iCloud Drive/iA Writer. Similarly, a PDF document named ‘Expenses.pdf’ stored in your iCloud Drive Downloads folder should have this kind of filesystem path:
By themselves, these paths are useless as you cannot launch them in any way. However, I’ve recently discovered that if you combine the Files app’s shareddocuments:// URL scheme with an encoded version of the filesystem path, the file or folder can be reopened directly in the Files app. The launcher URL looks something like this:
You can get the part of the path that comes after /private/var/mobile/Library/Mobile Documents/ on your Mac by using the Terminal to cd first to ~/Library/Mobile\ Documents, then into the subdirectory you want to make a URL to, and finally using pwd to print the path1. Viticci continues by describing how the FS Bookmarks shortcut simplifies creating these URLs, but at this point I’d stopped reading to go see if this actually works (it does).
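Building one of these launcher URLs is just percent-encoding the filesystem path and prepending the `shareddocuments://` scheme. Here’s a minimal sketch; the subpath used below is a made-up example, so substitute the one you found with `pwd`:

```python
from urllib.parse import quote

# Root of the Files app's document storage on iOS, per the footnote above.
IOS_ROOT = "/private/var/mobile/Library/Mobile Documents/"

def files_app_url(subpath: str) -> str:
    """Build a shareddocuments:// launcher URL for a path under Mobile Documents.

    `subpath` is the portion of the path that comes after Mobile Documents/,
    e.g. as printed by `pwd` inside ~/Library/Mobile Documents on a Mac.
    """
    # Percent-encode the path; "/" is kept as-is so it still reads as a path.
    return "shareddocuments://" + quote(IOS_ROOT + subpath)

# Hypothetical subpath, for illustration only.
print(files_app_url("com~apple~CloudDocs/Downloads"))
```

Opening the resulting URL (for example, from a Shortcuts “Open URLs” action) should jump straight to that folder in the Files app, per Viticci’s discovery.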
For me this is a breakthrough in the usefulness of iOS, because jumping to a specific folder is important for retrieving information from the file system hierarchy2. There are two fast methods of retrieving information from the file system: launching and searching. LaunchBar is an example of a launcher; it’s a user-defined index of commonly used actions, whereas Spotlight, the search built into macOS and iOS, is an index of your entire file system. Both present a text box that you can type into to find what you’re looking for, but their use cases are different. A launcher is better if you already know exactly what you’re looking for, because its index is smaller and faster. Search is better if you’re not sure exactly what you’re looking for, because its index is larger.
A launcher does not have to have a text-based interface; icons are commonly used instead3. The Dock is a launcher in macOS, and SpringBoard, the default screen on iOS, is a launcher for apps. Shortcuts makes a great launcher for everything else on iOS, and that’s what I use to open these URLs.
The key to using the file system’s hierarchy effectively is to use a launcher to jump to a standard base folder, instead of jumping directly to your final destination4. For example, if you have a “Projects” base folder, you’d jump to that with your launcher instead of jumping directly to a specific project folder like “Repla”. You’d jump to “Projects”, then navigate to the “Repla” subfolder inside it. Jumping to a standard base folder instead of trying to jump to the final destination is better for the following reasons:
Keeping all of the individual project folders in your launcher means micromanaging its index.
Individual project names often have too many hits, e.g., if you’re working on “Repla”, you probably have a lot of files with “Repla” in the name.
By jumping into individual project folders, you’re not building long-term reusable muscle memory; you lose that muscle memory when you switch projects.
With Viticci’s URLs we can jump to a standard base folder on iOS for the first time.
The location of the Mobile Documents directory itself is not the same between iOS and Mac, so everything that comes after Mobile Documents has to be appended to the root path /private/var/mobile/Library/Mobile Documents/. ↩︎
There’s been a decade-long attempt to replace hierarchy with another form of organization, usually tagging. The argument is that hierarchy is confusing. That may be true, but hierarchy also fits nicely into the visual metaphor of folders. Tagging, on the other hand, is an entirely abstract concept, so it’s likely even more confusing. As far as I can tell, every attempt at replacing hierarchy has been a failure (when was the last time Apple talked about tagging?). The replacements are used less, harder to understand, and less effective than hierarchy. This is usually what happens when you try to replace something ubiquitous that’s been refined over decades with something new: you get something that’s worse by every metric. ↩︎
A search interface, on the other hand, is essentially always text based. ↩︎
The only exception to this I’ve found is at the terminal. The z utility tracks which folders you’ve visited recently, and makes it easy to jump quickly to those. I use fasd combined with fzf to fuzzy find recent folders. This is the most effective way of traversing the file system I’ve found, but this approach isn’t available in any GUI environment that I’m aware of. ↩︎
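A rough sketch of that setup looks like the following. The function name `j` is my own choice, and it assumes `fasd` and `fzf` are installed and on your `PATH`:

```shell
# Hypothetical helper: fuzzy-jump to a recently visited directory.
# fasd -Rdl lists recent directories, highest-ranked first;
# fzf presents the list for interactive fuzzy filtering.
j() {
  local dir
  dir="$(fasd -Rdl | fzf --no-sort)" && cd "$dir"
}
```

Typing `j` then a few characters of the folder name gets you there without ever spelling out a path.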
The new 16” MacBook Pro still has a Touch Bar, but it now has a physical Esc key. I think this will quell most of the complaints about the Touch Bar; the touch Esc button was a particularly lousy replacement for a couple of reasons:
Esc is a commonly used key, especially for dismissing dialogs.
As a physical key, Esc is easy to type without looking because it’s in the upper left corner.
The remaining keys in the function row are also commonly used, but they were never easy to type without looking: Brightness, Mission Control, Launchpad, rewind, play/pause, fast forward, mute, and volume. For those keys the Touch Bar feels like a wash: The physical keys were slightly easier to type, but the Touch Bar is significantly cooler. So with the physical Esc key back, I suspect the complaints will die down, but why wasn’t the Touch Bar more successful?
Function keys have their proponents, but they’re rarely used on the Mac. They’re much more common on Windows because the Windows key, unlike the macOS command key, isn’t a modifier1. This means application and user-defined shortcuts2 end up in the function row. But function keys are worse than modifier-based keyboard shortcuts: They aren’t mnemonic and they’re too far from the home row to type easily without looking3.
Unfortunately, the Touch Bar is usually just used to add buttons, which effectively means it’s used as function keys, but with a visual indication of each button’s purpose. Here the Touch Bar ends up stuck between the mouse and the keyboard: if you know the modifier-based keyboard shortcut, then that’s going to be easier to type without looking, and, if you don’t, using the mouse to click on something is easier because you’re already looking at the screen.
An ideal use case for the Touch Bar would appear to be selecting from a range of values: choosing exactly the right color, or adjusting the many sliders in Lightroom Classic’s Develop module. Here the keyboard is clearly deficient; the keys have to move the value by a fixed increment, which might be much larger or smaller than the size of the intended change. The mouse fares much better, but it’s still not a great fit, because the mouse works best when you know in advance exactly what you want to click on; it works less well if you want to warm an image up and stop when it looks right4. The proliferation of custom controllers with knobs and faders for audio, photo, and video editing speaks to a need that isn’t being met by the keyboard and mouse.
Unfortunately, the Touch Bar isn’t a great fit for this use case either. Touch has lousy fidelity; it’s difficult to select a precise value. The Touch Bar’s minimal screen real estate exacerbates the problem. It ends up being worse than the mouse and keyboard used as a pair: The mouse to get close, then the keyboard to make fine-grained adjustments.
The last issue with the Touch Bar is that it isn’t on the desktop. This conflicts with the essence of what a laptop is: A desktop that you can take with you. Contrast this with an iPad, which is a truly mobile device. The applications we run on a laptop are not designed to preserve battery, handle intermittent internet, and start up quickly the way mobile apps are. And the touch form factor itself trades the accuracy and ergonomics of the keyboard and mouse for being easier to use quickly.
The Touch Bar doesn’t work with an iMac, or with an external keyboard, or when the laptop is in clamshell mode. It doesn’t work with desktop workflows. Selecting a value from a range comes up often in creative apps, and while the keyboard and mouse do come out on top for those adjustments, it’s still one of the best use cases for touch. But audio, photo, and video editing are desktop tasks that take advantage of desktop workflows, and the people doing these tasks often use external peripherals that make the Touch Bar inaccessible5.
In summary, the Touch Bar’s problems are:
The Touch Bar is usually used for touch buttons, but keyboard shortcuts are better than touch buttons because they’re easier to type without looking.
When selecting values from a range, the Touch Bar is worse than the keyboard and mouse because touch has poor fidelity.
Even if the Touch Bar did have better fidelity for selecting values from a range, most of the use cases for those types of edits are desktop workflows, and the Touch Bar isn’t available on the desktop.
Touch in general hasn’t had the impact many of us expected and hoped for. Its main advantage has turned out to be that integrating the display and the input method is an efficient use of physical space, which is at a premium when computing on the go6. There have been some interesting attempts at using touch-based controls in creative fields. When the iPad was first released, it seemed like Lemur7 would gain traction in music production. But the software hasn’t turned out to be very popular, and the app is rarely updated. Instead, the industry has gravitated towards control surfaces with physical buttons and knobs that integrate with desktop software, like Native Instruments’ Maschine and Ableton Push.
It’s hard not to wonder what Apple themselves think the advantages of the Touch Bar are. My suspicion is that Apple, like many of us, overestimated the promise of touch based on the success of the iPhone and iPad. But touch is a mobile technology. Outside of mobile, when space isn’t at such a premium, the tactile benefits of physical controls win every time.
The Windows key not being a modifier causes problems with copy and paste at the Windows Command Prompt. Windows’ standard copy and paste shortcuts, ⌃C and ⌃V, can’t be used because those keys are used to send signals to the shell, most notably control-C to abort the current task. The Mac sidesteps this issue by using the command key for copy and paste. ↩︎
Many of the differences in keyboard shortcuts between Excel for Mac and Windows are moving shortcuts from the function row to modifier-based shortcuts on the Mac. Reliance on function keys on the Mac is a common tell for a cross-platform application that’s primarily developed on Windows. ↩︎
On a full-size keyboard with a dedicated function row (which Apple hasn’t made in over a decade), function keys do have one benefit over modifier-based shortcuts: they only require a single key to be typed, without also holding down a modifier. ↩︎
To use the mouse to select from a range of values, you can hold the mouse button down and then drag until you’ve selected the correct value, but this is a more strenuous way to use the mouse, and you lose fidelity when releasing the button. ↩︎
Sidecar might be Apple’s strategy for bringing the Touch Bar to the desktop. It shows a Touch Bar by default, and pairing it with the Apple Pencil is interesting because the Pencil can also serve as a precise input device. ↩︎
The overarching marketing message is about listening to pro customers. Schiller also shared some details with Roger Cheng, at CNET, about how customer feedback shaped the design of the new keyboard:
A few years back, we decided that while we were advancing the butterfly keyboard, we would also – specifically for our pro customer – go back and really talk to many pro customers about what they most want in a keyboard and did a bunch of research. That’s been a really impressive project, the way the engineering team has gotten into the physiology of typing and the psychology of typing – what people love. […]
As we started to investigate specifically what pro users most wanted, a lot of times they would say, “I want something like this Magic Keyboard, I love that keyboard.” And so the team has been working on this idea of taking that core technology and adapting it to the notebook, which is a different implementation than the desktop keyboard, and that’s what we’ve come up with [for] this new keyboard. We’re doing both in advancing the butterfly keyboard, and we’re creating this new Magic Keyboard for our Pro notebooks.
This all sounds great, but the question remains of what had changed at Apple that caused them to ship the butterfly keyboard after they’d been shipping laptops that everyone loved for years before that1.
If I had to hazard a guess, I’d say what changed is caution. The iPad, and above all the iPhone, have an aura of caution around them. While the iPhone does take an occasional risk, like removing the headphone jack, for the most part it feels like features aren’t released until they’re perfect. Removing the home button from the iPhone X is a great recent example2. I’ve yet to see anyone point out what a triumph that was. The iPhone’s core mechanical button was replaced by gestures, and people barely noticed. Gestures! The iPhone X’s home gestures are the satisfying crack of the ball meeting the sweet spot when you’ve hit a home run. It’s unequivocal proof that Apple’s still got it, and the fact that nobody’s even talking about it is illustrative of how much of a grand slam it was.
Other times caution means holding back: the iPad’s clumsy multi-tasking gestures haven’t been allowed to touch the iPad’s core user experience of using one app at a time3. Because they just aren’t ready yet, putting them front and center in the iPad user experience would be a butterfly-keyboard-like catastrophe.
It feels to me like whatever mechanism Apple uses to refine great ideas, and to determine when to hold the lousy ones back, was absent from the Mac products that shipped from 2015–2018. Since then it’s been put back in place; the guard rails are back up.
Another great example of Apple making sure something is perfect before shipping it is Face ID. It’s really hard to take an existing excellent feature, and replace it with another one that solves the same problem in a completely different way, without bungling the switch. ↩︎
You can use the iPad’s multi-tasking gestures if you know about them, but you can also just use an iPad one app at a time, without ever even knowing they’re there. ↩︎
Ultimately, it’s the ecosystem that explains why I can’t stop raving about the iPad. When it came out, the big knock on the iPad was that it was just a big phone; today, that’s what I love about it — like the Watch or AirPods, the iPad feels intuitive and natural to me because it works just like the device I use most often, my phone.
Like a phone, in most scenarios I find the iPad to be faster, more portable and easier to use and maintain than any traditional P.C. I’ve ever owned. The iPad’s limited screen space and emphasis on full-screen apps also makes for fewer distractions than on a traditional personal computer. The iPad, like my phone, lets me log in to my bank using my face; the Mac, in 2019, doesn’t even have a touch screen.
I’m convinced that perception of the iPad online is warped by it being a great device for writing1. The thing about writing is that it’s a baseline use case for a device, because everyone does it (even if it’s just sending text messages). As a result, it’s one of the most well-supported use cases on the device, and by extension it’s one of the most common types of apps developers create. This is why, for example, there is a seemingly endless supply of great writing apps on the iPad, but only a couple of great image editors, and only one great video editor. For each of those tasks you’re walking up a scale of how common the task is, and how much out-of-the-box support you get for it as a developer. So it’s a great device for writing, and it’s natural for writers to share their opinions online, because hey, that’s a great use of writing! So when you read opinions online, it’s usually writers, who have one of the best-supported workflows on the device, saying they’ve managed to make it work for themselves.
The only time Manjoo breaks out from his own workflow is in referencing a conversation with Dan Seifert, deputy editor at The Verge:
The iPad still can’t do everything a laptop can, and I still have to log in to a “real” computer sometimes. I had a long chat recently with Dan Seifert, the deputy editor of the Verge, who uses an iPad every day on the subway but often finds the device infuriating.
“For someone like me, who’s been using a desktop operating system for a long time, there’s a lot of built-in conventions that I’m used to that can be frustrating,” Seifert said. In particular, the iPad doesn’t work with antiquated work flows that are built for PCs. Say you need to log into your company’s bespoke publishing system or expense program? It’s possible those won’t work on your iPad — at least not yet — because they were built for much older devices.
“Antiquated work flows”2? Really? What about the state-of-the-art way that the very website his article is being shared on is designed and built? Sure, there’s going to be some organizational padding between an opinion columnist and the people behind The New York Times’ digital products, but is he really unaware that none of the tools they use to do their jobs even run on iPads?
Along with writing, illustration is another field where the iPad really shines. And, to complete the trio of great iPad use cases, the third is “in the field” work such as reviewing footage for a video shoot on site, or photos for a photo shoot. In other words, cases where the form factor benefits of the iPad outweigh its workflow deficiencies. ↩︎
You can make the argument that most people don’t want to do the type of tasks that an iPad still isn’t suited for, such as graphic design, 3D, heavy photo editing, video editing, motion graphics, making games, or any kind of programming, including building websites – but referring to them as antiquated workflows is just plain false; those are arguably some of the fastest-moving fields in existence. ↩︎
When the new Apple Pencil came out a year ago, I integrated it into my iPad editing workflow. I can edit podcasts with the Apple Pencil at a pretty impressive rate of speed, and the precision of the Pencil means that I’m more inclined to make detailed edits on the iPad than I am when I’m editing on my Mac with Logic Pro X and a trackpad. In fact, every episode of The Incomparable that I’ve edited in the past four months has been done on my iPad Pro.
One of the great things about Ferrite is that it doesn’t have a preferred interface mode—you can use touch, keyboard shortcuts… or Apple Pencil. While I was trying out the new iPad Pros and the new Pencil, I decided to try to edit a podcast in Ferrite, and my mind was blown. Now I was tapping and sliding the pencil to delete extraneous audio. The latest update added support for double-tapping on the new Pencil, which I mapped to a play/pause toggle so I could edit more quickly without putting the Pencil down.
This is a great sign for the iPad; more than anything, the success of the platform hinges on the interaction model. The only interaction I personally prefer on the iPad is scrolling; for everything else I prefer the Mac. I have an Apple Pencil, but only really use it occasionally for quick sketches. For me, the problem with the Pencil is that, while you do get additional precision with it, everything else gets worse. Pinch to zoom, two-finger tap to undo, and of course scrolling itself all range from difficult to impossible with the Pencil in your hand. This combination of tradeoffs mirrors the trackpad on macOS. I use a Magic Trackpad 2 and find it to also be poor for precise edits. But I still prefer it to a trackball or mouse because, with multi-touch gestures, I find it to be a more comfortable device in sum, despite its shortcomings when making precise edits. Snell says in the post that he’s using a trackpad when he edits on the Mac, so my follow-up question would be how he thinks editing on the Mac with a mouse stacks up.
Snell is going to continue doing most of his podcast editing on iPad, but he still needs to use his Mac as the final step in his workflow because the audio plugins that give him the best results simply aren’t available on iOS:
Now here’s the tough one, one I don’t have a good answer for as yet. As cool as it is that I edit every episode of The Incomparable on my iPad, the fact is that all the audio files for that episode are prepped on my Mac before they get to my iPad. I sync audio tracks using a proprietary tool, then use iZotope RX to remove background noise, and finally use a compressor (currently it’s Klevgrand’s Korvpressor, but it’s the latest in a string of ones I’ve used, they’re like Spın̈al Tap drummers) to balance the volume of audio across different tracks.
Ferrite includes a compressor plug-in and a volume-leveling preprocessing feature, neither of which can I get to generate the output I desire. Korvpressor has an iOS version that I can use as a plug-in in Ferrite as I do on the Mac with Logic Pro X, but the iPad version crashes reliably, so I can’t use it. And there’s absolutely nothing I’ve found on iOS that can match the quality of noise and echo removal that iZotope provides on the desktop.
There are some apps that replicate this workflow in perhaps the most iOS-like way currently possible, most notably AUM, but it’s still not a workflow that feels as at home on iOS as it does on macOS. ↩︎