Handling file pairs in A Better Finder Rename 11

Many photographers working with RAW files often end up with file pairs along the lines of “my RAW image file” + “my JPEG preview image file”, e.g.

image1.cr3    image1.jpg
image2.cr3    image2.jpg

Naturally, when it comes to changing the names of these files, they want to apply the same changes to both file types, e.g.

Current Name    New Name
image1.cr3      beach1.cr3
image2.cr3      beach2.cr3
…
image1.jpg      beach1.jpg
image2.jpg      beach2.jpg
…

A Better Finder Rename has no built-in support for file pairings, but you can use Filter Actions to make separate “flows” for different file types. As long as you use the same settings for both flows you will end up with the same file names.

Filter actions are special actions that do not change the file name, but instead decide which files or folders the actions below them in the action table apply to.

You can find out all about filter actions by going to “Help” -> “How to Use Filter Actions” within A Better Finder Rename 11.

  • click on “Show Advanced Sidebar” if it is not yet visible
  • click on the “Add Filter Action” icon on the bottom right of the “Actions” sidebar to add a filter action
  • change the settings to something along these lines:

The filter action above limits all renaming actions below it to only jpg files.

  • add a second filter action to process only your RAW files

The second filter action limits the actions to just “CR3” files.

To recapitulate:

  1. all the actions between the first and the second filter action only apply to JPG files
  2. all actions between the second filter and the end of the action list apply only to CR3 files

In practice, you will probably almost always use “first sort by name and sequence number”, as this is the way that the files normally come over from your camera, but you can also use sorting by EXIF shooting date as long as both your JPEG and RAW image files have the correct meta-data.

I hope that this helps.

Some things I wish I had known before starting to automate Mac Developer ID Notarization

It’s Day 5 of Notarization Week and it’s time to wrap up and write down my experiences.

Notarization itself is not incredibly difficult. You can learn the basics by watching the 40-minute talk from WWDC 2019. Unlike sandboxing, notarization should not have any detrimental effects for most Mac apps.

As always the real trouble starts when you are trying to inject Notarization into the tangled web of modern Mac software development: entitlements, certificates, automated Xcode build chains, build settings, etc..

First you need to adopt the “Hardened Runtime” for your application. For the two apps that I tested with, this was simply a matter of switching it on in the “Capabilities” tab of your target. By default, all the hardened runtime features are switched on and I was able to leave them all on without any problem.

The first gotcha is that you can’t really test your application’s compatibility with the hardened runtime in Xcode, because it will run in debug mode. Since the hardened runtime would not allow inspection of your code, the default “CODE_SIGN_INJECT_BASE_ENTITLEMENTS=YES” build setting will inject the “com.apple.security.get-task-allow” entitlement into the debug version of your build product. This is a “normal” entitlement, just like those used for sandboxing.. and no, the sandbox does not need to be turned on for notarization to work (sigh of relief).

Another gotcha is that your app will not be notarized as long as this entitlement is switched on, so we need to turn it off for the release build. This should not be a worry, but you will probably spend many frustrating hours chasing down this very problem nonetheless..

The next thing on the compliance list is that secure timestamps for codesign need to be turned on. Many developers have a “--timestamp=none” flag somewhere in their build settings.. because the Apple timestamp servers are slow and often down (at least here in Luxembourg) and you can no longer build a release without an internet connection. So if you have a build server without an internet connection.. that is about to change. To make doubly sure, you should probably add “OTHER_CODE_SIGN_FLAGS='$(inherited) --timestamp'” to your build settings.
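If your release builds go through xcodebuild rather than the Xcode GUI, both of the above settings can also be forced from the command line. A minimal sketch, assuming a hypothetical scheme name and archive path; the overrides themselves are standard Xcode build settings:

    xcodebuild -scheme MyApp -configuration Release \
        OTHER_CODE_SIGN_FLAGS="--timestamp" \
        CODE_SIGN_INJECT_BASE_ENTITLEMENTS=NO \
        clean archive -archivePath build/MyApp.xcarchive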

In this context, it would have saved me a lot of time if I had known how to find out whether a product has in fact been signed with a secure timestamp. Executing “codesign --verify --deep --strict --verbose=4 --display -r- /path/to/my/product” will display loads of things. If there is a line with “Signed Time” among them, that means that you did not sign with a secure timestamp. If there is a line with “Timestamp” in it, you do have a secure timestamp. It’s another brilliant example of how an Apple engineer’s language choice can cost tens of thousands of lost developer hours. “Signed Time (insecure)” would have been a great help.
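For a quick automated check, you can simply grep the codesign output for those two strings. A sketch, with a placeholder path:

    # secure timestamp check: "Timestamp=" is good, "Signed Time=" is not
    codesign --verify --deep --strict --verbose=4 --display -r- /path/to/My.app 2>&1 \
        | grep -E "Timestamp|Signed Time"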

In a similar vein, “codesign -d --entitlements :- /path/to/my/product” displays all the entitlements for the product and will reveal the dreaded “com.apple.security.get-task-allow” entitlement if it is still present.
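This too is easy to turn into a sanity check that runs before every upload. A sketch, again with a placeholder path:

    # fail the build if the debug entitlement is still present
    if codesign -d --entitlements :- /path/to/My.app 2>/dev/null \
            | grep -q "com.apple.security.get-task-allow"; then
        echo "error: get-task-allow entitlement present - notarization will fail" >&2
        exit 1
    fi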

Once you have a build product, you can send it to Apple for notarization with “xcrun altool --notarize-app -f /my/archive --primary-bundle-id my.primary.bundle.id -u username -p app-specific-password”.

This is where things get a little weird. You can send either a disk image or a zip archive, but not an application bundle. I distribute my software as disk images and my software updates as zips. If you send a zip file, make sure that you use the “ditto” tool as instructed by Apple, so that you don’t run into problems with extended attributes. You need to supply your username (email address) and a password. You can generate an app-specific password and that worked fine for me straight away.

The command line will upload the archive and then return a “request-id”, which is a UUID that you can use to look up the state of the notarization. This is not a real-time, synchronous affair. It was fairly quick when I used it, usually taking only a few minutes, but it is obviously a challenging problem for automation. You could write a script that extracts the request-id and then polls the Apple servers for its status before continuing, but realistically you probably want to have a two or three stage build process now.
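As a rough sketch, such a polling step could look something like the following. The user name, keychain item and file names are placeholders, and the text parsing relies on the “RequestUUID” and “Status:” lines that altool prints at the time of writing, so it will need adapting if Apple changes the output format:

    #!/bin/sh
    ARCHIVE="build/MyApp.dmg"
    AC_USER="developer@example.com"
    AC_PASS="@keychain:AC_PASSWORD"    # app-specific password stored in the keychain

    # upload the archive and capture the RequestUUID
    REQUEST_ID=$(xcrun altool --notarize-app -f "$ARCHIVE" \
        --primary-bundle-id com.example.myapp \
        -u "$AC_USER" -p "$AC_PASS" 2>&1 | awk '/RequestUUID/ { print $3 }')

    # poll until Apple reports a final status
    while true; do
        STATUS=$(xcrun altool --notarization-info "$REQUEST_ID" \
            -u "$AC_USER" -p "$AC_PASS" 2>&1 \
            | awk -F': *' '/^ *Status:/ { print $2; exit }')
        echo "notarization status: $STATUS"
        case "$STATUS" in
            success) break ;;
            invalid) echo "notarization failed" >&2; exit 1 ;;
        esac
        sleep 60
    done

    # staple once notarization has succeeded (see below)
    xcrun stapler staple "$ARCHIVE"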

I subdivided my own build process into three phases: build, request notarization, and a combined stapling and verification phase.

Which brings us to stapling, which is the fun and easy part. You just type “xcrun stapler staple my.dmg” or “xcrun stapler staple my.app” and that’s that.

One thing to note is that the entire notarization process is completely free of build and version numbers, which is so wonderful. If only app review worked this way! There is no mention of how it works; it could be that Apple uses the entire archive as a hash code or that they create a hash of your upload. In any event, there is zero problem building a thousand different versions of your program and getting them all notarized.

The second thing to notice is that you can either staple app bundles or disk images, but not zip archives. Not sure which is weirder, but it kind of makes sense. In practical terms, this means that you can staple your notarization receipt to a dmg without having to open it, which is super easy. If I have understood this correctly, this means that both the dmg and the app are stapled and will open without any funny user warnings. Not being able to staple zip files, however, complicates things somewhat, because you now have to zip the app bundle to notarize it, staple the original unzipped app bundle and then re-zip it.
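In shell terms, the zip dance looks roughly like this; the file names are placeholders and the wait in the middle is the polling step sketched above:

    # zip the bundle with ditto for upload, as Apple recommends
    ditto -c -k --keepParent MyApp.app MyApp-upload.zip
    xcrun altool --notarize-app -f MyApp-upload.zip \
        --primary-bundle-id com.example.myapp \
        -u "developer@example.com" -p "@keychain:AC_PASSWORD"

    # ...wait until notarization succeeds...

    # staple the original bundle (zip archives cannot be stapled), then re-zip for distribution
    xcrun stapler staple MyApp.app
    ditto -c -k --keepParent MyApp.app MyApp.zip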

So far so good. Now enter the much dreaded Sparkle.framework, the foundation of all automated software updates across the Developer ID world, maintained by a clever, intrepid group of volunteers who deserve our eternal gratitude.. and the bane of my developer life.

For most of my products, Sparkle is the only framework that I bundle, so I blame it for the entire dreaded complexity and wasted time of framework signing.. which is a lot of blame. Signing frameworks is hell.. or used to be hell.. and now is hell again.

I don’t use Carthage or other “download stuff from all over the internet written by who knows who, run buggy build and install scripts and then ship the whole lot with my product” build systems. I actually just place a binary copy of the framework into the /Library/Frameworks/ folder and work with that. If you are using one of those build systems, you probably will have different problems.

The current (as of 26/July/2019) binary distribution of Sparkle is neither signed nor built with the hardened runtime, so it is unusable for notarized apps. Downloading the source as a zip archive leaves out crucial files. So I did a “git clone --recursive https://github.com/sparkle-project/Sparkle” to get what I assume must be the master branch version (I have some deeply strange git expertise that overlaps with nobody else’s).

Building it with “make release”, despite affirmations to the contrary, did not result in a hardened version. One of the worst things (I’m pretty sure it’s unavoidable and I’m not dissing its developers at all, but it is still absolutely dreadful) about Sparkle is that it includes two executables as well as the framework. Autoupdate.app and fileop always cause incredible signing headaches. The default option of just ticking the “Sign upon copy” option in Xcode won’t sign these properly and you inevitably end up with Gatekeeper problems.. even though it had just gone through a phase of actually working.. but no more.

I’m sure that at the heart of all my signing problems is a lack of understanding, aka ignorance. The thing is that I’m a Mac developer, not a cryptography geek. Knowing just enough to get by in the context of cryptography means knowing quite a lot about quite a few things, followed by endless trial and error that eventually ends for unknowable reasons.

After a very long time, I finally got a Sparkle build that I could use by opening the project in Xcode, adding “OTHER_CODE_SIGN_FLAGS='$(inherited) --timestamp'” and “CODE_SIGN_INJECT_BASE_ENTITLEMENTS=NO” to every relevant target and manually adding my Developer ID signing identity to all targets. I have no idea why this was necessary; as far as I understand, the framework does not need to be signed at all, and will in any event be re-signed when it is copied into my app, but it would not work without this. Perhaps the entitlements only get added during signing?
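Roughly, the command-line equivalent of those project changes would be something along these lines; this is a hedged sketch, not the exact incantation, and the signing identity is a placeholder:

    xcodebuild -project Sparkle.xcodeproj -alltargets -configuration Release \
        ENABLE_HARDENED_RUNTIME=YES \
        OTHER_CODE_SIGN_FLAGS="--timestamp" \
        CODE_SIGN_INJECT_BASE_ENTITLEMENTS=NO \
        CODE_SIGN_IDENTITY="Developer ID Application: Example Corp (TEAMID1234)" \
        build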

I then spent most of a day chasing down the origin of the “com.apple.security.get-task-allow” entitlement on the “fileop” executable that steadfastly refused to go away, despite there being no debug build and having plastered the “CODE_SIGN_INJECT_BASE_ENTITLEMENTS=NO” build setting everywhere throughout the Sparkle project. Around 11PM, I decided to just delete Xcode’s “DerivedData” folder (what else was there left?).. and that promptly solved the problem.

With the Sparkle problems solved, the rest was fairly straightforward.

All told, I’ve spent an entire week on learning about Notarization and integrating it into my build system. It’s not badly designed. In fact it works fairly well and I would even go as far as calling some of the design decisions enlightened. It is certainly a lot better thought through than either App Review or the Sandbox.

Unfortunately, it adds yet more complexity to an already incredibly complex development environment. Today’s apps are not much more complex than those from the 1990s. Phone apps are mostly much less complex. It should be easier to develop for the Mac today than it was back in the 1990s. Yet nearly 30 years of development tool, framework and API progress has yielded a development context that is no more productive and far more complex. Notarization adds yet another layer of complexity and yet another time sink for Mac developers.

There are some positives: Apple can now revoke specific builds of an app, rather than just turning off all apps from the same Developer ID. The hardened runtime gives the developer the possibility of shielding his/her software from malicious modification, but allows him/her to decide which “holes” need to be blasted into the runtime for the program to continue working. Actually scanning apps for malware adds peace of mind when you release a program into the world.

In an ideal world, Apple would turn around and ditch its Mac App Store sandbox requirement. It could even offer notarization as a way to side-load software on the iPad. After all, notarization gives it the tools to prevent malware from being published and to switch it off on every single Mac in the world should it get through anyway.

As a long time Mac developer (since 1994), however, I can’t help thinking that the security people at Apple would have done better ironing out the bugs and limitations of the sandbox to get it to work properly and be less of a nuisance, rather than adding yet another security approach.

If early reports about Catalina are to be believed, it looks like there are so many people working on Mac security that they have to roll out new security features at each release, whether they are a net benefit to users or not. Perhaps, these people could be tasked with making macOS great again instead?

Book Review: Ian McDonald’s Luna: Wolf Moon

Ian McDonald’s “Wolf Moon” manages this rarest of tricks: a middle book that is clearly superior to its predecessor.

“Luna: New Moon” was a tough read. For the first few hundred pages, I was waiting for the story to start. After another few hundred, I was anxiously hoping that there would, in fact, be a story. A hundred pages before the end, I just kept going, because having gone to the trouble of reading the entire phone book, you’re not going to stop at “W”, now are you? Then suddenly and without warning all hell broke loose, positively making the Red Wedding look like a minor plot twist. Then it ends.

The first book meticulously introduced us to a new world, a new society, its cast of movers and shakers, its ugly underbelly, its family rivalries, architecture, legal system and short history.  Lady Luna is indeed a harsh mistress. Where other books introduce you to the one or two main protagonists and a small cast of supporting roles before getting on with the plot, New Moon introduces a cast of several dozen fully fleshed out characters, all with their own narratives, agendas, history, alliances, strengths and foibles. Moreover, those character introductions are not all clustered at the beginning of the book, but continue throughout; often at the most inconvenient of times.

Ultimately, this is a frustrating conceit that makes you want to throw the novel (or in my case my Kindle) against the wall. Just when something is finally about to happen, McDonald switches to another storyline entirely. While this works great as a narrative vehicle when you have two or three intertwining narrations, once you pass the point of having a dozen or more, the pent up frustration of all those interrupted arcs quickly adds up. This is in no way aided by McDonald’s propensity for endless, tedious descriptions. The texture and scent of the amuse bouches at a society event escapes his attention no more than the subtleties of their attire and (most egregiously) their bodies and sexual exploits. There is a deeply voyeuristic quality to the endless, tediously anthropological descriptions of body shaving, oiling and (admittedly original and varied) sex acts and toys, which however never, ever crosses the line to being even mildly erotic. This voyeuristic tone is, however, just as evident and very much more enjoyable in the descriptions of lunar architecture and Lady Luna herself.

So I hated it, right?

Perhaps a little, but in the end it is all totally worth it. The very same qualities that make the first book an agonizingly slow read also make it into a towering achievement. The Luna books are more than a plot set in an alternate universe; more than just “a Star Wars story”. They feel like all the plots set in one of the most fully realized worlds ever attempted. Tolkien decided to put much of his world building into the appendices to at least keep the plot moving forwards at a leisurely pace. McDonald decided to work every last detail of the appendices into the story itself.

Ultimately the first book was the price that you pay for setting up the plot and that is largely why the second volume is so much better. There is a large cast of characters that you care for, none of them either truly good nor truly bad. There are games of thrones being played out, personal vendettas, empires forming and collapsing, lives being led and lives coming to an abrupt and often gruesome end.

The second book is overwhelmingly about revenge and conflict; the old order that was painstakingly constructed in the first book is torn down and everything is in flux.

In this second book, McDonald manages to tame his instinct to go off on wild tangents and frustrate the reader by leaving every storyline as soon as it becomes interesting, often deciding instead to stay on an individual strand of the story for several chapters until its tension has slackened enough to make the transition to another strand more bearable for the reader. Furthermore, McDonald does not constantly introduce new characters from out of nowhere, but instead visits the viewpoints of previously minor characters and gives them more depth. Towards the end of the second book, everything is set up for a third volume.

The Luna cycle is well worth your time and I personally cannot wait to get my hands on the third, and presumably final, installment. I wish I had known when I started reading that the books would eventually pay back all my efforts.

This is one of the finest exercises in world building ever attempted by any author and like the Mackenzies it “pays back three times”.

Quick Reaction to Mac Pro Leaks

Daring Fireball has a story about Apple sharing their plans for the future of the Mac Pro. It is weird to communicate with Pro users via a blogger, but what the hell: it’s Apple.

Apple are working on new Mac Pros with a completely new design, but they won’t ship “this year” and there’s no firm commitment to shipping them in 2018 either. In the meantime, they have processor-bumped the existing machines, but without USB-C and thus without LG 5K monitor support.

My gut reaction is relief. At least they are working on it and they haven’t wholesale abandoned Pro users. There are also hints about new Pro displays and new iMacs.

It is also nice to see that Apple has realized that the existing Mac Pro design is a complete failure. Dual GPUs are no good, and integrated components and a small form factor are incompatible with fast, low-cost updates. They are talking about a “modular system” now.

All things said, however, nobody outside of Apple ever thought that their new Mac Pros were anything other than the product of a deranged mind and it took them nearly 3 years to acknowledge that. Furthermore, a “modular” system is exactly what the original Mac Pro and all but its latest incarnation already were. Building this fabled new modular Mac Pro is thus as easy as slapping an industry-standard motherboard with dual Xeons into the old Mac Pro enclosure and supporting it in software. There is literally nothing to it. If they don’t like the old cheese grater enclosure just spray it Jet Black already.

All this makes me worry about what that “new” direction is: there has been no acknowledgement that a Mac Pro needs dual CPUs. There’s only talk about dual GPUs, which nobody has asked for. Are they going to mess it up by overthinking it again?

All I want is a significantly faster Mac. Something that beats my no longer supported 2009 Mac Pro.. and it’s not hard to deliver now.

 

Re-Learning Touch Typing with the Workman Layout

I learned touch typing when I was in my mid-teens and WordPerfect was the new hotness on DOS. It got me into a fair amount of trouble more than a decade later when I was writing up my PhD thesis and I developed my first proper RSI symptoms. As I mentioned in the previous post, it was the combination of two main ingredients, switching to my beloved Kinesis Advantage keyboard and the Dvorak keyboard layout that saved my hands and career.

I have used that combo to write and code for two decades now.. and yet I’m writing these words on a laptop keyboard using the Workman layout.

First things first. I’m not an über-typist. I think at the peak of my Dvorak typing I got to 80-90 words per minute, which is fast but not exceptional. I measured myself at 75wpm before starting my new adventures in touch-typing, which is just fine, because I can’t think at more than perhaps 60 wpm anyway.

Something I have realised over time is that maximum performance is not nearly as important as comfort when typing. My main success criteria for a keyboard arrangement are:

  • must feel comfortable
  • must minimise strain on my body, thus preventing injury
  • must let me concentrate on what I’m writing, not how I’m writing
    • for me that means that I need to be able to keep my eyes on the screen at all times and my fingers need to be able to find the keys without distracting me
  • must be able to keep up with my thoughts
  • must be easy to navigate and edit text
  • must enable me to use keyboard shortcuts easily

My current quest for a new keyboard layout was triggered by the fact that I want to be less dependent on my desktop setup and be able to work effectively in coffee shops and similar settings.

Unfortunately laptops come with the standard crappy staggered key arrangements and there is precisely zero hope that Apple is ever going to come out with a matrix keyboard on a laptop. So you’ve got to make do with what they give you..

I have at various times tried to use Dvorak on a MacBook keyboard, but never with any real success. My fingers have memorised the key positions on the Kinesis’ straight rows of keys and I mishit the keys on the bottom row almost constantly. All this is made worse by the fact that I’m subconsciously peeking at the QWERTY labels on the keys because the screen is right above the keyboard. The actual labels on the keyboard become especially irresistible when I reach for a keyboard shortcut when my fingers have not been resting on their home row positions.

Eventually I settled on using hunt and peck on the laptop and just living with QWERTY. It wasn’t a huge deal because I was a very occasional laptop user.

A year ago, however, all that changed because I fell in love with the 12″ MacBook. It quickly pushed my iPad out of my day bag and I found myself writing code and answering email on the go.. and with that came my dissatisfaction with not being able to touch type on it. On my desktop my thoughts just magically flow through my fingers onto the virtual paper while on the laptop I’m plodding along at quarter speed..

So I decided to bite the bullet and re-learn QWERTY touch typing. I got to 45 wpm after about a week and felt that this would be perfectly fine. While it still felt slow, I felt certain that by just sticking with it I was surely going to get faster with time. Six weeks later, however, I was still no faster and, more importantly, it still felt awful. All those finger contortions; the fact that the most frequently used keys are in the least accessible places; and most importantly: the God-awful rhythm.

One thing that most Dvorak users note is the nicely flowing rhythm of the layout. Most of the time when you type in Dvorak, successive keys are on alternating hands. One hand presses a key while the other is getting in position. The very fastest typists tend to be Dvorak users (sustained 150 wpm for 50 minutes, peaking at 212wpm) and I think that the fact that the hands alternate so often might be a key factor in that. Dvorak is to QWERTY in that respect as the Brandenburg Concertos are to slowly scratching chalk over a blackboard.

The Colemak partially optimized keyboard layout

This got me started on learning the Colemak layout. This is a “modern” optimized keyboard layout and claims to be faster and “more optimal” than Dvorak. It scores over Dvorak in a number of ways, but most significantly it is much easier for QWERTY users to learn. Known as a “partial optimization”, it relocates only some keys, concentrating on getting the most frequently used keys under your finger tips on the home row. Particularly on the bottom row, keys stay pretty much where they were. This also means that the most common shortcuts stay in the same positions, so the copy, cut, paste and undo problems I experienced previously simply do not arise. As far as possible, all keys are also typed with the same hand as in QWERTY. This is especially significant for using the Shift key properly, as this requires coordination with the opposite hand.

I found Colemak much more difficult to learn than QWERTY, because everyone already has the QWERTY layout stored somewhere in their brain. Still, Colemak was an immediate and significant improvement, even though my typing speed initially went way down to 10-15 wpm. There are a lot fewer contortions and you can type many words with home row keys only.

I persevered for 3 weeks, but even then I was struggling to get above 20 wpm. While feeling better than QWERTY, Colemak did not actually feel all that great. One particular annoyance was typing the “TH” combo. The T is just under the left index finger, which is just fine, but the H is reached by sliding the right index finger to the next key on the left. This is a very awkward manoeuvre in and of itself, but combining it with hitting the T key at speed is hard and just feels wrong. So every word containing a “th” becomes a little hiccup. I also found that in general the rhythm of the layout was an improvement only when compared to the low bar set by QWERTY.

I decided that perhaps I was barking up the wrong tree. Colemak might be easier to learn for QWERTY folk, but that actually worked against me: my beloved Dvorak has zero commonality with either layout. Keeping the keys under the same hand as in QWERTY actually slowed my progress for that very reason. What I really wanted was a “new improved” Dvorak, not a better QWERTY, but I couldn’t find anything and wasn’t about to develop my own layout.

What I did find was a coherent criticism of Colemak that was insightful enough to clarify what I actually disliked about it, but hadn’t been able to put my finger on. The author of that criticism had also developed his own layout based on this analysis and that’s how I found Workman.

The Workman fully optimized keyboard layout

Apparently a lot of keyboard layout optimizers (yes, such things exist) consider all four fingers to have the same natural range of motion, mobility and strength. This explains why Colemak considers the H key to be in a prime position despite the fact that it is clearly much harder to reach sideways than upwards with your fingers. Colemak also does not consider the length of the fingers.

Workman gives each key a score based on how easy it is for an actual finger to hit it. I’m not certain that I would necessarily have chosen the exact same scores, but it’s clearly an improvement over Colemak and the most common combos are easier to type. Workman keeps the Z and X keys in the same spot, but moves the C and V keys one position to the right. With a sticker over the keys I find that I can live with that. There are also Mac implementations freely available, including “Programmer’s” versions, which I’ll probably be using as they are similar to the “Programmer’s Dvorak” that I use on my desktop machines.

I have used the Xmas holidays to practise my Workman layout and, about a month in, I’m getting towards the usable stage; this is the first lengthy document that I have written in it. I’m optimistic about this remaining my laptop layout for good.

There are, however, a few things I’m not so keen on. The first is minor and concerns the “ch” bigram, which is fairly awkward to type. It’s not nearly as bad or common as the TH issue on Colemak or the “ls -l” plaguing Dvorak Unix users.. but still..

The other is a potential deal breaker and concerns the design decision to favour “single hand utilisation”. Workman’s designer, OJ Bucao, claims that it is easier, faster and more comfortable to type multiple letters with the same hand rather than by alternating hands.

This is the reverse of my own experience with Dvorak. When typing in Workman I’m constantly performing two, three or even four letter runs with one hand. For Bucao this is a good thing. He claims that after a while those patterns become ingrained and you end up typing them as a single action. He is certainly correct that it minimizes hand movement, as the un-used hand can find its way back to the home row and take a breather. The most common bigrams and trigrams are also easy to type with very little reaching.

Still the jury is out on that particular feature. I have noticed that I’ve started typing with those semi-automatic finger rolls, but I find it fatiguing and I don’t (yet?) like the rhythm much.. but it’s early days yet and generally I’m much happier with Workman than with either QWERTY or Colemak.

If you are interested in learning a new layout, I would recommend giving Workman a try over Colemak. Colemak and Dvorak both come pre-installed on macOS, but installing Workman is very easy. If you are a developer, the programmer’s version makes typing code much simpler.
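For reference, installing a custom layout on macOS is just a matter of dropping the layout bundle into your “Keyboard Layouts” folder and enabling it as an input source. A sketch, assuming you have downloaded the Workman for Mac package; the exact file name may differ:

    # copy the downloaded layout bundle into the per-user Keyboard Layouts folder
    mkdir -p ~/Library/"Keyboard Layouts"
    cp -R ~/Downloads/Workman.bundle ~/Library/"Keyboard Layouts"/
    # log out and back in, then add Workman under
    # System Preferences > Keyboard > Input Sources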

Long-Term Review: The Kinesis Advantage 2 Ergonomic Keyboard

In the mid-1990s, while working as a full time researcher, writing up my PhD thesis and starting publicspace.net, my arms suddenly started tingling after a good day’s (and night’s) work. Shortly afterwards, my fingers and forearms would be on fire at the end of every day. I started worrying.

Eventually I couldn’t work full days any longer and even just typing a few words or using a mouse would cause pain and discomfort. I started seriously worrying that I had managed to hamstring myself before even making it into a “proper” job.

That’s how my obsession with all things ergonomic started.

A good 20 years later, I’m much healthier and have suffered no RSI related symptoms for at least 15 of those years.

Probably the two most effective things I did back in the mid-90s were to buy an outlandishly weird ergonomic keyboard called the “Kinesis Ergo” and to learn to touch type with the Dvorak keyboard layout.

The Kinesis Advantage 2 Keyboard

The Kinesis Ergo keyboard is now in its brand new “Advantage 2” generation, which is a good opportunity for a long-term review. It looks like something from an alternative (much geekier) universe, but is probably the single best piece of ergonomics I’ve ever bought.

Like all ergonomic keyboards, the Kinesis will do you absolutely no good if you don’t touch type.

Ergonomic keyboards enable you to type without pain and with greatly diminished effort, but you have to learn how to use them. Two finger-pecking at a split keyboard with your wrists fully bent, hammering your fingers into the keys with your keyboard resting on a desk that is 5 inches too high, obviously won’t work.

The point is that it is simply impossible to type on a traditional keyboard without some degree of discomfort, because you just can’t get your limbs into a pain-free position. With the Kinesis Advantage, you can.

A great keyboard, which the Advantage 2 certainly is, goes one step further: not only can you type without injuring yourself, but it also helps you forget about the keyboard, concentrate on what you are writing and makes it feel natural and fun.

Just like lesser “ergonomic” keyboards such as Microsoft’s much loved, but ultimately very half-hearted attempts, the Advantage is “split”, meaning that each hand gets its own separate area and both are physically separated.

This allows your wrists and shoulders to stay in a neutral, un-bent position and is instrumental in preventing carpal tunnel syndrome. CTS is caused by the tendons of your fingers rubbing against the gap between your wrist bones while typing. When your wrists are bent sideways or strongly upwards or downwards that gap narrows and.. ouch!

Also just like other ergonomic keyboards, the Advantage has a “tented” design. This means that both halves of the keyboard have a gently upwards slope starting with your little fingers and progressively rising as you move towards the index fingers. Again this allows for a more natural position of wrists and shoulders.

The Kinesis also uses mechanical key switches: the “Cherry Browns” for mechanical keyboard enthusiasts. There is a debate about whether mechanical key switches are truly superior to their scissor counterparts, but it is probably telling that even die hard scissor switch aficionados only claim that they are “just as good”, while nobody claims scissor switches are better. I personally much prefer the mechanical kind.

This, however, is where the similarities between the Advantage and something like Microsoft’s Surface Ergonomic Keyboard or even Matias’ Ergo Pro stop.

Kinesis Advantage Matrix Key Layout

The Kinesis Advantage is part of only a handful of keyboards that don’t use the staggered key rows that originate in the requirements of the mechanical typewriter, but instead uses a columnar (also known as a matrix) layout. All this means is that the keys are arranged in straight columns just like on a number pad.

The sheer stupidity of doing anything else does not hit you until you have used a matrix keyboard for a day or two and go back to a “stupid” keyboard. Who would do this to themselves? Simply arranging the keys in columns eliminates the awkward finger contortions that are such a fun part of touch typing. Yes, our fingers can move sideways, but they really don’t want to, especially when you want to hit something.

There are other matrix keyboards out there, all with their own fan base.

The Truly Ergonomic Keyboard

The Truly Ergonomic is a mechanical keyboard, but it is completely flat, with neither tenting nor enough of a split for my tall frame.

The Type Matrix Keyboard

The Type Matrix is a very similar affair but with scissor switches.

The Latest Ergo Dox Keyboard Iteration

The ErgoDox is an open source DIY keyboard that is mechanical, tented and fully separated. This is the only keyboard I mention here that I don’t own myself. I don’t like the fact that it is “straight” tented rather than Kinesis’ more organic shape, but I can imagine that it is pretty close to the Kinesis and is a real “split keyboard”.

The Maltron 3D Two-Handed Keyboard

The Maltron Two-Handed 3D keyboard is very close to the Kinesis Advantage in almost all respects and I used it for a few years before going back to the Kinesis. My major gripe is the build quality, which is more “bespoke custom job” than what you’d expect from a consumer product.

Kinesis has gone a step beyond simply adopting a matrix layout in the search for the perfect ergonomic fit. Your hands in fact rest in a completely natural “well” that takes into account the length of your fingers and their natural curvature. Moving your fingers up and down in a straight line always puts your finger tips straight on the keys with no reaching. The new Advantage 2 even has textured and molded home row keys that make it immediately obvious that your finger tips are dead center on their respective home row keys.

Over the years, I have tried to move away from the Kinesis design; mostly in order to have a cheaper and more mobile setup. I spent several agonizing months in 2014 trying to migrate to the Microsoft Surface Ergonomic keyboard after my second Maltron developed yet another dead key, but I could never get comfortable with it.

It took me a while to realize why my attempts to go back to a more standard keyboard were doomed. The real reason is what makes the Advantage so hugely superior to the TypeMatrix and the Truly Ergonomic: the thumb clusters and in-line cursor keys.

Behold the thumb clusters.

The thumb clusters are such an obvious improvement once you get used to them, that it seems impossible that there are keyboards without this feature. The thumbs are the strongest and most mobile fingers and yet on a traditional keyboard both thumbs only hit one miserable key.

Not so on the Kinesis, where each thumb gets its own cluster of keys. You press Space, Backspace and Delete with your thumbs. In fact the Space and Backspace keys are right under your thumbs when your hands are completely relaxed. Your thumbs also cover the Control, Option and Command keys, as well as the less important Home, End, Page Up & Page Down keys. The cursor keys are placed in a 4th row that does not exist on other keyboards.

What these design choices amount to is what makes typing on the Kinesis Advantage such a great experience: you never have to move your hands away from the home row.

In all other keyboard designs, some frequently used keys such as the Backspace, Delete, Enter or the cursor keys require you to move your hand, usually the right hand, away from its home row, feel for the key, press it and then awkwardly feel your way back onto the home row.

Not having done this for well over a decade of continuous Maltron and Kinesis keyboard use, this absolutely drove me nuts on the Surface keyboards and I went back to the Advantage.

On the Kinesis, if you’ve mistyped something, your fingers stay where they are and you tap your left thumb to hit backspace. If you need to go back a few characters, bend your fingers until they rest on the cursor keys. Bend them back and you are on the home row again. Your hands themselves do not move.

Personally, I do not use the thumb to hold the Control, Option and Command keys but move my hand to reach the top of the cluster with my index finger; I’m not even sure whether this is as was intended, but it works really well and I’m back on my home row in no time.

On a traditional keyboard, the keys that need to be reached by bending your index fingers laterally (e.g. G and the H key) are very awkward to press. The Advantage does not eliminate this awkwardness altogether, but just sliding the finger sideways places it at the optimal angle to press sideways, making it into more of a poking motion which feels much more natural.

The Kinesis keyboard has the full range of function keys, but they are not much easier to reach than on any other keyboard. For almost two decades, the small function keys were rubber domed atrocities that served their purpose, but felt really cheap, especially when compared to the bank-breaking mechanical key switches used in the rest of the keyboard. In the Advantage 2 iteration these keys are now also mechanical, but while appreciated, this does not genuinely make a world of difference.

The latest model makes a bunch of detailed improvements, but the basic design has been identical since the early 1990s. The on-board programmability, which has always been a selling point, is also much improved.

The only programmability feature that I have really used is the ability to switch between QWERTY and Dvorak keyboard layouts automatically. This allows you to take your keyboard anywhere and type in Dvorak whether your employer feels like installing that keyboard layout on your machine or not.

The Advantage 2 also lets you easily remap keys, define macros and much else besides. I haven’t had enough time with the latest iteration to play much with the new features.

My only gripe with the Advantage 2 is that it is not yet a fully split keyboard. That would be awesome, but I guess at roughly $350, Kinesis reckons that a hard price limit has been reached. I disagree.

The Dactyl Fully Split Keyboard

There is a clearly Advantage-inspired fully split keyboard design available for 3D printing, called the Dactyl Keyboard, and I wish Kinesis would take that final step, so that I could replace my 3 Advantage keyboards one more time 🙂

Think of the MacBook Pro 2016 as the pro version of the MacBook

Having owned both a MacBook Pro 15″ Retina and a new MacBook, it is crystal clear to me that the new MacBook Pro descends straight from the MacBook and is not (just) an updated version of last year’s MacBook Pro.

The MacBook was the most extreme Macintosh laptop since the introduction of the original MacBook Air; not the reasonably priced and still vastly popular one, but the amazingly expensive and very, very slow 2008 MacBook Air.

The MacBook is supremely opinionated. Something that Apple, for better and often for worse, is great at. Everything was sacrificed for thinness and weight: A single USB-C port that is also used for charging; a keyboard with almost zero key travel; a touchpad that does not move.

Sure the MacBook takes some getting used to. At first, the keyboard is awkward and the touch pad is a little “weird”. Things don’t run as quickly as you’re used to.. then you get used to it and discover the Zen factor: Hush. It’s completely quiet.

After a while, even as a confirmed mechanical keyboard fanatic, I started appreciating the crispness of the keyboard. After less than a year, I started hating the mushy keys on my 2012 MacBook Pro 15″ so much that I started praying for a MacBook Pro with a new style keyboard. The old moving MacBook Pro touchpad feels equally antiquated.

As a fan of wired mice, at first I carried around a USB 3 dock to plug my mouse into, but soon the mouse and the dock stayed in the bag. It’s the convenience, dummy. It was annoying having to buy a USB-C to Thunderbolt cable, but hey.. it’s hardly the end of the world.

From the perspective of somebody who has grown to appreciate the MacBook over the past year, the 2016 MacBook Pro looks very different.

The new MacBook Pro is a much faster machine than the MacBook, but keeps many of the attributes that made me fall in love with the latter. The keyboard allegedly retains the crisp feel of the MacBook but is somewhat less extreme. The trackpad is huge but also does not move. The 15″ version features no fewer than 4 ports that support 4 external displays (or 2 at 5K: a laptop first) and are faster than the built-in SSD. Said SSD might well be the fastest ever to be put into a stock laptop.

I have always found it hard to develop on a laptop, but the portability of the MacBook invisibly changed my habits. The MacBook is underpowered for serious development and the screen is too small for comfort, especially if you are used to multi-screen development setups.. and yet, convenience wins out and today I’m doing most of my exploratory development on the tiny MacBook.

Sure, the 2016 MacBook Pro 15″ is not going to be as portable as the MacBook, but it’s going to be much more so than the old model. On paper, the weight and the bulk savings may not amount to much, but as so often with Apple products, they tend to be more than the sum of their parts.

Many people are upset about the specs. There are faster laptops, with more RAM and with higher resolution screens out there. I don’t know whether it matters.

Intel is the limiting factor. Gone are the days when every two years CPU speeds doubled. Today’s gains are much more modest. We are also already at a point where most current computer models are simply fast enough, even for professional use. Not that I don’t want the fastest CPU out there. In reality, however, even the most power hungry professionals can’t really tell the difference between a Skylake and a Kaby Lake CPU.

Designing the ultimate laptop is no longer a matter of simply putting all the latest and most powerful components into a chassis. With the possible exception of die hard gamers, nobody wants a two inch thick 17″ laptop that sounds like a leaf blower. That does not mean that I’m opposed to Apple making such a machine for those who long for it; but it’s not the machine that I would buy.

I, personally, am looking forward to taking delivery of my 15″ MacBook Pro in the coming weeks and I fully expect it to be a great machine. Shame it couldn’t be thinner and lighter and fan-less (yet).

Badminton Shuttlecock Spin & Aerodynamics

As well as being a Mac and iOS developer, I’m a keen Badminton player and have started playing again after a 10 year absence.

As a former academic researcher, one thing that has always intrigued me about badminton is just how the badminton shuttlecock actually behaves when in flight and especially precisely how it reacts to spin.

As in many other racquet sports, players regularly slice shots, but because of the unique shape of the shuttlecock and its sharp deceleration it behaves very differently to a ball.

The aerodynamics of Tennis or Table Tennis balls are well understood and have been studied extensively, but the same is not true of the Badminton shuttlecock, which is surprising given that it is the most popular racquet sport in the world by far (a fact that is not easily understood in the Western World where Badminton is still fairly niche).

In 2006, I asked this question on a badminton forum and was shocked to find that even people who were directly involved in designing shuttlecocks did not seem to have a really good understanding of what is actually happening when you slice a shuttle cock. Ten years on, there have been some aerodynamic wind tunnel studies, and I’ll summarize what I’ve found:

Firstly, the aerodynamics of actual feather shuttle cocks and synthetic ones are very different and advanced players will use only feather shuttle cocks, so we won’t go into the details of the synthetic ones.

All feather shuttle cocks are constructed so that they have a natural counterclockwise spin as seen by the hitter when the shuttle cock is moving away from him/her. This is due to the overlapping of the feathers, which creates an asymmetrical shape. This “natural” spin stabilizes the shuttle cock while it flies and is caused by the air passing over the feathers. This spin around the central axis of the shuttle cock gets faster as the shuttle cock travels faster. When the shuttle cock slows down, so does the spinning and it becomes less stable.

So far, this “natural” spin is present simply through the shuttle cock construction and is not due to player intervention. You can observe this spin by dropping the shuttle cock from a raised platform, e.g. a balcony.

As far as I understand it, until a certain speed is reached the spin of the shuttle cock has little effect on its drag coefficient but simply stabilizes the shuttle cock much like a spinning top. Once the spinning goes over a threshold, however, the centrifugal force that it exerts on the shuttle cock pushes the “skirt” outwards, thus increasing drag and leading to a significantly faster deceleration of the shuttle cock. I’m not sure from the studies I’ve seen whether this is due to the feathers themselves bending or only the strings that keep them together “giving” a little.

When a right handed player slices the shuttle cock in the “normal” left to right direction (clockwise), this will add to the “natural” counterclockwise rotation of the shuttle as it reverses its path. This rotation will thus be faster than it would be at the same speed without the slicing action.

Under some circumstances, the slicing action will thus cause the shuttle to decelerate more sharply due to the skirt deformation increasing its drag and the shuttle will then fall shorter. If I understand correctly, this will only be the case if the shuttle rotates quickly enough to cause this skirt deformation. If the counterclockwise spin is increased but still remains under the skirt deformation threshold, the shuttle should simply travel in a more stable trajectory. Whether this stability increase is significant or not, I don’t know and can’t find any research on.

It is clear though that applying spin to the shuttle cock through the racquet slicing action will have a significant influence on the trajectory of the shuttle cock when the shuttle is hit at great speed. The skirt will then deform and increase drag, resulting in a shorter distance travelled.

When applied to a flat drive or to an attacking clear, the slicing action will allow the player to hit the shuttle much harder while still being able to keep it inside of the court where a straight shot leaving the racquet at the same speed would go long.

In this scenario, the shuttle will travel faster on average and thus overall until it comes to a stop and drops straight down towards the floor. The slowing effect will be the strongest initially and cut off altogether at some stage during its flight path when the natural spin rate will reimpose itself due to the construction of the shuttle cock. So the later part of the shuttle’s flight path will be identical between the sliced and straight shot. The increased deceleration effect will cut off when the rotational forces become too small to result in skirt deformation.

The difference in speed thus stems entirely from the higher initial speed of the shuttle cock.

Some people believe that the rotation of the shuttle cock itself could provide a propeller-like speed increase, but this is not true. The rotation only influences its drag coefficient but does not provide forward or backwards momentum.

In ball sports, top spin and slice work by creating pressure differentials around the ball. It looks like the “gap” between the base (cork) of the shuttle cock and the skirt (feathers) produces a pressure differential that is crucial to generating the strong deceleration of the shuttle cock, but I haven’t seen any evidence that pressure differentials are influenced by the axial rotation of the shuttle cock.

So “normally” sliced shuttles decelerate quicker than when hit “straight” and they might move somewhat more stably, but what happens when a “reverse slice” is applied through the racquet head moving right-to-left over the shuttle?

Well, I haven’t been able to find any research on this at all. In forum discussions some people claim that there is no difference, but this is obviously false because shuttle cocks are constructed with a “natural” anti-clockwise rotation and the “reverse slice” will apply a clockwise rotation.

There also certainly seems to be a difference when you actually reverse slice a shuttle cock in normal play, but everything happens so fast that it is impossible to observe exactly what is happening. I use reverse slice almost exclusively for left rear court cross court drop shots, particularly because of the deceptive element of the racquet moving in the opposite direction to the actual shot. I also feel that reverse slicing the shuttle on deep net pushes (such as when taking serves in doubles) makes it less likely to go out.

Interestingly, left handed players slice the shuttle in the opposite direction, meaning their “straight slices” are in fact “reverse slices” (imparting clockwise rotation) and their “reverse slices” are “straight slices” (imparting counterclockwise rotation). When you watch Lin Dan play, for instance, his shuttles certainly seem to take a different trajectory from that of most other players, and perhaps this is one explanation.

Unfortunately, there seems to be no firm evidence on this at all, so the remainder is mostly speculation, some of it inspired by forum posts.

It would seem logical that making the shuttle spin in the opposite direction to its natural spin would cause it to move less stably. At high speeds, the effect would likely be insignificant, but at lower speeds there should be more tumbling.

It would also be logical that the “natural” spin imposed by shuttle construction and air resistance would counteract the “reverse” spin and might (and probably would) cause the rotation of the shuttle to move from clockwise to counterclockwise at some stage along its trajectory.

We would thus be left with a high degree of drag as the shuttle leaves the racket, followed by a drop in drag as the centrifugal forces become too small to cause the skirt to deform, followed by a stop of the clockwise rotation and finally a re-establishment of the natural counterclockwise rotation of the shuttle. Only at the beginning of the shot could the centrifugal forces be great enough to decelerate the shuttle quicker than for a straight shot.

The big unknown in all of this is whether the amount of skirt deformation is the same for clockwise or counterclockwise rotation. If it is the same, then a straight sliced shot will decelerate for longer and thus always fall shorter. If it is greater, it depends on how much greater it is. If it is a lot greater, this would compensate for the shorter amount of time that it is effective and the shuttle would fall shorter.

As far as I can see, there have been no studies on this, but just looking at the shuttle cock construction, it certainly seems possible that the clockwise rotation against the “grain” of the feathers would significantly impact the airflow around them and create turbulence. This will definitely cause it to stop spinning clockwise rapidly, but whether it increases or reduces drag and by how much I wouldn’t want to hazard a guess at.

Of course, whether you want to play a shot straight, sliced or reverse sliced depends on more than just the flight characteristics that it imparts. Body mechanics make it much easier to “straight slice” than to “reverse slice” for almost all shots. “Reverse slicing” may still be justified because it can be deceptive both in terms of racquet swing and flight path.

For maximum power, slicing smashes is probably not a good idea as it will make the shot slower. For check smashes or half-smashes using “straight slicing” is probably most effective, but “reverse slicing” may have a different flight path and deceleration characteristics which might inconvenience opponents.

Attacking clears can be heavily sliced so that they can get to the back faster because more initial speed can be imparted. Reverse slicing a clear is probably not a good idea as it reduces the amount of power that can be put into the shuttle as it presents inferior body mechanics.

The situation is less clear for drop shots, where both approaches seem to make sense. The reverse slicing action is more deceptive than the straight slicing action and deception is very important for drop shots. Slicing the drop shot will allow it to be played faster than if played straight, so drop shots should generally be sliced.

The body mechanics of the forehand cross court drop shot would make it hard to use reverse slice and just hitting the shuttle with a straight swing but angled racquet head provides a great way of playing “straight” sliced cross-court drop shots and thus seems the only way to go.

Similarly, playing a left-of-the-head cross-court drop shot with a straight slice would be very hard to do and the reverse slice is much easier to execute and more deceptive and thus the obvious choice.

When it comes to straight drops, things are rather more finely balanced. As we suspect that the straight slice is more effective at slowing down the shuttle, you can probably produce a more effective shot using this technique and its advantage will increase with speed. So the closer we are getting to a half-smash, the more we should prefer the straight slice. At lower velocities, however, it is not clear whether any slice actually decelerates the shuttle at all; it might only make it more stable.

The reverse slice at lower speeds probably makes the shuttle less stable in the middle of its flight path as the “natural” spin reimposes itself and the shuttle briefly tumbles. This might be an advantage as the shuttle will travel under perfect control while it still rotates clockwise, letting you place it precisely. Then if the timing is right, it will start becoming unstable after it crosses the net and thus inconvenience your opponent.

Clearly, the reverse slice motion, while harder to perform, is also much more deceptive. So slow drop shots should probably be executed using the reverse slice.

On the forehand side, fast mid-court drives have a high risk of going long, but body mechanics make it practical to hit them with both forms of slice. On the backhand side, it is hard to see how one would be able to play a hard reverse sliced drive and few players will have enough strength to have to worry about sending the shuttle out anyway. So we only really have a choice on the forehand side, but there seems to be no advantage to trying to execute the harder reverse slice.

In summary then, sliced shots definitely decelerate faster at the beginning of their trajectory and thus fall shorter than straight shots. Reverse sliced shots also definitely decelerate faster than straight shots, but probably decelerate differently from, and probably less than, straight sliced shots. Even straight shots cause the shuttle to spin counterclockwise at high speeds.

Any insights or corrections would be very much welcome, as I’m keen to understand this whole area better. Any pointers to relevant research or articles would also be much appreciated.

Tools of the Trade: AppCode, a breath of fresh air from the Xcode monoculture.

If you are a Mac or iOS developer, for better or for worse, there is no way around Xcode.

Xcode is free and full-featured, so why would you ever want to use anything else? This is the main reason why there are practically no other Mac OS X or iOS developer tools on the market today. There just isn’t enough room for third parties to make developing expensive developer tools economically worthwhile.

The only other serious IDE for Mac OS X and iOS development is JetBrains’ AppCode, and I’d recommend that every serious Apple developer own a copy. While Xcode has evolved into a powerful and mostly stable tool, Apple has a lot of blind spots and Xcode is in many areas (at least) 15 years behind the top of the crop. AppCode isn’t.

JetBrains is the powerhouse of Java development tools and they represent everything that Apple does not. Where Apple is closed, secretive and has a very paternalistic approach to its developer community, JetBrains is open, transparent, friendly and as cross-platform as it is possible to be.

The advantage for an Apple developer such as myself is that you get a peek at the world beyond Apple’s strictly enforced white room monoculture. Using AppCode is as much about growing as a developer as it is about efficiently developing software.

JetBrains offers IDEs that support nearly every language available, and the more outrageously new and niche a language is, the more likely it is that JetBrains has a tool for it. This means that once you get used to the basic IDE concepts, you can take that expertise and use it for developing in other languages, on other platforms (Android, Windows, Web) and with other technology stacks.

I use WebStorm for my own website development, RubyMine for web app stuff and IntelliJ IDEA for learning functional programming in Scala. If I ever wanted to learn CoffeeScript, Dart or Haskell, I know I’d be covered there too. On top of this, JetBrains’ plug-in technology makes adding support for the latest and greatest open source technologies a breeze, and JetBrains are very good at keeping an eye out for exciting new technologies. There’s a good chance that the first you hear about a new technology is by looking at JetBrains’ product release notes.

The AppCode IDE itself is very much in the mold of other Java development environments. The IDE can do everything and more, but it is also very busy and a long way from the pared-down minimalistic Apple aesthetic. It’s a nerdy power tool more than a philosophical statement.

JetBrains is rightly famous for their language parsing and refactoring acumen, so their IDEs are chock full of “intelligent” features. Not the kind of “intelligent” that makes everything harder, but the actual intelligent kind.

Navigating in AppCode is much more powerful than in Xcode. The gutter contains a myriad of options that will take you from method implementation to declaration and vice versa. You can also click and hold on class definitions to jump to super- and sub-classes, get in-line help and auto-fixing for common problems. The as-you-type code analyzer finds potential problems and suggests standard fixes, and the code reformatting options are powerful and easily accessible. The intelligence extends seamlessly into finding all the places a piece of code is actually used, rather than having to rely on text searches.

Best of all, however, AppCode can make changes to associated files without leaving the current file. The annoying round trip between implementation and header files that keeps interrupting your train of thought in Xcode can be wholly avoided. You write the implementation for a method and AppCode simply offers to declare said method in the header with a single click, without ever taking your eyes off the code you are busy writing.

Working in AppCode you constantly find yourself wondering why Apple can’t just do this. If it seems obvious, it’s in AppCode. Unfortunately this is rarely true for Xcode.

Refactoring is part and parcel of the AppCode experience and baked so deeply into the IDE that it becomes a nearly invisible part of your development. If you are used to refactoring in Xcode, you are likely to be nonplussed by AppCode’s refactoring support. Where Xcode makes a huge deal out of every refactoring: taking a snapshot, making you validate a thousand changes and more likely than not failing bang in the middle of the refactoring, AppCode just makes the changes with no fuss whatsoever. The first time I used the renaming refactoring in AppCode, I wondered what I was doing wrong. I typed the new name into the red highlighted area and nothing happened! How do you terminate the editing? In fact, AppCode had already done the project-wide refactoring. Why make a fuss about it? Why could it fail? Why beach-ball for a few seconds? Why indeed?

AppCode enables you to work in a completely different manner to Xcode. Say you are into Test-Driven Development. Write the test cases first. When you instantiate your target class in the test class, AppCode will tell you that the class does not yet exist. A single click solves the problem by creating the class for you. As you write your tests, you can one-click to add method declarations and empty implementations. When you’ve finished with your test cases, there’ll be .m and .h files with complete stub implementations all without you ever leaving the test case implementation file.
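To make that concrete, here is a minimal sketch of what such a test-first file might look like (using XCTest; the RenamePlan class and its two methods are hypothetical and do not exist yet at the point where the test is written). AppCode’s quick-fixes can generate the class, the header and the stub implementations from the red references without you ever leaving this file.

```objc
// RenamePlanTests.m: written before RenamePlan exists anywhere in the project.
// Every reference to the missing class shows up as a quick-fix in AppCode,
// which can create RenamePlan.h / RenamePlan.m and stub out both methods.
#import <XCTest/XCTest.h>
#import "RenamePlan.h"   // AppCode offers to create this header for you

@interface RenamePlanTests : XCTestCase
@end

@implementation RenamePlanTests

- (void)testReplacesSearchStringInFileName {
    RenamePlan *plan = [[RenamePlan alloc] initWithSearchString:@"image"
                                               replacementString:@"beach"];
    XCTAssertEqualObjects([plan applyToFileName:@"image1.jpg"], @"beach1.jpg");
}

@end
```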

Another big difference with Xcode is that where Apple knows everything best and either offers no customization or forces you to comply with their guidelines, JetBrains puts you in charge. Almost every aspect of the IDE is fully customizable: you can define your own coding style, which will cause AppCode to use your specific style when creating stubs. You can even decide to reformat your code automatically before checking it into source control. You can (obviously) choose your own source code management system, add CocoaPods support, edit and preview HTML, CSS, Compass, TypeScript and JavaScript files, or add your own selection of plug-ins. In short, JetBrains is for grown-ups who like making their own decisions.

Similarly, if you’ve ever felt the frustration of never being able to talk to anybody at Apple about Xcode, you will find the JetBrains support team a breath of fresh air. Something not working? Something not supported? Something you’d like to see added? Just drop them a line and an actual person will reply to you; better yet that person will be an approachable, open-minded fellow developer intent on helping you out. With JetBrains you’re the customer and you know best.

Seriously, just give it a shot. If only for a breath of fresh air.

The unbearable fragility of modern Mac OS X development

There I’ve done it again: I shipped a broken A Better Finder Rename release despite doubling down on build system verification, code signing requirements validation and gatekeeper acceptance checks, automation, quality assurance measures, etc.

Only in October, I had a similar issue. Luckily that time around it only took a few minutes to become aware of the problem and a few hours to ship a fix so very few users were affected. Right now I don’t know how many users were affected by the “botched” A Better Finder Rename 10.01 release.

This didn’t use to happen, despite the fact that back then I did not spend nearly as much time on making sure that everything in the release process worked properly. Nor am I alone in this situation. Lots of big as well as small developers have recently shipped similarly compromised releases.

The situation on the Mac App Store is much, much worse. Nobody other than Apple knows how many Mac App Store customers were affected by the recent MAS certificate fiasco that had the distinction of making it all the way into the pages of Fortune magazine.

The truth is that Mac OS X development has become so very fragile.

The reasons for this are manifold and diverse but boil down to: too much change, too little communication, too much complexity and, finally, too little change management and quality control at Apple.

The recent Mac App Store (MAS) fiasco that left many (1% of Mac App Store users? 100%? Nobody knows) users unable to use their apps purchased from the Mac App Store was down to Apple’s root certificate expiring. This was a planned event: certificates are used for digitally signing applications and they are only valid for a particular period of time, after which they need to be replaced with new certificates.

When the Mac App Store certificate expired, it was replaced with a new certificate but there were two problems. First, the now expired certificate was still cached by some of Apple’s servers: when Mac OS X opens an application it checks its signature, which in the end is guaranteed by Apple’s root certificate. Since this was no longer valid, Mac OS X refused to launch them and reported them as “broken”, leaving users and developers equally baffled. After far too long, Apple investigated the problem and emptied their caches which made the problem go away.

The second problem which was not solved by updating the caches, was due to Apple also replacing the certificate with a new, higher security version; of course without telling anybody. The new certificate could not be verified with the old version of OpenSSL that was used in the receipt checking code of many shipping apps.

When Apple created the Mac App Store, it provided a “receipt” that each application should check to see whether it has been properly bought on the Mac App Store. This is just a signed file that contains details about what was bought and when. Instead of doing the obvious thing, which would have been to provide developers with an API for checking the validity of the receipt against Apple’s own rules, they just published snippets of sample code so that each developer could “roll their own” verification code. Supposedly this was for added security (by not providing a single point of failure), but it seems more likely that they couldn’t be bothered to ship an API just for the Mac App Store.
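For context, the community code that grew out of those snippets mostly follows the same general shape. The following is a heavily simplified sketch of that shape only; it assumes the Apple root certificate has been bundled as “AppleIncRootCertificate.cer”, and it omits the essential checks of the bundle identifier, version and machine GUID hash, so it is nowhere near a complete verification.

```objc
// Rough sketch of the "roll your own" receipt check (deliberately incomplete):
// 1. load the receipt, 2. parse the PKCS#7 container with OpenSSL,
// 3. verify its signature against Apple's bundled root certificate.
#import <Foundation/Foundation.h>
#include <openssl/pkcs7.h>
#include <openssl/x509.h>
#include <openssl/bio.h>

static BOOL receiptSignatureLooksValid(void)
{
    NSData *receiptData = [NSData dataWithContentsOfURL:
                           [[NSBundle mainBundle] appStoreReceiptURL]];
    NSData *rootData = [NSData dataWithContentsOfURL:
                        [[NSBundle mainBundle] URLForResource:@"AppleIncRootCertificate"
                                                withExtension:@"cer"]];
    if (!receiptData || !rootData) return NO;

    BIO *receiptBIO = BIO_new_mem_buf((void *)receiptData.bytes, (int)receiptData.length);
    PKCS7 *receipt = d2i_PKCS7_bio(receiptBIO, NULL);

    const unsigned char *certBytes = rootData.bytes;
    X509 *appleRoot = d2i_X509(NULL, &certBytes, (long)rootData.length);

    X509_STORE *store = X509_STORE_new();
    X509_STORE_add_cert(store, appleRoot);

    BIO *payload = BIO_new(BIO_s_mem());   // would later be parsed for the ASN.1 payload
    BOOL valid = (receipt && appleRoot &&
                  PKCS7_verify(receipt, NULL, store, NULL, payload, 0) == 1);

    // (cleanup of the OpenSSL objects omitted for brevity)
    return valid;
}
```

It is precisely this kind of code, sitting untouched in shipping apps and linked against an old copy of OpenSSL, that could not cope with the new-style certificate.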

This decision came back to haunt them, because most developers are not crypto experts and so had to rely on developer-contributed code to check their apps’ receipts. Once this worked properly, most developers wouldn’t dream of touching the code again.. which is how it came to pass that many, quite possibly a majority, of Mac App Store apps shipped with the same old receipt checking code in 2015 that they originally shipped with in 2010(?). This was fixed by Apple revoking the new-style certificate and downgrading it to the old standard.

For once, I had been ahead of the curve and had recently updated all the receipt code in my applications (no small feat) and I have yet to hear from any customers who had problems.

Just before the Mac App Store fiasco, however, many non-MAS applications had also shipped with broken auto-update functionality.

Apple does not offer any auto-update facility for applications that are not on the Mac App Store, which led to Andy Matuschak’s “Sparkle” framework becoming the de-facto standard for adding an auto-update feature to Mac applications.
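For readers who haven’t used Sparkle: the typical integration is tiny, which is exactly why it tends to be forgotten about. A simplified, illustrative setup looks roughly like this; the update feed (“app cast”) URL normally lives in Info.plist under the SUFeedURL key, and the delegate shown here is just an example.

```objc
// Minimal, illustrative Sparkle wiring: the shared updater reads the
// "app cast" feed (the SUFeedURL entry in Info.plist) and checks it
// periodically; here we also trigger a background check at launch.
#import <Cocoa/Cocoa.h>
#import <Sparkle/Sparkle.h>

@interface AppDelegate : NSObject <NSApplicationDelegate>
@end

@implementation AppDelegate

- (void)applicationDidFinishLaunching:(NSNotification *)notification
{
    SUUpdater *updater = [SUUpdater sharedUpdater];
    updater.automaticallyChecksForUpdates = YES;
    [updater checkForUpdatesInBackground];
    // Under El Capitan's App Transport Security, an http:// feed URL is
    // silently blocked unless it is switched to https:// or an ATS
    // exception is declared in Info.plist.
}

@end
```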

Driven by abuse of some HTTP communications in iOS apps, Apple decided that in iOS 9 it would by default opt all developers into using only (more secure) HTTPS connections within their applications. What is good for iOS 9 can’t be bad for Mac OS X 10.11 El Capitan, so Mac applications also got opted into this scheme.

Unfortunately, that broke Sparkle for applications which do not point to HTTPS “app casts” such as mine. I have long resisted installing my own HTTPS certificates because I was worried about messing up the expiry periods, etc.. apparently just the way that Apple did with the Mac App Store certificates.

Most developers will have been unaware of the change since Apple never announced it, but I had happened to see the WWDC conference videos that mentioned this in passing. Unfortunately, nothing “clicked” in my head when I heard this. My applications do not communicate with a server anywhere and I thus thought that this was not something I had to worry about. I had forgotten that Sparkle might use this internally.

Fortunately, I caught this at 6AM when I released A Better Finder Rename 10 final. I was just doing a normally completely redundant check through all the features of the program when I noticed that the new version failed when trying to check for updates. By 8AM, I had identified and fixed the problem so that very few people indeed could have been caught out by it. That was luck though.

The nefarious element here was that applications were opted in automatically and silently. Before 10.11 El Capitan was installed on your Mac, my applications updated just fine. Afterwards, they no longer did. Just because they were on El Capitan. Gee thanks!

Of course, this would not have happened if I hadn’t built A Better Finder Rename 10 with the Mac OS X 10.11 SDK (Software Development Kit) at the last moment.

It is somewhat crazy for a developer to change the SDK that s/he builds a brand-new version of their software against in the middle of the beta phase. Changing the SDK always introduces errors because the entire environment in which the code executes is changed. This may bring out bugs that were already present; things that should never have worked, but worked just because the API happened not to trigger the bug. It also introduces bugs that are just part of the new SDK and that you now have to work around. Changing SDKs makes existing programs fragile.

I’m very conservative when it comes to changing SDKs because I’m well aware of the risks. That’s why I’ve been building my code against older SDKs for the past 15 years. A Better Finder Rename 10 was built against the Mac OS X 10.7 SDK which is forwards-compatible with newer versions of Mac OS X.

The main reason for doing so is that I wanted to be certain that I didn’t accidentally break A Better Finder Rename on older systems, which brings us to the next problem with Mac OS X development.

Xcode lets you specify a “deployment target”, for instance 10.7, while building with a newer SDK. This is the recommended way of developing on Mac OS X and keeping backwards compatibility. Xcode will, however, happily let you use APIs that are not compatible with your deployment target and thereby ensure that your application will crash on anything other than the latest Mac OS X.
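The classic defensive pattern, which Xcode does nothing to enforce, is to guard every API that is newer than your deployment target at runtime. A small illustration (the helper functions are made up for the example; assume a build against a recent SDK with a 10.7 deployment target):

```objc
// Built against the 10.11 SDK with MACOSX_DEPLOYMENT_TARGET = 10.7 this
// compiles without so much as a warning, yet -containsString: only exists
// at runtime on 10.10 and later; an unguarded call crashes on 10.7 - 10.9.
#import <Foundation/Foundation.h>

static BOOL nameContainsBeach(NSString *fileName)
{
    if ([fileName respondsToSelector:@selector(containsString:)]) {
        return [fileName containsString:@"beach"];                      // 10.10+
    }
    return [fileName rangeOfString:@"beach"].location != NSNotFound;    // 10.7+
}

static void postRenameNotification(void)
{
    // Classes introduced after the deployment target are weak-linked,
    // so they must be nil-checked before use (NSUserNotificationCenter is 10.8+).
    Class centerClass = NSClassFromString(@"NSUserNotificationCenter");
    if (centerClass == Nil) {
        return;   // silently skip on 10.7
    }
    NSUserNotification *note = [[NSUserNotification alloc] init];
    note.title = @"Renaming finished";
    [[centerClass defaultUserNotificationCenter] deliverNotification:note];
}
```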

In fact, Xcode encourages you to use the latest features that are not backwards compatible and will even rewrite your code for you if you let it, so that it will crash. It will give you “deprecation warnings” for any API usage that is not in the latest SDK, and resolving those warnings is likely to break backwards compatibility as well. Of course, you won’t know this until you run it on the old Mac OS X version.

Now which developer can afford to keep testing rigs with 10.7, 10.8, 10.9 and 10.10? Never mind spend the time re-testing on all of those systems for each and every change?

Thus I happily built with the 10.7 SDK. Apple did not make this easy by not shipping the old SDKs with Xcode, but you could manually install them and they would work just fine.

Imagine my surprise after installing Xcode 7 and finding out that this no longer worked. The only workable solution was to build against the 10.11 SDK, so jumping forwards not one but 4 SDK versions. A bunch of code wouldn’t compile any longer because the libraries were gone. Luckily the receipt checking code was amongst those, so it got modernised just in time to avoid the Mac App Store receipt fiasco.

Nonetheless, now my entire code base had become fragile and largely un-tested between the last beta release and the final shipping product. Nightmare!

On top of that, was it still even 10.7 compatible? Or indeed 10.10 compatible? Just quickly running it on older systems wouldn’t provide more than a little additional confidence, since it’s impossible to go through every code path of a complex product.

After installing virtual machines to test on, I still couldn’t be 100% certain. The solution came in the form of deploymate, a now truly essential developer tool which does what Xcode can’t do: check that API usage is compatible with the deployment target.

I have since spent many weeks trying to ensure that I won’t run into the same problems again by adding (additional) automated verification processes to my build system. My build system now runs the built product through SDK compatibility checking courtesy of deploymate, code signing validation and gatekeeper verification on each build. I’m still working through deprecation warnings and the like, and my code base will soon be bulletproofed, at least until the next forced changes arrive.
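In practice these signing and Gatekeeper checks are usually scripted around the codesign and spctl command-line tools, but the same kind of check can also be written against the Security framework. A rough, illustrative sketch of the signature validation part only (the function name and path are made up):

```objc
// Programmatic equivalent of "codesign --verify": create a static code
// object for the built bundle and ask the Security framework to validate
// its signature, including nested code.
#import <Foundation/Foundation.h>
#import <Security/Security.h>

static BOOL builtProductSignatureIsValid(NSString *bundlePath)
{
    NSURL *bundleURL = [NSURL fileURLWithPath:bundlePath];

    SecStaticCodeRef staticCode = NULL;
    if (SecStaticCodeCreateWithPath((__bridge CFURLRef)bundleURL,
                                    kSecCSDefaultFlags, &staticCode) != errSecSuccess) {
        return NO;
    }

    CFErrorRef error = NULL;
    OSStatus status = SecStaticCodeCheckValidityWithErrors(staticCode,
                                                           kSecCSCheckAllArchitectures |
                                                           kSecCSCheckNestedCode,
                                                           NULL, &error);
    if (error) { CFRelease(error); }
    CFRelease(staticCode);
    return status == errSecSuccess;
}

// e.g. builtProductSignatureIsValid(@"/path/to/build/MyApp.app")
```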

You’d think that this was a long enough list of problems for one year, but this still does not account for Apple also changing the code signing rules (once again) earlier in the year (in a point update of 10.10 no less). This time it affected how resources and frameworks are signed. So applications that were signed correctly for years, now suddenly became incorrectly signed and Mac OS X would refuse to launch them because they were “broken”.

All this points to the underlying issue behind the current spate of fragility of Mac applications: Apple keeps changing the status quo and neither it nor developers have any chance of keeping up.

Apple’s own applications are full of bugs now. None more so than Xcode, which is both the linchpin of all Mac OS X, iOS, watchOS and tvOS development and no doubt Apple’s most fragile app offering. Xcode is in beta at least 6 months a year and never really stabilises in between. Each new version has new “improvements” to code signing, app store uploading, verification code, etc., and each new version breaks existing code and introduces its very own new bugs and crashes. From one day to the next, you don’t know as a developer whether your code works or not. Existing code that had worked fine on Friday evening no longer works on Monday morning. Worse, chances are that you are not hunting for your own bugs, but for those in your development tools, the operating system or the Apple-supplied SDKs.

All this is driven by the one-release-a-year schedule that Apple has imposed on itself. This leaves all of Apple’s software in various stages of brokenness. When Apple’s own staff cannot deal with this constantly shifting environment, how are third party developers supposed to?

Case in point: Apple’s own apps are not all iOS 9 compatible yet. Many don’t support the iPad Pro’s new native resolution yet. Some have gained Apple Watch extensions, but most haven’t.

Reliability is a property of a system that is changed slowly and deliberately and where all constituent parts are themselves reliable. The Mac and all other Apple platforms are currently undergoing the worst dip in reliability since Mac OS X was introduced.

Apple is pushing out half-baked annual releases of all its software products, as well as apparently completely unmanaged changes to policies, external rules and cloud services at an ever more frenetic pace.

These could be written off as temporary “growing pains”, but the big question is: Do all these annual updates equate to real progress?

When I switch on my Mac today, I use it for much the same things that I used it for 10 years ago. A lot has changed. Cumulatively Mac OS X 10.11 El Capitan is somewhat better than 10.6 Snow Leopard.. and yet if you discount cosmetic changes and new hardware, nothing much has changed. Certainly nothing much has actually improved.

I can’t help thinking that if we had had 2 or possibly 3 Mac OS X updates instead of 5 over those last 5 years, we’d be in a much better shape now. Apple and developers would have time to provide user benefits and rock solid reliability rather than just endlessly chasing their own tail.

The beauty of the Mac used to be that it just worked. I want to get back to that.