Quick Reaction to Mac Pro Leaks

Daring Fireball has a story about Apple sharing their plans for the future of the Mac Pro. It is weird to communicate with Pro users via a blogger, but what the hell: it’s Apple.

Apple are working on new Mac Pros with a completely new design, but they won’t ship “this year” and there’s no firm commitment to shipping them in 2018 either. In the meantime, they have processor-bumped the existing machines, but without USB-C and thus without LG 5K monitor support.

My gut reaction is relief. At least they are working on it and they haven’t wholesale abandoned Pro users. There are also hints about new Pro displays and new iMacs.

It is also nice to see that Apple has realized that the existing Mac Pro design is a complete failure. Dual GPUs are no good; integrated components and a small form factor are incompatible with fast, low-cost updates. They are talking about a “modular system” now.

All things said, however, nobody outside of Apple ever thought that their new Mac Pros were anything other than the product of a deranged mind, and it took them nearly 3 years to acknowledge that. Furthermore, a “modular” system is exactly what the original Mac Pro and all but its latest incarnation already were. Building this fabled new modular Mac Pro is thus as easy as slapping an industry-standard motherboard with dual Xeons into the old Mac Pro enclosure and supporting it in software. There is literally nothing to it. If they don’t like the old cheese grater enclosure, just spray it Jet Black already.

All this makes me worry about what that “new” direction is: there has been no acknowledgement that a Mac Pro needs dual CPUs, only talk about dual GPUs, which nobody has asked for. Are they going to mess it up by overthinking it again?

All I want is a significantly faster Mac. Something that beats my no longer supported 2009 Mac Pro.. and that’s not hard to deliver now.


Re-Learning Touch Typing with the Workman Layout

I learned touch typing when I was in my mid-teens and WordPerfect was the new hotness on DOS. It got me into a fair amount of trouble more than a decade later when I was writing up my PhD thesis and developed my first proper RSI symptoms. As I mentioned in the previous post, it was the combination of two main ingredients, switching to my beloved Kinesis Advantage keyboard and adopting the Dvorak keyboard layout, that saved my hands and career.

I have used that combo to write and code for two decades now.. and yet I’m writing these words on a laptop keyboard using the Workman layout.

First things first. I’m not an über-typist. I think at the peak of my Dvorak typing I got to 80-90 words per minute, which is fast but not exceptional. I measured myself at 75 wpm before starting my new adventures in touch typing, which is just fine, because I can’t think at more than perhaps 60 wpm anyway.

Something I have realised over time is that maximum performance is not nearly as important as comfort when typing. My main success criteria for a keyboard arrangement are:

  • must feel comfortable
  • must minimise strain on my body, thus preventing injury
  • must let me concentrate on what I’m writing, not how I’m writing
    • for me that means that I need to be able to keep my eyes on the screen at all times and my fingers need to be able to find the keys without distracting me
  • must be able to keep up with my thoughts
  • must be easy to navigate and edit text
  • must enable me to use keyboard shortcuts easily

My current quest for a new keyboard layout was triggered by the fact that I want to be less dependent on my desktop setup and be able to work effectively in coffee shops and similar settings.

Unfortunately laptops come with the standard crappy staggered key arrangements and there is precisely zero hope that Apple is ever going to come out with a matrix keyboard on a laptop. So you’ve got to make do with what they give you..

I have at various times tried to use Dvorak on a MacBook keyboard, but never with any real success. My fingers have memorised the key positions on the Kinesis’ straight rows of keys and I mishit the keys on the bottom row almost constantly. All this is made worse by the fact that I’m subconsciously peeking at the QWERTY labels on the keys because the screen is right above the keyboard. The actual labels on the keyboard become especially irresistible when I reach for a keyboard shortcut when my fingers have not been resting on their home row positions.

Eventually I settled on hunting and pecking on the laptop and just living with QWERTY. It wasn’t a huge deal because I was a very occasional laptop user.

A year ago, however, all that changed because I fell in love with the 12″ MacBook. It quickly pushed my iPad out of my day bag and I found myself writing code and answering email on the go.. and with that came my dissatisfaction with not being able to touch type on it. On my desktop my thoughts just magically flow through my fingers onto the virtual paper while on the laptop I’m plodding along at quarter speed..

So I decided to bite the bullet and re-learn QWERTY touch typing. I got to 45 wpm after about a week and figured that this would be perfectly fine. It still felt slow, but I was certain that by just sticking with it I was surely going to get faster with time. Six weeks later, however, I was still no faster and, more importantly, it still felt awful. All those finger contortions; the fact that the most frequently used keys are in the least accessible places; and worst of all: the God-awful rhythm.

One thing that most Dvorak users note is the nicely flowing rhythm of the layout. Most of the time when you type in Dvorak, successive keys are on alternating hands. One hand presses a key while the other is getting in position. The very fastest typists tend to be Dvorak users (sustained 150 wpm for 50 minutes, peaking at 212 wpm) and I think that the fact that the hands alternate so often might be a key factor in that. Dvorak is to QWERTY in that respect as the Brandenburg Concertos are to slowly scratching chalk over a blackboard.
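Hand alternation is easy to quantify. Here is a rough sketch that counts how often consecutive letters fall on opposite hands under QWERTY and Dvorak. The hand assignments follow the standard layouts; the letters-only model and the sample sentence are my own simplifications.

```python
# Rough estimate of hand alternation under QWERTY vs Dvorak.
# Only letters are considered; punctuation and the space bar are ignored.

QWERTY_LEFT = set("qwertasdfgzxcvb")   # letters typed by the left hand
DVORAK_LEFT = set("pyaoeuiqjkx")       # Dvorak's left-hand letters

def alternation_rate(text, left_hand):
    """Fraction of consecutive letter pairs typed by opposite hands."""
    letters = [c for c in text.lower() if c.isalpha()]
    pairs = list(zip(letters, letters[1:]))
    if not pairs:
        return 0.0
    alternations = sum((a in left_hand) != (b in left_hand) for a, b in pairs)
    return alternations / len(pairs)

sample = "one hand presses a key while the other is getting in position"
print(f"QWERTY: {alternation_rate(sample, QWERTY_LEFT):.0%}")
print(f"Dvorak: {alternation_rate(sample, DVORAK_LEFT):.0%}")
```

On typical English prose the Dvorak figure tends to come out noticeably higher, since all the vowels sit under the left hand and the most common consonants under the right.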

The Colemak partially optimized keyboard layout

This got me started on learning the Colemak layout. This is a “modern” optimized keyboard layout that claims to be faster and “more optimal” than Dvorak. It scores over Dvorak in a number of ways, but most significantly it is much easier for QWERTY users to learn. Known as a “partial optimization”, it relocates only some keys, concentrating on getting the most frequently used keys under your finger tips on the home row. Particularly on the bottom row, keys stay pretty much where they were. This also means that the most common shortcuts stay in the same positions, so the copy, cut, paste and undo problems that I experienced previously simply do not arise. As far as possible, all keys are also typed with the same hand as in QWERTY. This is especially significant for using the Shift key properly, as this requires coordination with the opposite hand.

I found Colemak much more difficult to learn than QWERTY, because everyone has the QWERTY layout stored somewhere in their brain. Still, Colemak was an immediate and significant improvement, even though my typing speed initially went way down to 10-15 wpm. There are a lot fewer contortions and you can type many words with home row keys only.

I persevered for 3 weeks, but even then I was struggling to get above 20 wpm. While it felt better than QWERTY, Colemak did not actually feel all that great. One particular annoyance was typing the “TH” combo. The T is just under the left index finger, which is just fine, but the H is reached by sliding the right index finger to the next key on the left. This is a very awkward manoeuvre in and of itself, but combining it with hitting the T key at speed is hard and just feels wrong. So every word containing a “th” becomes a little hiccup. I also found that, in general, the rhythm of the layout was an improvement only when compared to the low bar set by QWERTY.

I decided that perhaps I was barking up the wrong tree. Colemak might be easier to learn for QWERTY folk, but that actually worked against me: my beloved Dvorak has zero commonality with either layout, and keeping keys under the same hand as in QWERTY slowed my progress for that very reason. What I really wanted was a “new improved” Dvorak, not a better QWERTY, but I couldn’t find anything and wasn’t about to develop my own layout.

What I did find was a coherent criticism of Colemak that was insightful enough to clarify what I actually disliked about it, but hadn’t been able to put my finger on. The author of that criticism had also developed his own layout based on this analysis and that’s how I found Workman.

The Workman fully optimized keyboard layout

Apparently a lot of keyboard layout optimizers (yes, such things exist) consider all 4 fingers to have the same natural range of motion, mobility and strength. This explains why Colemak considers the H key to be in a prime position, despite the fact that it is clearly much harder for your fingers to reach sideways than upwards. Colemak also does not take the length of the fingers into account.

Workman gives each key a score based on how easy it is for an actual finger to hit it. I’m not certain that I would necessarily have chosen the exact same scores, but it’s clearly an improvement over Colemak and the most common combos are easier to type. Workman keeps the Z and X keys in the same spot, but moves the C and V keys one position to the right. With a sticker over the keys, I find that I can live with that. There are also Mac implementations freely available, including “Programmer’s” versions, which I’ll probably be using as they are similar to the “Programmer’s Dvorak” that I use on my desktop machines.
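To make that kind of scoring concrete, here is a toy version of it: a layout’s total effort is each letter’s frequency multiplied by the difficulty of the key it sits on. Every number below is invented for illustration; these are not Workman’s actual published weights.

```python
# Toy layout-effort model. The effort scores and letter frequencies are
# invented for illustration; Workman's real scoring tables differ.

def layout_effort(letter_scores, letter_freq):
    """Sum of (letter frequency) x (effort of the key holding that letter)."""
    return sum(letter_freq.get(letter, 0.0) * effort
               for letter, effort in letter_scores.items())

# Effort per letter (lower = easier). In the Colemak-like row, H sits on
# the lateral index-finger reach and is therefore scored as expensive.
workman_like = {"a": 1.0, "s": 1.0, "h": 1.0, "t": 1.0, "o": 1.0}
colemak_like = {"a": 1.0, "s": 1.0, "h": 3.0, "t": 1.0, "o": 1.5}

freq = {"a": 8.2, "s": 6.3, "h": 6.1, "t": 9.1, "o": 7.5}  # per 100 letters

print("workman-like:", layout_effort(workman_like, freq))
print("colemak-like:", layout_effort(colemak_like, freq))
```

With frequent letters like H on cheap positions, the Workman-style assignment comes out with a lower total effort, which is the whole argument in miniature.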

I have used the Xmas holidays to practise my Workman layout and about a month in, I’m getting towards the usable stage and this is the first lengthy document that I have written in it. I’m optimistic about this remaining my laptop layout for good.

There are, however, a few things I’m not so keen on. The first is minor and concerns the “ch” bigram, which is fairly awkward to type. It’s not nearly as bad or common as the TH issue on Colemak or the “ls -l” plaguing Dvorak Unix users.. but still..

The other is a potential deal breaker and concerns the design decision to favour “single hand utilisation”. Workman’s designer, OJ Bucao, claims that it is easier, faster and more comfortable to type multiple letters with the same hand rather than by alternating hands.

This is the reverse of my own experience with Dvorak. When typing in Workman I’m constantly performing two, three or even four letter runs with one hand. For Bucao this is a good thing. He claims that after a while those patterns become ingrained and you end up typing them as a single action. He is certainly correct that it minimizes hand movement, as the unused hand can find its way back to the home row and take a breather. The most common bigrams and trigrams are also easy to type with very little reaching.

Still the jury is out on that particular feature. I have noticed that I’ve started typing with those semi-automatic finger rolls, but I find it fatiguing and I don’t (yet?) like the rhythm much.. but it’s early days yet and generally I’m much happier with Workman than with either QWERTY or Colemak.

If you are interested in learning a new layout, I would recommend giving Workman a try over Colemak. Colemak and Dvorak both come pre-installed on macOS, but installing Workman is very easy. If you are a developer, the programmer’s versions make typing code much simpler.

Long-Term Review: The Kinesis Advantage 2 Ergonomic Keyboard

In the mid-1990s, while working as a full time researcher, writing up my PhD thesis and starting publicspace.net, my arms suddenly started tingling after a good day’s (and night’s) work. Shortly afterwards, my fingers and forearms would be on fire at the end of every day. I started worrying.

Eventually I couldn’t work full days any longer and even just typing a few words or using a mouse would cause pain and discomfort. I started seriously worrying that I had managed to hamstring myself before even making it into a “proper” job.

That’s how my obsession with all things ergonomic started.

A good 20 years later, I’m much healthier and have suffered no RSI related symptoms for at least 15 of those years.

Probably the two most effective things I did back in the mid-90s were buying an outlandishly weird ergonomic keyboard called the “Kinesis Ergo” and learning to touch type with the Dvorak keyboard layout.

The Kinesis Advantage 2 Keyboard

The Kinesis Ergo keyboard is now in its brand new “Advantage 2” generation, which is an opportunity for a long term review. It looks like something from an alternative (much geekier) universe, but is probably the single best piece of ergonomics I’ve ever bought.

Like all ergonomic keyboards, the Kinesis will do you absolutely no good if you don’t touch type.

Ergonomic keyboards enable you to type without pain and with greatly diminished effort, but you have to learn how to use them. Two-finger pecking at a split keyboard with your wrists fully bent, hammering your fingers into the keys with your keyboard resting on a desk that is 5 inches too high, obviously won’t work.

The point is that it is simply impossible to type on a traditional keyboard without some degree of discomfort, because you just can’t get your limbs into a pain-free position. With the Kinesis Advantage, you can.

A great keyboard, which the Advantage 2 certainly is, goes one step further: not only can you type without injuring yourself, but it also helps you forget about the keyboard, concentrate on what you are writing and makes it feel natural and fun.

Just like lesser “ergonomic” keyboards, such as Microsoft’s much loved but ultimately very half-hearted attempts, the Advantage is “split”, meaning that each hand gets its own physically separate key area.

This allows your wrists and shoulders to stay in a neutral, un-bent position and is instrumental in preventing carpal tunnel syndrome. CTS is caused by your finger tendons rubbing through the narrow carpal tunnel in your wrist while typing. When your wrists are bent sideways, or strongly upwards or downwards, that tunnel narrows and.. ouch!

Also just like other ergonomic keyboards, the Advantage has a “tented” design. This means that both halves of the keyboard have a gently upwards slope starting with your little fingers and progressively rising as you move towards the index fingers. Again this allows for a more natural position of wrists and shoulders.

The Kinesis also uses mechanical key switches: the “Cherry Browns” for mechanical keyboard enthusiasts. There is a debate whether mechanical key switches are truly superior to their scissor counterparts, but it is probably telling that even die hard scissor switch aficionados only claim that they are “just as good”; while nobody claims scissor switches are better. I personally much prefer the mechanical kind.

This, however, is where the similarities between the Advantage and something like Microsoft’s Surface Ergonomic Keyboard or even Matias’ Ergo Pro stop.

Matrix Key Layout

Kinesis Advantage Matrix Key Layout

The Kinesis Advantage is one of only a handful of keyboards that don’t use the staggered key rows that originate in the requirements of the mechanical typewriter, but instead use a columnar (also known as a matrix) layout. All this means is that the keys are arranged in straight columns, just like on a number pad.

The sheer stupidity of doing anything else does not hit you until you have used a matrix keyboard for a day or two and go back to a “stupid” keyboard. Who would do this to themselves? Simply arranging the keys in columns eliminates the awkward finger contortions that are such a fun part of touch typing. Yes, our fingers can move sideways, but they really don’t want to, especially when you want to hit something.

There are other matrix keyboards out there, all with their own fan base.

The Truly Ergonomic Keyboard

The Truly Ergonomic is a mechanical keyboard, but it is completely flat, with neither tenting nor enough of a split for my tall frame.

The TypeMatrix Keyboard

The TypeMatrix is a very similar affair, but with scissor switches.

The Latest ErgoDox Keyboard Iteration

The ErgoDox is an open source DIY keyboard that is mechanical, tented and fully separated. This is the only keyboard I mention here that I don’t own myself. I don’t like the fact that it is “straight” tented rather than Kinesis’ more organic shape, but I can imagine that it is pretty close to the Kinesis and is a real “split keyboard”.

The Maltron 3D Two-Handed Keyboard

The Maltron Two-Handed 3D keyboard is very close to the Kinesis Advantage in almost all respects and I used it for a few years before going back to the Kinesis. My major gripe is the build quality, which is more “bespoke custom job” than what you’d expect from a consumer product.

Kinesis has gone a step beyond simply adopting a matrix layout in the search for the perfect ergonomic fit. Your hands in fact rest in a completely natural “well” that takes into account the length of your fingers and their natural curvature. Moving your fingers up and down in a straight line always puts your finger tips straight onto the keys with no reaching. The new Advantage 2 even has textured and molded home row keys that make it immediately obvious when your finger tips are dead center on their respective home row positions.

Over the years, I have tried to move away from the Kinesis design; mostly in order to have a cheaper and more mobile setup. I spent several agonizing months in 2014 trying to migrate to the Microsoft Surface Ergonomic keyboard after my second Maltron developed yet another dead key, but I could never get comfortable with it.

It took me a while to realize why my attempts to go back to a more standard keyboard were doomed. The real reason is what makes the Advantage so hugely superior to the TypeMatrix and the Truly Ergonomic: the thumb clusters and in-line cursor keys.

Behold the thumb clusters.

The thumb clusters are such an obvious improvement once you get used to them that it seems impossible that there are keyboards without this feature. The thumbs are the strongest and most mobile of the fingers, and yet on a traditional keyboard both of them share a single miserable key.

Not so on the Kinesis, where each thumb gets its own cluster of keys. You press Space, Backspace and Delete with your thumbs. In fact, the Space and Backspace keys are right under your thumbs when your hands are completely relaxed. Your thumbs also cover the Control, Option and Command keys, as well as the less important Home, End, Page Up & Page Down keys. The cursor keys are placed in a 4th row that does not exist on other keyboards.

What these design choices amount to is what makes typing on the Kinesis Advantage such a great experience: you never have to move your hands away from the home row.

In all other keyboard designs, some frequently used keys such as the Backspace, Delete, Enter or the cursor keys require you to move your hand, usually the right hand, away from its home row, feel for the key, press it and then awkwardly feel your way back onto the home row.

Having not had to do this during well over a decade of continuous Maltron and Kinesis keyboard use, I found that it absolutely drove me nuts on the Surface keyboard, and I went back to the Advantage.

On the Kinesis, if you’ve mistyped something, your fingers stay where they are and you tap your left thumb to hit backspace. If you need to go back a few characters, bend your fingers until they rest on the cursor keys. Bend them back and you are on the home row again. Your hands themselves do not move.

Personally, I do not use the thumb to hold the Control, Option and Command keys but move my hand to reach the top of the cluster with my index finger; I’m not even sure whether this is as was intended, but it works really well and I’m back on my home row in no time.

On a traditional keyboard, the keys that need to be reached by bending your index fingers laterally (e.g. G and the H key) are very awkward to press. The Advantage does not eliminate this awkwardness altogether, but just sliding the finger sideways places it at the optimal angle to press sideways, making it into more of a poking motion which feels much more natural.

The Kinesis keyboard has the full range of function keys, but they are not much easier to reach than on any other keyboard. For almost two decades, the small function keys were rubber-domed atrocities that served their purpose but felt really cheap, especially when compared to the bank-breaking mechanical key switches used in the rest of the keyboard. In the Advantage 2 iteration these keys are now also mechanical which, while appreciated, does not genuinely make a world of difference.

The latest model makes a bunch of detailed improvements, but the basic design has been identical since the early 1990s. The on-board programmability, which has always been a selling point, is also much improved.

The only programmability feature that I have really used is the ability to switch between the QWERTY and Dvorak keyboard layouts on the keyboard itself. This allows you to take your keyboard anywhere and type in Dvorak, whether your employer feels like installing that keyboard layout on your machine or not.

The Advantage 2 also lets you easily remap keys, define macros and much else besides. I haven’t had enough time with the latest iteration to play much with the new features.

My only gripe with the Advantage 2 is that it is not yet a fully split keyboard. That would be awesome, but I guess at roughly $350, Kinesis reckons that a hard price limit has been reached. I disagree.

The Dactyl Fully Split Keyboard

There is a clearly Advantage-inspired, fully split keyboard design available for 3D printing, called the Dactyl Keyboard, and I wish Kinesis would take that final step, so that I could replace my 3 Advantage keyboards one more time 🙂

Think of the MacBook Pro 2016 as the pro version of the MacBook

Having owned both a 15″ Retina MacBook Pro and a new MacBook, I find it crystal clear that the new MacBook Pro descends straight from the MacBook and is not (just) an updated version of last year’s MacBook Pro.

The MacBook was the most extreme Macintosh laptop since the introduction of the original MacBook Air; not the reasonably priced and still vastly popular one, but the amazingly expensive and very, very slow 2008 MacBook Air.

The MacBook is supremely opinionated. Something that Apple, for better and often for worse, is great at. Everything was sacrificed for thinness and weight: A single USB-C port that is also used for charging; a keyboard with almost zero key travel; a touchpad that does not move.

Sure the MacBook takes some getting used to. At first, the keyboard is awkward and the touch pad is a little “weird”. Things don’t run as quickly as you’re used to.. then you get used to it and discover the Zen factor: Hush. It’s completely quiet.

After a while, even as a confirmed mechanical keyboard fanatic, I started appreciating the crispness of the keyboard. After less than a year, I started hating the mushy keys on my 2012 MacBook Pro 15″ so much that I started praying for a MacBook Pro with a new style keyboard. The old moving MacBook Pro touchpad feels equally antiquated.

As a fan of wired mice, at first I carried around a USB 3 dock to plug my mouse into, but soon the mouse and the dock stayed in the bag. It’s the convenience, dummy. It was annoying having to buy a USB-C to Thunderbolt cable, but hey.. it’s hardly the end of the world.

From the perspective of somebody who has grown to appreciate the MacBook over the past year, the 2016 MacBook Pro looks very different.

The new MacBook Pro is a much faster machine than the MacBook, but keeps many of the attributes that made me fall in love with the latter. The keyboard allegedly retains the crisp feel of the MacBook but is somewhat less extreme. The trackpad is huge, but it also does not move. The 15″ version features no fewer than 4 ports, which support 4 external displays (or 2 at 5K: a laptop first) and are faster than the built-in SSD. Said SSD might well be the fastest ever to be put into a stock laptop.

I have always found it hard to develop on a laptop, but the portability of the MacBook invisibly changed my habits. The MacBook is underpowered for serious development and the screen is too small for comfort, especially if you are used to multi-screen development setups.. and yet, convenience wins out and today I’m doing most of my exploratory development on the tiny MacBook.

Sure, the 2016 MacBook Pro 15″ is not going to be as portable as the MacBook, but it’s going to be much more so than the old model. On paper, the weight and the bulk savings may not amount to much, but as so often with Apple products, they tend to be more than the sum of their parts.

Many people are upset about the specs. There are faster laptops, with more RAM and with higher resolution screens out there. I don’t know whether it matters.

Intel is the limiting factor. Gone are the days when every two years CPU speeds doubled. Today’s gains are much more modest. We are also already at a point where most current computer models are simply fast enough, even for professional use. Not that I don’t want the fastest CPU out there. In reality, however, even the most power hungry professionals can’t really tell the difference between a Skylake and a Kaby Lake CPU.

Designing the ultimate laptop is no longer a matter of simply putting all the latest and most powerful components into a chassis. With the possible exception of die hard gamers, nobody wants a two inch thick 17″ laptop that sounds like a leaf blower. That does not mean that I’m opposed to Apple making such a machine for those who long for it; but it’s not the machine that I would buy.

I, personally, am looking forward to taking delivery of my 15″ MacBook Pro in the coming weeks and I fully expect it to be a great machine. Shame it couldn’t be thinner and lighter and fan-less (yet).

Badminton Shuttlecock Spin & Aerodynamics

As well as being a Mac and iOS developer, I’m a keen Badminton player and have started playing again after a 10-year absence.

As a former academic researcher, one thing that has always intrigued me about badminton is just how the badminton shuttlecock actually behaves when in flight and especially precisely how it reacts to spin.

As in many other racquet sports, players regularly slice shots, but because of the unique shape of the shuttlecock and its sharp deceleration it behaves very differently to a ball.

The aerodynamics of Tennis or Table Tennis balls are well understood and have been studied extensively, but the same is not true of the Badminton shuttlecock, which is surprising given that Badminton is by far the most popular racquet sport in the world (a fact that is not easily understood in the Western world, where it is still fairly niche).

In 2006, I asked this question on a badminton forum and was shocked to find that even people who were directly involved in designing shuttlecocks did not seem to have a really good understanding of what actually happens when you slice a shuttlecock. 10 years on, there have been some aerodynamic wind tunnel studies and I’ll summarize what I’ve found:

Firstly, the aerodynamics of actual feather shuttlecocks and synthetic ones are very different, and advanced players will use only feather shuttlecocks, so we won’t go into the details of the synthetic ones.

All feather shuttlecocks are constructed so that they have a natural counterclockwise spin, as seen by the hitter when the shuttlecock is moving away from him/her. This is due to the overlapping of the feathers, which creates an asymmetrical shape. This “natural” spin, caused by the air passing over the feathers, stabilizes the shuttlecock in flight. The spin around the central axis of the shuttlecock gets faster as the shuttlecock travels faster. When the shuttlecock slows down, so does the spinning, and it becomes less stable.

So far, this “natural” spin arises simply from the shuttlecock’s construction and is not due to player intervention. You can observe it by dropping a shuttlecock from a raised platform, e.g. a balcony.

As far as I understand it, until a certain speed is reached the spin of the shuttlecock has little effect on its drag coefficient and simply stabilizes it, much like a spinning top. Once the spin rate passes a threshold, however, the centrifugal force that it exerts on the shuttlecock pushes the “skirt” outwards, thus increasing drag and leading to a significantly faster deceleration. I’m not sure from the studies I’ve seen whether this is due to the feathers themselves bending or only to the strings that hold them together “giving” a little.
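As a crude illustration of that threshold effect, here is a one-dimensional toy simulation: quadratic air drag, with a higher drag coefficient standing in for the flared skirt of a heavily spun shuttle. Every number in it (mass, area, coefficients, launch speed) is invented for the example; real shuttlecock measurements differ.

```python
# Toy model: horizontal deceleration of a shuttlecock under quadratic drag,
# m * dv/dt = -0.5 * rho * cd * A * v^2. All constants are illustrative.

def distance_travelled(v0, cd, mass=0.005, area=0.0033, rho=1.2,
                       dt=0.001, duration=1.0):
    """Distance covered in `duration` seconds, starting at speed v0 (m/s)."""
    x, v = 0.0, v0
    for _ in range(int(duration / dt)):
        drag_acc = 0.5 * rho * cd * area * v * v / mass   # deceleration
        v = max(0.0, v - drag_acc * dt)
        x += v * dt
    return x

# Same launch speed; the spun shuttle's flared skirt means more drag,
# so it falls shorter.
plain  = distance_travelled(v0=60.0, cd=0.6)
sliced = distance_travelled(v0=60.0, cd=0.9)
print(f"plain:  {plain:.1f} m")
print(f"sliced: {sliced:.1f} m")
```

The point of the sketch is only the qualitative ordering: for equal launch speed, the higher drag coefficient always yields the shorter distance.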

When a right-handed player slices the shuttlecock in the “normal” left-to-right direction (clockwise), this will add to the “natural” counterclockwise rotation of the shuttle as it reverses its path. The rotation will thus be faster than it would be at the same speed without the slicing action.

Under some circumstances, the slicing action will thus cause the shuttle to decelerate more sharply, the skirt deformation increasing its drag, and the shuttle will then fall shorter. If I understand correctly, this will only be the case if the shuttle rotates quickly enough to cause the skirt deformation. If the counterclockwise spin is increased but remains below the deformation threshold, the shuttle should simply travel in a more stable trajectory. Whether this stability increase is significant or not, I don’t know and can’t find any research on it.

It is clear, though, that applying spin to the shuttlecock through the racquet’s slicing action will have a significant influence on its trajectory when the shuttle is hit at great speed. The skirt will then deform and increase drag, resulting in a shorter distance travelled.

When applied to a flat drive or to an attacking clear, the slicing action will allow the player to hit the shuttle much harder while still keeping it inside the court, where a straight shot leaving the racquet at the same speed would go long.

In this scenario, the shuttle will travel faster on average until it comes to a stop and drops straight down towards the floor. The slowing effect is strongest initially and cuts off altogether at some stage during the flight path, when the rotational forces become too small to cause skirt deformation and the shuttlecock’s construction reimposes the natural spin rate. The later part of the sliced shuttle’s flight path is thus identical to that of a straight shot.

The difference in average speed thus stems entirely from the higher initial speed of the shuttlecock.

Some people believe that the rotation of the shuttlecock itself could provide a propeller-like speed increase, but this is not true. The rotation only influences the drag coefficient; it provides no forward or backward momentum.

In ball sports, topspin and slice work by creating pressure differentials around the ball. It looks like the “gap” between the base (cork) of the shuttlecock and the skirt (feathers) produces a pressure differential that is crucial to generating the shuttlecock’s strong deceleration, but I haven’t seen any evidence that these pressure differentials are influenced by the axial rotation of the shuttlecock.

So “normally” sliced shuttles decelerate quicker than when hit “straight” and they might move somewhat more stably, but what happens when a “reverse slice” is applied through the racquet head moving right-to-left over the shuttle?

Well, I haven’t been able to find any research on this at all. In forum discussions some people claim that there is no difference, but this is obviously false because shuttle cocks are constructed with a “natural” anti-clockwise rotation and the “reverse slice” will apply a clockwise rotation.

There also certainly seems to be a difference when you actually reverse slice a shuttle cock in normal play, but everything happens so fast that it is impossible to observe exactly what is happening. I use reverse slice almost exclusively for left rear court cross court drop shots, particularly because of the deceptive element of the racquet moving in the opposite direction to the actual shot. I also feel that reverse slicing the shuttle on deep net pushes (such as when taking serves in doubles) makes it less likely to go out.

Interestingly, left handed players slice the shuttle in the opposite direction, meaning their “straight slices” are in fact “reverse slices” (imparting clockwise rotation) and their “reverse slices” are “straight slices” (imparting counterclockwise rotation). When you watch Lin Dan play, for instance, his shuttles certainly seem to take a different trajectory from those of most other players, and perhaps this is one explanation.

Unfortunately, there seems to be no firm evidence on this at all, so the remainder is mostly speculation, some of it inspired by forum posts.

It would seem logical that making the shuttle spin in the opposite direction to its natural spin would cause it to move less stably. At high speeds, the effect would likely be insignificant, but at lower speeds there should be more tumbling.

It would also be logical that the “natural” spin imposed by shuttle construction and air resistance would counteract the “reverse” spin and would probably cause the rotation of the shuttle to flip from clockwise to counterclockwise at some stage along its trajectory.

We would thus be left with a high degree of drag as the shuttle leaves the racquet, followed by a drop in drag as the centrifugal forces become too small to cause the skirt to deform, followed by a stop of the clockwise rotation and finally a re-establishment of the natural counterclockwise rotation of the shuttle. Only at the beginning of the shot could the centrifugal forces be great enough to decelerate the shuttle quicker than for a straight shot.

The big unknown in all of this is whether the amount of skirt deformation is the same for clockwise and counterclockwise rotation. If it is the same, then a straight sliced shot will decelerate for longer and thus always fall shorter than a reverse sliced one. If the deformation is greater for clockwise rotation, it depends on how much greater it is: if it is a lot greater, this would compensate for the shorter time during which it is effective and the reverse sliced shuttle would fall shorter.

As far as I can see, there have been no studies on this, but just looking at the shuttle cock construction, it certainly seems possible that the clockwise rotation against the “grain” of the feathers would significantly impact the airflow around them and create turbulence. This will definitely cause it to stop spinning clockwise rapidly, but whether it increases or reduces drag, and by how much, I wouldn’t want to hazard a guess.

Of course, whether you want to play a shot straight, sliced or reverse sliced depends on more than just the flight characteristics that it imparts. Body mechanics make it much easier to “straight slice” than to “reverse slice” for almost all shots. “Reverse slicing” may still be justified because it can be deceptive both in terms of racquet swing and flight path.

For maximum power, slicing smashes is probably not a good idea as it will make the shot slower. For check smashes or half-smashes, using “straight slicing” is probably most effective, but “reverse slicing” may have different flight path and deceleration characteristics which might inconvenience opponents.

Attacking clears can be heavily sliced so that they get to the back faster, because more initial speed can be imparted. Reverse slicing a clear is probably not a good idea, as the inferior body mechanics reduce the amount of power that can be put into the shuttle.

The situation is less clear for drop shots, where both approaches seem to make sense. The reverse slicing action is more deceptive than the straight slicing action and deception is very important for drop shots. Slicing the drop shot will allow it to be played faster than if played straight, so drop shots should generally be sliced.

The body mechanics of the forehand cross-court drop shot would make it hard to use a reverse slice, but hitting the shuttle with a straight swing and an angled racquet head provides a great way of playing “straight” sliced cross-court drop shots and thus seems the only way to go.

Similarly, playing a left-of-the-head cross-court drop shot with a straight slice would be very hard to do and the reverse slice is much easier to execute and more deceptive and thus the obvious choice.

When it comes to straight drops, things are rather more finely balanced. As we suspect that the straight slice is more effective at slowing down the shuttle, you can probably produce a more effective shot using this technique, and its advantage will increase with speed. So the closer we get to a half-smash, the more we should prefer the straight slice. At lower velocities, however, it is not clear whether any slice actually decelerates the shuttle at all; it might only make it more stable.

The reverse slice at lower speeds probably makes the shuttle less stable in the middle of its flight path as the “natural” spin reimposes itself and the shuttle briefly tumbles. This might be an advantage as the shuttle will travel under perfect control while it still rotates clockwise, letting you place it precisely. Then if the timing is right, it will start becoming unstable after it crosses the net and thus inconvenience your opponent.

Clearly, the reverse slice motion, while harder to perform, is also much more deceptive. So slow drop shots should probably be executed using the reverse slice.

On the forehand side, fast mid-court drives have a high risk of going long, but body mechanics make it practical to hit them with both forms of slice. On the backhand side, it is hard to see how one would be able to play a hard reverse sliced drive and few players will have enough strength to have to worry about sending the shuttle out anyway. So we only really have a choice on the forehand side, but there seems to be no advantage to trying to execute the harder reverse slice.

In summary then, sliced shots definitely decelerate faster at the beginning of their trajectory and thus fall shorter than straight shots. Reverse sliced shots definitely decelerate faster than straight shots, but probably differently from, and probably less than, straight sliced shots. Even straight shots cause the shuttle to spin counterclockwise at high speeds.

Any insights or corrections would be very much welcome, as I’m keen to understand this whole area better. Any pointers to relevant research or articles would also be much appreciated.

Tools of the Trade: AppCode, a breath of fresh air from the Xcode monoculture.

If you are a Mac or iOS developer, there is, for better or for worse, no way around Xcode.

Xcode is free and full-featured, so why would you ever want to use anything else? This is the main reason why there are practically no other Mac OS X or iOS developer tools on the market today: there just isn’t enough room for third parties to make developing expensive developer tools economically viable.

The only other serious IDE for Mac OS X and iOS development is JetBrains’ AppCode, and I’d recommend that every serious Apple developer own a copy. While Xcode has evolved into a powerful and mostly stable tool, Apple has a lot of blind spots and Xcode is in many areas (at least) 15 years behind the cream of the crop. AppCode isn’t.

JetBrains is the powerhouse of Java development tools and they represent everything that Apple does not. Where Apple is closed, secretive and has a very paternalistic approach to its developer community, JetBrains is open, transparent, friendly and as cross-platform as it is possible to be.

The advantage for an Apple developer such as myself is that you get a peek at the world beyond Apple’s strictly enforced white room monoculture. Using AppCode is as much about growing as a developer as it is about efficiently developing software.

JetBrains offers IDEs that support nearly every language that is available and the more outrageously new and niche a language is, the more likely that JetBrains has a tool for it. This means that once you get used to the basic IDE concepts, you can take that expertise and use it for developing in other languages, on other platforms (Android, Windows, Web) and with other technology stacks.

I use WebStorm for my own website development, RubyMine for web app stuff and IntelliJ IDEA for learning functional programming in Scala. If I ever wanted to learn CoffeeScript, Dart or Haskell I know I’d be covered there too. On top of this, JetBrains’ plug-in technology makes adding support for the latest and greatest open source technologies a breeze and JetBrains are very good at keeping an eye open for exciting new technologies. There’s a good chance that the first you hear about a new technology is by looking at JetBrains’ product release notes.

The AppCode IDE itself is very much in the mold of other Java development environments. The IDE can do everything and more, but it is also very busy and a long way from the pared-down minimalistic Apple aesthetic. It’s a nerdy power tool more than a philosophical statement.

JetBrains is rightly famous for their language parsing and refactoring acumen, so their IDEs are chock full of “intelligent” features. Not the kind of “intelligent” that makes everything harder, but the actual intelligent kind.

Navigating in AppCode is much more powerful than in Xcode. The gutter contains a myriad of options that will take you from method implementations to declarations and vice versa. You can also click and hold on class definitions to jump to super- and sub-classes, get in-line help and auto-fixing for common problems. The as-you-type code analyzer finds potential problems and offers standard fixes, and the code reformatting options are powerful and easily accessible. The intelligence extends seamlessly into finding all the places where a piece of code is actually used, rather than relying on text searches.

Best of all, however, AppCode can make changes to associated files without leaving the current file. The annoying roundtrip between implementation and header files that keeps interrupting your train of thought in Xcode can be wholly avoided. You write the implementation for a method and AppCode just offers you the ability to declare said method in the header with a single click, without ever taking your eyes off the code you are busy writing.

Working in AppCode you constantly find yourself wondering why Apple can’t just do this. If it seems obvious, it’s in AppCode. Unfortunately this is rarely true for Xcode.

Refactoring is part and parcel of the AppCode experience and baked so deeply into the IDE that it becomes a nearly invisible part of your development. If you are used to refactoring in Xcode, you are likely to be nonplussed by AppCode’s refactoring support. Where Xcode makes a huge deal out of every refactoring: taking a snapshot, making you validate a thousand changes and more likely than not failing bang in the middle of it, AppCode just makes the changes with no fuss whatsoever. The first time I used the renaming refactoring in AppCode, I wondered what I was doing wrong. I typed the new name into the red highlighted area and nothing happened! How do you terminate the editing? In fact, AppCode had already done the project-wide refactoring. Why make a fuss about it? Why could it fail? Why beach-ball for a few seconds? Why indeed?

AppCode enables you to work in a completely different manner to Xcode. Say you are into Test-Driven Development. Write the test cases first. When you instantiate your target class in the test class, AppCode will tell you that the class does not yet exist. A single click solves the problem by creating the class for you. As you write your tests, you can one-click to add method declarations and empty implementations. When you’ve finished with your test cases, there’ll be .m and .h files with complete stub implementations all without you ever leaving the test case implementation file.

Another big difference with Xcode is that where Apple knows everything best and either offers no customization or forces you to comply with their guidelines, JetBrains puts you in charge. Almost every aspect of the IDE is fully customizable: you can define your own coding style, which will cause AppCode to use your specific style when creating stubs. You can even decide to reformat your code automatically before checking it into source control. You can (obviously) choose your own source code management system, add CocoaPods support, edit and preview HTML, CSS, Compass, TypeScript or JavaScript files, or add your own selection of plug-ins. In short, JetBrains is for grown-ups who like making their own decisions.

Similarly, if you’ve ever felt the frustration of never being able to talk to anybody at Apple about Xcode, you will find the JetBrains support team a breath of fresh air. Something not working? Something not supported? Something you’d like to see added? Just drop them a line and an actual person will reply to you; better yet that person will be an approachable, open-minded fellow developer intent on helping you out. With JetBrains you’re the customer and you know best.

Seriously, just give it a shot. If only for a breath of fresh air.

The unbearable fragility of modern Mac OS X development

There, I’ve done it again: I shipped a broken A Better Finder Rename release despite doubling down on build system verification, code signing requirements validation and Gatekeeper acceptance checks, automation, quality assurance measures, etc.

Only in October, I had a similar issue. Luckily that time around it only took a few minutes to become aware of the problem and a few hours to ship a fix so very few users were affected. Right now I don’t know how many users were affected by the “botched” A Better Finder Rename 10.01 release.

This didn’t use to happen, despite the fact that I used to spend far less time ensuring that everything worked properly with the release management. Nor am I alone in this situation: lots of big as well as small developers have recently shipped similarly compromised releases.

The situation on the Mac App Store is much, much worse. Nobody other than Apple knows how many Mac App Store customers were affected by the recent MAS certificate fiasco that had the distinction of making it all the way into the pages of Fortune magazine.

The truth is that Mac OS X development has become so very fragile.

The reasons for this are manifold and diverse but boil down to: too much change, too little communication, too much complexity and, finally, too little change management and quality control at Apple.

The recent Mac App Store (MAS) fiasco that left many (1% of Mac App Store users? 100%? Nobody knows) users unable to use their apps purchased from the Mac App Store was down to Apple’s root certificate expiring. This was a planned event: certificates are used for digitally signing applications and they are only valid for a particular period of time, after which they need to be replaced with new certificates.

When the Mac App Store certificate expired, it was replaced with a new certificate but there were two problems. First, the now expired certificate was still cached by some of Apple’s servers: when Mac OS X opens an application it checks its signature, which in the end is guaranteed by Apple’s root certificate. Since this was no longer valid, Mac OS X refused to launch them and reported them as “broken”, leaving users and developers equally baffled. After far too long, Apple investigated the problem and emptied their caches which made the problem go away.

The second problem which was not solved by updating the caches, was due to Apple also replacing the certificate with a new, higher security version; of course without telling anybody. The new certificate could not be verified with the old version of OpenSSL that was used in the receipt checking code of many shipping apps.

When Apple created the Mac App Store, it provided a “receipt” that each application should check to see whether it has been properly bought on the Mac App Store. This is just a signed file that contains details about what was bought and when. Instead of doing the obvious thing, which would have been to provide developers with an API for checking the validity of the receipt against Apple’s own rules, they just published snippets of sample code so that each developer could “roll their own” verification code. Supposedly this was for added security (by not providing a single point of failure), but it seems more likely that they couldn’t be bothered to ship an API just for the Mac App Store.

This decision came back to haunt them, because most developers are not crypto experts and so had to rely on developer contributed code to check their app’s receipts. Once this worked properly, most developers wouldn’t dream of touching the code again.. which is how it came to pass that many, quite possibly a majority, of Mac App Store apps shipped with the same old receipt checking code in 2015 that they originally shipped with in 2010(?). This was fixed by Apple revoking the new style certificate and downgrading it to the old standard.

For once, I had been ahead of the curve and had recently updated all the receipt code in my applications (no small feat) and I have yet to hear from any customers who had problems.

Just before the Mac App Store fiasco, however, many non-MAS applications had also shipped with broken auto-update functionality.

Apple does not offer any auto-update facility for applications that are not on the Mac App Store, which led to Andy Matuschak’s “Sparkle” framework becoming the de-facto standard for adding an auto-update feature to Mac applications.
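An “app cast”, for the uninitiated, is just an RSS feed with a Sparkle namespace extension that tells the installed copy of the app where to find the new version. A minimal illustrative feed looks roughly like this (the URL, version number and signature value are placeholders, not real ones):

```xml
<rss version="2.0"
     xmlns:sparkle="http://www.andymatuschak.org/xml-namespaces/sparkle">
  <channel>
    <title>Example App Updates</title>
    <item>
      <title>Version 10.0</title>
      <!-- The enclosure carries the download URL plus Sparkle metadata -->
      <enclosure url="https://example.com/ExampleApp-10.0.zip"
                 sparkle:version="10.0"
                 sparkle:dsaSignature="PLACEHOLDER-SIGNATURE"
                 length="1234567"
                 type="application/octet-stream"/>
    </item>
  </channel>
</rss>
```

The app periodically fetches this feed, compares `sparkle:version` against its own version and, if newer, downloads and verifies the enclosure.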

Driven by abuse of some HTTP communications in iOS apps, Apple decided that in iOS 9 it would by default opt all developers into using only (more secure) HTTPS connections within their applications. What is good for iOS 9 can’t be bad for Mac OS X 10.11 El Capitan, so Mac applications also got opted into this scheme.

Unfortunately, that broke Sparkle for applications which do not point to HTTPS “app casts” such as mine. I have long resisted installing my own HTTPS certificates because I was worried about messing up the expiry periods, etc.. apparently just the way that Apple did with the Mac App Store certificates.
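For what it’s worth, Apple does document an opt-out: an App Transport Security exception in the application’s Info.plist. The blanket form below disables the HTTPS-only requirement app-wide; the narrower per-domain `NSExceptionDomains` variant is preferable when the server is known:

```xml
<!-- Info.plist fragment: opts the app out of the HTTPS-only rule -->
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>
```

Of course, this only helps if you know the rule exists in the first place.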

Most developers will have been unaware of the change since Apple never announced it, but I had happened to see the WWDC conference videos that mentioned this in passing. Unfortunately, nothing “clicked” in my head when I heard this. My applications do not communicate with a server anywhere and I thus thought that this was not something I had to worry about. I had forgotten that Sparkle might use this internally.

Fortunately, I caught this at 6AM when I released A Better Finder Rename 10 final. I was just doing a normally completely redundant check through all the features of the program when I noticed that the new version failed when trying to check for updates. By 8AM, I had identified and fixed the problem so that very few people indeed could have been caught out by it. That was luck though.

The nefarious element here was that applications were opted in automatically and silently. Before 10.11 El Capitan was installed on your Mac, my applications updated just fine. Afterwards, they no longer did. Just because they were on El Capitan. Gee thanks!

Of course, this would not have happened if I hadn’t built A Better Finder Rename 10 with the Mac OS X 10.11 SDK (Software Development Kit) at the last moment.

It is somewhat crazy for a developer to change the SDK that s/he builds a brand-new version of their software against in the middle of the beta phase. Changing the SDK always introduces errors because the entire environment in which the code executes is changed. This may bring out bugs that were already present; things that should never have worked, but worked just because the API happened not to trigger the bug. It also introduces bugs that are just part of the new SDK and that you now have to work around. Changing SDKs makes existing programs fragile.

I’m very conservative when it comes to changing SDKs because I’m well aware of the risks. That’s why I’ve been building my code against older SDKs for the past 15 years. A Better Finder Rename 10 was built against the Mac OS X 10.7 SDK which is forwards-compatible with newer versions of Mac OS X.

The main reason for doing so is that I wanted to be certain that I didn’t accidentally break A Better Finder Rename on older systems, which brings us to the next problem with Mac OS X development.

Xcode lets you specify a “deployment target”, for instance 10.7, while building with a newer SDK. This is the recommended way of developing on Mac OS X and keeping backwards compatibility. Xcode will, however, happily let you use APIs that are not compatible with your deployment target and thereby ensure that your application will crash on anything other than the latest Mac OS X.

In fact, Xcode encourages you to use the latest features that are not backwards compatible and will rewrite your code for you if you let it, so that it will crash. It will give you “deprecation warnings” for any API usage that is not in the latest SDK, and resolving those warnings is likely to break backwards compatibility as well. Of course, you won’t know this until you run it on the old Mac OS X version.

Now which developer can afford to keep testing rigs with 10.7, 10.8, 10.9 and 10.10, never mind spend the time re-testing on each of them for every change?

Thus I happily built with the 10.7 SDK. Apple did not make this easy by not shipping the old SDKs with Xcode, but you could manually install them and they would work just fine.

Imagine my surprise after installing Xcode 7 and finding out that this no longer worked. The only workable solution was to build against the 10.11 SDK, so jumping forwards not one but 4 SDK versions. A bunch of code wouldn’t compile any longer because the libraries were gone. Luckily the receipt checking code was amongst those, so it got modernised just in time to avoid the Mac App Store receipt fiasco.

Nonetheless, now my entire code base had become fragile and largely un-tested between the last beta release and the final shipping product. Nightmare!

On top of that, was it still even 10.7 compatible? Or indeed 10.10 compatible? Just quickly running it on older systems wouldn’t provide more than a little additional confidence, since it’s impossible to go through every code path of a complex product.

After installing virtual machines to test on, I still couldn’t be 100% certain. The solution came in the form of Deploymate, a now truly essential developer tool which does what Xcode can’t: check that API usage is compatible with the deployment target.
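The principle of such a check is simple enough that its absence from Xcode is hard to excuse. Here is a deliberately naive sketch of the idea, my own illustration rather than how any real tool works: the availability table is a tiny hand-picked subset, and a real checker parses the SDK headers’ availability annotations instead of substring-matching source code.

```python
# Hypothetical availability table: symbol -> (major, minor) of the first
# Mac OS X release that provides it.  Illustrative subset only.
AVAILABILITY = {
    "NSVisualEffectView": (10, 10),
    "NSURLSession": (10, 9),
    "NSRegularExpression": (10, 7),
}

def incompatible_symbols(source, deployment_target):
    """Return the known symbols used in `source` that are newer than the target."""
    return sorted(
        name for name, introduced in AVAILABILITY.items()
        if name in source and introduced > deployment_target
    )

code = "view = NSVisualEffectView(); session = NSURLSession()"
incompatible_symbols(code, (10, 7))   # both symbols flagged
incompatible_symbols(code, (10, 10))  # nothing to complain about
```

Xcode has all the information needed for this check (the availability annotations ship in every SDK header), yet only warns about deprecations relative to the build SDK, not about APIs missing from the deployment target.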

I have since spent many weeks trying to ensure that I won’t run into the same problems again by adding (additional) automated verification processes to my build system. My build system now runs the built product through SDK compatibility checking courtesy of Deploymate, code signing validation and Gatekeeper verification on each build. I’m still working through deprecation warnings and the like, and my code base will soon be bullet-proofed, at least until the next forced changes arrive.

You’d think that this was a long enough list of problems for one year, but this still does not account for Apple also changing the code signing rules (once again) earlier in the year (in a point update of 10.10 no less). This time it affected how resources and frameworks are signed. So applications that were signed correctly for years, now suddenly became incorrectly signed and Mac OS X would refuse to launch them because they were “broken”.

All this points to the underlying issue behind the current spate of fragility of Mac applications: Apple keeps changing the status quo and neither it nor developers have any chance of keeping up.

Apple’s own applications are full of bugs now. None more so than Xcode, which is both the linchpin of all Mac OS X, iOS, watchOS and tvOS development and no doubt Apple’s most fragile app offering. Xcode is in beta at least 6 months a year and never really stabilises in between. Each new version has new “improvements” to code signing, app store uploading, verification code, etc.. and each new version breaks existing code and introduces its very own new bugs and crashes. From one day to the next, you don’t know as a developer whether your code works or not. Existing code that worked fine on Friday evening no longer works on Monday morning. Worse, chances are that you are not hunting for your own bugs, but for those in your development tools, the operating system or Apple-supplied SDKs.

All this is driven by the one-release-a-year schedule that Apple has imposed on itself. This leaves all of Apple’s software in various stages of brokenness. When Apple’s own staff cannot deal with this constantly shifting environment, how are third party developers supposed to?

Case in point: Apple’s own apps are not all iOS 9 compatible yet. Many don’t support the iPad Pro’s new native resolution yet. Some have gained Apple Watch extensions, but most haven’t.

Reliability is a property of a system that is changed slowly and deliberately and whose constituent parts are themselves reliable. The Mac and all other Apple platforms are currently undergoing the worst dip in reliability since Mac OS X was introduced.

Apple is pushing out half-baked annual releases of all its software products, as well as apparently completely unmanaged changes to policies, external rules and cloud services at an ever more frenetic pace.

These could be written off as temporary “growing pains”, but the big question is: Do all these annual updates equate to real progress?

When I switch on my Mac today, I use it for much the same things that I used it for 10 years ago. A lot has changed. Cumulatively Mac OS X 10.11 El Capitan is somewhat better than 10.6 Snow Leopard.. and yet if you discount cosmetic changes and new hardware, nothing much has changed. Certainly nothing much has actually improved.

I can’t help thinking that if we had had 2 or possibly 3 Mac OS X updates instead of 5 over those last 5 years, we’d be in a much better shape now. Apple and developers would have time to provide user benefits and rock solid reliability rather than just endlessly chasing their own tail.

The beauty of the Mac used to be that it just worked. I want to get back to that.

A Better Finder Rename 10 beta auto-update broken on 10.11 El Capitan

We are sorry to report that the auto-update feature on beta releases of A Better Finder Rename 10 is broken on Mac OS X 10.11 El Capitan and you will need to download the update to version 10 (out yesterday) directly from our website.

We noticed this early, at 6AM yesterday, while checking the A Better Finder Rename 10.00 release, and shipped a fixed version at 9AM (after struggling through work traffic), both GMT+1.

We have over time evolved a build process that guarantees that we only ship high-quality product builds, but we were caught out this time by the rapid pace of change imposed by Apple’s frequent Mac OS X and Xcode updates.

In the past, Apple was quite good about letting developers upgrade their development environments at their own pace, which is important because Mac users do not expect to have to upgrade their Macs as soon as a new Mac OS X release drops. More recently, Apple has transferred a lot of its iOS practices to Mac OS X and has started really pushing developers to adopt features quickly and to get rid of backwards compatibility quickly.

At first this took the form of a gentle prodding, but over time it has become much more aggressive. Essentially, they are deliberately making it hard for developers not to drop support for older OS X versions.

We were caught out in this and still are. We had to install 10.11 on our development machines in order to test on El Capitan properly (remote debugging has been discontinued for a while now), which led to an auto-update to Xcode 7.

We had for the past decade built our products using the latest Xcode but using the oldest compatible SDK (in this case 10.7), because this ensures that the builds do not break backwards compatibility for customers on older OS X releases.

We were caught by the fact that Xcode 7 quietly drops the ability to work with older SDKs. Unfortunately, building against the 10.11 SDK opts the program into a new rule that it may only read HTTPS streams, for security reasons. This was mentioned at WWDC for iOS applications, but OS X hardly gets a mention in those talks. In any event, we did not think that this would affect A Better Finder Rename as we have no server backend, but as it happens, it breaks Sparkle auto-updates, which power our product’s (and 99% of non-Mac App Store applications’) auto-update feature. Note that our auto-updates are securely signed even though they do not use HTTPS.

As a result, the auto-update feature works fine as long as you are not on 10.11, but no longer works on 10.11. We only found this out when we shipped the first update after users had started installing 10.11, resulting in a minor but real mess where the A Better Finder Rename beta cannot auto-update.

Sorry for the inconvenience.

Tools of the trade: Monitor Arms

Ergotron LX

Sitting in front of a computer display all day long does not do wonders for your health.

Things are made considerably worse if that computer display is of the notebook kind. Laptops in general are ergonomic nightmares putting your body into all the wrong positions. First, you need to look down all day long, then to make matters worse, the keyboard is attached directly to the screen forcing you to find the least bad compromise between positioning your arms and hands correctly and getting the screen into a semi-comfortable viewing position. Unfortunately no good compromise exists and you will over time do both your upper extremities and your neck/ back/ shoulders in. If it feels a little uncomfortable now, trust me, it’ll hurt a few years down the line.

Desktop computers are much better in that regard, allowing you to independently adjust keyboard, mouse and display. Unless you are using an iMac, of course. Its aesthetics-over-function design has led everybody’s favourite industrial designer, Jony Ive, to give it a stand that is far too low to allow you to view it comfortably. Such a shame, because his Luxo Jr.-inspired second generation iMac featured what must surely have been the best built-in monitor arm ever.. Oh Jony..

When it comes to ergonomic Macs then, it’s a choice between the Mac Pro (my choice), the Mac mini (also my choice) or the newish VESA-mounted (stand-less) iMac.

On this type of setup you can not only choose your own (non-glossy if you want it to be easy on your eyes) display(s), but also adjust its height, distance from your eyes and inclination to your heart’s content.

The “ideal” viewing position is usually said to be at least 30cm (circa 12 inches) from your eyes, with the top-most row of pixels level with your eyes. A very slight forward tilt to the monitor is also said to be beneficial.

I find that advice to be fairly close to what I find comfortable myself, even though it’s better still to slightly raise and lower the display every now and then.

Monitor arms allow you to reach this position very easily and make adjusting it much less painful, though even the best monitor arms are not quite as good as the one on that second generation iMac. Monitor arms also free up space under the monitor and make for a much tidier setup over all.

I personally own several Ergotron LX Desk Mount Tall Pole mounts and they are great. They can be screwed directly onto your desk and, once installed, are much steadier than their admittedly much cheaper counterparts. They are sturdy, easy to set up correctly, and can be adjusted within a very large range of distances and heights. Moreover, they work great in multi-display setups.

I’m fairly tall (6 feet 4) and I find that having the display slightly higher than is usually recommended is most comfortable for me. Most monitor arms do not stretch high enough for me, but the tall pole to which the Ergotron LX’s arm is attached allows for raising the displays as high as, and indeed higher than, is comfortable. Anyway, it’s better to have more range of adjustment than you need than to have just that little bit too little.

I also use a sit/stand desk in my home office and unfortunately even the tall pole version of the LX does not go high enough to cope with the standing position.

In theory, you shouldn’t have to adjust the height of the screen at all when your desk goes into the standing position: when you are sitting at your desk, you are supposed to hold your upper body completely straight, just as if you were standing. Or at least that’s what the theory says.

In practice, my merely-human body isn’t candle straight at all times but likes to move around, lean forward, then back, etc. When I stand up, I find that the screen is too low for comfort and it needs adjusting upwards. The Ergotron LX Sit/Stand Monitor Arm gives you jumbo-sized adjustability and takes even heavyweight monitors. I originally got those for my twin 30″ Apple Cinema Displays, which showed their age through their ludicrous weight. One of them now has my Dell UP3214Q 4K monitor on it, while the other supports a Dell UP2713HM; both awesome displays in different price ranges and weight categories.

Designed to be used to easily lift a monitor from a sitting to a comfortable standing position without the desk itself moving, the sit/stand version of the Ergotron easily deals with the comparatively small task of lifting the monitors that extra bit higher. The sit/stand version is clearly overkill but in a good way. It’s much more stable and paradoxically moves much more easily with even heavy loads. Not cheap but highly recommended, even in combination with a sit/stand desk.

Monitor arms seem like an indulgence to most people, but the cost of an ergonomic setup is dwarfed by the cost of wasted productivity and the inevitable medical bills that accumulate after a decade or two of full-time screen-based work. For a home-based full-time IT professional like myself, there really should be no hesitation in splurging on a proper setup.



iPad Pro: A professional tablet with a phone operating system and an everything-for-$1 store?

Steve Jobs never understood “business“. Don’t get me wrong: he did understand how to make money. He did understand how to run a company (kind of). Most of all, he very much understood consumers.. but he never understood organisations, least of all how to sell to them. I’m not sure I cared then or now.

The iPad Pro brings us a decent stylus, something that was anathema to Jobs, but in many other ways Tim Cook’s Apple seems little different from Jobs’ Apple when it comes to understanding the “Pro” crowd.

The stylus (sorry, “Apple Pencil”) is a great step in the right direction for both Apple and for the iPad. It shows that Apple is capable of listening to reason and putting outdated home-brew memes behind itself. It is a long-overdue improvement for iPad users in the creative fields, as well as for people who like taking hand-written notes or just like doodling; I’ve gone through a host of really crappy slightly-better-than-meat-pencils accessories and I’ve been lusting after a Wacom Cintiq Companion for ages. I even own a Microsoft Surface Pro 3 and a Livescribe Sky WiFi Pen, so I’m clearly desperate.

It’s a few years late because Steve carefully crafted the meme that styluses are bad at everything just to rubbish Microsoft’s earlier attempts at creating tablets. I doubt he even really believed it himself, but it did make a great one-liner and for years Apple devotees could finish every conversation about styluses with a superior “you have no taste”.. unless they themselves owned a bunch of rubbish styluses just like me..

The new iPad Pro keyboard is a pure Microsoft Surface rip-off, even though they changed the hinge to make it much less practical. Quite probably this was done to make it a little different from the Surface’s, and quite definitely because Jony Ive had a stroke when somebody suggested putting a stand on the back of his iPad.

What’s a real shame is that they didn’t put 3D Touch on the iPad Pro. A standardized right-click tap would have made the iPad much more productive than the tap-and-hold gesture does today. Once again Apple deliberately makes a product worse than it has to be, just so that it can upgrade all the iPads to 3D Touch in the next iteration. It’s not a winning strategy, unless you can afford it.

Another omission is that of a trackpad on the keyboard. You’ll probably say “just tap the screen dummy!“, but once you’ve used a Microsoft Surface you realise that having a trackpad on the keyboard is essential. It makes “mousing” around the screen much more efficient (because of the up-scaling of the motion), avoids awkward reaching motions and plain “puts you in command”.

None of this really matters though. The iPad Pro won’t take business users by storm anyway. It’s not going to be a complete dud, but it’s not going to make the iPad into a serious productivity tool either.

The reasons for this are (almost) too many to mention. The two that are going to kill it are the App Store ecosystem and the operating system, but there are plenty more besides.

iOS is a smartphone operating system designed for quick, simple, casual interactions on a small touch screen. It’s been conceived for the iPhone form factor. It has been scaled up to the iPad, but the iPad was and remains a big phone without the phone features. iOS never really embraced the larger screen. In recent years Apple has almost completely ignored the iPad in its iOS revisions. iOS 7 pretty much forgot about it altogether. On the rare occasions that Apple does bother to demo iPads at all, it’s always to showcase some game or other.

iOS is rubbish at supporting keyboards. Yet the keyboard is a crucial part of what makes people productive on a Mac, or a PC for that matter. Any experienced computer user knows how to zip around the screen using a combination of the arrow, tab and escape keys and keyboard shortcuts. Command-C and Command-V sound familiar? In many respects, Windows is still much superior to even Mac OS X when it comes to keyboard navigation. You can get quickly into any menu and select any option, whereas on the Mac it’s possible but really just for masochists (Ctrl-F2, is it?).

All this keyboard navigation magic requires an infrastructure in the operating system. In Windows much of it is automatic; on the Mac it’s a lot of hard work in most cases. On iOS.. well, even if there were a decent infrastructure, no developer would ever pay any attention to it. Even in Apple’s own apps, a simple thing like the arrow and tab keys working is by no means a sure thing. More often than not, when you connect a third party keyboard to an iPad, you enter a world of frustration. Nothing works. Half the time you need to tap around the screen at arm’s length to perform even the most basic editing tasks. In fact, writing on the iPad is not half as frustrating as editing on the iPad.. in that way it is very similar to the dictation features.
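To be fair, iOS does have a hook for this: the UIKeyCommand API lets an app declare which hardware key presses it wants to handle. The catch is that it is entirely opt-in, per app and per key, which is why so little works out of the box. A minimal sketch of what a developer would have to write (the view controller and selector names here are made up for illustration):

```swift
import UIKit

class EditorViewController: UIViewController {
    // iOS only delivers hardware key events that the app explicitly
    // registers for; unregistered keys (e.g. the arrows) do nothing.
    override var keyCommands: [UIKeyCommand]? {
        return [
            UIKeyCommand(input: UIKeyCommand.inputUpArrow, modifierFlags: [],
                         action: #selector(moveSelectionUp)),
            UIKeyCommand(input: UIKeyCommand.inputDownArrow, modifierFlags: [],
                         action: #selector(moveSelectionDown)),
            UIKeyCommand(input: "s", modifierFlags: .command,
                         action: #selector(saveDocument))
        ]
    }

    // Hypothetical handlers -- each key an app cares about needs one.
    @objc func moveSelectionUp() { /* move the selection up */ }
    @objc func moveSelectionDown() { /* move the selection down */ }
    @objc func saveDocument() { /* save the current document */ }
}
```

Multiply that by every key, every shortcut and every screen in an app, and it becomes obvious why most developers never bother, and why tab and arrow keys so often fall dead on the iPad.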

The video with the iPad Pro sitting flat on the desk with its full-size on-screen keyboard was hilarious. Who could type like that for more than a few minutes at most? How many hours of chiropractor’s work is involved in undoing the neck strain that you’d get after a mere hour of sitting like that?

So in practice, if you’re going to be typing long articles on the iPad Pro and you don’t have the keyboard cover, you’ll have it propped up on something. It might as well be the keyboard cover. In fact, is there even a non-keyboard cover for the iPad Pro?

Again, the much maligned Microsoft Surface is much more practical in that respect. It has a great built-in kickstand that means you can angle it even if you don’t have the keyboard cover on it. Great idea, methinks!

So you’re there, doing your “Pro” work on your iPad Pro in landscape mode, with the device propped up on its keyboard cover, seething at the fact that keyboards don’t work properly. Now you’re faced with the biggest question of all: where’s the productivity software?

Well, Microsoft to the rescue: they have Office on the iPad. It’s probably not anywhere near as good as Office on the Mac or on a PC, but it looks like they’ve put a lot of effort in. The keyboard seems to be functional for navigation as well as for plain old typing. You still don’t have a file system, so you’ll have to use some kind of cloud service (surely not iCloud), but you can get some work done that way.

Beyond Office, however, you’ll soon fall into the abyss, and on your way down you’ll find it’s a bottomless pit.. at least you won’t die, since it’s not the falling that kills you. There is very little serious business-minded productivity software on the iPad and there quite possibly never will be. The reason for this is that nobody writes productivity business software for fun. Serious people write serious (aka boring) software for profit, and there is none to be made on the iPad.

Apple’s App Store has long been a particular gripe of mine. Its business model encourages throw-away get-rich-quick software (games) and free (aka get-big-quick venture capital funded) software, but nothing in between. Professional-grade productivity software is sold on a pay-up-front basis and relies on a constant upgrade revenue stream.

The solution so far has been for companies like Adobe or Microsoft to make the software free on the iPad and require an out-of-App Store subscription to a “service” like Office 365 or Creative Cloud. This gets around the 30% Apple share of anything sold on the App Store and its lack of upgrade options. This option is, however, neither available to most App Store developers, nor practical for most software.

The current state of the App Store is not tenable for Pro-level productivity software. About the only people making any serious money with productivity apps on the App Store are the Omni Group. They prove that it is at least possible, but they benefit from intense promotion through Apple. Without that constant promotion of their products on the App Store, one doubts that they would be able to sustain their business.

This also shows the fragility of making software for the App Store: make fun of Eddy Cue’s shirt while he’s standing next to you at the bar and your company is finished. More seriously, you are putting the fate of your company into Apple’s hands, and Apple is not known for taking good care of its partners (developers included) and very well known for brusque, unapologetic changes of direction that put people out of business.

The worst thing about the App Store is, however, the pricing. It takes a lot of time to make professional level software. Porting Adobe Photoshop to iOS would consume more than a man-lifetime; probably very much more. God only knows how much the Office port has cost Microsoft and they are still a long way from having parity with their PC versions.

You can’t expect anybody to put in that much time and money for a small and shrinking market where everything is $0.99 (or I guess $1.99 if you have an iPad, iPad Pro, iPhone, Apple Watch & Apple TV universal app).

The once vibrant Mac productivity software market shows that a marketplace where the majority of users are prepared to pay a fair price for high-quality software is possible, but even the Mac market is suffering through a combination of unsustainable race-to-the-bottom pricing and the other ill-effects of the Mac App Store with its stifling rules and arbitrarily enforced “guidelines”.

There are products on the Mac App Store that are doing very well, but it’s only the ones that get promoted by Apple. Apple tends to promote software that looks nice and/or is cheap and/or is written by people with a strong voice in the Mac community. You can’t rely on that.

Where in my opinion does this leave the iPad Pro? Between a rock and a hard place.

Apple is used to having people queue up to write software for their devices no matter what. The early-days mobile gold rush is, however, slowly coming to a halt. There are today many more developers who have tried and failed to make money on the App Store than ones who have had a positive experience (the stats suggest something like 10,000 to 1). As a result, most developers have grown more cautious about how they spend their time. iPad sales are faltering. Many developers already regard the iPad market as “dead” with no money to be made; they are probably right.

It is hard to see who is going to be willing to invest much more time and effort to make much more complex and feature-rich applications for a new niche within a shrinking market. Worse yet, because Apple is hiding the true cost of the iPad Pro by making keyboard and pencil optional extras, developers won’t even be able to rely on these accessories being present. How many iPhone developers are going to spend $1,000+ to test their software on a huge iPad? Not many. This makes me believe that mainstream support for adequate keyboard navigation and pen input is going to be very slow in coming, if it comes at all.

For the foreseeable future then, the iPad Pro is destined to be no more than a curiosity and its users will be just as frustrated as Microsoft Surface users are today. The hardware is there, but the software isn’t and probably won’t ever be.

Changing this will require a change of heart from Apple and Apple, regrettably, has become too big to be nimble, too successful to be humble, too set in its ways to welcome change.. and I suspect too afraid to change a winning formula.

After all that moaning: Will I get one? Hell, YES!

I’ve been waiting for an iPad with a decent stylus since it first came out. I’ve been wanting a bigger iPad to read my magazines (I’m 43 and my eyes could be better) and comics (I’m still young, damn it) on for just as long. So count me in.

Will I be writing long blog posts on it? No. That might be a good thing 🙂