Microsoft Keyboard Comparison: Ergonomic v Sculpt v Surface

I’ve been using Microsoft’s Natural Ergonomic Keyboard 4000 at work, and the Natural Wireless Ergonomic Keyboard 7000 at home, for as long as I can remember. It’s always bothered me a little having the wire on my otherwise minimal desktop setup at work. So when a colleague of mine recently started using the Microsoft Sculpt Ergonomic Keyboard, it prompted thoughts of a change. Microsoft then finally launched the Surface Ergonomic Keyboard in the UK, so I ordered both and tried them out.

Before I kick off the review it’s worth mentioning my perspective: I spend most of my day programming Java in Eclipse on OS X, and I also use Emacs a fair bit to navigate around log files. I work on a 2016 MacBook Pro, and have Apple headphones in most of the day so I can listen to music and take Skype calls, but still hear what’s going on in the office.

Natural Ergonomic Keyboard 4000

This keyboard has a great layout, with an integrated number pad and dedicated buttons for volume control. In the office I often use the mute button when I need to quickly tune into a conversation that’s going on around me. After trying the Sculpt and Surface keyboards and coming back to this, I found the keys to be really clunky, with way too much travel. It feels hard going after using one of the more modern keyboards. It’s also frustrating that the wireless model has been discontinued. I did monitor eBay for a while, but a new one with a UK layout never came up. At only £33 on Amazon this is still a great-value keyboard.

Natural Wireless Ergonomic Keyboard 7000

Exactly the same as the 4000, but wireless. As this is now discontinued, I won’t consider it for this review.

Sculpt

This keyboard looks fantastic. It’s small, black, and looks great on my desk. The keys are super snappy, making typing a breeze. It does, however, lack a number pad. You can use the separate one that comes with the keyboard, but when I tried this I found myself knocking into it and flipping it over. The function keys are a really odd shape; I wish they were just normal keys like on the other two keyboards. The extra function keys have been removed; instead, the traditional function keys have been made multi-purpose via a switch in the top right of the keyboard. This is a bit clunky, as my one-touch mute has now become an awkward switch slide and then a mute. I think this is fairly priced at £63.05 on Amazon, which includes the separate number pad and a mouse (I don’t use the mouse; I’m sticking with my Logitech M510).

Surface

The layout is very similar to the Ergonomic keyboard’s, and it has an integrated number pad. The keys have a much shorter travel than the Ergonomic’s, but are not as nice and snappy as the Sculpt’s. The keys also have slightly sharp edges, which can make it a little uncomfortable at times. Rather than using USB wireless, the keyboard uses Bluetooth. This is great as it’s one less receiver in my USB hub, but unfortunately the Bluetooth connection doesn’t work well with my Mac when it sleeps. I can’t use the keyboard to wake it, despite having all the options set to allow Bluetooth wake in OS X; I have to use either my laptop keyboard or my mouse. Normally, once the computer has woken, the keyboard will reconnect automatically, but sometimes I have to go into the Bluetooth settings to force a reconnection, which is really frustrating. Like the Sculpt there are no dedicated smart keys; they are again combined with the standard function keys. Microsoft describe the keyboard as being “in stunning two-tone grey mélange Alcantara”; personally I’d rather they’d gone with black, as this looks a bit dull to me. At £119, only available from Microsoft, this really is a super expensive keyboard.

The Winner

After trying both the modern keyboards out I knew it was time for a change, so it was really between the Sculpt and the Surface. I’ve decided to go with the Sculpt, primarily for the feel of the keyboard. It’s so nice to type on; the keys really are fantastic. Whilst I did like having the number pad on the Surface, the Bluetooth issues really were an annoyance.

The Perfect Keyboard

If I could have some input into the next keyboard Microsoft made, it would be this: start with the Sculpt, make the function keys real keys, add some hot keys, offer an option with an integrated number pad, and add a little more padding to the bottom of the palm rest.

Update: 10th April 2017

I’ve finally realised what it is about the Sculpt that is so much nicer than the Surface. The keys on the Sculpt have a little dip in the middle, allowing you to “anchor” your fingers in place, and the edges of the keys are also rounded slightly. The keys on the Surface are flat, which, although it’s only a tiny detail, makes you feel less connected to the keyboard. Two months into using the Sculpt, I’m delighted with my choice.

4 Areas For Poxon Sports To Improve On – Renold Quinlan V Chris Eubank Jr

Back in March of 2016 I went to see Eubank Jr fight Nick Blackwell for the British middleweight title at Wembley Arena in London. The event was hosted by Hennessy Sports. Whilst I was pleased to have my seat moved closer to the ring due to poor ticket sales, the organisation on the night left a lot to be desired. I didn’t write about it at the time, but here are a few of the key issues:

  • No ring walk music. Watching the fight back on Channel 5, they had music, but obviously Richie Woodhall, who was commentating, couldn’t hear it, as he made a comment that it “would be nice to have some ring walk music”, which would have sounded very odd to those who didn’t attend.
  • Post fight interviews were not played out to those in the arena.
  • We were actually asked to be quiet during the interviews we couldn’t hear!
  • After the Eubank/Blackwell fight there was a huge delay before the Hughie Fury fight, with no communication as to why; many left assuming it wasn’t going to happen. I asked several stewards, who didn’t even know if there was another fight scheduled.

So, whilst it’s all still fresh in my mind, I thought I’d throw out a bit of feedback for Poxon Sports on this event. Don’t get me wrong, I’m sure organising events like this is hard. My aim is to give constructive feedback on the event, as a consumer, which will hopefully go towards improving the experience for me and others at future events. So here are my 4 things to be improved on, in order of importance.

1 – Main Event Time

A main event has to start at 2200, 2230 at the absolute latest. If you’re trying to convert the casual fan, this is essential. If you’ve got a big undercard, start earlier. The fighters weren’t both in the ring till 2313, according to the timestamp on my photo:

If you don’t value my opinion, then perhaps the tweets by the former editor of Boxing News are worth a look:

then the next day:

2 – Running Order Communication

I went along with the following running order from the Poxon Sports Twitter feed:

It’s since disappeared, but I found this one:

Eubank Jr himself thought he was on at 2130:

The running order was actually:

  • Ardin Diale V Andrew Selby
  • Kid Galahad V Leonel Hernandez
  • Adam Etches V John Ryder
  • Chris Kongo V Edvinas Puplauskas
  • Christian Hammer V David Price
  • Renold Quinlan V Chris Eubank Jr

Would have been great if this had been communicated to the paying crowd. I asked a few stewards, who were equally confused!

3 – Crowd Control During Bouts

As is the way with boxing, the bouts earlier in the evening are of less interest to most of those there. It would have been nice, however, if the walkways between seating areas could be kept clear while fights are taking place. It’s frustrating having groups of people standing around in front of you chatting, and stewards not doing anything about it.

4 – Timings / Round Numbers

There were two huge screens for those watching from the back of the seating areas. They were excellent quality, and gave a great view of the eye injury to Hernandez during the Galahad fight. It would have been even better if these screens had shown the round number and time remaining. A small point I know, but every little counts.

Conclusion

Gripes aside, I had a great night. It was great to see the hugely talented Selby, slick skills from Kid Galahad, and the war between Etches and Ryder. I really feel for David Price; it felt just like the second Tony Thompson fight, where he just ran out of steam. It was a surprise to see Chris Kongo on so late, but he’s certainly one to watch and has a good following. Quinlan proved to be much tougher than expected, but the real tests lie ahead for Team Eubank. I fell for it last time when they called out GGG; let’s hope this journey arrives at the destination we all want.

Avoiding “This copy of Windows is not genuine” when copying a VMware Virtual Machine

I’ve recently upgraded from my early 2011 MacBook Pro to a 2016 MacBook Pro with Touch Bar. One of the issues I encountered when doing this was that when I copied my Windows 7 VM over, I was met with the following in the bottom right hand corner:

I thought just setting the MAC address of the new VM to be the same as the original would do the trick:
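Setting the MAC address means editing the VM’s .vmx file along these lines. This is just a sketch, assuming the first network adapter is ethernet0 and using a placeholder address rather than my real one:

ethernet0.addressType = "static"
ethernet0.address = "00:50:56:00:00:01"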

but that didn’t work. The trick is, when importing the VM onto the new machine, to choose “I Moved It” rather than “I Copied It” when prompted by VMware Fusion:

And no more nasty warnings!

That’s Not My Code!

I’ve just been reading with my 8-month-old son. His favourite book at the moment is That’s not my Santa.

That’s Not My Santa

The book is part of the “That’s not my…” series, written by Fiona Watt and illustrated by Rachel Wells. Each book follows the same pattern: 5 examples of the subject, in this case Santa, that are not “mine” for some reason.

That’s not my Santa. His sleigh is too sparkly.

And the final page shows an example of a Santa that is mine.

That’s my Santa! His beard is so fluffy.

Each page has some textures on it to back up the statement being made about Santa: fluff on Santa’s beard, shiny paper on his sleigh, that kind of thing. Pre-school kids just love these books. They’re made of thick card, so can withstand even the most determined of teethers!

One theme I’m sure they haven’t thought of is… code. If I were to author That’s not my code, what would I use as the examples? How about:

  • That’s not my code. You can’t see the whole function on a single screen.
  • That’s not my code. It contains magic numbers.
  • That’s not my code. It doesn’t have any unit tests.
  • That’s not my code. It’s not been code reviewed.
  • That’s not my code. It’s got duplicated logic.

And finally:

  • That’s my code. It can be read by humans as well as computers.
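If the magic numbers page needs illustrating, here’s a minimal sketch; the names and numbers are made up for the example, but you get the idea:

// That's not my code. What on earth is 86400?
if (elapsedSeconds > 86400)
{
    expireSession();
}

// That's my code. The intent has a name.
const int SECONDS_PER_DAY = 24 * 60 * 60;
if (elapsedSeconds > SECONDS_PER_DAY)
{
    expireSession();
}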

So, come on, get in the Christmas spirit: if you were to author That’s not my code, how would it read?

DSCM – A Move Back to Big Bang Integration

Used sensibly, the move away from centralised to distributed source control management offers more benefits than drawbacks. Used unwisely, it could take us back to the days of Big Bang Integration.

Branching Privately

Irrespective of the number of meetings attended or PowerPoint slides produced, developers communicate in one language, and one language only: the language of code. In many organisations trunk, or centrally hosted branches, give visibility of the work going on. This keeps everyone in the loop as to what is being developed, how it’s being done, and by whom.

Moving away from this workflow, towards teams working in local branches away from prying eyes, cuts this information flow. The development effort in these branches, now lacking exposure to anyone beyond the team working on it, could miss out on vital feedback from other team members; team members who, had the work been done centrally where they could see what was happening, could have given feedback. That feedback could have saved the team weeks or months of effort: the code already exists over here, or this team requires that module to do that as well, so don’t duplicate the effort. These issues won’t be discovered until the big bang: the merge back into the mainline.

Large Changesets

Branches, by their very nature, result in larger changesets being pushed back into the mainline. This can also be true when working on the main branch, but in a local repository: there can be a temptation to complete a full feature before pushing the changes to the mainline. Developers will still be committing their code in small changesets, but without that push upstream to the mainline many benefits are lost.

Committing a feature little by little has many benefits. I’ve already mentioned peer review when talking about branching. In addition, it allows regressions to be detected as early as possible, making them easier to locate. Putting code into the mainline in feature-sized chunks certainly makes it easier to track which changeset caused a regression, as there will be fewer of them; but finding which part of that feature-sized changeset caused the regression will not be so easy.
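To make that concrete, take Git as an example DSCM. If small changesets have been pushed to the mainline regularly, a binary search with git bisect can pinpoint the offending commit in a handful of steps; a sketch, with the tag name invented for the example:

git bisect start
git bisect bad HEAD        # the mainline currently shows the regression
git bisect good v1.2       # an older revision known to be good
# git checks out revisions for you to test; mark each one:
git bisect good            # or: git bisect bad
git bisect reset           # finish up once the first bad commit is named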

Enjoy responsibly

Distributed source control management systems give us great power, but remember: with great power comes great responsibility.

4 Steps to Painless SVN Branching

Doing some work that can’t be done in trunk in small increments? Then it’s time to branch.

Say you’re working on bug “12345 Wibble”…

Step 1: Create the branch

Create a branch of trunk, making a note in the commit message of the revision of trunk the branch is being created from:

svn copy -r 20000 \
    svn://svn/trunk \
    svn://svn/branches/trunk_wibble/ \
    -m "Bug 12345 Wibble -
Creating branch for feature work
svn copy -r 20000 svn://svn/trunk svn://svn/branches/trunk_wibble/"

There are now two types of commits you’ll be doing to this branch:

1 – Work for the feature
2 – Resyncing with trunk.

Step 2: Doing the feature work

The feature work should be tracked against a bug as normal:

svn commit -m "Bug 12345 Wibble
- Added Foo for Wibble"

Step 3: Merging changes from trunk into your branch

The commit messages for the merges from trunk are crucial for the bookkeeping of your branch. Above we branched at revision 20000; suppose trunk is now at revision 30000. To get those changes merged into your branch, sit in trunk_wibble, with no other local changes:

svn merge -r 20000:30000 svn://svn/trunk .

Once this is compiling, commit it:

svn commit -m "Bug 12345 Wibble
- svn merge -r 20000:30000 svn://svn/trunk ."

Next time you want to sync your branch with changes that have happened in trunk, all you need to do is look down the list of changesets to see what revision you last synced to.
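For example, something like this (using the branch URL from step 1) shows the most recent commit messages, including the sync bookkeeping ones:

svn log --limit 10 svn://svn/branches/trunk_wibble/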

Step 4: Merging back into trunk

Now your feature is done and dusted, it’s time to merge to trunk. You need to know 2 things here:

1 – What revision of your branch was last synced to trunk; typically this will be the last commit you did to your branch.
2 – What revision of trunk you last synced to; this will be in the commit message for that final sync commit.

Suppose these are 40000 and 50000.

Sit yourself in trunk at the revision you last synced to:

svn merge \
    svn://svn/trunk/@50000 \
    svn://svn/branches/trunk_wibble/@40000 .

Depending on how many changes you’ve made this may take a while. Be sure to give the changes a look over to check that they match what you think you’ve changed.
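Assuming you’re still sitting in your trunk working copy, a quick way to review the pending merge before committing:

svn status
svn diff | less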

svn commit -m \
"Bug 12345 Wibble
Merging feature work into trunk
svn merge svn://svn/trunk/@50000 svn://svn/branches/trunk_wibble/@40000 ."

And you’re done!

A final note about that last bit

I’ve seen a lot of confusion about this last step. What you’re not trying to do is merge each of your feature work changesets into trunk.

You’re just after a delta between trunk and your branch, so you can apply that to trunk to make it the same as your branch; that’s it.

So, what happened exactly when we did the merge back into trunk? That merge was really a diff between trunk and your branch, followed by the application of that patch. You could actually achieve a similar result by doing:

svn diff \
    svn://svn/trunk/@50000 \
    svn://svn/branches/trunk_wibble/@40000 > wibble.patch

And then applying the patch manually.

patch -p0 < wibble.patch

However, you’d end up with empty files for those that had been removed in your branch, and files that needed to be svn add’ed for those that had been added in your branch. Doing it the svn merge way does all the deleting and adding for you.

I Smell Duplication, Can You?

Want to know how to find duplicate code in your code base quickly and easily? Want to be able to sniff out the most pungent of code smells in double quick time? You’ve come to the right place.

Smells

If you’re not familiar with the idea of code smells, then be sure to check out Martin Fowler’s excellent book Refactoring: Improving the Design of Existing Code. Code smells are symptoms in your source code that can indicate problems; arguably the worst is code duplication, which violates the DRY principle.

The code maintenance issues are well known, so I won’t revisit them here. What I want to do is talk about a tool I found the other day that helps find duplicate code.

Copy Paste Detection (CPD) is a great little program that can detect duplicate code in a code base. It’s available under a BSD-style licence, shipped as part of the PMD static code analyzer for Java. Although PMD is targeted at Java, CPD works using string matching, so it can be used on any language. Java, JSP, C, C++, Fortran and PHP are supported out of the box, and it is also possible to add further languages.

How to use it

Running CPD is very simple:

./cpd.sh ~/path/to/source/

And that’s it. By default the output is in text format; this can be changed to XML or CSV. Example output from processing the JDK (reported to take only 4 seconds) can be seen here. The number of duplicate tokens required for code to be considered copy and pasted can also be configured; this defaults to 100.
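The defaults can be overridden on the command line. The flag names below are from the PMD documentation as I remember them, so double-check them against your version:

./cpd.sh --minimum-tokens 150 --language cpp --format xml --files ~/path/to/source/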

My findings

I had to increase the heap size available to Java to get the code base I’m working on parsed; it’s about a million lines of C/C++ code. The results were fascinating. Sure enough, copy and pasted code was found, comments and all. Worse still, there was code that had been copied and pasted but not quite kept in sync: in most cases, straight bugs.
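For reference, one way to raise the heap is to invoke the JVM directly rather than going via the wrapper script. This is only a sketch: the classpath and the main class name (net.sourceforge.pmd.cpd.CPD) are from the PMD distribution I was using, so treat them as assumptions:

java -Xmx1024m -cp "pmd/lib/*" net.sourceforge.pmd.cpd.CPD \
    --minimum-tokens 100 --files ~/path/to/source/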

The only real false positives I found were with auto-generated code. By default CPD recursively parses the directories you supply on the command line (you can supply as many as you like), without being able to ignore certain files (e.g. *_autogen.cpp). As these files are produced as part of the build process, I’m now running CPD on a clean checkout, without any build artifacts lying about.

What next?

As always with these things, I’m left with a bunch of open questions:

I can see this tool offering some real value, but how do I integrate it into my team’s workflow? It’s a command-line tool only, so there is no administration interface to allow the results of various runs to be compared and analysed.

There are plenty of other static code analyzers that do much more than just check for duplication; what are people’s experiences with these?

Are You Stuck In The Maintenance Programmer Mindset?

We’re all very good at pointing out what’s wrong with code. There are even websites dedicated to exposing and ridiculing bad code and bad design, such as The Daily WTF. We all encounter code that isn’t “right” on a daily basis, but how often do we do something about it and let our actions speak louder than our words?

It’s very easy to point out what’s wrong with code: pointing out how the design and implementation of the code we’re working with is making our lives hard, talking with fellow programmers about how our hands are tied and how we “did what we could” given that it “wasn’t our code”.

This all too familiar discussion demonstrates what I call the maintenance programmer mindset.

Are you, too, stuck in this mindset? It’s very easy to fall into.

I have to remind myself regularly to break out of this mindset; to be bolder, to have the confidence to change the direction the code is heading. To make a clear statement about the problem domain by introducing a new class. To factor out that bit of repeated code into its own method. To break that monolithic module into two more focused modules to regain control over dependencies. To grab the wheel and make the kind of changes that steer the design down the correct path. The requirements on an active software project change constantly, so the design must also.

It’s quite easy to see if someone is stuck in this mindset. The following is a list of the actions that make a statement about the direction a code base is heading, in order of impact; they are unlikely to be performed by someone stuck in this mindset.

  • Adding a new module/package
  • Adding a new class
  • Adding a new source file
  • Adding a new function

If you’re not doing most of the above on a regular basis, irrespective of whether you’re maintaining software or doing green-field development, you’re programming with the maintenance programmer’s mindset.

Brain Teaser: Streaming An Enumeration In C++

Streaming an enumeration in C++, what could be easier? Can you spot the bug in the following code?

typedef enum {
    SEASON_UNDEF,
    SEASON_SUMMER,
    SEASON_AUTUMN,
    SEASON_WINTER,
    SEASON_SPRING,
    SEASON_NUM_TYPES,
} SEASON;
 
 
std::ostream& operator<<(std::ostream& rOs, 
                         const SEASON & rRhs)
{
    switch (rRhs)
    {
    case SEASON_UNDEF:
        rOs << "SEASON_UNDEF";
        break;
 
    case SEASON_SUMMER:
        rOs << "SEASON_SUMMER";
        break;
 
    case SEASON_AUTUMN:
        rOs << "SEASON_AUTUMN";
        break;
 
    case SEASON_WINTER:
        rOs << "SEASON_WINTER";
        break;
 
    case SEASON_SPRING:
        rOs << "SEASON_SPRING";
        break;
 
    default:
        rOs << "Unknown SEASON: " << rRhs;
        break;
    }
 
    return rOs;
}

Scroll down for the answer….

 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
Nearly there… 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

You’ve got it, it’s the default case. This makes a recursive call which will never terminate. Now, how much stack space do I have….

rOs << "Unknown SEASON: " << rRhs; // Recusive call!

Overloading, Is It Really Worth It?

For a long time I’ve used overloading, but just recently I’ve been questioning its use.

Readability

Looking at the call site when invoking an overloaded function, it’s not immediately obvious which method is being called.

find("code buddy");
find(C_PLUS_PLUS);

After a quick look through the overloading options available I can always work out which one will be called, but why should I have to? If code is making me think more than is absolutely necessary then, for my money, there is an issue with it. Wouldn’t the previous code be clearer if it was written like this:

findByName("code buddy");
findByLanguage(C_PLUS_PLUS);

Tags

My poor tags get confused! I’m working in a C++ code base on a Linux OS. Rightly or wrongly, I’m using Emacs with cscope as my chosen editing/tagging solution. For example, if I’m working with the Visitor pattern, which uses overloading for its visit method, a search for:

void visit(const NodeA& node);

also finds me:

void visit(const NodeB& node);
void visit(const NodeC& node);

This may be a shortcoming of cscope; however, I’ve seen other tagging solutions, notably the one used in SlickEdit, fall at the same hurdle. I’d much rather see this visitor interface written like this:

virtual void visitNodeA(const NodeA& node) = 0;
virtual void visitNodeB(const NodeB& node) = 0;
virtual void visitNodeC(const NodeC& node) = 0;

Templates

It may be necessary to use overloading in templatised code, e.g.:

template<typename Node>
void doSomeStuffAndVisit(IVisitor& visitor,
                         const Node& node)
{
    // some code
    visitor.visit(node);
    // some more code
}

Constructors

Overloading of constructors can be unavoidable.

// Creates a Foo from the xml in file file
Foo::Foo(const std::string& file);
 
// Creates a Foo from the xml node root
Foo::Foo(const xmlNode& root);

Unless of course you’re using a factory method, in which case there is no need to overload like this:

// Creates a Foo from the xml in file file
Foo* FooFactory::createFoo(const std::string& file);
 
// Creates a Foo from the xml node root
Foo* FooFactory::createFoo(const xmlNode& root);

The comments really are the giveaway here; taking the comments away, things become so much more readable:

Foo* FooFactory::createFooFromFile(const std::string& file);
 
Foo* FooFactory::createFooFromXMLNode(const xmlNode& root);

Conclusion

These days, unless I’m working with code that uses templates, I’m avoiding overloading; the cons far outweigh the pros.