Sunday, November 28, 2010

On Heroes and Interviews

When I started my career in software development, “heroic effort” was a compliment.  It meant a developer went above and beyond normal hours and “burned the midnight oil” to get a task done.  It meant he or she put that work at the top of the list and let other obligations and responsibilities fall by the wayside in order to get results.  It was indicative of a “can do” attitude.
For years now, heroic effort in the context of software has not been a compliment.  Or at the least it’s a back-handed compliment.  It usually comes with suggestions of poor planning, poorly designed code, and an immature development process.  It leads to poor decisions, which lead to poor quality.  That’s one reason not to use the word “heroic” to describe software development activities.  Another is the fact that with a war going on, we have real heroes in our armed forces.  They are involved in life-or-death struggles; I don’t want to cheapen the definition of a hero.
But there are software development activities we ought to celebrate.  The word that comes to my mind is noble.  Webster defines noble as “possessing, characterized by, or arising from superiority of mind or character, or of ideals or morals.” Examples I see of noble behavior:
•  The software developer who doesn’t settle for mediocrity, but strives to be a craftsman.
•  The tester in waterfall environments who seems to always find her schedule squeezed at the end of projects but doesn’t let frustration affect the quality of her work.
•  The developer who always leaves the code better than he found it, but gets his work done on time.
•  The introverted tester who regularly steps out of her comfort zone to help her scrum team become self-organizing and effective.
The difference is attitude.  We all need to make big pushes sometimes and drive projects or programs to completion.  But when you are surrounded by noble people who are driven by the desire to be excellent and do excellent work, those big, last-minute pushes, which never produce the highest quality of work, become much more the exception than the norm. 
This is why technical competence is so important to assess in an interview.  Not because I care about coding trivia, not because the information isn’t readily available online, and not because it’s a measure of IQ.  It is important to measure technical competence because it is one indication of a person’s attitude towards excellence.  It is one measure of whether a person is limited only by their own sizeable potential, or by their lack of passion.

Wednesday, November 17, 2010

BIBS

The following is a true story.  Anna Maravelas of TheraRising.com writes about it in her book How to Reduce Workplace Conflict and Stress.

A man is late for a meeting and seems to be hitting every red light on his way to his appointment.  He gets to yet another red light and is behind a woman in a sedan.  While they are waiting for the light to change, she turns around and begins foraging in the back seat.  Perhaps she is looking for lipstick or a CD or who knows what.  But one thing is certain; she is oblivious to the light.  The light turns green.  The man taps his horn.  She ignores him and keeps searching the back seat.  He is now more aggressive on his horn.  She not only ignores him, she gets out of the car, walks to the back door, opens it, and continues foraging!  He is outraged.  He is now leaning on the horn, and has rolled down his window so he can tell her what he thinks of her lack of consideration.  Eventually, without any acknowledgement, she returns to her front seat and drives off.  Another self-absorbed, oblivious driver.

That night, the woman writes a letter to the local newspaper.  She wants to tell her side of the story.  Her story is this:  Today I was sitting at a red light when I sensed something was wrong.  I turned around and saw that my toddler son in the back seat was choking on something.  Frantically I turned to help but could not reach him.  I jumped out of the car, threw open the back door and – thank God – I was able to dislodge the thing he was choking on.  All of this took a minute or two.  I was fighting to save my baby’s life.  There was a man behind me honking his horn and shouting profanities at me.  He was indignant because I might cause him to sit through a two-minute light again.  I could not believe how rude he was.

BIBS stands for Baby In the Back Seat.  It is an acronym to remember this story.  It is an acronym to remember that we have a strong tendency to attribute negative motivation when people behave in ways we do not like, or do not understand.  The stranger does not return our hello greeting because he is arrogant.  The supervisor is overly critical because she is petty and controlling.  The babysitter does not show up because she is a typical irresponsible teenager.  We don’t know why this behavior is happening.  But we tend to invent reasons, and the reasons do not show grace.  How strong is this tendency?  Even when you are aware of it you will still catch yourself doing it. I speak from experience.

Testers and developers have jobs that are converging, but their approaches still cause conflict.  The tester logged multiple low-priority defects because he is anal-retentive, or lacks judgment, or is trying to make development look bad.  The developer gave a demo of a new feature without inviting the tester because he does not think about testing, or does not think the tester adds any value.  This thinking makes the problem worse.  BIBS reminds us to deal with the problem and not the person.  Testers and developers don’t always understand each other well – the pressures, the thought processes, the motivation.  As leaders, we can use BIBS to remind ourselves and those we influence to show each other grace and focus on the why without vilifying the who.

Sunday, October 31, 2010

An Inspired Pairing

I’ve been traveling a lot lately and that has given me the opportunity to knock off some of the books that have been sitting on my to-read list for too long.  Two of those books, which I read back-to-back, go together like Gruyere and Chardonnay.  They are great individually, but together they are a truly inspired pairing.

The first is The Visual Display of Quantitative Information by Edward R. Tufte.  This book has received gushing reviews since it was first published, and deservedly so.  My favorite review quote was from the Boston Globe:  "A visual Strunk and White", referring to The Elements of Style, the classic on how to write clearly and concisely.  That classic is self-consistent; it is clear and concise.  The same can be said of The Visual Display of Quantitative Information.  It holds to its own principles in how it presents data.

This book motivates me to be more contemplative of the charts and graphs I produce to convey information.  Example after example shows ways the display of data can add to or take away from the message the data has for its readers.  Excel gives you some tools for aspiring to the former, but makes it oh-so-easy to achieve the latter.

The second book is Presentation Zen: Simple Ideas on Presentation Design and Delivery by Garr Reynolds.  This book seeks to inspire us to deliver better presentations.  I’ve sat through hundreds of PowerPoint presentations.  Not many were memorable.  But some were, and those that were followed the guidelines in this book.  It’s about crafting your message, simplifying, using space effectively, and not making yourself obsolete by having the slides stand alone.  It is a convicting book, because boring presentations are the de facto result when we use built-in templates, cut and paste from Word documents and Excel spreadsheets, and throw in some images we find from Google Images.  We can do better, and we must do better.  We can serve our coworkers by better valuing their time, and by making them delighted to have been at our presentations.

Each of these books is wonderful.  But having read them back to back by serendipity, I realize they are an inspired pairing.  The one guides us to presenting data in a way that is clear and compelling.  The other guides us in using that data in clear and compelling presentations.  If we take these lessons to heart and apply them in our creations, we may not save the world from death by PowerPoint, but we can avoid adding to the carnage.

Monday, October 18, 2010

Making Microsoft Test Manager Better

There are three things (two features and a datasheet) that, if Microsoft added them to Test Manager, would push me past my hesitation to move our defect and test case management from our existing tool to TFS and TM.

Currently, we are using Rally across the board: requirements, defect management, test case management.  Having all of our data in one unified tool has some definite advantages, and makes traceability easy.  But inadequate customization, poor support for required fields (they are either required or not, regardless of the state a defect is in) and an inability to fail a specific test step are just three examples of why I am open to enduring the pain of transitioning to another tool.

But I want three things from Microsoft.

First, the ability to version test cases.  I was surprised, maybe even shocked, that this was missing.  After all, TFS is the base upon which TM stands.  And TFS is all about versioning.  Like most commercial software companies that deliver software (i.e., not hosted), we have multiple parallel development streams being worked on.  As requirements change, as defects are discovered and as gaps are found, our test cases change.  I do not want to have to duplicate my test cases for each stream.  But I need to be able to modify a test case for one stream and still execute the previous “version” of that test case for another stream.  Test case versioning is the obvious way to handle this well, and TM doesn’t have it.
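
To make the idea concrete, here is a minimal sketch, in Python with entirely hypothetical names (nothing like this exists in TM today), of how stream-aware versioning could let one logical test case carry different revisions for different streams without duplication:

    from dataclasses import dataclass, field

    @dataclass
    class TestCaseVersion:
        stream: str    # development stream this revision belongs to
        revision: int  # increases with each edit to this test case
        steps: list    # the test steps for this revision

    @dataclass
    class TestCase:
        test_id: str
        versions: list = field(default_factory=list)

        def save(self, stream, steps):
            # Record a new revision without disturbing other streams.
            self.versions.append(
                TestCaseVersion(stream, len(self.versions) + 1, steps))

        def resolve(self, stream, default_stream="main"):
            # Use the newest revision for the requested stream,
            # falling back to the default stream if none exists.
            for v in reversed(self.versions):
                if v.stream == stream:
                    return v
            return next(v for v in reversed(self.versions)
                        if v.stream == default_stream)

    # One logical test case, two streams, zero duplication:
    tc = TestCase("TC-101")
    tc.save("main", ["log in", "create case", "verify banner"])
    tc.save("release-2", ["log in", "create case", "verify new banner"])
    print(tc.resolve("release-2").steps)  # release-2 sees its own revision
    print(tc.resolve("main").steps)       # main still runs the original

The point is the fallback: a stream carries its own revision only once it diverges; until then it inherits the default.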

Second, and this is not a feature, I want hard data on the system impact of executing test cases within TM.  It has some impressive features.  It will capture a video of the application execution, which can be automatically submitted with defects.  It will keep track of what code is executed when a test case (manual or automated) is run so that later, when a change set is introduced, it can recommend the test cases which are affected and therefore need to be rerun.  It will collect IntelliTrace data to allow a developer to step through the code as it executed when a defect was discovered.  This is great stuff.  It also sounds like the Heisenberg principle on steroids.  What is the impact to the system?  I cannot get any data when I ask Microsoft this, only vague responses that “I’ve never seen it be an issue.”  That’s not good enough.
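
Absent vendor numbers, you can gather your own: time a representative suite with the data collection turned on and off, and compare.  A minimal sketch of that comparison in Python (run_suite is a placeholder for kicking off your own suite under a given diagnostics configuration; none of this is a TM API):

    import time

    def run_suite(diagnostics_enabled):
        # Placeholder: execute a representative set of test cases with
        # video, IntelliTrace and test-impact collection on or off.
        ...

    def average_duration(diagnostics_enabled, runs=5):
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            run_suite(diagnostics_enabled)
            samples.append(time.perf_counter() - start)
        return sum(samples) / len(samples)

    baseline = average_duration(diagnostics_enabled=False)
    loaded = average_duration(diagnostics_enabled=True)
    print(f"overhead: {100 * (loaded - baseline) / baseline:.1f}%")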

The third want is related to the feature I mentioned above.  I can execute manual and automated tests and tell TM to keep track of the code that gets executed.  Later, when development checks in a change set, I can know which test cases exercised code that was changed.  This is a great feature.  So I asked the following question at a Microsoft presentation:

Since you keep track of what code gets executed when I run a test case, can I run through all of my test cases and then have TM tell me what code never got executed?

They have all the data needed for a very powerful code coverage feature, but the answer, sadly, is no; the system will not do that.  But it makes too much sense not to add it in the future.
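
The frustrating part is how little is missing: once you have a map from test case to executed code, the gap analysis is simple set arithmetic.  A sketch, assuming you could export that impact data (today you cannot; the names below are invented for illustration):

    # Hypothetical impact data: test case -> methods it executed.
    impact = {
        "TC-101": {"Login.Validate", "Case.Create"},
        "TC-102": {"Login.Validate", "Patient.Create"},
        "TC-103": {"Case.Filter", "Case.Create"},
    }

    # Every method in the build (from symbols or static analysis).
    all_methods = {
        "Login.Validate", "Case.Create", "Case.Filter",
        "Patient.Create", "Patient.Merge", "Report.Export",
    }

    covered = set().union(*impact.values())
    never_executed = all_methods - covered
    print(sorted(never_executed))  # ['Patient.Merge', 'Report.Export']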

Microsoft’s focus on testing is exciting and long overdue.  The team has delivered some excellent features, and the defect handling in particular is very powerful.  But changing tools is sometimes painful, and usually a hard sell internally.  The addition of these three things would make me pull the trigger on the change.

Friday, October 1, 2010

Using the Intranet to Implement Shared Steps

One of the really useful features in Microsoft Test Manager is shared steps.  It allows you to specify steps which are shared across multiple test cases, give a name to those steps, and reference them in your test cases. 

Wherever the shared step is referenced, all of the associated steps follow.  This can be used for manual test cases, automated test cases, or a mix, where the shared steps are recorded and referenced in a manual test case.  Microsoft’s example is recording the steps to log in to your application, and then including that automated snippet in your test cases.
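
Conceptually this is reference-and-expand.  Here is my own illustration in Python (not TM’s internal model): shared steps live in one place, test cases hold references to them, and expansion inlines the steps at execution time, so editing the shared block updates every test case that references it:

    shared_steps = {
        "login": [
            "Launch the application",
            "Enter valid credentials",
            "Verify the home screen appears",
        ],
    }

    test_case = [
        ("shared", "login"),  # a reference, not a copy
        ("step", "Create a new case"),
        ("step", "Verify the case appears in the list"),
    ]

    def expand(test_case, shared_steps):
        # Inline each shared-step reference so the tester sees a flat script.
        for kind, value in test_case:
            if kind == "shared":
                yield from shared_steps[value]
            else:
                yield value

    for step in expand(test_case, shared_steps):
        print(step)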

We have not yet embraced Microsoft Test Manager (I’ll give you some reasons in an upcoming post), although we are now on Visual Studio 2010.  As far as sharing steps, we are doing something very simple, but for us, very beneficial.

We have created wiki posts on our intranet site which describe the steps needed to perform our most common tasks.  These include logging in to the application, creating a new case, creating a new patient, performing filtering and organization activities on our cases, etc.  Each wiki post describes these steps in enough detail so it is clear how to perform the actions and what result is expected.

Then, in our manual test cases, we include a one-liner and link it to the associated wiki page.  The test case may have a step which states “Filter Pile on last name starting with ‘D’”.  That is linked to the page which describes in detail how to do that.  When you are executing the test and you come to that shared step, if you know how to perform it, do it.  If you need more information, simply click on the link and the wiki page will give you the details you need.  The benefits:
  •  Common activities are centrally located and thus easier to maintain
  •  Test cases are shorter, as detail is abstracted out
  •  Testers needing a knowledge refresh can get the detail they need without any searching
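
Our wiki approach swaps inlining for linking.  A sketch of the idea (the URLs and helper name are made up for illustration): each one-liner step optionally carries a wiki URL, and rendering the test case surfaces the link only where extra detail exists:

    steps = [
        ("Log in to the application", "http://intranet/wiki/login"),
        ("Filter Pile on last name starting with 'D'",
         "http://intranet/wiki/filter-pile"),
        ("Verify only 'D' cases are listed", None),  # needs no wiki detail
    ]

    def render(steps):
        # Print each one-liner, pointing at the wiki page when one exists.
        for text, url in steps:
            print(f"- {text}" + (f"  (details: {url})" if url else ""))

    render(steps)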

We could record these steps, although we are not doing that now.  This is a simple technique which we are leveraging to improve the readability, supportability, and usability of our test cases.

Thursday, September 30, 2010

Star West 2010

I just finished my second ever test conference: Star West 2010 in (hot, rainy!) San Diego.  I have attended many development conferences over the last 2+ decades as a developer or development manager, and have attended conferences which include test sessions (Agile 2010 for instance).  But this was only my second bona fide, testing-centric conference.  I brought two of our testers with me.  It was the first conference of any kind for either of them.

The conference did not get off to a good start for us.  The content of the first-day sessions on Monday was disappointing for me.  Tuesday was significantly better, and Wednesday and Thursday were a mix.

When we were disappointed, it was often due to the content being too simplistic for us.  It became evident that we are doing much better at adopting agile at our company than the typical audience member the show is geared towards.  We came to the conference ready to discuss the myriad testing challenges we see at our company as we relentlessly try to improve ourselves.  We came away thinking “Gee, we’re doing pretty well!”  That kind of thinking is a recipe for mediocrity, and I will not let us think that way for long.  Presentation skills also varied widely; some presentations were fresh, while others seemed stale.

When we were engaged, it was because a presenter went deeper into agile (which we were hungry for), delivered a very practical session (such as one on free or cheap testing tools), or strove to break through the same old testing thinking and present something challenging.

But here is what was most exciting to me about the conference.  I emailed the conference leadership at SQE after my disappointing first day and told them about my experience and how I thought they could improve the conference.  They thanked me for the email, which is what I had hoped for – that they took it in the spirit of honest, helpful feedback I intended.  But they then modeled humility, continuous improvement and strong leadership by asking to meet with me to hear my thoughts in person.  That is being intentional about improvement, and it speaks volumes about the leadership of the conference.

Here’s what I think: 80% of the conference attendees are first-timers (SQE’s figure).  My anecdotal experience (supported by those with more experience than I) is that in general (exceptions abound!) testers are not as proactive in staying current and providing thought leadership to their domain as developers are to theirs.  The content of Star West is meeting the attendees at the level they want to be met.  The problem is that testers want to be met at too shallow a level.  Don’t come to a conference to learn what you can read in a book.  The best way to improve Star West is for attendees to improve themselves and demand deeper content.  There was some good content at Star West and I am glad I went, and I am glad I brought two team members.  But we could go much deeper, and energize and empower testers to be change agents in their organizations.  That would make for exciting future Star West conferences.