The Way We Work
May 11, 2006 by oDesk

At oDesk, we work with offshore teams as a matter of course. The issue we continually face is how to manage and measure the performance of a remote team. Sure, screenshots give you a sense of what people are working on, but it is hard to talk to their co-workers, peers, etc.

With most local teams you can set a series of milestones and checkpoints and manage to those deliverables. With a remote team, that becomes problematic due to the distance and lack of regular visibility into their work. Many software development shops keep a staff of programmers and QA engineers onsite to review all code checked in and all bugs reported by the remote team. While this level of management works, the reality is that many small companies that wish to offshore do not have programming expertise onsite.

To make the most of these working relationships, here are a few of the things we believe are key to a successful outcome.

Communication:

First and foremost is communication. If you are not constantly in the loop with your offshore team, you are out of the loop. And with a team 8k-10k miles away, being out of the loop is not good for business.

At oDesk, we utilize several modes of communication: email, IM chat, Skype voice calls, and Skype group chat. We keep a Skype group chat open and active during the hours we share with our offshore QA team as an open line of communication. It is amazing how freely information began flowing once we started doing this. I can come in, review the chat from the previous 2-3 hours, and see any issues and how (or whether) they were resolved. Our teams also use Skype voice chat several hours per day; in fact, there have been active Skype voice calls lasting several hours.

Assigning Tasks:

Having clearly defined tasks is key to managing remote teams. We assign tasks via Bugzilla for writing and updating automated testing scripts, regression testing, and back-end API testing. These tasks are then broken down into their components and assigned to individual QA engineers. Once a task is complete, it is marked as resolved and, if applicable, checked into SVN.

Metrics:

We have also determined that the standard metrics around test case writing, defect reporting and management, and updates to the automated testing scripts offer us a good place to start.

We track metrics such as:

  • Defects logged per hour of testing
  • Test Cases written/modified per week
  • Ratio of valid vs. invalid Defects

Using these metrics gives us a peek not only into the quantity of work but also, by auditing defects and test cases, its quality.

We also keep track of test case files by requiring them to be checked into SVN. This allows us to track progress on a file from revision to revision. It also gives us the ability to do forensic work if we think we missed an obvious bug, or if a feature was added that was not covered by a test case.
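As a rough sketch, the weekly roll-up of these metrics is simple arithmetic. The function and field names below are purely illustrative (our actual numbers come out of Bugzilla queries, not this code), but they show how the three metrics relate to the raw counts:

```python
# Hypothetical weekly QA metrics calculation. The data shapes and
# field names here are illustrative, not an actual Bugzilla schema.

def qa_metrics(defects, hours_tested, test_cases_touched):
    """defects: list of dicts, each with a boolean 'valid' flag."""
    valid = sum(1 for d in defects if d["valid"])
    invalid = len(defects) - valid
    return {
        # Defects logged per hour of testing
        "defects_per_hour": len(defects) / hours_tested,
        # Test cases written/modified this week
        "test_cases_per_week": test_cases_touched,
        # Ratio of valid to invalid defects (quality signal)
        "valid_to_invalid_ratio": valid / invalid if invalid else float("inf"),
    }

# Example week: 12 defects logged over 40 hours of testing,
# 25 test cases written or modified, 9 of the 12 defects valid.
week = qa_metrics(
    defects=[{"valid": True}] * 9 + [{"valid": False}] * 3,
    hours_tested=40,
    test_cases_touched=25,
)
print(week["defects_per_hour"])        # 0.3
print(week["valid_to_invalid_ratio"])  # 3.0
```

A high defects-per-hour number with a low valid-to-invalid ratio is a red flag worth auditing; the raw counts alone can be misleading.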

Inclusion:

We include representatives from our offshore teams in our local Engineering meetings. This gives them the chance to interact with the team, make suggestions, and participate in the decision making process. This has the effect of making them a real part of the team and it shows in how they respond to the job at hand.

Conclusion:

These are just a few of the things we do, but there is one underlying theme: visibility breeds accountability. It also lets people know you are not just interested in getting your project done, but also in helping them be more successful.

For more information on this and other QA and project management topics, check out:

http://www.stickyminds.com

http://qaforums.com