Developing on Staxmanade

TFS bisect the manual way (When was that bug introduced?)

I’d like to share a powerful workflow I originally discovered through git and its git-bisect command, and how I’ve leveraged the idea when using TFS.

What is a bisect on your source history?

Git’s bisect command is extremely powerful and I won’t be covering it here. However, git describes the feature as a way to:
Find by binary search the change that introduced a bug

Why do I need to look through source code history to find why a bug was introduced?

It’s true that many bugs are so basic that once you hear about the bug you immediately understand where it is, why it’s broken, and how to fix it. In that scenario this approach is not something you need.
However, if you know a bug was introduced sometime in the past but are not sure when or how, I think we could all agree that doing a binary search through the history of your code’s changes is a pretty good approach to finding the specific change-set that introduced it. Once you have a handle on the specific code change that was made, it becomes much easier to understand what changed, track down the reason the bug was introduced, and fix it.

High level steps/concept:

  1. First you should have discovered a reproducible bug
  2. Next we have to find a commit in the past where we know the bug does not exist. (Say you know that 3 weeks ago, this bug didn’t exist.)
  3. Now, from that “good” commit we do a binary search through source history, noting the good/bad state at each commit we test and continuing the search until we’ve found the commit where the bug was first introduced.
  4. Analyze the commit until you understand what and how the bug was introduced and fix it.
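The steps above are just a binary search over your changeset history, with you playing the role of the test function. Here’s a minimal sketch of that idea (the `is_bad` callback is a stand-in; in practice it’s you checking out the version and running your repro steps):

```python
# A sketch of the bisect idea: binary-search an ordered list of changeset
# IDs for the first "bad" one, given a way to test each checkout.

def bisect(changesets, is_bad):
    """Return the first changeset in `changesets` where `is_bad` is True.

    Assumes `changesets` is ordered oldest-to-newest, the first entry is
    known good, and the last entry is known bad.
    """
    lo, hi = 0, len(changesets) - 1  # lo = known good, hi = known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(changesets[mid]):
            hi = mid  # bug already present: search the older half
        else:
            lo = mid  # still good: search the newer half
    return changesets[hi]
```

For example, with the (hypothetical) changesets 13 through 79 used later in this post, and a bug first appearing in changeset 46, `bisect(list(range(13, 80)), lambda c: c >= 46)` returns `46`.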

One manual approach to TFS bisect.

There is no built-in bisect feature in TFS (that I’m aware of), which leaves us with some manual bookkeeping that we wouldn’t have to do if we were using git.
Side Note: If you’re familiar with git, I’d recommend just using git-tfs or the new git-tf tool to clone your TFS repo and use git-bisect to accomplish these steps.
Let’s assume you can find a commit in the past that you know doesn’t have the bug.
Load up PowerShell and CD into the root of your project directory. Execute a tf.exe command to pull a string output of your history into the clipboard. We’ll leverage this in our bookkeeping.
I’m using PowerShell and have tf.exe on my %PATH%.
>tf history ./* /recursive /noprompt | clip
Notice the pipe to the ‘clip’ command at the end of the TF call. This places the output of one command into the clipboard.
Let’s say the above command places the following into our clipboard.

Take the output of the command (that is now in your clipboard) and paste it into Excel (or notepad) wherever you want to keep track of your work.
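If you’d rather script the bookkeeping than paste into a spreadsheet, the history output can be parsed into a simple list of changeset IDs. This is only a sketch: it assumes the default `/noprompt` column layout with the numeric changeset ID first on each data row (the sample text below is made up), so treat the format as an assumption and adjust for your tf.exe version.

```python
def parse_history(text):
    """Extract changeset IDs from `tf history /noprompt`-style output.

    Assumes each data row starts with a numeric changeset ID; header and
    separator rows are skipped. Returns the IDs oldest-first.
    """
    ids = []
    for line in text.splitlines():
        first = line.split(None, 1)[0] if line.strip() else ""
        if first.isdigit():
            ids.append(int(first))
    return sorted(ids)

# Hypothetical sample of what tf.exe might emit:
sample = """\
Changeset User   Date       Comment
--------- ------ ---------- -----------------
79        jsmith 2011-01-05 Styling tweaks
46        jsmith 2010-12-20 Refactor billing
13        jdoe   2010-12-01 Initial import
"""
```

Here `parse_history(sample)` yields `[13, 46, 79]`, which gives you the ordered list to bisect over.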
We know that at commit ID #13 the bug did not exist. Let’s mark it as ‘good’.
Now we start our binary search through the different commits to find our bug.
Find a midway commit between this commit (#13) and the most recent commit (#79).
You don’t have to be all mathematical about the binary search; I tend to just eyeball the ‘middle’ and go from there. But you’re more than welcome to execute the binary search perfectly.
Now use your TFS tools to checkout this specific version. In this case we’ll checkout commit #46.
I tend to prefer the command line to check out the specific version as it’s easier to repeat these steps with commands and we already have the command open from earlier.
>tf get ./* /recursive /force /overwrite /version:46
Or you can use the GUI to get a specific version.
With version #46 checked out, we run our tests and find that the bug exists here. Mark it as ‘bad’ to signify the bug is here.
Now we can continue our binary search between commit 13 and 46 until we narrow down the exact commit where the bug first shows up.
As you can see by the numbers to the left in the screenshot above, it took us 5 checkouts to find the commit where the bug was introduced.
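Five checkouts is in line with what binary search predicts: the number of checkouts grows with the logarithm of the range size, so even the 66 candidate changesets between #13 and #79 need at most about 7. A quick sanity check:

```python
import math

def worst_case_checkouts(n):
    """Worst-case checkouts needed to bisect n candidate changesets."""
    return math.ceil(math.log2(n))

# Range from known-good #13 to known-bad #79: 79 - 13 = 66 candidates.
# worst_case_checkouts(66) -> 7
```

So even eyeballing the middle, you should land on the offending changeset in a handful of checkouts rather than dozens.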
Now the rest is up to you. I tend to spend time looking at the diff and understanding why the specific commit introduces the bug. If you keep the size of your regular commits small then it tends to be pretty easy to understand why the bug was introduced and how to fix it.
Don’t forget to ‘get latest’ before you try to do much work so you’re not stuck with your source code way back in time.

These steps should be automated.

It’s true the bookkeeping should be done for us by a tool, and in fact I started writing a PowerShell implementation of this, but never finished and didn’t find it worth my time. The manual approach works well, and it’s not something I have to use often. However, I did find someone who’s written a tool that looks promising.

Happy bug hunting.

Development Environment Merge/Compare Tools Setup in Visual Studio/TFS

I’m writing this post more as a reminder to myself when I need to setup my development environment again. In the past I have usually leveraged Google to search Keith Craig’s blogs and pieced the information together each time.

In this post I will outline the details I need from the two blog posts Keith wrote, and how I use that information when setting up my development environment for custom diff/merge tooling with Visual Studio and Team Foundation Server. I’m giving both the text version for copy/paste and a screenshot of each so it’s clear how each is used.

First you will need to install the tools listed before going into configuring the setup of the options.
  1. DiffMerge
  2. WinMerge
Next you need to open the TFS “Configure Tool” dialog from within Visual Studio.

Go to Tools –> Options –> Source Control –> Visual Studio Team Foundation Server –> Configure User Tools.


Now you’re ready to configure each tool as outlined below.

Merge tool - DiffMerge

How to integrate with VS

My setup options for VS:

Extension: .*
Operation: Merge
      x64 default install path - C:\Program Files (x86)\DiffMerge\DiffMerge.exe
      x86 default install path - C:\Program Files\DiffMerge\DiffMerge.exe
Arguments: /title1=%6 /title2=%8 /title3=%7 /result=%4 %1 %3 %2

Compare tool - WinMerge

How to integrate with VS

My setup options for VS:

Extension: .*
Operation: Compare

      x64 default install path - C:\Program Files (x86)\WinMerge\WinMergeU.exe
      x86 default install path - C:\Program Files\WinMerge\WinMergeU.exe
Arguments: /ub /dl %6 /dr %7 %1 %2 -e



James Manning
FWIW, you can see the arguments for other tools here:

I love KDiff3 :)

Branch-Per-Feature with Team Foundation Server (TFS) – Part 3 – Lessons Learned

Lessons learned by doing Branch-Per-Feature with Team Foundation Server.

Branch-Per-Feature with Team Foundation Server (TFS) Series Links

  1. How we got here…

  2. Kanban Stages

  3. Lessons Learned

In this post I’ll outline several of the issues/hiccups/features we found while attempting to apply Branch-Per-Feature with TFS.

The 260-character path limit.

One of the first obstacles we ran into when attempting Branch-Per-Feature with our TFS was the 260-character path limit (you can read more about it here).

The largest offenders of this were artifacts added to a project as a result of doing an “Add Service Reference”. This feature created file names with the entire namespace in the file path. The way we got around this was the T4 replacement for "Add Service Reference", which helped keep some of the longer file paths shorter in our Silverlight projects. However, it still rears its ugly head when we create a new branch and give it a descriptive name that’s too long.

Which brings me to the next hiccup we ran into.

Don’t RENAME a newly created branch. Delete it and re-create it with new name.

After a branch was created, if we decided the name for the branch wasn’t good enough (either it causes file path length issues, or its description isn’t clear enough), DON’T RENAME THE NEW BRANCH. Instead, choose to delete and re-create it. Clearly this has to be caught before commits are made to the new branch.

Why is this an issue?

In TFS, when you follow the simple steps to merge a feature from a branch into the trunk, you get to a point where all the changes made in the branch are checked out and staged to be merged into the trunk in your development environment. However, when (or if) you’ve applied a rename to the branch at some stage in its lifetime, you don’t get a nice pretty list of just the files that changed and are ready to be checked in; instead you get every file in the branch as though it were changed at some point in time. Sadly, this is usually not the case, which is why I said earlier to catch the problem as soon as possible.

One of the great benefits of the branch/merge strategy is the final merge into the trunk is typically all changes required for a particular feature. When you have to go back to grapple some source control history debugging, it’s much easier to detect large changes from branch merges than sifting through tens of check-ins per file.

After the feature is complete and you start the steps required to merge the feature into the trunk, typically you only see the files that have changed get checked out and ready to be merged into the trunk. However, when a rename occurs on the branch it somehow tags every item as though it were changed. So the Merge back into the trunk ends up looking like the entire project changed. This makes the source diffing extremely difficult as I described in the Tester Pass 1 step in our kanban steps.

Can’t easily merge between sibling branches or grandchild branches (or at all; I didn’t push hard enough to find out)

Another issue we’ve come across (which hasn’t roadblocked us too badly) was the inability to merge between two different branches that stemmed from the same trunk, or to merge a grandchild branch into the grandparent (bypassing the child/parent).

A specific scenario we ran into was when Feature A was under development on a branch, and a developer was ready to start working on Feature B. Feature B had a dependency on some of the changes that had taken place in Feature A, however we wanted to deploy Feature A before Feature B was complete. As an experiment we thought we would just create Feature B’s branch straight from Feature A’s branch, however what this would have left us with when Feature A was merged into the Trunk was Feature B two levels away from the Trunk.

Although TFS allows this scenario, any changes to the trunk had to be pushed into Feature A’s branch before it could be pushed into Feature B’s branch, and come final merge time for Feature B, we couldn’t merge straight into the trunk. We would have had to first merge into Feature A’s branch and then do the final merge into the trunk. In the end we just held back the deployment of Feature A and both Feature A & B were developed in Feature A’s branch.

I read somewhere that this “could” be possible through some command line tools; however, it wasn’t important enough to go through the pain, and it would be much better if we could just use the existing TFS interface to accomplish this simple scenario.


I’m sure there are other tips/tricks I could outline here, but either they’re not coming to mind or they’re too basic to really care about. If I think of any, I’ll update this post further.


Cool, thanks for the update.

Yeah, I thought it would come down to planning and communication. I only mentioned it as I've read Martin Fowler's recent blog about it. He didn't seem to favour it, and mentioned CI is the preferred point of communication.

Basically, branching per feature means increased work in terms of planning and communication.

I'm still in favour of it though, especially for the work we do (Broken down, smallish stories, with WIP limits).
There are a number of things to consider.

Branch-Per-Feature is not for every project.

The fear of a merge. This was big for everyone on the team in the beginning; however, with practice and repetition, this has become just a part of the process. I think the biggest thing here was to just start doing it and learn as you go; frequent merging will give you enough practice that a merge becomes simple.

Regularly forward merge (pull changes from the trunk into your branch). (I almost do it after every check-in to the trunk)

As far as code merges stomping on each other’s code, it does come down to careful planning and communication. And enough separation of concerns that one feature should NEVER be stomping on another feature’s code.

Think of having a project structured as described in Ayende’s blog here.

Focus on branching features that are separate enough in context that they don’t collide in difficult areas of the code.

Focus on keeping the features in each branch small. We deploy weekly, and it's unfortunate and rare for a branch to live more than 2 weeks. (It happens, but we try not to)

Even following the two ideas above, you will still run into merging conflicts. We run into them more frequently than I’d like, but it’s really up to the developer to be careful. I've come to not trust the "auto-merge" within TFS. Well, I trust it to do the bulk of the work, but I still scrutinize and diff every file before checking those merged changes in.

TODO: One item still on my plate is to set up our C.I. server to do things like build and run unit tests on each branch automatically (without having to set up/configure a build per branch manually). This is unfortunately one large flaw with the existing C.I. tooling. Most of my team is pretty good about running unit and integration (database) tests on their dev box, so we hopefully don’t see too many failing tests after a merge into the trunk.
How did you deal with the issue of merging?

Branching per feature is a move away from Continuous Integration, and conflicts can occur when trying to merge your work back into the trunk because someone else has made changes to the same code.

Was this not an issue for you? Or did it come down to careful planning and communication?

Thanks for the feedback. I did consider the scenario you proposed where, after A is merged into the trunk, B becomes A. However, we would run into the rename issue I described above in this blog post.

I do agree that most of these issues can best be mitigated through careful team planning, and is what we will probably continue to do.
Either approach is fine. You could probably do a baseless merge (with the cmd line) up from B straight to the trunk if necessary, but IMHO it muddies the cleanliness of the branch hierarchy and causes confusion.

We've used both strategies on our team, and these days we tend to favor the "A&B in the same branch" approach. I think it really comes down to planning out your dependencies in advance and trying to align your team's work to do the most parallel development possible.

Another subtle variation: if A is ready to go up to trunk before B, then after you merge A back up, go ahead and merge B in its current state to A, then delete branch B. This way, you can take integrations from trunk down to B as new features come in (while you're still developing B), and you won't have to do the double-merge from B->A->trunk when you're done with B.

Branch-Per-Feature with Team Foundation Server (TFS) – Part 2 – Kanban Stages…

Branch-Per-Feature with Team Foundation Server (TFS) Series Links

  1. How we got here…

  2. Kanban Stages

  3. Lessons Learned

In the previous post (“How we got here”) I provided a small intro into why and how my team arrived at a Branch-Per-Feature/Kanban development lifecycle.

In this post I’ll describe each stage of our current development lifecycle.

  1. Triage – Initial drop point for most features.
    • I say most because some head straight into other steps further along in the pipeline.
    • Items are categorized by area or department (These map to an actual TFS Area)
      EX: Infrastructure, Operations, Billing, Client Services, etc…
    • Items are prioritized.
      Each department head & higher gets the opportunity to prioritize within each Area.
    • The WIP in Triage is basically N/A.
  2. Backlog – Items in the Backlog have been deemed important enough to begin their life in the pipeline and start being pulled through each stage.
    • These items need extra research, design, and requirements gathering.
    • This is where items that were priority #1 in their respective Area go head to head with the #1’s in other Areas and can be further prioritized.
    • We try to keep the WIP in the Backlog to about 10 or less.
  3. Queue – Items in the Queue are items that have no further design/requirements gathering needed and when a developer is ready can pull an item straight into development.
    • We keep the WIP in the Queue to about 7 or less.
  4. Development
    • When the developer pulls an item from the Queue into development we create a branch in a Branches folder and give it a name related to the feature being developed. All development for the feature is done in this branch.
    • In our current workflow, after the development of the feature is complete, the developer does a first pass of testing. (We have a fairly small shop, where all of the developers are testers and all of the testers are developers.)
    • When the developer is done testing, the feature is (pushed) into Tester Pass 1. (This is part of why I stated above we have a semi-kanban and not a true pull-based kanban.)
    • IMPORTANT NOTE: frequently forward merge from the trunk into the branch this helps to avoid issues later, and is a requirement before the next stage (testing).
  5. Tester Pass 1
    • A different person from the implementing developer needs to be brought up to speed as to what the feature is and the needed changes to accomplish the feature.
    • The tester here pounds away at the changes and gives feedback to the original developer of any issues/changes that may need to be made.

    TFS Hint: When I become Tester 1 for a feature, one trick I use is to “pretend merge” the branch back into the trunk. I say pretend merge because I take all the normal steps to merge into the trunk up until the check-in part. I do this so I can see all the changed files easily and can diff each file with the trunk to find the exact source code changes. After a visual code review is complete I undo any changes and begin testing the branch.

  6. User Acceptance – Before we merge the feature into the trunk, we do a review with the customer.
    • This allows us to get solid feedback before it gets merged into the trunk, one more testing pass, and deployed to production. This way we DON’T get feedback like “this is not what I need because it needs to do/be like…“ (after it’s been deployed) and instead get more of a “could you tweak it to be like…” which allows us to deliver what the customer actually needs and not what we interpreted the design to be.
    • Also, since it’s not merged into the trunk, any change requests resulting from the User Acceptance review allow us to take our time to get the change done right, and not feel like we have to hurry the feature to catch the week’s deployment. We are able to correctly make the changes, and we can usually communicate to the user at the meeting what changes to the system mean (if we change Feature A and add/remove/change how it operates, it may not end up shipping in the next scheduled deployment (or the next, etc…)).
  7. Merge Into Trunk - original developer is now responsible for merging the feature branch back into the trunk.
    • If the changes made in the branch are more system-wide or architectural in nature, we will pair on complicated merges.
    • There is usually some coordination that needs to happen before a merge can be done. We don’t want to merge a new feature into the trunk when we’re creating a deployment snapshot, or when anything else determines we should hold off on merging the feature.
  8. Tester Pass 2
    • After the merge of the new feature into the trunk is complete, we have one more tester take a shot at testing the newly merged feature. We added this step to the kanban to help reduce potential regression bugs and keep the quality band high before the feature is marked as done and queued for deployment.
  9. Deployment – After all testing and user acceptance is complete the feature is moved to Deployment.
    • This step is only to keep track of what is queued up for the next deployment cycle.
    When we started this new process, we attempted to deploy each feature as soon as it became available. This caused some issues, in part related to source control management and timing of pending Merges; however, the biggest issue revolved around deployment and interruption to the users. When we deployed once every 4-6 weeks with our old process, this wasn’t much of a problem for the users. However, deploying whenever a feature was ready caused some issues with our users. We settled on a weekly deployment (same day and time every week), if there is something to be deployed it’s now on a regular schedule.
    One other benefit of deploying on a regular weekly schedule is the cadence the team has adjusted to. There’s less confusion around “are we deploying today, tomorrow” etc… With deployments scheduled for a regular cadence, it’s much easier for us to create a process that is consistent, efficient and less error prone.
  10. Completed - After the deployment is complete the task is moved to Completed and considered DONE!
Below is a screen shot of how it looks in our TFS work item view.

Yes, we hacked and crammed our kanban into the MSF-Agile template, and although it’s rough, it’s working better than our previous non-kanban ways.

To move a feature through the kanban we select it in the “iteration path” drop down.


I don't disagree with the impression that another pass of testing once a branch has been merged could seem a little redundant. However, I think it really depends on your development shop and the context of the work being tested. We have a small shop (5 devs, 0 testers). So each pass of testing done by a different dev is a completely different set of experiences/backgrounds taking a look at a feature under test. In fact we've found the second pass of testing to be extremely valuable and cost effective (finding/fixing bugs before they get deployed).

About the automation part - we run a full automated test suite & integration tests after each check-in to the trunk. However, (in our context -- 6yr old code base - very little code coverage, etc...) automated tests are just not enough to give the confidence that every merge means everything's green.

There are many factors to take into consideration when deciding how your kanban should be set up and all the stages you will need. I am certainly not speaking with authority on the subject, just experience (and only a small amount at that). But one of the great parts of this process is its ability to change when a need is discovered.
Interesting approach. The main issue for me is the merge back into the trunk. You solve that with a 2nd round of testing.

I'm not sure if this is wasteful though. It's definitely needed to check the merge went ok, but it's a lot of inspection.

The developer inspects his code, after working on it. Then a tester inspects it. Then another round of inspection takes place after the merge.

That said, automation would solve a lot of that. The same tests could be re-run quickly (plus a bit of manual testing) to ensure everything was ok.

What we may do, in fact, is just have automation tests running constantly on our trunk. Do the merge, and if nothing breaks it's ok. If there were conflicts in merging, then we may do some manual testing around that area.
Have you seen the kanban process template over at codeplex?

I haven't taken a thorough look at the template myself, just know it's out there...
For process template customization, there are some great tips and tricks in this blog post:

I'd really like to see someone develop a Lean/Kanban process template something like the Conchango Scrum template & share it with the community. Maybe even something that integrates with AgileZen or something similar to help visualize the Kanban. Lots of opportunities in this space!
@Jason As far as TFS is concerned my blog title is probably a little misleading... I will have a very TFS-centric post coming with the gotchas and other things I've learned while implementing the process. We haven't gone as far as customizing the template or writing any specific reports. I have a SQL statement that gives a rough estimate of the time it takes for a feature to get through the pipeline, but it feels more like a hack than a useful report.

The big thing lacking for us in the tool at this point is the visibility of the kanban. Given time I could probably write a report or something else to display this information, but don't feel my time would best be spent there.

If you have any good tips on how to slowly customize an existing template and morph it into something else, I would like to do things like you state in your comment "remove iterations altogether and replace...".
I like this approach. I'm curious if you've developed any reports around your new template? Have you looked into customizing your process template in such a way that you remove iterations altogether and replace with something like a "Kanban Stage" field?

Branch-Per-Feature with Team Foundation Server (TFS) – Part 1 – How we got here…

Branch-Per-Feature with Team Foundation Server (TFS) Series Links

  1. How we got here…

  2. Kanban Stages

  3. Lessons Learned

During one of my blog reading catch-up afternoons, I ran across Derick Bailey’s Branch-Per-Feature Source Control, Part 1: Why. It is a great read on the subject, covering many of the problems other source control methods introduce and how the Branch-Per-Feature concept alleviates some of these issues. It is also great to see how it can be used in relation to a Kanban style of development. I look forward to his further posts about some of the details of this process with Subversion.

While waiting for those posts, I thought I’d write up some of the things my team and I have learned while implementing a semi-Kanban process using Team Foundation Server (TFS).

We practiced a scrum-like agile process for about 3 years with a fair amount of success. However, about 4 months ago we hit the end of a very long “Sprint” (6 weeks). There were at least 3 major “features” built during this sprint, and we also determined that since we were going to be developing several large features, we would take the time to upgrade our database server, since we would inherently, through the development time, be able to do a little database regression testing.

During the retrospective for this sprint, a couple of themes developed:

  1. The size and number of features developed made it difficult to test each feature thoroughly in the allotted sprint time.
  2. The features developed at the beginning of the sprint were developed and tested very thoroughly.
  3. The features developed near the end of the sprint felt rushed, and resulted in some choices that may not have left the code in the best shape.
  4. Upgrading to a new database version AND deploying all these new features threw too many things in the mix for one deployment.
  5. The items developed first sat behind the development firewall for, in one case, over 3 weeks, when that feature could have been giving the business value 3 weeks earlier.

Our transition to Kanban was somewhat sudden. I had been reading about kanban as a tool for delivering software for a couple of months. I thought that many of the issues we were having with scrum could be resolved with a simple Kanban process. So I brought up the idea of doing “feature driven development” during the retrospective. What came as a shock to me was the welcome the team gave this new idea. I had been thinking about how to bring it up for some time, and couldn’t imagine the team wanting to make the drastic change this new style of development would require. After talking about it for a short time during the retrospective, just about everyone on the team seemed to jump all over the idea.

The team decided right then that we would give it a try for a while, work out the kinks, learn from it, and see how it would go.

When we first created our kanban, there was a much simpler pipeline of stages than the list I will outline as our current process. Most blogs/articles basically described starting with something simple like (Backlog, Dev, Test, Deploy). So we started with those very simple steps and it has been refined for our process.

We continued to value the retrospective and through this constant reflection were able to very quickly fine tune our process to something that, looking back, is really suiting us well.

We have been using the new approach for 4 months now and haven’t looked back. Some of the original things the team was worried about when moving to Branch-Per-Feature have all but washed away. In particular, one of the largest concerns everyone on the team had was the overhead branching and merging would bring into the process. And while it is a little more overhead than just opening the solution and pounding away, it brings many more benefits to the team than we lose in branching/merging time.

In the next part I’ll describe the kanban stages our team has ironed out and how we mix that with the Branching-Per-Feature development.


@Jason - Fixed the links, sorry about that.
Jason Barile
Very cool series - Thanks for posting. Could you fix the link to the 2nd article at the top of this post?