Developing on Staxmanade

New Windows 7 Taskbar theme

I came back to my computer after some time and noticed what looked like a new theme on my Windows 7 taskbar…

image

Turns out it was just a cmd.exe window that had been moved down a little and was showing through the see-through taskbar… Either way I thought it was interesting :)

StatLight (Silverlight Test Automation Tool) V0.8 Released

image

A new version of the StatLight tool has been released.

Download: http://www.statlight.net

Major Feature Updates:

  • Support for 64-bit
  • Backwards-compatibility support for previous Microsoft.Testing.Silverlight assemblies
    • By giving the tool a specific -v=[December2008 | March2009 | July2009], StatLight can now run the asynchronous tests supported by the Microsoft Testing library.
  • XML report output.
    • By giving the tool the -r=<FilePath.xml> option, StatLight will write out an XML report of the test run.
  • Update to the Silverlight 3.0 Runtime – Should be compatible with Silverlight 2.0 assemblies.
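
As an aside, a hypothetical invocation pulling the new switches together might look like the line below. The -x switch for the XAP path and the file names are my assumptions for illustration; see statlight.net for the authoritative option list.

StatLight.exe -x="MyTests.xap" -v=July2009 -r=TestReport.xml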

Minor Updates:

  • Support for auto-closing modal Silverlight MessageBox dialogs.
  • Updated the feedback given to the end user when access to run the StatLight web service is not allowed (added better guidance on how to get it up and running).
  • Added a timeout monitor to detect when the StatLight browser has not communicated with the StatLight server within a reasonable amount of time. StatLight will stop running if communication (a test message) has not arrived within 5 minutes. (Will look into making this configurable)

PowerShell – Compiling with csc.exe – more of a headache than it should have been. It is possible…

I was attempting to use PowerShell to compile a group of *.cs source files – needing the flexibility of programmatically swapping out dependent assembly references at compile time depending on certain build conditions… I don’t want to get too much into why I needed it, just that it is doable – (more painful than initially expected), but still possible.

First, let’s get a csc command we want to run.

Second, let me state that this was more of an exercise in wanting to learn PowerShell, and there are probably other ways of accomplishing what I needed; it just seemed like a good time to start down the painful learning curve. Also note, I’m not a CSC compiler pro – I haven’t analyzed each of the “options” and whether it’s right/wrong/best practice – it just works… (thanks to Visual Studio & MSBuild for hiding how we actually should use the compiler)

Ok, take a simple csc compile command – (in Visual Studio, File –> New Project -> ClassLibrary1 is a good starting point). Compile the project & check the build output window. You’ll get output similar to the below.

C:\Windows\Microsoft.NET\Framework\v3.5\Csc.exe /noconfig /nowarn:1701,1702 /errorreport:prompt /warn:4 /define:DEBUG;TRACE /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Core.dll" /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Data.DataSetExtensions.dll" /reference:C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Data.dll /reference:C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.dll /reference:C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Xml.dll /reference:"C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Xml.Linq.dll" /debug+ /debug:full /filealign:512 /optimize- /out:obj\Debug\PowershellCscCompileSample.dll /target:library Class1.cs Properties\AssemblyInfo.cs

Next, figure out how the heck to execute this in PowerShell.

& $csc $params --- NOPE
exec $csc $params – NOPE

I must have tried tens if not hundreds of methods to get the simple thing above to compile… Needless to say, I pinged a co-worker (http://obsidience.blogspot.com/) for some help.

His pointer – when trying to get a big string command to execute in PowerShell, do the following.

  1. Open up “Windows PowerShell ISE”  (on Windows 7)
  2. Paste in the command (with an “&” at the beginning)
  3. Look for any coloration changes like…
     image
  4. Next, place the PowerShell escape character [`] in front of any character where the coloration changes (they’re very subtle, so look long and hard)
     image

We should now have a PowerShell string that compiles our project.
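
To make that concrete, here is a shortened sketch of what the escaped command ends up looking like for the build output above (most options and references trimmed for brevity; in this case only the commas and the semicolon needed the backtick escape):

& C:\Windows\Microsoft.NET\Framework\v3.5\Csc.exe /noconfig /nowarn:1701`,1702 /define:DEBUG`;TRACE /debug+ /target:library /out:obj\Debug\PowershellCscCompileSample.dll Class1.cs Properties\AssemblyInfo.cs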

After I got that far – I cleaned up the compiler syntax for a little re-use. (You can download the project below to check it out)

If you don’t want to dig through the entire csc compile in the project download, below is the general usage…



################## Build Configuration ##################
$project_name = 'PowershellCscCompileSample'
$build_configuration = 'Debug'
#########################################################

$core_assemblies_path = 'C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5'
$framework_assemblies_path = 'C:\Windows\Microsoft.NET\Framework\v2.0.50727'

function global:Build-Csc-Command {
    param([array]$options, [array]$sourceFiles, [array]$references, [array]$resources)

    $csc = 'C:\Windows\Microsoft.NET\Framework\v3.5\csc.exe'

    # Can't say I'm doing delimiters correctly, but it seems to work.
    # ("""" inside a double-quoted string yields a single " character, used to
    # wrap reference/resource paths that may contain spaces.)
    $delim = [string]""""

    $opts = $options

    if($references.Count -gt 0)
    {
        $opts += '/reference:' + $delim + [string]::Join($delim + ' /reference:' + $delim, $references) + $delim
    }

    if($resources.Count -gt 0)
    {
        $opts += '/resource:' + $delim + [string]::Join($delim + ' /resource:' + $delim, $resources) + $delim
    }

    if($sourceFiles.Count -gt 0)
    {
        $opts += [string]::Join(' ', $sourceFiles)
    }

    # Join the executable path and all of the options into a single command string.
    $cmd = $csc + ' ' + [string]::Join(' ', $opts)
    $cmd;
}

function global:Execute-Command-String {
    param([string]$cmd)

    # this drove me crazy... all I wanted to do was execute
    # something like this (excluding the [])
    #
    # [& $csc $opts] OR [& $cmd]
    #
    # however couldn't figure out the correct powershell syntax...
    # But I was able to execute it if I wrote the string out to a
    # file and executed it from there... would be nice to not
    # have to do that.

    $tempFileGuid = ([System.Guid]::NewGuid())
    $scriptFile = ".\temp_build_csc_command-$tempFileGuid.ps1"
    Remove-If-Exist $scriptFile

    Write-Host ''
    Write-Host '*********** Executing Command ***********'
    Write-Host $cmd
    Write-Host '*****************************************'
    Write-Host ''
    Write-Host ''

    $cmd >> $scriptFile
    & $scriptFile
    Remove-If-Exist $scriptFile
}

function global:Remove-If-Exist {
    param($file)
    if(Test-Path $file)
    {
        Remove-Item $file -Force -ErrorAction SilentlyContinue
    }
}

$resources = @(
    #""
)

$references = @(
    "$core_assemblies_path\System.Core.dll",
    "$framework_assemblies_path\System.dll"
)

$sourceFiles = @(
    #""
)

$sourceFiles += Get-ChildItem '.' -recurse `
    | where{$_.Extension -like "*.cs"} `
    | foreach {$_.FullName}

$debug = if($build_configuration.Equals("Release")){ '/debug-'} else{ "/debug+" }

$options = @(
    '/noconfig',
    '/nowarn:1701`,1702', # Note: the escape [`] character before the comma
    '/nostdlib-',
    '/errorreport:prompt',
    '/warn:4',
    $debug,
    "/define:$build_configuration``;TRACE", # Note: the escape [`] character before the semicolon
    '/optimize+',
    "/out:$project_name\bin\$build_configuration\ClassLibrary.dll",
    '/target:library'
)

$cmd = Build-Csc-Command -options $options -sourceFiles $sourceFiles -references $references -resources $resources

Execute-Command-String $cmd
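
As a side note, I suspect the temp-file round trip in Execute-Command-String could be avoided by keeping the compiler arguments as an array of individual values instead of one big string, since the call operator passes each array element to a native executable as its own argument (and quotes values containing spaces for you). Below is a rough sketch of that idea, untested against the full option set above, with made-up variable names:

# Sketch: invoke csc.exe with an argument array instead of a single command string.
$cscExe  = 'C:\Windows\Microsoft.NET\Framework\v3.5\csc.exe'
$cscArgs = @(
    '/noconfig',
    '/target:library',
    "/out:$project_name\bin\$build_configuration\ClassLibrary.dll",
    "/reference:$core_assemblies_path\System.Core.dll"  # spaces in the path are handled for us
) + $sourceFiles

& $cscExe $cscArgs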
 
  

How do I get UISpy.exe on my Windows 7 Developer Machine?

I know there’s a tool somewhere in some SDK called UISpy.exe. It looks like this.

image

You can read all about it here - http://msdn.microsoft.com/en-us/library/ms727247.aspx

But can you find it?

The docs in that link above state…

Note: UI Spy is installed with the Microsoft Windows SDK. It is located in the \bin folder of the SDK installation path (uispy.exe) or can be accessed from the Start menu (Start\All Programs\Microsoft Windows SDK\Tools\UISpy).

If I go take a look at Start\All Programs\Microsoft Windows SDK\Tools\UISpy, as shown in the image below, UISpy is nowhere to be found.

image

In some forum post somewhere – I saw that I had to install the .NET Framework 3.0 SDK. Which is strange, since I have the 3.5 framework already installed and the tool isn’t there. Oh well, I found it (I thought), downloaded it, and tried to run the setup. In Windows 7 I was prompted with the following dialog.

You must use "Turn Windows features on or off" in the Control Panel to install or configure Microsoft .NET Framework 3.0.

Ok, fine I’ll do that…

image

Searching the list of features to turn on and off, the only one that looks anything like what I’m looking for (outlined in red) has already been installed.

image

Off to do more research…

Then I run into this blog http://blogs.msdn.com/windowssdk/archive/2008/02/18/where-is-uispy-exe.aspx

Eventually I downloaded the Windows SDK for Vista Update and ran the install wizard with all the defaults except for this screen…

As the blog post states – make sure you take a pre-install snapshot of the following registry key

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs
Before Install:

image 

After Install:
image

I reverted the changes back to v6.0A in the registry above…

And TAH-DAH UISpy.exe has been found.

C:\Program Files\Microsoft SDKs\Windows\v6.0\Bin\UISpy.exe

Pain in the A$$… But it seems to work.

Comments

Jason.Jarrett
I never experienced a dialog. However, this post is rather old and things have changed since. I have not tried to run these steps since then. Good Luck
Anonymous
Thank you! However, when I was done and opened UISpy, it popped up a dialog about stopping the tool. How can I handle it?
Anonymous
I have to agree with you, real pain in the a$$. Seriously, your post helped me a lot, specifically with the registry rollback.

Thanks

Don’t forget what’s happening behind the syntactic sugar. (C# events)

In a comment posted by Lashas to the i4o project over at CodePlex (comment here), he suggested a great performance improvement: instead of subscribing to an event directly with a handler method (which creates a new delegate on every subscription), we subscribe with a field that holds the handler. Initially I was skeptical of his change until I profiled it for myself. And after thinking about it for a second it made total sense.

The original implementation attached the event handler to the PropertyChanged event using the C# syntax as follows.

(item as INotifyPropertyChanged).PropertyChanged += IndexableCollection_PropertyChanged;

where “IndexableCollection_PropertyChanged” is a method defined as

private void IndexableCollection_PropertyChanged(object sender, PropertyChangedEventArgs e)
{…}

Lashas’s suggested performance improvement was to initialize a private field, assign the IndexableCollection_PropertyChanged method to it, and add/remove the PropertyChanged handler using that field instead of the approach outlined above.

Field definition:

private PropertyChangedEventHandler _propertyChangedHandler;

Initialization (in the constructor of the class)

_propertyChangedHandler = IndexableCollection_PropertyChanged;

Usage then becomes

(item as INotifyPropertyChanged).PropertyChanged += _propertyChangedHandler;

Lashas reported an approximately 30-40% performance improvement when adding/removing items from the collection with the suggested change.

Why is that?

Well, if you think about it, what is this line?

(item as INotifyPropertyChanged).PropertyChanged += IndexableCollection_PropertyChanged;

but just a little Syntactic Sugar on top of actually doing…

(item as INotifyPropertyChanged).PropertyChanged += new PropertyChangedEventHandler (IndexableCollection_PropertyChanged);

and with the suggested change of subscribing the event with a pre-created handler field, we avoid all of the “new PropertyChangedEventHandler(…)” calls. That object creation happens once in the constructor of the collection and not on every add/remove of an item.
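
To put the whole pattern in one place, here is a minimal sketch (not the actual i4o source; the class and member names are made up for illustration):

using System.Collections.Generic;
using System.ComponentModel;

public class ObservingCollection<T> where T : INotifyPropertyChanged
{
    private readonly List<T> _items = new List<T>();

    // Created once in the constructor, instead of a new
    // PropertyChangedEventHandler allocation on every Add/Remove.
    private readonly PropertyChangedEventHandler _propertyChangedHandler;

    public ObservingCollection()
    {
        _propertyChangedHandler = Item_PropertyChanged;
    }

    public void Add(T item)
    {
        item.PropertyChanged += _propertyChangedHandler; // no hidden delegate allocation here
        _items.Add(item);
    }

    public void Remove(T item)
    {
        item.PropertyChanged -= _propertyChangedHandler; // unsubscribes the exact same instance
        _items.Remove(item);
    }

    private void Item_PropertyChanged(object sender, PropertyChangedEventArgs e)
    {
        // react to the change (re-index the item, etc.)
    }
}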

Just a good reminder that you need to be constantly aware of what is happening under the covers. The saying that “most complexity can be solved by another layer of abstraction” can cause issues when you don’t truly understand (or at least remember) what that layer is actually doing.

Development Environment Merge/Compare Tools Setup in Visual Studio/TFS

I’m writing this post more as a reminder to myself when I need to setup my development environment again. In the past I have usually leveraged Google to search Keith Craig’s blogs and pieced the information together each time.

In this post I will outline the details I need from the two blog posts Keith wrote, and how I use that information when setting up my development environment for custom diff/merge tooling with Visual Studio and Team Foundation Server. I’m giving both the text version for copy/paste and screenshots of each, so it’s clear how each is used.

First, you will need to install the tools listed below before configuring the options.
  1. DiffMerge
  2. WinMerge
Next you need to open the TFS “Configure Tool” dialog from within Visual Studio.

Go to Tools –> Options –> Source Control –> Visual Studio Team Foundation Server –> Configure User Tools.

image

Now you’re ready to configure each tool as outlined below.

Merge tool - DiffMerge

How to integrate with VS http://blogs.vertigo.com/personal/keithc/Blog/archive/2008/04/09/using-sourcegears-diffmerge-as-the-merge-tool-in-microsoft-team-system.aspx

My setup options for VS:

Extension: .*
Operation: Merge
Command:
      x64 default install path - C:\Program Files (x86)\DiffMerge\DiffMerge.exe
      x86 default install path - C:\Program Files\DiffMerge\DiffMerge.exe
Arguments: /title1=%6 /title2=%8 /title3=%7 /result=%4 %1 %3 %2
image

Compare tool - WinMerge

How to integrate with VS http://blogs.vertigo.com/personal/keithc/Blog/archive/2007/10/31/using-winmerge-with-microsoft-team-system.aspx

My setup options for VS:

Extension: .*
Operation: Compare

Command:
      x64 default install path - C:\Program Files (x86)\WinMerge\WinMergeU.exe
      x86 default install path - C:\Program Files\WinMerge\WinMergeU.exe
Arguments: /ub /dl %6 /dr %7 %1 %2 -e

image

Comments

James Manning
FWIW, you can see the arguments for other tools here: http://blogs.msdn.com/jmanning/articles/535573.aspx

I love KDiff3 :)

Branch-Per-Feature with Team Foundation Server (TFS) – Part 3 – Lessons Learned

Lessons learned by doing Branch-Per-Feature with Team Foundation Server.

Branch-Per-Feature with Team Foundation Server (TFS) Series Links



  1. How we got here…

  2. Kanban Stages

  3. Lessons Learned

In this post I’ll outline several of the issues/hiccups/features we found while attempting to apply Branch-Per-Feature with TFS.

The 260 character path limit.

One of the first obstacles we ran into when attempting Branch-Per-Feature with our TFS was the 260 character path limit (you can read more here http://troyfarrell.com/blog/post/Maximum-file-path-length---Windows-and-TFS.aspx).

The largest offenders here were artifacts added to a project as a result of doing an “Add Service Reference”. This feature created file names with the entire namespace in the file path. The way we got around this was the T4 replacement for "Add Service Reference", which helped keep some of the longer file paths shorter in our Silverlight projects. However, it still rears its ugly head when we create a new branch and give it a descriptive name that’s too long.

Which brings me to the next hiccup we ran into.

Don’t RENAME a newly created branch. Delete it and re-create it with a new name.

After a branch was created, if we decided the name of the branch was not good enough (either it caused file path length issues, or its description wasn’t clear enough), DON’T RENAME THE NEW BRANCH. Instead, delete it and re-create it. Clearly this has to be caught before commits are made to the new branch.

Why is this an issue?

In TFS, when you follow the simple steps to merge a feature from a branch into the trunk, you get to a point where all the changes made in the branch are checked out and staged to be merged into the trunk in your development environment. However, if you’ve applied a rename to the branch at some stage in its lifetime, you don’t get a nice pretty list of only the files that changed and are ready to be checked in; instead, you get every file in the branch as though it had changed at some point in time. Sadly, that is usually not the case, and is why I said earlier to catch the problem as soon as possible.

One of the great benefits of the branch/merge strategy is that the final merge into the trunk is typically all the changes required for a particular feature. When you have to go back and do some source control history debugging, it’s much easier to detect large changes from branch merges than to sift through tens of check-ins per file.

After the feature is complete and you start the steps required to merge the feature into the trunk, typically you only see the files that have changed get checked out and ready to be merged. However, when a rename has occurred on the branch, it somehow tags every item as though it were changed, so the merge back into the trunk ends up looking like the entire project changed. This makes the source diffing extremely difficult, as I described in the Tester Pass 1 step in our kanban stages.

Can’t easily merge between sibling branches or grandchild branches (or at all; we didn’t push hard enough to make it work)

Another issue we’ve come across (which hasn’t roadblocked us too badly) was the inability to merge between two different branches that stemmed from the same trunk, or to merge a grandchild branch into the grandparent (bypassing the child/parent).

A specific scenario we ran into was when Feature A was under development in a branch and a developer was ready to start working on Feature B. Feature B had a dependency on some of the changes that had taken place in Feature A; however, we wanted to deploy Feature A before Feature B was complete. As an experiment we thought we would just create Feature B’s branch straight from Feature A’s branch; however, what this would have left us with, once Feature A was merged into the trunk, was Feature B two levels away from the trunk.

Although TFS allows this scenario, any changes to the trunk had to be pushed into Feature A’s branch before they could be pushed into Feature B’s branch, and come final merge time for Feature B, we couldn’t merge straight into the trunk. We would have had to first merge into Feature A’s branch and then do the final merge into the trunk. In the end we just held back the deployment of Feature A, and both Feature A & B were developed in Feature A’s branch.

I read somewhere that this “could” be possible through some command line tools; however, it wasn’t important enough to go through the pain, and it would be much better if we could just use the existing TFS interface to accomplish this simple scenario.
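
(For reference, I believe the command-line route here is TFS’s “baseless merge”. Something along these lines, with made-up server paths, is my understanding of the shape of it; I never actually ran it against our tree:

tf merge $/MyProject/Branches/FeatureB $/MyProject/Trunk /recursive /baseless

A baseless merge has no shared history to work from, so it tends to mean a lot of manual conflict resolution, which is part of why we didn’t pursue it.)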

 

I’m sure there are other tips/tricks I could outline here, but either they’re not coming to mind or they’re too basic to really care about. If I think of any, I’ll update this post further.

Comments

Bealer
Cool, thanks for the update.

Yeah, I thought it would come down to planning and communication. I only mentioned as I've read Martin Fowler's recent blog about it. He didn't seem to favour it, and mentioned CI is the preferred point of communication.

Basically, branching per feature means increased work in terms of planning and communication.

I'm still in favour of it though, especially for the work we do (Broken down, smallish stories, with WIP limits).
Jason.Jarrett
There are a number of things to consider.

Branch-Per-Feature is not for every project.

The fear of a merge. This was big for everyone on the team in the beginning; however, with practice and repetition, this has become just a part of the process. I think the biggest thing here was to just start doing it, learn as you go, and frequent merging will give you enough practice that a merge becomes simple.

Regularly forward merge (pull changes from the trunk into your branch). (I almost do it after every check-in to the trunk)

As far as code merges stomping on each other’s code, it does come down to careful planning and communication. And enough separation of concerns that one feature should NEVER be stomping on another feature’s code.

Think of having a project as described in Ayende’s blog here http://ayende.com/Blog/archive/2009/07/22/the-tale-of-the-lazy-architect.aspx

Focus on branching features that are separate enough in context that they don’t collide in difficult areas of the code.

Focus on keeping the features in each branch small. We deploy weekly, and it's unfortunate and rare for a branch to live more than 2 weeks. (It happens, but we try not to)

Even following the two ideas above, you will still run into merging conflicts. We run into them more frequently than I’d like, but it’s really up to the developer to be careful. I've come to not trust the "auto-merge" within TFS. Well, I trust it to do the bulk of the work, but I still scrutinize and diff every file before checking those merged changes in.

TODO: One item still on my plate is to setup our C.I. server to do things like build/run unit tests on each branch automatically. (Without having to setup/configure a build per branch manually) This is unfortunately one large flaw with the existing C.I. tooling. Most of my team is pretty good about running unit and integration (database) tests on their dev box so we hopefully don’t see too many failing tests after a merge into the trunk.
Bealer
How did you deal with the issue of merging?

Branching per feature is a move away from Continuous Integration, and conflicts can occur when trying to merge your work back into the trunk because someone else has made changes to the same code.

Was this not an issue for you? Or did it come down to careful planning and communication?
Jason.Jarrett
@Jason

Thanks for the feedback. I did consider the scenario you proposed where, after A is merged into the trunk, B becomes A. However, we would run into the rename issue I described above in this blog post.

I do agree that most of these issues can best be mitigated through careful team planning, and is what we will probably continue to do.
Jason
Either approach is fine. You could probably do a baseless merge (with the cmd line) up from B straight to the trunk if necessary, but IMHO it muddies the cleanliness of the branch hierarchy and causes confusion.

We've used both strategies on our team, and these days we tend to favor the "A&B in the same branch" approach. I think it really comes down to planning out your dependencies in advance and trying to align your team's work to do the most parallel development possible.

Another subtle variation is that if A is ready to go up to trunk before B, then after you merge A back up, go ahead and merge B in its current state to A, then delete branch B. This way, you can take integrations from trunk down to B as new features come in (while you're still developing B), and you won't have to do the double-merge from B->A->trunk when you're done with B.

Branch-Per-Feature with Team Foundation Server (TFS) – Part 2 – Kanban Stages…

Branch-Per-Feature with Team Foundation Server (TFS) Series Links



  1. How we got here…

  2. Kanban Stages

  3. Lessons Learned

In the previous post (“How we got here”) I provided a small intro into why and how my team arrived at a Branch-Per-Feature/Kanban development lifecycle.

In this post I’ll describe each stage of our current development lifecycle.

  1. Triage – Initial drop point for most features.
    • I say most because some head straight into other steps further along in the pipeline.
    • Items are categorized by area or department (These map to an actual TFS Area)
      EX: Infrastructure, Operations, Billing, Client Services, etc…
    • Items are prioritized.
      Each department head & higher gets the opportunity to prioritize within each Area.
    • The WIP in Triage is basically N/A.
  2. Backlog – Items in the Backlog have been deemed important enough to begin their life in the pipeline and start being pulled through each stage.
    • These items need extra research, design, and requirements gathering.
    • This is where items that were priority #1 in their respective Area go head to head with the #1’s in other Areas and can be further prioritized.
    • We try to keep the WIP in the Backlog to about 10 or less.
  3. Queue – Items in the Queue have no further design/requirements gathering needed, and when a developer is ready they can pull an item straight into development.
    • We keep the WIP in the Queue to about 7 or less.
  4. Development
    • When the developer pulls an item from the Queue into development we create a branch in a Branches folder and give it a name related to the feature being developed. All development for the feature is done in this branch.
    • In our current workflow, after the development of the feature is complete, the developer does a first pass of testing. (We have a fairly small shop, where all of the developers are testers and all of the testers are developers.)
    • When the developer is done testing, the feature is (pushed) into Tester Pass 1. (This is part of why I stated above that we have a semi-kanban and not a true pull-based kanban.)
    • IMPORTANT NOTE: Frequently forward merge from the trunk into the branch; this helps avoid issues later, and is a requirement before the next stage (testing).
  5. Tester Pass 1
    • A different person from the implementing developer needs to be brought up to speed as to what the feature is and the needed changes to accomplish the feature.
    • The tester here pounds away at the changes and gives feedback to the original developer of any issues/changes that may need to be made.

    TFS Hint: When I become Tester 1 for a feature, one trick I use is to “pretend merge” the branch back into the trunk. I say pretend merge because I take all the normal steps to merge into the trunk up until the check-in part. I do this so I can see all the changed files easily and can diff each file with the trunk to find the exact source code changes. After a visual code review is complete I undo any changes and begin testing the branch.

  6. User Acceptance – Before we merge the feature into the trunk, we do a review with the customer.
    • This allows us to get solid feedback before it gets merged into the trunk, one more testing pass, and deployed to production. This way we DON’T get feedback like “this is not what I need because it needs to do/be like…“ (after it’s been deployed) and instead get more of a “could you tweak it to be like…” which allows us to deliver what the customer actually needs and not what we interpreted the design to be.
    • Also, since it’s not merged into the trunk, any change requests as a result of the User Acceptance review allow us to take our time to get the change done right and not feel like we have to hurry the feature to catch the week’s deployment. We are able to correctly make the changes, and we can usually communicate to the user at the meeting what changes to the system mean (if we change Feature A and add/remove/change how it operates, it may not end up shipping in the next scheduled deployment (or the next, etc…)).
  7. Merge Into Trunk - the original developer is now responsible for merging the feature branch back into the trunk.
    • If the changes made in the branch are more system-wide or architectural in nature, we will pair on complicated merges.
    • There is usually some coordination that needs to happen before a merge can be done. We don’t want to merge a new feature into the trunk while we’re creating a deployment snapshot, or when anything else determines we should hold off on merging the feature.
  8. Tester Pass 2
    • After the merge of the new feature into the trunk is complete, we have one more tester take a shot at testing the newly merged feature. We added this step to the kanban to help reduce potential regression bugs and keep the quality bar high before the feature is marked as done and queued for deployment.
  9. Deployment – After all testing and user acceptance is complete the feature is moved to Deployment.
    • This step is only to keep track of what is queued up for the next deployment cycle.
    When we started this new process, we attempted to deploy each feature as soon as it became available. This caused some issues, in part related to source control management and timing of pending Merges; however, the biggest issue revolved around deployment and interruption to the users. When we deployed once every 4-6 weeks with our old process, this wasn’t much of a problem for the users. However, deploying whenever a feature was ready caused some issues with our users. We settled on a weekly deployment (same day and time every week), if there is something to be deployed it’s now on a regular schedule.
    One other benefit of deploying on a regular weekly schedule is the cadence the team has adjusted to. There’s less confusion around “are we deploying today, tomorrow” etc… With deployments scheduled for a regular cadence, it’s much easier for us to create a process that is consistent, efficient and less error prone.
  10. Completed - After the deployment is complete the task is moved to Completed and considered DONE!
Below is a screen shot of how it looks in our TFS work item view.

Yes, we hacked and crammed our kanban into the MSF-Agile template, and although it’s rough, it’s working better than our previous non-kanban ways.

To move a feature through the kanban we select it in the “iteration path” drop down.
image

Comments

Jason.Jarrett
I don't disagree with the impression that another pass of testing once a branch has been merged could seem a little redundant. However, I think it really depends on your development shop and the context of the work being tested. We have a small shop (5 devs, 0 testers). So each pass of testing done by a different dev is a completely different set of experiences/backgrounds taking a look at a feature under test. In fact we've found the second pass of testing to be extremely valuable and cost effective (finding/fixing bugs before they get deployed).

About the automation part - we run a full automated test suite & integration tests after each check-in to the trunk. However, (in our context -- 6yr old code base - very little code coverage, etc...) automated tests are just not enough to give the confidence that every merge means everything's green.

There are many factors to take into consideration when deciding how your kanban should be set up and all the stages you will need. I am certainly not speaking with authority on the subject, just experience (and only a small experience at that). But one of the great parts of this process is its ability to change when a need is discovered.
Bealer
Interesting approach. The main issue for me is the merge back into the trunk. You solve that with a 2nd round of testing.

I'm not sure if this is wasteful though. It's definitely needed to check the merge went ok, but it's a lot of inspection.

The developer inspects his code, after working on it. Then a tester inspects it. Then another round of inspection takes place after the merge.

That said, automation would solve a lot of that. The same tests could be re-run quickly (plus a bit of manual testing) to ensure everything was ok.

What we may do in fact, is just have automation tests running constantly on our trunk. Do the merge, and if nothing breaks it's ok. If there were conflicts in merging, then we may do some manually testing around that area.
Jason.Jarrett
Have you seen the kanban process template over at codeplex?

http://www.codeplex.com/site/search?projectSearchText=kanban

I haven't taken a thorough look at the template myself, just know it's out there...
Jason
For process template customization, there are some great tips and tricks in this blog post:

http://weblogs.asp.net/dmckinstry/archive/2006/01/03/434440.aspx

I'd really like to see someone develop a Lean/Kanban process template something like the Conchango Scrum template & share it with the community. Maybe even something that integrates with AgileZen or something similar to help visualize the Kanban. Lots of opportunities in this space!
Jason.Jarrett
@Jason As far as TFS is concerned, my blog title is probably a little misleading... I will have a very TFS-centric post coming with the gotchas and other things I've learned while implementing the process. We haven't gone as far as customizing the template or writing any specific reports. I have a SQL statement that gives a rough estimate of the time it takes for a feature to get through the pipeline, but it feels more like a hack than a useful report.

The big thing lacking for us in the tool at this point is the visibility of the kanban. Given time I could probably write a report or something else to display this information, but don't feel my time would best be spent there.

If you have any good tips on how to slowly customize an existing template and morph it into something else, I would like to do things like you state in your comment "remove iterations altogether and replace...".
Jason
I like this approach. I'm curious if you've developed any reports around your new template? Have you looked into customizing your process template in such a way that you remove iterations altogether and replace with something like a "Kanban Stage" field?

Branch-Per-Feature with Team Foundation Server (TFS) – Part 1 – How we got here…

Branch-Per-Feature with Team Foundation Server (TFS) Series Links


  1. How we got here…

  2. Kanban Stages

  3. Lessons Learned

During one of my blog-reading catch-up afternoons, I ran across Derick Bailey’s Branch-Per-Feature Source Control. Part 1: Why. It is a great read on the subject, covering many of the problems other source control methods introduce and how the Branch-Per-Feature concept alleviates some of these issues. It is also great to see how it can be used in relation to a Kanban style of development. I look forward to his further posts about some of the details of this process with Subversion.

While waiting for those posts, I thought I’d write up some of the things my team and I have learned while implementing a semi-Kanban process using Team Foundation Server (TFS).

We practiced a Scrum-like agile process for about 3 years with a fair amount of success. However, about 4 months ago we hit the end of a very long “sprint” (6 weeks). There were at least 3 major “features” built during this sprint, and we also determined that, since we were going to be developing several large features, we would take the time to upgrade our database server, since we would inherently, through the development time, be able to do a little database regression testing.

During the retrospective for this sprint, a couple of themes developed revolving around the following:

  1. The size and number of features developed made it difficult to test each feature thoroughly in the allotted sprint time.
  2. The features developed at the beginning of the sprint were developed and tested very thoroughly.
  3. The features developed near the end of the sprint felt rushed and resulted in some choices that may not have left the code in the best shape.
  4. Upgrading to a new database version AND deploying all these new features threw too many things into the mix for one deployment.
  5. The items developed first sat behind the development firewall, in one case for over 3 weeks, when that feature could have been giving the business value 3 weeks earlier.

Our transition to Kanban was somewhat sudden. I had been reading about Kanban as a tool for delivering software for a couple of months. I thought that many of the issues we were having with Scrum could be resolved with a simple Kanban process. So I brought up the idea of doing “feature driven development” during the retrospective. What came as a shock to me was the welcome the team gave this new idea. I had been thinking about how to bring the idea up for some time, and couldn’t imagine how the team would want to make the drastic change that this new style of development would require. After talking about it for a short time during this retrospective, just about everyone on the team seemed to jump all over the idea.

The team decided right then that we would give it a try for a while, work out the kinks, learn from it, and see how it would go.

When we first created our kanban, there was a much simpler pipeline of stages than the list I will outline as our current process. Most blogs/articles basically described starting with something simple like (Backlog, Dev, Test, Deploy). So we started with those very simple steps, and it has since been refined into our current process.

We continued to value the retrospective and through this constant reflection were able to very quickly fine tune our process to something that, looking back, is really suiting us well.

We have been using the new approach for 4 months now and haven’t looked back. Some of the original things the team was worried about when moving to Branch-Per-Feature have all but washed away. In particular, one of the largest concerns everyone on the team had was the overhead that branching and merging would bring into the process. And while it is a little bit more overhead than just opening the solution and pounding away, it brings many more benefits to the team than we lose in branching/merging time.

In the next part I’ll describe the kanban stages our team has ironed out and how we mix that with the Branching-Per-Feature development.

Comments

Jason.Jarrett
@Jason - Fixed the links, sorry about that.
Jason Barile
Very cool series - Thanks for posting. Could you fix the link to the 2nd article at the top of this post?