Retain VSTS Build Indefinitely – Fetch Build ID from RM artifact variable

June 26, 2016

While showing a Visual Studio Release Management demo in a Practical DevOps training, I stressed how important it is that the build artifact used during a release is not destroyed by the built-in retention policy. By default, the output of a build run is only kept for 10 days. So, if you really want to keep the build and its artifacts, you must take care of this yourself.

For that purpose I created a PowerShell script that calls the VSTS REST API to accomplish this.

The PowerShell script is called from a release management task at the beginning of the release process.


The PowerShell script calls the VSTS Build v2 REST API and uses Basic Authentication (passed in the request headers) with a Personal Access Token as the password.
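The call itself boils down to a single PATCH of the build resource, setting its retention flag. Below is a minimal Python sketch of the equivalent request (the original is a PowerShell script); the account, project and build ID values are placeholders:

```python
import base64
import json
import urllib.request

def basic_auth_header(pat):
    # Basic authentication: empty user name, PAT as the password
    token = base64.b64encode(f":{pat}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def keep_forever_request(account, project, build_id, pat):
    """Build the PATCH request that marks a build as retained indefinitely."""
    url = (f"https://{account}.visualstudio.com/DefaultCollection/"
           f"{project}/_apis/build/builds/{build_id}?api-version=2.0")
    body = json.dumps({"keepForever": True}).encode()
    return urllib.request.Request(
        url, data=body, method="PATCH",
        headers={**basic_auth_header(pat), "Content-Type": "application/json"})

# Sending it requires network access and a valid PAT:
# urllib.request.urlopen(keep_forever_request("fabrikam", "MyProject", 42, pat))
```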

At the time I worked on this activity (probably still in the private preview of VSRM – September 2015), it was not yet possible to fetch the exact Build ID through the build artifact linked in the release definition. That's why I dropped a simple text file containing the Build ID into the build output, which was also stored in the build artifact. That file was then parsed in the release management process to obtain the Build ID.


Apparently, this workaround is no longer necessary and you can now fetch the Build ID directly from the build artifact via a predefined release management artifact variable: RELEASE_ARTIFACTS_[source-alias]_[variable-name]. Read more about the available RM artifact variables.
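As a sketch, assuming the variable is surfaced to tasks as an environment variable named RELEASE_ARTIFACTS_[source-alias]_BUILDID (the exact name depends on your source alias; non-alphanumeric characters are typically replaced by underscores), reading it could look like:

```python
import os

def artifact_build_id(source_alias):
    """Read the Build ID that RM exposes for a linked build artifact.

    Assumption: the predefined variable reaches the task as an environment
    variable, with dots and spaces in the source alias mapped to underscores.
    """
    key = "RELEASE_ARTIFACTS_{}_BUILDID".format(
        source_alias.upper().replace(".", "_").replace(" ", "_"))
    return os.environ.get(key)
```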

Next task on my todo list => create a VSTS extension to provide a dedicated build/release task.

TFS Build 2015 … and versioning!

August 24, 2015

Lately I got some time to play a bit more with the new build system which was released with TFS 2015 and which is also available for Visual Studio Online. The new build system was initially announced as Build vNext, but now, with the release of TFS 2015, it's safe to call it Team Foundation Build 2015 (TFBuild 2015), while the "old" build system can be referred to as the XAML (workflow) build system. Colin Dembovsky has a great post on why you should switch to the new build system.

In recent years, I had to implement a lot of customizations in the XAML build system and I became very productive with the workflow activities. Along the way I developed a number of generic activities which I could reuse for other assignments, and I really knew my way around the build workflow. In many cases, the TFS Build Extensions were used to avoid reinventing the wheel. So, at first I was a bit skeptical about the rise of yet another build system, but I clearly saw some interesting advantages, which are explained in Colin's post. One disadvantage of the XAML build system is the steep learning curve to master the customization process, and also the deployment mechanism to refresh the TFS build controller(s). But as I experienced, once you got there, you were able to integrate very powerful customizations into the build process. Anyway, the "old" build system won't disappear and you can still rely on this functionality for quite some time, but I recommend having a good look at the new build system and using it for your new/future build definitions.

In this post I want to share how I integrated a common activity in the build process: Versioning. With the available build steps it has become extremely simple to hook your own scripts into the build process. In your scripts you will have access to some predefined build variables.

In my previous blog post I wrote about adopting a Global .NET Versioning Strategy and the existence of a third (optional) version attribute: AssemblyInformationalVersion. Let's use this strategy to add versioning to a sample Fabrikam software application.

My build definition:


In the screenshot above you can see that I launch a PowerShell script (PreBuild.ps1) before building the solution, and that I pass one argument, productVersion, to the script. The PowerShell script does the magic in the background to replace all version values for AssemblyVersion, AssemblyFileVersion and AssemblyInformationalVersion in the AssemblyInfo files, based on this product version. The product version is passed as a whole to the AssemblyVersion and AssemblyInformationalVersion attributes. The AssemblyFileVersion is replaced with a full version number consisting of the major and minor version numbers of the product version, a Julian-style date and an incremental build number.


Assembly File Version = 1.0.15236.3

  • 1 => taken from “Major” product version
  • 0 => taken from “Minor” product version
  • 15236 => generated by build process: “15” = year 2015, “236” = day of year 2015
  • 3 => third build, run on day 236 in year 2015
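The version arithmetic above can be sketched as follows – a Python illustration of the logic my PowerShell script implements; the regex patterns assume a standard C# AssemblyInfo.cs layout:

```python
import datetime
import re

def file_version(product_version, build_date, build_of_day):
    """Compose Major.Minor.JulianDate.Increment, e.g. 1.0.15236.3."""
    major, minor = product_version.split(".")[:2]
    # Two-digit year followed by the (zero-padded) day of the year
    julian = "{:%y}{:03d}".format(build_date, build_date.timetuple().tm_yday)
    return "{}.{}.{}.{}".format(major, minor, julian, build_of_day)

def stamp_assembly_info(text, product_version, version):
    """Rewrite the three version attributes in an AssemblyInfo.cs source text."""
    text = re.sub(r'AssemblyVersion\("[^"]*"\)',
                  'AssemblyVersion("{}")'.format(product_version), text)
    text = re.sub(r'AssemblyFileVersion\("[^"]*"\)',
                  'AssemblyFileVersion("{}")'.format(version), text)
    text = re.sub(r'AssemblyInformationalVersion\("[^"]*"\)',
                  'AssemblyInformationalVersion("{}")'.format(product_version), text)
    return text
```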

Looking at the assembly details of a custom-built Fabrikam assembly now reveals correct metadata:


I also modified the build number format to have some more version information displayed in the build run.



I added a gist on GitHub to share the PowerShell script. Note that the script has been used for experimentation and may not be ready for production use. It certainly lacks proper validation and error handling. Use at your own risk.

Also have a look at some similar inspiring blog posts about versioning TFS builds, which helped me develop the PowerShell script that works for my scenario.

Update MSBuild Toolpath in TFS build process template

October 20, 2014

I have experienced a number of migration scenarios where it was decided to first upgrade old Visual Studio 2010 solutions to the latest and greatest version of Visual Studio (VS 2013 at this moment) without forcing a TFS upgrade at the same time.

Depending on the type of included projects for the Visual Studio solution, the TFS build might not work anymore because it requires other MSBuild .targets files (related to the .NET Framework version / Visual Studio version).

The easiest way to fix these TFS build failures is to modify the TFS 2010 build process templates and explicitly set the MSBuild ToolPath variable in the MSBuild activity to the path of the upgraded Visual Studio version.

Visual Studio 2013 => C:\Program Files (x86)\MSBuild\12.0\Bin


Integration of Dynamics CRM 2011 solutions with TFS

December 28, 2012

Some weeks ago I was asked for a proof of concept to design a TFS 2010 solution to fully (not less than 100%) automate a complex Dynamics CRM 2011 deployment for various environments (dev / test / staging / production). Many different components were involved: the CRM solution itself, but also web applications, database objects, reports (SSRS), transformations, …

Dynamics CRM

It has been an interesting journey so far and along the way I got to know (a bit) how Dynamics CRM 2011 works. Not to my surprise, I realized that it's quite hard to push all source-related items to TFS and to force ALL changes/updates to a CRM environment from a version-controlled solution in TFS. Many things in the CRM environment are easily modified by the development team via the CRM web UI and, as a result, stored directly in the CRM database(s). So, the POC also required me to think about enforcing best practices for the CRM development team to avoid inconsistencies in the global deployment solution, and I definitely wanted to end up with a build-once, deploy-many solution.

Anyway, I won’t talk about the entire scope of the POC, but I want to highlight the approach I took for automating the export & extract operation from the development CRM 2011 instance via a TFS build definition. The goal here was to automatically capture the daily changes which were published to deployed CRM development solutions.

The MSCRM 2011 Toolkit contains a Solution Export command line utility which enabled me to export one or multiple CRM solutions from an existing Solutions Export Profile into a single compressed solution file (.zip).

The compressed solution file (zip format) is of course not ideal for tracking individual changes or for binding to a version control repository. Luckily, with the latest release of the Dynamics CRM 2011 SDK, a new tool (SolutionPackager) was added to extract the different components into individual files.

The SolutionPackager tool, available in the Microsoft Dynamics CRM 2011 Update Rollup 10 version of the Microsoft Dynamics CRM SDK download, resolves the problem of source code control and team development of solution files. The tool identifies individual components in the compressed solution file and extracts them out to individual files. The tool can also re-create a solution file by packing the files that had been previously extracted. This enables multiple people to work independently on a single solution and extract their changes into a common location. Because each component in the solution file is broken into multiple files, it becomes possible to merge customizations without overwriting prior changes. A secondary use of the SolutionPackager tool is that it can be invoked from an automated build process to generate a compressed solution file from previously extracted component files without needing an active Microsoft Dynamics CRM server.

TFS 2010

So, these tools opened the door for me to work out a custom build process (workflow) in TFS 2010 with the following sequential activities:

  • Export CRM solution from dev environment (MSCRM 2011 Toolkit)
  • Prepare TFS workspace before extract of solution file [Get Latest + Check-Out]
  • Extract compressed solution file into TFS workspace (SolutionPackager)
  • Scan TFS workspace for changes/additions/deletions
  • Check-In pending changes of the TFS workspace as a single changeset

The scan of the TFS workspace – to end up with all differences [changes/additions/deletions] – was a bit more complex than expected, because I needed to use several TFS API Workspace calls like PendEdit, PendAdd, PendDelete, … I also made use of the EvaluateCheckin2 method to detect potential conflicts and to perform proper exception handling.
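Conceptually, the scan boils down to classifying paths into additions, edits and deletions before pending the corresponding changes. A simplified Python sketch of that classification (the real implementation used the TFS API calls mentioned above):

```python
def classify_changes(before, after):
    """Classify workspace paths into adds, edits and deletes.

    before: dict of path -> content hash as fetched from version control
    after:  dict of path -> content hash after the SolutionPackager extract
    """
    adds = sorted(set(after) - set(before))        # would become PendAdd
    deletes = sorted(set(before) - set(after))     # would become PendDelete
    edits = sorted(p for p in set(before) & set(after)
                   if before[p] != after[p])       # would become PendEdit
    return adds, edits, deletes
```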

This process allows the development team to easily follow-up the incremental changes (via TFS changesets) which were applied to the dev CRM environment. Note that the SolutionPackager tool is also able to generate a compressed solution file from the individual component files.

Managing Builds through the Build Quality value

February 12, 2012

The integration in TFS between Builds and Work Items is just too good. In this blog post I will describe a solution, based on a TFS Server Plugin, to gain more fine-grained control over completed builds.

Some completed builds are more valuable to the development team than others, and only a few of them will eventually be published/deployed for (public) testing. This means that testers (in theory) should only be able to log bugs against these particular builds.

There are two important fields on a bug work item which may point to a build: “Found in Build” and “Integrated in Build”.


For both fields a value can be selected from a combobox. The values in the combobox are part of the global list for the builds in the active Team Project.


By default, ALL finished builds (failed + succeeded) trigger the BuildCompletion event in the Team Project Collection, and the accompanying Build Number is appended to the existing global list for the builds in the Team Project. In Team Projects where a lot of builds are defined, this "Builds" global list gets flooded with superfluous builds which should never be selectable for the above fields. CI builds, for example, should not be part of this global list. Only full builds which are deliberately handed over for testing should be searchable in the above comboboxes.

So, how do you explicitly mark a build for "Testing"? This can easily be done by setting a Build Quality for the given build.


Note that people who want to modify the Build Quality should have the permission “Edit build quality”.


Build Quality values can also be managed in a dedicated list.


By using a specific Build Quality value, it's also possible to create a TFS Server Plugin that listens to the BuildQualityChangedNotification event and only adds the Build Number to the global list when the Build Quality is set to "Ready for Initial Test". Of course, the default event subscription for the BuildCompleted event must be disabled and the existing global list should be cleaned.

This is exactly what I did. Only the Build Numbers of the builds that get a desired Build Quality (“Ready For Initial Test”) will be pushed to the global list of the Builds in the Team Project.

When your team makes full use of Microsoft Test Manager to file bugs, the "Found in build" field can be set automatically by associating particular builds with a Test Plan. Test execution can then be done against an approved build list. The "Integrated in build" field is normally set automatically by the build process, which picks up a bug resolution through a changeset at check-in time.



Some more details how I did implement the customization of the global build list:

Delete the out-of-the-box event-subscription for the BuildCompleted event

The easiest way is actually to navigate to the event-subscription table via SQL Management Studio (tbl_EventSubscription in the specific Team Project Collection database) and to delete the event-subscription row from there. But that shortcut is not really recommended, I'm afraid. A safer solution is to rely on the BisSubscribe command line tool (the executable can be found in :\Program Files\Microsoft Team Foundation Server 2010\Tools). Strangely enough, there's no option to list the existing event subscriptions, but there certainly is an unsubscribe switch to remove an event subscription. You will need the ID of the event subscription. The only way to get this ID without writing custom code against the TFS API is to look it up in the tbl_EventSubscription table in SQL Management Studio. Note that it's the integer ID you will need, not the GUID Subscriber ID.



Recreating this default event-subscription is possible through the command BisSubscribe /eventType BuildCompletionEvent /address http://:8080/tfs//WorkItemTracking/v1.0/Integration.asmx /collection http://:8080/tfs/.

Clean up the existing “Builds” global list

The global list of a Team Project Collection can be exported/imported with the witadmin command-line options or you can export/import a global list through the UI with the Process Editor from the TFS Power Tools.


You will be prompted to store the GlobalList.xml file after which you may edit the list. Delete as many ListItem entries as you want in order to clean up the Build list. This is the value list that will be used for the “Found in Build” and “Integrated in Build” fields.
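If you prefer scripting the cleanup over hand-editing, the pruning can be sketched as below. This assumes the exported GlobalList.xml contains GLOBALLIST elements with LISTITEM children (element and attribute names are my assumption about the export format; verify against your own exported file):

```python
import xml.etree.ElementTree as ET

def prune_build_list(xml_text, keep):
    """Drop every LISTITEM whose value is not in `keep` from the global lists."""
    root = ET.fromstring(xml_text)
    for gl in root.iter("GLOBALLIST"):
        for item in list(gl):  # copy, since we mutate while iterating
            if item.tag == "LISTITEM" and item.get("value") not in keep:
                gl.remove(item)
    return ET.tostring(root, encoding="unicode")
```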


Import the global list back to TFS via witadmin or the Process Editor.


To immediately witness the result, you may need to restart Visual Studio to clear the cache.

Activate the TFS Server Plugin for processing BuildQualityChangedNotification events

Since TFS 2010, it has become fairly easy to write custom event handlers that will run in the context of Team Foundation Server. It’s a plugin (dll) that needs to be deployed on the TFS Application Tier. How to accomplish this is not well documented, but your best starting point would be to read Chapter 25 of the book Professional Team Foundation Server 2010. Grant Holliday – one of the authors – explains how the ISubscriber interface can be used for extending Team Foundation Server.

For my purpose I implemented the ISubscriber interface in my BuildQualityChangedEventHandler class. The Build Quality I will use in my example is "Ready For Initial Test". Setting this quality should append the Build Number to the Builds global list of the Team Project.


Next task was to pick up the BuildQualityChangedNotificationEvent and to take action when the Build Quality was set to “Ready For Initial Test”: export global list, append the current Build Number and import the list back to TFS.


That’s it, build the assembly and drop it in the plug-in folder on all active TFS Application Tiers.


Setting the Build Quality to “Ready For Initial Test” does the trick and adds the Build Number to the Build global list for the appropriate Team Project.


As an extra, I also decided to keep the “Ready For Initial Test” builds indefinitely.



Controlling your builds with the Build Quality value in combination with a TFS Server plugin gives you a lot of power! The next step could be to trigger deployments off a Build Quality value …

The importance of Continuous Building/Integration

December 13, 2011

Automatically triggering a build after each check-in (continuous integration option in TFS Build Definitions) in a development branch is a good practice that nowadays needs less explanation than a few years ago. Many development teams seem to adopt this practice quite well, but there’s still a lot of room for improvement.

The main benefit of enabling continuous (integration) builds is clear: providing early feedback on the validity of the latest code changes that were committed to the version control repository. I won’t discuss in detail the different checks (compilation, deploy, test) that should be part of the validation step (as always: it depends!), but for this post I want to focus mainly on the importance of having at least a continuous (integration) build. A future topic to tackle is how to get your CI builds as fast as possible with TFS 2010.


The first guideline that developers should follow if they want to reap the full benefits of CI builds is to check in early and often. I'm often worried/shocked when developers show me the amount of pending changes in their local workspace. This nearly always leads to the conclusion that those developers are working on many different tasks at the same time, which should be avoided at all cost. The risk of committing unwanted files for a dedicated change is just too high. There are a number of version control features available to safely switch to another task: shelving or creating an extra workspace. The next version of TFS (TFS11) will even include better support for task-driven development through a revised Team Explorer (more details in this blog post by Brian Harry).

Checking in early and often does of course not mean that developers should just check in changes that are not ready yet and not locally validated, but it does mean that developers should work on small incremental changes. This requires every involved developer to break down their assigned work into small tasks which may each result in a check-in operation. Ideally, as a developer, you should commit your changes to the version control repository at least daily. And don't forget to provide a decent comment for the check-in operation, in addition to an association with a work item. This metadata might become quite important in some merging scenarios.

Having a CI build and adopting a check-in early and often policy means that a broken build can only be caused by a small incremental change, which is easier to fix than a large changeset or a combination of multiple changesets from different developers. More frequent check-ins further decrease the number of files that could be part of a conflicting changeset that caused the build to fail.


Another golden rule in a CI environment is to fix a broken build as soon as possible and to delay other commits of pending changes. This implies that a notification mechanism should be put in place to warn the development team about broken builds. In Team Foundation Server 2010, you can easily set up an e-mail alert or you can rely on the built-in Build Notifications tool. The developer who caused the build to fail must take responsibility for fixing it immediately. The check-in operation of a developer can only be considered completed/done when the CI build has successfully built the latest changeset.

My recommended 10-step workflow for all developers:


After the last step, you may start all over again or you may just go home! Committing changes to version control requires you at least to wait until the CI build has finished successfully. Do you really want to impact the entire development team by going home without verifying your latest changes?

This reminds me of my early years as a junior .NET developer in a mid-sized development team. At that time, there wasn't a proper build server yet and we had never heard of continuous integration before. I don't have to tell you how many times all developers were impacted by an invalid check-in, and how much time we spent finding the culprit. You have probably all been in a similar situation.

  • Dev1: My local build fails after a get latest
  • Dev1: Who did a check-in recently? Anyone heard of method X?
  • Dev2: I just did a get latest and everything works fine!
  • Dev3: Are you sure that you took a “real” latest of everything?
  • Dev1: Wait, let me redo a get latest to be sure!
  • Dev4: Damn, I also took a get latest and my build now fails!
  • Dev2: Sorry guys, I might have forgotten to check-in a file!

In some cases, the person who caused the build to fail was not at his desk (sick/holiday/meeting) the moment the build failure was discovered! Yeah, those were the times we never want to go back to, right? Not to mention what this type of troubleshooting costs!

In brief, it’s all about discipline and communication. Having a nice up-to-date build dashboard on a big screen might definitely help to make everyone aware of the importance of the latest build status.

In a next blog post I will talk about the importance of the CI build speed. It’s obvious that for providing early feedback, the CI builds must run as fast as possible. What’s the value of a CI build that takes more than 1 hour?! There are some interesting things you can enable in the TFS Build Definitions to optimize CI build definitions.

Note that since TFS 2010, there is also the Gated Check-in trigger option that will even prevent broken builds by delaying the actual commit until a successful build was run with the latest code changes in a temporary shelveset.

Errors during build execution while downloading large files from Version Control

August 3, 2011

In a large SharePoint development project at a customer I was confronted with the not always reproducible build error while syncing the build workspace:

Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host


This error sometimes occurred while downloading the larger database backup files (> 1.5GB) which are required in the deployment and test phase.

Luckily a patch (KB981898) for the TFS Application Tier (Windows Server 2008 R2) does exist to resolve this particular issue! Also have a look at this blog article for more background info.

Also thanks to Giulio Vian for helping me out with this issue!

Publication of Test Results to TFS 2008

July 16, 2010

Lately I've been struggling with some weird behavior during a Team Build (TFS 2008). The build also executed a set of unit tests, which passed, but the publication step of the test results to Team Foundation Server failed time after time.



I couldn't find any additional information (event log, TFS log, …) about the root cause of this failure, but while limiting the test methods for the test run I bumped into a test method whose name consisted of 461 characters!

Apparently there’s a hard limit of 256 characters for the test method names that are published to the TFS data warehouse.
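A trivial guard for this limit could be added to your own build verification, assuming you can enumerate the test method names up front:

```python
MAX_TEST_NAME = 256  # hard limit for names published to the TFS warehouse

def overlong_test_names(names, limit=MAX_TEST_NAME):
    """Return the test method names that would break result publication."""
    return [n for n in names if len(n) > limit]
```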

The Gated Check-in build in TFS2010

April 18, 2010

Everybody should already be familiar with Continuous Integration – or should I say Continuous Building? Automatically building a development codeline after a check-in is often not immediately followed by an integration action towards a main branch. I picked up the term Continuous Building from this article by Martin Fowler.

Regardless of what this "build automation" should be called, there are many reasons why you should enforce this behavior on different branch types for your applications. The ultimate goal is to improve the quality of the software application and to reduce the time needed to release the application to production. By setting up early validation (compilation, automated testing + other quality gates) through "build automation", you will at least be notified as soon as possible of all kinds of validation errors (= quality check) and you will have a chance to fix them before other team members are impacted by pulling a get latest from the repository.

Automatically firing a validation build after a check-in will in the end not prevent broken builds and that’s where the Gated Check-in Build will come into play with Team Foundation Server 2010.

The Gated Check-in Build in TFS2010 will prevent broken builds by not automatically committing your pending changes to the repository, but the system will instead create a separate shelveset that will be picked up by the Gated Check-in Build. The build itself will finally decide if the pending changes need to be committed to the repository based on the applied quality gates.

Gated Check-In Build process

The picture above describes the full process of a Gated Check-In build.

How to setup a Gated Check-in build?

The Trigger Tab in the Build Definition window has now an extra option for selecting Gated Check-in.


At the moment a check-in is attempted by a developer in the branch where the Gated Check-in build is active, the developer will be faced with a dialog box.


Cancelling this window will not kick off the build, but will also not commit your pending changes to the repository. If you really want to overrule this build and commit your changes directly to the repository, you may select the second checkbox to bypass the validation build (not recommended). By default, your pending changes will remain in your local workspace (first checkbox). In the situation where you immediately want to start with new changes – not relying on previous changes – it might be appropriate to uncheck the first option.

In the ideal situation, the build will complete without any validation errors and will eventually commit the changes to the repository. This will also lead to a Gated Check-in notification for the original committer via the Team Build Notification tool.



If you had previously chosen to preserve the changes locally (default), you may have noticed that the files you were working on were still checked out during the build … and after a successful build these changes no longer reflect the current state of the repository. With the above window you get the option to immediately reconcile your workspace with the up-to-date repository. Clicking the "Reconcile …" button gives you the opportunity to select the desired files, force an undo in your local workspace and pick up the changes that were committed by the Gated Check-in build for these files.

Another way to reconcile your workspace (if you for example ignored this window or when the build notification is way too slow) is by right-clicking the completed Gated Check-in Build in the Build Explorer and selecting the option to reconcile your workspace.


If you did not choose to preserve the changes locally, there won’t be any changes to reconcile after the Gated Check-in build, even if you forced the reconciliation.


The Gated Check-in build may also be kicked off manually where you may need to create a shelveset or where you may point to an existing shelveset.


A last thing to note is that the comment that was originally supplied with the changeset by the developer will be suffixed with the NoCICheckinComment variable (default = ***NO_CI***) to prevent another continuous integration build from being fired after the final check-in done by the Gated Check-in build.



What was meant to be a small post on the Gated Check-in feature in Team Foundation Server 2010 ended up as a more detailed explanation of how it works and how you can use it in the Visual Studio IDE. Remember that you should set up the most appropriate build types according to your specific branches. Not all branches may need a Gated Check-in build. Only configure this for branches that should never have a broken build. A Gated Check-in build may, for example, validate a big merge operation from a development branch to a stable main branch.

Running Coded UI Tests (from action recordings with MTLM) in Team Builds (TFS2010)

January 21, 2010

With Visual Studio 2010 (Premium/Ultimate) we are able to create several types of automated tests. Automated tests will execute a sequence of test steps and determine whether the tests pass or fail according to expected results.

Coded UI Tests provide functional testing of the user interface and validation of user interface controls.

How to create Coded UI Tests? You could create them directly in Visual Studio, but for this blog post I want to start from an action recording in Microsoft Test and Lab Manager (MTLM). An action recording is quite useful for manual tests that you need to run multiple times, and for recycling common steps in different manual tests that contain shared steps.

I created a simple test case with different test steps in MTLM to test some behavior on my website.


From MTLM I started a test run for this test case.


Before running the test, I do need to check the action recording to be sure to capture my actions for this test.


The Test Runner will give a detailed overview of the recorded actions. Afterwards you will be able to replay all these stored actions in the Test Runner.


After saving the results of this test run (all data is associated to my test case) it’s time to open Visual Studio 2010 and to create a Coded UI Test.



Instead of choosing the default option to record actions, I chose to use an existing action recording, after which I needed to retrieve the appropriate test case to link to the associated actions.


After clicking OK, Visual Studio starts generating code that represents the actions that were recorded in Microsoft Test and Lab Manager. On top of that, you are also able to add assertions on parts of the user interface in a separate Coded UI Test that you may reuse in other Coded UI Tests.


Now, let's integrate this entire UI test (MyCodedUITest) into the automated build. I created a default new build definition in which I also enabled running the automated tests.


To run unit tests that interact with the desktop during a Team Build, we need to modify the Build Service Host properties in the Team Foundation Administration Console to run the build service as an interactive process instead of running the build service as a Windows Service.


That's about it. Make sure that the Build Service Host is running in the command-line window that pops up after starting the BuildServiceHost. Queue the build and explore the results!



With this post I wanted to highlight the powerful integration of (automated) testing into the upcoming Visual Studio 2010 offering.