Migrating TFVC (TFS on-premises) to Git (VSTS)

June 9, 2016

In the last couple of months I have been getting more requests to move TFVC version control history to a Git repository in Visual Studio Team Services (VSTS). Migrating from TFVC to TFVC is currently possible via the TFS Integration Tools, but it is not that straightforward to accomplish. Migrating to a Git repository is much simpler and is certainly the way to go if you were already planning to adopt Git in the future. The migration can be done via Git-TF, a set of cross-platform command-line tools that facilitate sharing changes between Team Foundation Server / Visual Studio Team Services and Git.

What do you need to get started?

  • Download git via https://git-scm.com/downloads
  • Download and extract Git-TF to your computer
  • Add the extracted Git-TF folder to the system environment variable PATH (see the sketch after this list)
  • Create a new “git” Team Project in VSTS
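
A minimal sketch of the PATH step, assuming Git-TF was extracted to the hypothetical folder C:\Tools\git-tf (adjust to your own location):

    rem For the current command prompt session only (use System Properties > Environment Variables for a permanent change)
    set PATH=%PATH%;C:\Tools\git-tf

    rem Verify that both tools are reachable (git-tf without arguments prints its usage)
    git --version
    git-tf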

Migration Steps:

  • Open a command-line prompt and navigate to a directory where you want to host the local git repository
  • Call git-tf clone to copy all TFS changeset history from TFVC into a new local Git repo. The first argument is the Team Project Collection URL, the second argument is the TF version control path to the exact branch, and the command ends with the --deep flag to ensure that the full history of the branch is converted into separate commits in the Git repo. Pass your credentials to connect to TFS and execute the command (see the sketch below).

[Screenshot: git-tf clone command and output]
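
A sketch of what the clone command can look like; the collection URL and branch path below are hypothetical, so replace them with your own:

    git-tf clone https://fabrikam.visualstudio.com/DefaultCollection $/MyTeamProject/Main --deep

You will be prompted for your TFS credentials. Without the --deep flag only the latest changeset would be brought over as a single commit.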

  • Once you have a local Git repository, it's easy to push it to an empty central VSTS Git repository. First use the git remote add command to link your local Git repo to the remote "origin" and afterwards you can push all changes via git push (see the sketch below).

[Screenshot: git remote add and git push commands]
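
A sketch with a hypothetical remote URL (take the clone URL of the empty VSTS Git repo):

    git remote add origin https://fabrikam.visualstudio.com/DefaultCollection/_git/MyTeamProject
    git push -u origin --all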

Navigate to the Code Hub in your VSTS Team Project and you should see the full code history inside the Git repo. A big plus is that the original changeset date/time stamps are now part of the git commit info.


    Work Item Query via TFS API and the dayPrecision parameter

    March 7, 2016

By default TFS doesn't pay attention to the time part in work item queries when comparing datetime values. If you launch a query and you need to take the exact timestamp into account, you must switch off the dayPrecision parameter in the Query constructor.

[Code screenshot: using the dayPrecision parameter in the Query constructor]
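
A minimal sketch of such a query, assuming the Query(WorkItemStore, String, IDictionary, Boolean) overload and a hypothetical collection URL:

    using System;
    using System.Collections;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    class Program
    {
        static void Main()
        {
            var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                new Uri("https://tfs.fabrikam.com/tfs/DefaultCollection"));
            var store = collection.GetService<WorkItemStore>();

            // The datetime literal includes a time part, so dayPrecision must be switched off
            string wiql = "SELECT [System.Id] FROM WorkItems " +
                          "WHERE [System.ChangedDate] > '2016-03-07 14:30'";

            // Last argument = dayPrecision: false => compare on the exact timestamp
            var query = new Query(store, wiql, new Hashtable(), false);
            WorkItemCollection results = query.RunQuery();
            Console.WriteLine(results.Count + " work item(s) found");
        }
    }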

    MSDN documentation: https://msdn.microsoft.com/en-us/library/bb133075(v=vs.120).aspx

    Mystery resolved!

     


    Global .NET Versioning Strategy – AssemblyInformationalVersion

    August 24, 2015

Ever heard of the third (optional) versioning attribute in the AssemblyInfo files: AssemblyInformationalVersion? No? Please read on!

Without a methodical (assembly) version numbering strategy, the ability to determine what changes were included in which version is lost. In my opinion, you always need to know exactly which source files went into which build and which version of the software is currently deployed in a specific environment. A random version numbering system creates confusion and will sooner or later cause deployment risks. It becomes a nightmare to fetch the exact source files to reproduce a bug from production.

    All versioning of .NET assemblies that use the common language runtime is done at the assembly level. The specific version of an assembly and the versions of dependent assemblies are recorded in the assembly’s manifest. The default version policy for the runtime is that applications run only with the versions they were built and tested with, unless overridden by explicit version policy in configuration files.

    Each .NET project has an AssemblyInfo file which contains an AssemblyVersion attribute and an AssemblyFileVersion attribute.

• AssemblyVersion: this is the version number used by the .NET Framework during build and at runtime to locate, link and load the assemblies. When you add a reference to any assembly in your project, it is this version number that gets embedded. At runtime, the CLR looks for an assembly with this version number to load. But remember: this version is used together with the name, public key token and culture information only if the assemblies are strong-name signed. If assemblies are not strong-name signed, only file names are used for loading.
• AssemblyFileVersion: this is the version number given to a file in the file system. It is displayed by Windows Explorer. It's never used by the .NET Framework or runtime for referencing.

    But what about this difference between AssemblyVersion and AssemblyFileVersion? Many times, I see that the same version is applied to both attributes … but why are these two (different) attributes provided by the .NET Framework? The AssemblyVersion should be the public version of an entire software application, while the AssemblyFileVersion is more the version of a specific component which may only be a small part of the entire application. The AssemblyFileVersion is the best place to put extra build version information which can be important for patching individual components of a software application.

Please follow the Semantic Versioning recommendations to dictate how the AssemblyVersion should be assigned and incremented. For the AssemblyFileVersion, I tend to include specific build information. Often, you will need to build (and test) a specific SemVer version of your software a number of times.

For example: release 1 of a software application could have the AssemblyVersion set to 1.0.0 (all components), while the AssemblyFileVersion of the individual components could be set to 1.0.15234.2, which refers to a unique build number of the build system and is linked to a particular date and a revision: "15" = year 2015; "234" = day number in 2015; "2" = second build processed that day. This also allows you to later patch individual components in production with the same AssemblyVersion (1.0.0), but a different AssemblyFileVersion (1.0.15235.1).
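
Sketched in an AssemblyInfo.cs file, the example above looks like this:

    using System.Reflection;

    // Public (SemVer) version of the entire application release
    [assembly: AssemblyVersion("1.0.0")]

    // Unique, build-specific version of this component:
    // "15" = year 2015, "234" = day number in 2015, "2" = second build of that day
    [assembly: AssemblyFileVersion("1.0.15234.2")]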

    So, let’s try to apply this to a test project in Visual Studio and see the assembly details after building the project …

[Screenshot: assembly details (file properties) after building the test project]

Now you should be confused! Why does the Product Version display the AssemblyFileVersion, and where's the AssemblyVersion? The problem here is that a new Visual Studio project doesn't include a third version attribute, AssemblyInformationalVersion, which is intended to represent the public version of your entire software application. Note that the CLR doesn't care about this third (optional) version attribute. In short, the same Semantic Versioning rules as for the AssemblyVersion should be applied to the AssemblyInformationalVersion.
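
Adding the missing attribute to the AssemblyInfo file is a one-liner; a sketch that matches the example above:

    // Public product version, shown as "Product Version" in the file properties.
    // The CLR ignores this attribute, so it may even be a free-form string such as "1.0.0-beta1".
    [assembly: AssemblyInformationalVersion("1.0.0")]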

[Screenshot: assembly details now showing the AssemblyInformationalVersion as Product Version]

Aha! This looks much better, right? Now it's also easy to extract this metadata from your deployed assemblies and this information can be nicely listed in the about box of your software. The only issue with this approach is that the AssemblyFileVersion doesn't include the "patch" number (Semantic Versioning) of the AssemblyVersion, but this can be accepted given that the AssemblyFileVersion will be unique and can be linked to a unique build run in the build system. This way of working is my personal interpretation of how versioning can be properly applied in complex software applications and doesn't reflect official guidelines from Microsoft. My goal here is to make software developers aware of the potential risks of not having a clear versioning strategy.

Now, forget about manually setting version information in the AssemblyInfo files and never ever release software from a local Visual Studio build. In a streamlined build process, generating unique build and version numbers is centrally coordinated. For effective troubleshooting and traceability, it's imperative that the generated assemblies are stamped with a unique identifier that can be easily traced back to a unique build number in the build system.

In a follow-up post I will talk about how you can achieve this global .NET versioning strategy with the new build system in TFS 2015.


    Split test runs for TFS Build and inspect test results

    September 22, 2014

As a consultant, I often have to deal with custom requests which cannot be handled in TFS out-of-the-box. Many customizations end up becoming valuable for other customers as well. Unfortunately I don't always find the time to write about them and to work out a more generic solution which could help other people.

But recently I got an interesting request to intervene during the test run on the TFS Build Server, because a complete test run took way too much time. The solution which was built on the server consisted of a big set of Unit Tests and a big set of Integration Tests. The Integration Tests required a deployment of a SQL Server database with some reference data. All tests were run at the same time and this caused builds to run for a long time, even if one of the Unit Tests failed at the beginning of the test run. The test run only completes (success/failure) after running ALL tests.

So, the goal was to quickly detect when a test fails (= fail early!) and to have the possibility to stop the build process immediately after the first test failure (= stop/fail the build at the point one of the tests fails). The customer didn't see any added value in running the remaining tests, knowing that one test had already failed. Instead of waiting 30 minutes or longer for the full test results, the developers could already start fixing the first test failure, and stopping the build would also decrease the load on the build server and test environment. We also agreed to only deploy the database when all Unit Tests succeeded.

    How to separate the Integration Tests from the Unit Tests?

[Screenshot: sample solution with separate Unit Test and Integration Test projects]

    My sample solution above contains 2 separate projects/assemblies to host the Unit Tests and the Integration Tests. During the configuration of a Build Definition, you can easily define 2 separate test runs.

[Screenshot: two test run definitions in the build definition]

    The first test run definition will only fetch the Unit Tests, while the second test run definition will look for the Integration Tests. Note that I specified a specific name for the test run definition. I will use this name later to filter the test run definitions. Creating multiple test run definitions is a flexible and easy way to split your big test run in multiple smaller test runs.

    How to filter the test run definitions before the execution of a Test Run?

    Time to customize the build process a bit so that first only the Unit Tests can be run before deciding to proceed with a database deployment and the run of the Integration Tests.

[Screenshot: customized build process template]

    Instead of running the default VS Test Runner activity which would run all test run definitions (“AutomatedTests”), you need to filter for the Unit Tests. This can be done by modifying the TestSpec parameter for the RunAgileTestRunner activity. A where clause is added to the AutomatedTests value to search only for the “UnitTests” test run definition(s).

[Screenshot: TestSpec filter on the AutomatedTests value for the "UnitTests" test run definition]

    Result => only the Unit Tests will be executed by the VS Test Runner.

    After this test run we can check the TestStatus of the build to stop the build or (in case of no test failures) to continue with a database deployment and the run of the Integration Tests.
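
In the customized template this boils down to an If activity right after the first test run. A sketch of the kind of condition you can use, assuming the standard BuildDetail workflow variable of the default template is available:

    BuildDetail.TestStatus = Microsoft.TeamFoundation.Build.Client.BuildPhaseStatus.Failed

When the condition evaluates to true, the build can be stopped; otherwise the workflow continues with the database deployment and the Integration Tests.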

    In the ForEach activity after the database deployment operation I added a TestSpec filter for the AutomatedTests to only fetch the “IntegrationTests”.

[Screenshot: TestSpec filter for the "IntegrationTests" test run definition]

The sequence in the ForEach activity will then call the VS Test Runner again and check for test failures to potentially stop the build in case of a failure in the Integration Tests.

    The more fine-grained you define your Integration Tests (= different test run definitions, making use of the test assembly file specification or the test case filter), the sooner you can inspect the test results and fail the build without running the other set(s) of Integration Tests.

    Inspect Test Results during a Test Run (no filters)?

    In the beginning, I started looking into options to inspect the test results during the ongoing one-and-only test run (no different test run definitions / no requirement for filters). I quickly stumbled on this MSDN page to learn more about the VSTest.Console.exe command-line options. By adding the switch /Logger:trx it’s possible to drop the test results into a Visual Studio Test Results File (.trx), but the problem is that the .trx file is only persisted to disk once the test run finishes. I didn’t find a way to get to the test results while the test run was still executing.
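
For reference, a sketch of such a command line (the assembly name is just an example):

    vstest.console.exe IntegrationTests.dll /Logger:trx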

To stop the build in the customized build process template, I threw an exception, which is sufficient to stop the build process.

    You can download the build process template which I used to write this blog entry. It’s far from complete and not fully tested, but it could help you to understand the filtering of the test run definitions.


    TFS Migration Upgrade – “Scale-Out” Reporting Services issues

    July 14, 2014

Last Friday I started a long upgrade process to migrate from an old TFS 2005 environment to the latest and greatest: TFS 2013 Update 3 RC. As you know, upgrading from TFS 2005 to TFS 2013 is only possible via an intermediate upgrade to TFS 2010. Both upgrades were full migration upgrades to make use of new hardware. One important note here: make sure you still use Windows Server 2008 R2 to set up the TFS 2010 environment. TFS 2010 is not supported on Windows Server 2012!


    Anyway, during both migration upgrades, I ended up with a “Reporting” error during the verification process in the upgrade wizard.

[Screenshot: "Reporting" error during the verification process in the upgrade wizard]

    Having done many upgrades before, I immediately had an idea what went wrong and checked the Scale-Out Deployment configuration in SQL Reporting Services.

[Screenshots: Scale-Out Deployment configuration in SQL Reporting Services, with the old TFS server still joined]

Because this was a full migration upgrade (Data Tier + Application Tier), the "old" TFS server (TFS-02) was still joined and blocked the verification process for the Reporting feature in the TFS Upgrade Wizard.

Removing the server from the UI generates an error, so you need to open a command prompt to complete the removal process.

[Screenshot: error when removing the server from the Scale-Out Deployment UI]

Use the rskeymgmt list command and provide the SQL Server instance name if you did not use the default one (MSSQLSERVER). This command will return the GUIDs of the "new" and "old" Report Servers.

[Screenshot: rskeymgmt list output with both Report Server GUIDs]
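
A sketch of that command; the instance switch is only needed for a non-default instance (the instance name below is just an example):

    rskeymgmt -l
    rem for a named instance:
    rskeymgmt -l -i MyInstance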

Copy the "old" GUID and paste that string into the rskeymgmt remove command.

[Screenshot: rskeymgmt remove command]
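
Again a sketch; replace the placeholder with the GUID of the old Report Server returned by the list command:

    rskeymgmt -r <guid-of-the-old-report-server>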

    All green now!

[Screenshot: upgrade wizard verification all green]

    Would love to see this workaround pop up in the TFS Installation Guide for a TFS Migration upgrade.

    All’s well that ends well!



    Techorama Belgium – ALM and more!

    March 30, 2014

For those who have been living under a rock for the last couple of months: you might not know it yet, but there's a new big international developer conference in Belgium which will take place on May 27 and May 28: Techorama.

[Techorama logo]

Together with Gill and Kevin, I decided to roll up my sleeves! Read the full context on why we are doing this.

    Only 57 days left, so time to have a look at the ALM Track. I’m extremely happy with the speakers who will show up on stage.

    May 27, 2014 (Day 1):

    May 28, 2014 (Day 2):

You have to agree, this is a great line-up, and next to the ALM track there are many other interesting sessions to follow that fit into one of the other tracks: Cloud, Mobile, Web, Languages & Tools, and SQL/SharePoint for Devs. Have a look at the full agenda.

So, what's your reason not to be present at Techorama? What's missing to get you there? Let us know, because over the years we want to make this the best dev conference in Belgium and beyond!

    That’s why we are also promoting Techorama in other countries. Last month we did a Techorama on Tour event in London where Gill and I delivered two technical sessions on Windows 8.1 and Visual Studio Release Management. Other on Tour events are planned but not confirmed yet …

    On top of the breakout sessions, Techorama will deliver two inspiring keynotes and during the conference you will have the opportunity to meet your peers and visit our partners.

    See you at the first edition of Techorama in Mechelen! All feedback is welcome!


    The evolution of ALM/TFS – pdf available for download

    December 3, 2013

A few weeks ago, I started publishing different parts of an article on the evolution of Application Lifecycle Management.

    Part I: Introduction

    Part II: Diving into the basics of ALM and how did Microsoft start with an ALM solution?

    Part III: Heterogeneous Software Development

    Part IV: A fully integrated testing experience with TFS 2010

    Part V: TFS 2012 and Continuous Value Delivery

    Part VI: TFS 2013 and Visual Studio Online

    Part VII: Conclusion

    You can now also download the full article in pdf-format (25 pages in total – 2MB).

    Happy reading!

