Global .NET Versioning Strategy – AssemblyInformationalVersion

August 24, 2015

Ever heard of the third (optional) versioning attribute in the AssemblyInfo files, AssemblyInformationalVersion? No? Please read on!

Without a methodical (assembly) version numbering strategy, the ability to determine which changes were included in which version is lost. In my opinion, you always need to know exactly which source files went into which build and which version of the software is currently deployed in a specific environment. A random version numbering system creates confusion and will sooner or later cause deployment risks. It becomes a nightmare to fetch the exact source files to reproduce a bug from production.

All versioning of .NET assemblies that use the common language runtime is done at the assembly level. The specific version of an assembly and the versions of dependent assemblies are recorded in the assembly’s manifest. The default version policy for the runtime is that applications run only with the versions they were built and tested with, unless overridden by explicit version policy in configuration files.

Each .NET project has an AssemblyInfo file which contains an AssemblyVersion attribute and an AssemblyFileVersion attribute.

  • AssemblyVersion: the version number used by the .NET Framework during build and at runtime to locate, link and load the assemblies. When you add a reference to an assembly in your project, this is the version number that gets embedded. At runtime, the CLR looks for an assembly with this version number to load. But remember: this version is used along with the name, public key token and culture information only if the assemblies are strong-named. If the assemblies are not strong-named, only file names are used for loading.
  • AssemblyFileVersion: the version number given to the file in the file system. It is displayed by Windows Explorer and is never used by the .NET Framework or runtime for referencing. (Both attributes are shown in the sketch right after this list.)
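
For reference, this is what a new Visual Studio project generates in AssemblyInfo.cs (a minimal sketch; both attributes default to the same value):

    // AssemblyInfo.cs – default version attributes in a new Visual Studio project.
    using System.Reflection;

    // Identity version, used by the CLR for binding strong-named assemblies.
    [assembly: AssemblyVersion("1.0.0.0")]

    // Win32 file version, displayed by Windows Explorer.
    [assembly: AssemblyFileVersion("1.0.0.0")]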

But what about the difference between AssemblyVersion and AssemblyFileVersion? Many times I see the same version applied to both attributes … but why does the .NET Framework provide two (different) attributes? The AssemblyVersion should be the public version of an entire software application, while the AssemblyFileVersion is more the version of a specific component, which may be only a small part of the entire application. The AssemblyFileVersion is the best place to put extra build version information, which can be important for patching individual components of a software application.

Please follow the Semantic Versioning recommendations to dictate how the AssemblyVersion should be assigned and incremented. For the AssemblyFileVersion, I tend to include specific build information: often, you will need to build (and test) a specific SemVer version of your software a number of times.

For example: release 1 of a software application could have the AssemblyVersion set to 1.0.0 (all components), while the AssemblyFileVersion of the individual components could be set to 1.0.15234.2, which refers to a unique build number of the build system and is linked to a particular date and a revision: “15” = year 2015; “234” = day number within 2015; “2” = second build processed that day. This also allows you to later patch individual components in production with the same AssemblyVersion (1.0.0) but a different AssemblyFileVersion (1.0.15235.1).
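
To make the encoding concrete, here is a minimal sketch of how such a build number segment could be derived. The revision counter (builds per day) would normally come from the build system; here it is just a parameter:

    using System;

    class BuildNumberDemo
    {
        // Encodes a date and a build-of-day counter as "{yy}{dayOfYear}.{revision}".
        static string GetBuildSegment(DateTime date, int revision)
        {
            return string.Format("{0:D2}{1:D3}.{2}", date.Year % 100, date.DayOfYear, revision);
        }

        static void Main()
        {
            // Day 234 of 2015 is August 22, so the second build processed that
            // day produces the 1.0.15234.2 from the example above.
            Console.WriteLine("1.0." + GetBuildSegment(new DateTime(2015, 8, 22), 2));
        }
    }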

So, let’s try to apply this to a test project in Visual Studio and see the assembly details after building the project …

[Screenshot: Versioning-1 – file properties of the built assembly; “Product Version” shows the AssemblyFileVersion]

Now you should be confused! Why does the Product Version display the AssemblyFileVersion, and where’s the AssemblyVersion? The problem here is that a new Visual Studio project doesn’t include the third version attribute, AssemblyInformationalVersion, which is intended to represent the public version of your entire software application. Note that the CLR doesn’t care about this third (optional) version attribute. In short, the same Semantic Versioning rules as for the AssemblyVersion should be applied to the AssemblyInformationalVersion.
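
Putting it all together, AssemblyInfo.cs could look like this (a sketch of my interpretation, reusing the numbers from the example above):

    // AssemblyInfo.cs – all three version attributes combined.
    using System.Reflection;

    // Public SemVer version of the entire application, used by the CLR for binding.
    [assembly: AssemblyVersion("1.0.0")]

    // Unique build-specific component version: 1.0.{yy}{dayOfYear}.{buildOfDay}.
    [assembly: AssemblyFileVersion("1.0.15234.2")]

    // Public product version; shown by Windows Explorer as "Product Version".
    [assembly: AssemblyInformationalVersion("1.0.0")]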

[Screenshot: Versioning-2 – file properties after adding AssemblyInformationalVersion; “Product Version” now shows the public SemVer version]

Aha! This looks much better, right? Now it’s also easy to extract this metadata from your deployed assemblies, and this information can be nicely listed in the about box of your software. The only issue with this approach is that the AssemblyFileVersion doesn’t include the “patch” number (Semantic Versioning) of the AssemblyVersion, but this can be accepted given that the AssemblyFileVersion will be unique and can be linked to a unique run in the build system. This way of working is my personal interpretation of how versioning can be properly applied in complex software applications and doesn’t reflect official guidelines from Microsoft. My goal here is to make software developers aware of the potential risks of not having a clear versioning strategy.
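
As a sketch, this is how the metadata can be read back from a deployed assembly, e.g. to fill the about box (the assembly path is hypothetical):

    using System;
    using System.Diagnostics;
    using System.Reflection;

    class ShowVersions
    {
        static void Main()
        {
            string path = @"C:\MyApp\MyComponent.dll"; // hypothetical deployed assembly

            // AssemblyVersion is part of the assembly identity.
            Version assemblyVersion = AssemblyName.GetAssemblyName(path).Version;

            // FileVersion and ProductVersion come from the Win32 version resource,
            // fed by AssemblyFileVersion and AssemblyInformationalVersion.
            FileVersionInfo info = FileVersionInfo.GetVersionInfo(path);

            Console.WriteLine("AssemblyVersion:              {0}", assemblyVersion);
            Console.WriteLine("AssemblyFileVersion:          {0}", info.FileVersion);
            Console.WriteLine("AssemblyInformationalVersion: {0}", info.ProductVersion);
        }
    }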

Now, forget about manually setting version information in the AssemblyInfo files, and never ever release software from a local Visual Studio build. In a streamlined build process, generating unique build and version numbers is centrally coordinated. For effective troubleshooting and traceability, it’s imperative that the generated assemblies are stamped with a unique identifier that can easily be traced back to a system build number.

In a follow-up post, I will talk about how you can achieve this global .NET versioning strategy with the new build system in TFS 2015.


Split test runs for TFS Build and inspect test results

September 22, 2014

As a consultant, I often have to deal with custom requests which cannot be handled in TFS out-of-the-box. Many customizations end up becoming valuable for other customers as well. Unfortunately, I don’t always find the time to write about them and to work out a more generic solution which could help other people.

Recently, however, I got an interesting request to intervene in the test run on the TFS Build Server because a complete test run took way too much time. The solution being built on the server consisted of a big set of Unit Tests and a big set of Integration Tests. The Integration Tests required the deployment of a SQL Server database with some reference data. All tests were executed in a single run, which caused builds to run for a long time, even if one of the Unit Tests failed at the beginning of the test run: the test run only completes (success/failure) after running ALL tests.

So, the goal was to quickly detect when a test fails (= fail early!) and to have the possibility to stop the build process immediately after the first test failure (= stop/fail the build at the point where one of the tests fails). The customer didn’t see any added value in running the remaining tests, knowing that one test had already failed. Instead of waiting 30 minutes or longer for the full test results, the developers could already start fixing the first test failure, and stopping the build early would also decrease the load on the build server and the test environment. We also agreed to only deploy the database when all Unit Tests succeeded.

How to separate the Integration Tests from the Unit Tests?

[Screenshot: sample solution with separate Unit Test and Integration Test projects]

My sample solution above contains 2 separate projects/assemblies to host the Unit Tests and the Integration Tests. During the configuration of a Build Definition, you can easily define 2 separate test runs.

[Screenshot: build definition with two test run definitions – UnitTests and IntegrationTests]

The first test run definition will only fetch the Unit Tests, while the second test run definition will look for the Integration Tests. Note that I specified a specific name for each test run definition; I will use this name later to filter the test run definitions. Creating multiple test run definitions is a flexible and easy way to split one big test run into multiple smaller test runs.

How to filter the test run definitions before the execution of a Test Run?

Time to customize the build process a bit, so that only the Unit Tests are run first, before deciding whether to proceed with the database deployment and the Integration Tests run.

[Screenshot: customized build process template in the workflow designer]

Instead of running the default VS Test Runner activity, which would execute all test run definitions (“AutomatedTests”), you need to filter for the Unit Tests. This can be done by modifying the TestSpec parameter of the RunAgileTestRunner activity: a where clause is added to the AutomatedTests value to select only the “UnitTests” test run definition(s).

[Screenshot: TestSpec filter expression on the RunAgileTestRunner activity]

Result => only the Unit Tests will be executed by the VS Test Runner.

After this test run, we can check the TestStatus of the build to stop the build or (if no tests failed) to continue with the database deployment and the Integration Tests run.

In the ForEach activity after the database deployment operation, I added a TestSpec filter on the AutomatedTests value to only fetch the “IntegrationTests” run.

[Screenshot: TestSpec filter in the ForEach activity, selecting the IntegrationTests run]

The sequence in the ForEach activity will then call the VS Test Runner again and check for test failures, to potentially stop the build in case of a failure in the Integration Tests.

The more fine-grained you define your Integration Tests (= different test run definitions, making use of the test assembly file specification or the test case filter), the sooner you can inspect the test results and fail the build without running the other set(s) of Integration Tests.

Inspect Test Results during a Test Run (no filters)?

In the beginning, I looked into options to inspect the test results during the ongoing one-and-only test run (no separate test run definitions / no need for filters). I quickly stumbled on this MSDN page to learn more about the VSTest.Console.exe command-line options. By adding the switch /Logger:trx, it’s possible to drop the test results into a Visual Studio Test Results file (.trx), but the problem is that the .trx file is only persisted to disk once the test run finishes. I didn’t find a way to get to the test results while the test run was still executing.
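
For reference, such a run looks similar to this from the command line (the assembly name is hypothetical; /Logger:trx and /TestCaseFilter are documented switches):

    REM Run the tests and log the results to a .trx file; the file is only
    REM written to disk after the complete run has finished.
    vstest.console.exe IntegrationTests.dll /Logger:trx

    REM A test case filter can narrow the run further, e.g. by test category.
    vstest.console.exe IntegrationTests.dll /Logger:trx /TestCaseFilter:"TestCategory=Integration"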

To stop the build in the customized build process template, I throw an exception, which is sufficient to stop the build process.
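
For illustration, a custom workflow activity along those lines could look like this minimal sketch (class and message names are mine, not the ones from the downloadable template):

    using System;
    using System.Activities;
    using Microsoft.TeamFoundation.Build.Client;

    // A sketch of a build activity that stops the build when the test phase failed.
    [BuildActivity(HostEnvironmentOption.All)]
    public sealed class FailBuildOnTestFailure : CodeActivity
    {
        protected override void Execute(CodeActivityContext context)
        {
            // The build workflow exposes the current build as an extension.
            IBuildDetail buildDetail = context.GetExtension<IBuildDetail>();

            if (buildDetail.TestStatus == BuildPhaseStatus.Failed)
            {
                // An unhandled exception terminates the build process immediately.
                throw new InvalidOperationException("Stopping the build: a test run failed.");
            }
        }
    }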

You can download the build process template which I used to write this blog entry. It’s far from complete and not fully tested, but it could help you understand the filtering of the test run definitions.


TFS Migration Upgrade – “Scale-Out” Reporting Services issues

July 14, 2014

Last Friday I started a long upgrade process to migrate an old TFS 2005 environment to the latest and greatest: TFS 2013 Update 3 RC. As you may know, upgrading from TFS 2005 to TFS 2013 is only possible via an intermediate upgrade to TFS 2010. Both upgrades were full migration upgrades to make use of new hardware. One important note here: make sure you still use Windows Server 2008 R2 to set up the intermediate TFS 2010 environment, because TFS 2010 is not supported on Windows Server 2012!


Anyway, during both migration upgrades, I ended up with a “Reporting” error during the verification process in the upgrade wizard.

[Screenshot: “Reporting” error in the verification step of the TFS Upgrade Wizard]

Having done many upgrades before, I immediately had an idea of what went wrong and checked the Scale-Out Deployment configuration in SQL Server Reporting Services.

[Screenshots: Scale-Out Deployment settings in Reporting Services Configuration Manager, with the old server still joined]

Because this was a full migration upgrade (Data Tier + Application Tier), the “old” TFS server (TFS-02) was still joined to the scale-out deployment, and this blocked the verification of the Reporting feature in the TFS Upgrade Wizard.

Removing the server from the UI generates an error, so you need to open a command prompt to complete the removal.

[Screenshot: error when removing the server from the Scale-Out Deployment UI]

Use the rskeymgmt list command, and provide the SQL Server instance name if you did not use the default one (MSSQLSERVER). This command returns the GUIDs of the “new” and “old” Report Server installations.

[Screenshot: rskeymgmt list output with the GUIDs of both Report Server installations]

Copy the “old” GUID and paste it into the rskeymgmt remove command.
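
As a sketch from a command prompt on the Report Server (the GUID below is a placeholder for the value returned by the list command):

    REM List the joined Report Server installations and their installation IDs.
    REM Add -i <instance> if you did not use the default instance (MSSQLSERVER).
    rskeymgmt -l

    REM Remove the "old" server by its installation ID.
    rskeymgmt -r 00000000-0000-0000-0000-000000000000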

[Screenshot: rskeymgmt remove command completing successfully]

All green now!

[Screenshot: Reporting verification now passes in the TFS Upgrade Wizard]

I would love to see this workaround pop up in the TFS Installation Guide for a TFS migration upgrade.

All’s well that ends well!



Techorama Belgium – ALM and more!

March 30, 2014

For those who have lived under a rock for the last couple of months: there’s a new big international developer conference in Belgium which will take place on May 27 and May 28: Techorama.


Together with Gill and Kevin, I decided to roll up my sleeves! Read the full story on why we are doing this.

Only 57 days left, so time to have a look at the ALM Track. I’m extremely happy with the speakers who will show up on stage.

May 27, 2014 (Day 1):

May 28, 2014 (Day 2):

You have to agree, this is a great lineup, and next to the ALM track there are many other interesting sessions to follow in one of the other tracks: Cloud, Mobile, Web, Languages & Tools and SQL/SharePoint for Devs. Have a look at the full agenda.

So, what’s your reason not to be present at Techorama? What’s missing to get you there? Let us know, because over the years we want to make this the best dev conference in Belgium and beyond!

That’s why we are also promoting Techorama in other countries. Last month we did a Techorama on Tour event in London where Gill and I delivered two technical sessions on Windows 8.1 and Visual Studio Release Management. Other on Tour events are planned but not confirmed yet …

On top of the breakout sessions, Techorama will deliver two inspiring keynotes and during the conference you will have the opportunity to meet your peers and visit our partners.

See you at the first edition of Techorama in Mechelen! All feedback is welcome!


The evolution of ALM/TFS – pdf available for download

December 3, 2013

A few weeks ago, I started publishing the different parts of an article on the evolution of Application Lifecycle Management.

Part I: Introduction

Part II: Diving into the basics of ALM and how did Microsoft start with an ALM solution?

Part III: Heterogeneous Software Development

Part IV: A fully integrated testing experience with TFS 2010

Part V: TFS 2012 and Continuous Value Delivery

Part VI: TFS 2013 and Visual Studio Online

Part VII: Conclusion

You can now also download the full article in PDF format (25 pages in total – 2 MB).

Happy reading!


The evolution of Microsoft’s solution for Application Lifecycle Management: Team Foundation Server – Part VII

November 27, 2013

Part I: Introduction

Part II: Diving into the basics of ALM and how did Microsoft start with an ALM solution?

Part III: Heterogeneous Software Development

Part IV: A fully integrated testing experience with TFS 2010

Part V: TFS 2012 and Continuous Value Delivery

Part VI: TFS 2013 and Visual Studio Online

Part VII: Conclusion

I have been a “Team System” user/advocate since the early beta versions in 2004, and it’s amazing to experience at close range what has been delivered across all releases of Team Foundation Server to improve the quality and predictability of software development. While the first release wasn’t all that glamorous to get up-and-running, it definitely started a completely new way of collaborating as a team in software development projects.

Nevertheless, IT software development projects still have a very bad reputation when it comes to delivering quality software on time and on budget. Often this is due to a lack of experience and a lack of software craftsmanship, but on many projects it’s also due to the lack of a decent ALM vision and strategy. In a world where software is eating the world, every company is becoming a software company and should invest in adopting Application Lifecycle Management. It’s not about writing some code that runs on one specific box; it’s about bringing value to the market for the consumers of your product in a reliable and sustainable way. That’s why I like the cloud story for TFS and the increased delivery speed of new features so much. Have a closer look at the details in the Features Timeline page on Visual Studio Online to discover what has been released in recent months.

The last few years have been extremely interesting for those people living in the ALM space. The evolution of ALM tools in the market has been phenomenal. From very specific niche tools of small companies to the more all-in-one solutions of the traditional big players like HP, IBM and Microsoft. The specific niche tools are more widespread in the startup landscape and might often be the best fit for one specific purpose. The big vendors on the other hand will always have their place in large organizations, but the challenge will remain to integrate all active ALM related tools to improve the collaboration and efficiency between different stakeholders. Many people are still keen to have quick access to the data they need within their tool of choice (whatever tool is perceived as the master ALM tool). Data will need to synchronize flawlessly across different ALM tools/solutions without losing focus on the value of a corporate ALM strategy. Companies like TaskTop and OpsHub are already playing an important role in this area.


Another important evolution of Microsoft’s ALM solution has been the support for non-Windows environments. Team Explorer Everywhere (TEE) was the result of specific attention to cross-platform collaboration from a dedicated group inside Microsoft (after the acquisition of Teamprise). Nowadays, all TFS product teams inside Microsoft share a common vision to deliver ALM features for hybrid environments with extensibility in mind.

With the recent announcements of Application Insights and Monaco, Microsoft demonstrates its ambition to drive the industry further toward an improved and fast Build-Measure-Learn cycle (Lean ALM), with an increased focus on embracing the opportunities of cloud computing.

I would like to finish off with a link to the recent publication of the new Gartner report (November 19, 2013) on Application Development Lifecycle Management, where Microsoft has again been identified as a Leader in the Magic Quadrant.

Microsoft is a Leader in the ADLM market with a strong customer base and partner base, together with a solid stream of innovation. Microsoft offers one of the broadest sets of ADLM functionality available in the market; second only to IBM. Since delivering the Azure-based version of Microsoft TFS, the vendor has moved to a consistent release train, moving new features first to the cloud-based versions and then into on-premises releases.


The evolution of Microsoft’s solution for Application Lifecycle Management: Team Foundation Server – Part V

November 18, 2013

Part I: Introduction

Part II: Diving into the basics of ALM and how did Microsoft start with an ALM solution?

Part III: Heterogeneous Software Development

Part IV: A fully integrated testing experience with TFS 2010

Part V: About TFS 2012, the Cloud, Git and the new shipping cadence

The next major release of Team Foundation Server (TFS 2012) shipped on September 12, 2012. Some of the key new features in this release: full integration of Agile Project Management through a new, improved Web Access UI; involvement of non-development stakeholders in the software development process through PowerPoint Storyboarding and the Feedback Client; and increased developer productivity through a new Team Explorer UI and important version control enhancements such as Local Workspaces.

AgileProjectManagement

Continuous Value Delivery

The main theme of this release was Continuous Value Delivery. TFS 2012 delivered workflows and tools that shorten delivery cycles, include customers and operations (non-development stakeholders) in software construction, and reduce waste in the software development process. As a result, your organization should be able to reduce risks, solve problems faster, and continuously deliver value that exceeds customers’ expectations.

[Diagram: shortening the delivery cycle]

[Diagram: TFS 2012 overview]

Primary goals of ALM with Visual Studio 2012:

  • Prioritize collaboration among everyone involved in developing an application, incorporating customers as equal members of the development team.
  • Deliver timely and actionable feedback to reduce wasted effort and its associated costs.
  • Provide natural and appropriate tools for the tasks involved in designing, developing, testing, delivering, and maintaining quality software.
  • Support best practices for application development, while remaining independent of any specific methodology.

Reflecting on the impact this release had for many of my customers, it must have been the best release of Team Foundation Server. The majority of the big new features were easy to demonstrate, and it didn’t take long to convince decision makers of the business value of upgrading old TFS environments to TFS 2012, or to introduce it to new customers who were struggling with implementing ALM in general. There were so many known pain points in software development projects which could benefit heavily from built-in solutions in the TFS 2012 release. Think, for instance, of the possibility to shorten the feedback loop between the business and the development team, but also of important developer productivity features like explicit code reviews in the daily workflow of developers and a new, improved diff/merge experience. After feeling a bit lost in the redesigned Team Explorer UI in the beginning, it became clear that it focused on a more task-oriented approach, freeing the development team from many of the distractions that can occur when working on a complex project and enabling the team to work more quickly and more efficiently (for example: Suspend-and-Resume functionality to stay in the zone). Testing became a true first-class citizen in the development process, with a unified Test Explorer in Visual Studio and a unique way to perform automated and exploratory testing from Microsoft Test Manager.

DevOps

Team Foundation Server 2012 also provided a server monitoring solution for teams that use or want to adopt System Center Operations Manager (SCOM). A monitoring agent can be installed/configured for ASP.NET applications running on a web server. This agent collects rich data about exceptions, performance issues, and other errors. Using the TFS Connector for SCOM, operations staff can file these exceptions as operational issues (work items) in TFS and assign them to the development team to investigate, in order to improve and fix production web applications. Visual Studio and the TFS Connector, in conjunction with SCOM, provide a real-time improvement feedback loop for server-based applications deployed in production environments, leading to continuous improvement, a reduced mean time to repair (MTTR), and, in the end, better quality of the application in production.

[Diagram: real-time feedback loop between TFS and System Center Operations Manager]

The Cloud Story

More importantly, Microsoft had also been working very hard on an ALM “cloud” offering in this timeframe: Team Foundation Service, or Team Foundation Server in the cloud. A preview of Team Foundation Service was announced for the first time at the BUILD conference in 2011, and the public launch was communicated on June 11, 2012. What started as a technical experiment to get TFS running on Windows Azure eventually evolved into a new product which also completely shifted the release cadence for new Team Foundation Server releases. More on this in one of the next paragraphs.


The cloud version of Team Foundation Server provides (small) development teams easy access to ALM features (Version Control, Agile Collaboration and Automated Builds) without having to install and manage the server application on-premises. Not all existing on-premises TFS features, like Reporting and Lab Management, are available in Team Foundation Service, but it’s clear that this offering is quite compelling for small companies that want to be instantly up-and-running with the foundations of Application Lifecycle Management.

The Git Story

Also in 2012, Microsoft announced (basic) Git integration with Team Foundation Server via the open source Git-TF solution (http://gittf.codeplex.com/), which offered a set of cross-platform command-line tools (running on Windows, Mac and Linux) for sharing version control changes between TFS and Git. Instead of building its own Distributed Version Control System (DVCS), Microsoft decided to embrace Git (the most popular DVCS solution on the market) even more when it was publicly announced in January 2013 that Team Foundation Server would host genuine Git repositories and Visual Studio would get native Git support for managing local and hosted repositories. I remember Brian Harry introducing that idea at a private NDA event at the MVP Summit in February 2012 (almost a year before the public announcement). There were mixed opinions internally in the TFS product team, but to the Visual Studio ALM MVP group it immediately sounded like the way to go: adopt the best DVCS solution in the market! I’m still very glad Microsoft eventually took this decision, because nowadays it’s actually adding extra value for DVCS workflows in Visual Studio on top of Git, instead of playing catch-up with Git by building its own “Microsoft” DVCS solution. Git didn’t have the best client development experience on the Windows platform in the past, but with the Visual Studio Tools for Git, developers get native support for managing Git repositories inside Visual Studio, and they can also target hosted Git repositories in Team Foundation Service. Including Git in the broad ALM vision of Microsoft has today resulted in an integrated, enterprise-ready ALM offering with Git: version control, work item tracking and build automation. It’s good to see that Microsoft is now actively collaborating in the open-source space to provide the best possible experience for software developers, and it strengthens Microsoft’s presence on non-Windows platforms.

TFS Shipping Cadence

From the beginning, Team Foundation Server has been a boxed product with a 2 to 3 year release cycle. Bringing TFS to the cloud started a new, unknown era. Rolling out fixes/updates for an on-premises product or for a cloud service is totally different. Imagine a cloud service which doesn’t update for a long period of time: people would assume it’s dead and forgotten. Customers of TFS in the cloud expect the product to evolve much faster than a boxed product, and a service in the cloud must always be “on”. Delivering small and big updates to Team Foundation Server (different on-premises versions + cloud) was a big challenge for the TFS product team, but they have managed to align it pretty well by now. Team Foundation Service will always be the front-runner and is updated every three weeks, with features enabled/disabled by feature flags. Many features you will spot in the cloud may not be present for quite some time in the version of TFS you are running on-premises. On average, updates to VS/TFS (on-premises software) are pushed in a quarterly update, and new major versions of TFS are released yearly.

[Diagram: TFS shipping cadence – cloud updates every three weeks, quarterly on-premises updates, yearly major releases]

Part VI will cover the latest release of Visual Studio and Team Foundation Server: VS and TFS 2013. Stay tuned!

Part VI: TFS 2013 and Visual Studio Online

Part VII: Conclusion

 

