Version Control with Team Foundation Server 2010

January 11, 2011

On February 15, I will do a Microsoft Live Webcast on “What you should know about Version Control in Team Foundation Server 2010”.

Unlike Visual SourceSafe (VSS), which relied on a file-based storage mechanism, Team Foundation version control stores all artefacts, together with a record of all changes and current check-outs, in a SQL Server database, which makes it highly reliable and scalable. Alongside the most visible features of document versioning, locking, rollback, and atomic commits, it supports multiple simultaneous check-outs, conflict resolution, shelving and unshelving, branching and merging, and the ability to set security on any level of a source tree. The version control mechanism also integrates with TFS Work Item Management. TFS administrators can enforce check-in policies that require specific quality gates to pass, and individual versions of files can be assigned labels. This session is targeted at developers who want to know all the details about the new version control features in Team Foundation Server 2010.

Read more.

Update [May 4, 2011]: recording uploaded to Channel9

Watch recording


Timeout with TFS2010 Backup/Restore Power Tool

December 3, 2010

I previously already blogged about the TFS2010 Backup/Restore Power Tool, but there are still some gotchas you should be aware of.

At a customer where I made use of the TFS2010 Backup/Restore Power Tool, we ran into the (known) timeout issue during a TFS backup execution.

Active backup plan configuration: a full backup each week, a differential backup each day, and a transaction log backup every 30 minutes.

The timeout (600 seconds) was caused by very big transaction log files (> 15 GB) that couldn’t be copied to the backup location in time. No matter what backup plan configuration you choose, the transaction log files of all TFS databases keep growing, because the recovery model of the TFS databases is set to "Full". To keep it short: the Full recovery model is used because it provides greater protection for data than the Simple recovery model. It relies on backing up the transaction log to provide full recoverability and to prevent work loss in the broadest range of failure scenarios. More details on SQL Server recovery models can be found here.

As a quick fix, I changed the recovery model of the involved databases from Full to Simple and shrunk the log files. After that I switched the recovery model back to Full. But the issue with the growing transaction log files (+ timeout) will keep popping up in the (near) future …

So, I was thinking about setting the recovery model of the TFS databases to Simple permanently and switching to a nightly full backup (maximum loss of data = 1 day), assuming we would always be able to restore to one of those full backups … No! Just don’t do this! The Backup/Restore Power Tool relies on SQL marked transactions to keep consistency across the TFS (and dependency products) databases, and its marked transaction implementation requires the SQL recovery model to be set to Full. Thanks to the TFS product team for making this clear to me! Permanently switching to the Simple recovery model could result in a rollback to inconsistent TFS databases. More details on marked transactions can be found here.

A temporary solution is to manually switch to the Simple recovery model, shrink the log files and then switch back to the Full recovery model. The problem is that you would need to repeat this whenever the log files grow "too big". A better solution is to automate and schedule these actions for all involved TFS databases.

Here’s a sample SQL script that you might use:

-- Switch to the Simple recovery model so the log can be shrunk.
ALTER DATABASE [<DatabaseName>] SET RECOVERY SIMPLE WITH NO_WAIT
GO

USE [<DatabaseName>]
GO

-- Shrink the transaction log file (default log file name assumed).
DBCC SHRINKFILE (N'<DatabaseName>_log', 0, TRUNCATEONLY)
GO

-- Switch back to the Full recovery model.
ALTER DATABASE [<DatabaseName>] SET RECOVERY FULL WITH NO_WAIT
GO
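To automate and schedule this for every involved database, the per-database script above could simply be generated from a list of database names. A minimal sketch in Python (the database names below are typical TFS 2010 defaults and an assumption; substitute the databases of your own instance):

```python
# Generate the switch/shrink/switch-back script for a list of TFS databases.
# Database names are illustrative defaults (assumption); adjust to your instance.
TEMPLATE = """\
ALTER DATABASE [{db}] SET RECOVERY SIMPLE WITH NO_WAIT
GO
USE [{db}]
GO
DBCC SHRINKFILE (N'{db}_log', 0, TRUNCATEONLY)
GO
ALTER DATABASE [{db}] SET RECOVERY FULL WITH NO_WAIT
GO
"""

def build_shrink_script(databases):
    """Concatenate the per-database statements into one runnable script."""
    return "\n".join(TEMPLATE.format(db=db) for db in databases)

if __name__ == "__main__":
    dbs = ["Tfs_Configuration", "Tfs_DefaultCollection", "Tfs_Warehouse"]
    print(build_shrink_script(dbs))
```

The generated script could then be scheduled, for example as a SQL Server Agent job, so the log files never get the chance to grow "too big".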

The timeout issue and the log file sizes will be fixed in the next TFS Power Tools release (probably Q1 2011).

[Update March 13, 2011]

With the release of the new TFS Power Tools (March 2011), the timeout issue has been resolved. Don’t forget to disable the workaround script that shrinks the log files.


TFS2010 Backup/Restore Tool

October 19, 2010

Despite some known issues with the first version of the TFS2010 Backup/Restore Tool, it has already saved me a lot of time during different TFS2010 assignments. Manually setting up a complete backup plan for all involved databases is not that straightforward for non-database-administrators. I also like the neat integration with the existing Team Foundation Administration Console.

Some other obstacles I encountered during the TFS2010 Backup configuration:

  • System Check failed in the readiness check

    TF255118: The Windows Management Instrumentation (WMI) interface could not be contacted on this computer

    This failure was simply fixed by restarting the Windows Management Instrumentation service.

    [Screenshot: restarting the Windows Management Instrumentation service]

  • Grant Backup Plan Permissions failed in the readiness check

    Account “x” failed to create backups using path \\tfs2010\Backups 2010

    This failure had nothing to do with security or permissions, but the error was simply caused by a space in the network path. The network backup path must not contain a space!

Note that you shouldn’t (yet) back up the SharePoint databases with the TFS2010 Backup/Restore Tool.

You can download the TFS2010 Backup/Restore Tool as part of the TFS2010 Power Tools (September 2010).


TFS2010 Configuration issue in a Windows 2000 domain

October 12, 2010

I encountered an error while configuring Team Foundation Server 2010 on a Windows Server 2008 R2 machine (64-bit) that was joined to a Windows 2000 domain.

The error came up while running the system check verification in the TFS2010 configuration wizard.

TF255435: This computer is a member of an Active Directory domain, but the domain controllers are not accessible.  Network problems might be preventing access to the domain. Verify that the network is operational, and then retry the readiness checks.  Other options include configuring Team Foundation Server specifying a local account in the custom wizard or joining the computer to a workgroup.  http://go.microsoft.com/fwlink/?LinkID=164053&clcid=0x409

Note that the link will just bring you to the microsoft.com site and won’t help you in solving the error.

I first stumbled on this MSDN forum article, but I wasn’t really confident that this “solution” would work in my situation. The new virtual machine had been set up correctly in the domain from the start and wasn’t conflicting with any other machine(s).

Digging deeper into the configuration log file gave me this:

Exception Message: The trust relationship between this workstation and the primary domain failed.
(type SystemException)

Exception Stack Trace:    at System.Security.Principal.NTAccount.TranslateToSids(IdentityReferenceCollection sourceAccounts, Boolean& someFailed)
   at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess)
   at System.Security.Principal.NTAccount.Translate(Type targetType)
   at Microsoft.TeamFoundation.Common.UserNameUtil.GetMachineAccountName(String hostName)
   at Microsoft.TeamFoundation.Admin.VerifyDomainAccess.Verify()

Together with the fact that SIDs could not be resolved correctly on this machine when editing local groups, it was clear that something was wrong with the AD communication.

Apparently there’s a known problem with the LookupAccountName function failing to retrieve a security identifier (SID) for a domain account, but only on Windows Server 2008 R2 computers joined to a Windows 2000 domain.

After applying the available hotfix (KB 976494), everything was working again and the system check in the TFS2010 configuration wizard succeeded without warnings. Problem solved!

Once again a confirmation for me that installing/configuring Team Foundation Server in an enterprise environment is always a challenge, because there are so many different platforms involved: Active Directory, Internet Information Services, SQL Server, Reporting Services, Analysis Services, SharePoint, …


Live Meeting on Visual Studio Lab Management 2010

September 5, 2010

On September 15, I will do an MSDN Live Meeting on Visual Studio Lab Management.

Register here.


Screencast Visual Studio Lab Management 2010

August 7, 2010

As announced at VSLive last week in Seattle, Visual Studio Lab Management will go RTM at the end of August 2010. On top of the general availability, the Lab Management capabilities will become available to all customers who have licenses for Visual Studio 2010 Ultimate with MSDN or Visual Studio Test Professional with MSDN. This really rocks because it means that companies won’t have to pay additional licenses (as communicated in the past) for using Lab Management if they already have one of the above products.

I have already set up Visual Studio Lab Management twice, and after experimenting with it for a few months I must say this product has a big future. At many customers I have seen the pain of deploying and testing applications during the development phase. Many of these pains are properly addressed by Visual Studio Lab Management 2010.

To give you a small teaser of the product features, I prepared a 20-minute screencast about the build-deploy-test cycle in Visual Studio Lab Management. The screencast is available at Channel 9. Note that the sound is a bit rough during the first two minutes.

[Screencast: build-deploy-test cycle in Visual Studio Lab Management]

Summary of demo in the screencast:

The solution used for the demo contains a web application project and a database project. Automated UI tests with assertions are part of a dedicated Test Suite in a Test Plan in Microsoft Test Manager. A virtual environment has been created with two virtual machines (one serving as the web server and the other as the database server), in which a clean snapshot has been taken for deployment. The Lab Build takes the latest binaries of the solution and deploys the web application to the web server (msdeploy), while the database project is deployed to the database server. After deployment, the automated UI tests run in the virtual environment.

[screencast has been recorded and edited with Camtasia Studio]



The Gated Check-in build in TFS2010

April 18, 2010

Everybody should already be familiar with Continuous Integration, or should I say Continuous Building? Automatically building a development codeline after a check-in is often not immediately followed by an integration action towards a main branch. I picked up the term Continuous Building from this article by Martin Fowler.

Regardless of what this “build automation” should be called, there are many reasons to enforce this behavior on different branch types for your applications. The ultimate goal is to improve the quality of the software application and to reduce the time to release it to production. By setting up early validation (compilation, automated testing + other quality gates) through build automation, you will at least be notified as soon as possible of all kinds of validation errors (= quality check) and you will have a chance to fix them before other team members are impacted by doing a get latest on the repository.

Automatically firing a validation build after a check-in will, in the end, not prevent broken builds, and that’s where the Gated Check-in build comes into play in Team Foundation Server 2010.

The Gated Check-in build in TFS2010 prevents broken builds by not automatically committing your pending changes to the repository; instead, the system creates a separate shelveset that is picked up by the Gated Check-in build. The build itself finally decides whether the pending changes should be committed to the repository, based on the applied quality gates.

[Diagram: Gated Check-In Build process]

The picture above describes the full process of a Gated Check-In build.
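The flow in the picture boils down to a simple decision: shelve, validate, then commit or reject. Here’s a conceptual sketch in Python (purely illustrative; this models the flow only and is not the TFS API):

```python
# Conceptual model of a Gated Check-in build: pending changes are
# shelved instead of committed, a validation build runs against the
# shelveset, and only a passing build commits the changes.
def gated_checkin(pending_changes, validation_build):
    """validation_build is any callable taking the shelveset and
    returning True (all quality gates passed) or False."""
    shelveset = list(pending_changes)  # changes go to a shelveset first
    if validation_build(shelveset):
        # The build service commits the shelved changes on your behalf.
        return {"committed": shelveset, "status": "checked in by build"}
    # Nothing reaches the repository; the shelveset is kept for rework.
    return {"committed": [], "status": "rejected, shelveset preserved"}

result = gated_checkin(["Form1.cs", "Logic.cs"], lambda s: len(s) > 0)
```

The key point the model captures: a failing validation build never touches the repository, so the branch can’t break.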

How to set up a Gated Check-in build?

The Trigger tab in the Build Definition window now has an extra option for selecting Gated Check-in.

[Screenshot: Trigger tab with the Gated Check-in option]

The moment a developer attempts a check-in in a branch where the Gated Check-in build is active, he or she is presented with a dialog box.

[Screenshot: Gated Check-in dialog box]

Cancelling this window will not kick off the build, but it will also not commit your pending changes to the repository. If you really want to overrule the build by committing your changes directly to the repository, you may select the second checkbox to bypass the validation build (not recommended). By default your pending changes will remain in your local workspace (first checkbox). If you immediately want to start with new changes that don’t rely on the previous ones, it might be appropriate to uncheck the first option.

In the ideal situation, the build will complete without any validation errors and will eventually commit the changes to the repository. This will also lead to a Gated Check-in notification for the original committer via the Team Build Notification tool.

[Screenshot: Gated Check-in notification]

[Screenshot: check-in committed dialog with the Reconcile option]

If you had previously chosen to preserve the changes locally (the default), you may have noticed that the files you were working on were still checked out during the build … and after a successful build these changes no longer reflect the current state of the repository. With the above window you get the option to immediately reconcile your workspace with the up-to-date repository. Clicking the “Reconcile …” button gives you the opportunity to select the desired files, force an undo in your local workspace, and pick up the changes that were committed by the Gated Check-in build for these files.

Another way to reconcile your workspace (if, for example, you ignored this window or the build notification is way too slow) is to right-click the completed Gated Check-in build in the Build Explorer and select the option to reconcile your workspace.

[Screenshot: reconcile option in the Build Explorer]

If you did not choose to preserve the changes locally, there won’t be any changes to reconcile after the Gated Check-in build, even if you forced the reconciliation.

[Screenshot: no changes to reconcile]

The Gated Check-in build may also be kicked off manually, in which case you can create a new shelveset or point to an existing one.

[Screenshot: manually queueing a Gated Check-in build with a shelveset]

A last thing to note is that the comment originally supplied by the developer will be suffixed with the NoCICheckinComment variable (default = ***NO_CI***) to prevent another continuous integration build from being fired after the final check-in done by the Gated Check-in build.

[Screenshot: changeset comment with the ***NO_CI*** suffix]
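The suffixing and the resulting CI suppression can be illustrated with a small sketch (Python used for illustration only; the ***NO_CI*** default comes from the NoCICheckinComment variable mentioned above):

```python
NO_CI = "***NO_CI***"  # default value of the NoCICheckinComment variable

def finalize_comment(original_comment):
    """Comment as committed by the Gated Check-in build."""
    return f"{original_comment} {NO_CI}"

def ci_should_trigger(checkin_comment):
    """A CI trigger skips check-ins whose comment carries the marker."""
    return NO_CI not in checkin_comment

comment = finalize_comment("Fixed login validation")
# The gated build's final check-in will not fire another CI build:
assert ci_should_trigger(comment) is False
```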

Summary

What was meant to be a small post on the Gated Check-in feature in Team Foundation Server 2010 ended up as a more detailed explanation of how it works and how you can use it in the Visual Studio IDE. Remember that you should set up the most appropriate build types for your specific branches. Not all branches need a Gated Check-in build; only configure it for branches that should never have a broken build. A Gated Check-in build may, for example, validate a big merge operation from a development branch to a stable main branch.


Techdays Belgium 2010 – Session details

April 1, 2010

This year I presented a session at Techdays Belgium on Branching & Merging with Team Foundation Server 2010.

The session slides can be downloaded in the download section of this blog.

Watch recorded video (1 hour and 10 minutes) at Channel 9.

A demo on Branching & Merging with TFS2010 was the major part of the presentation, and I covered practically everything I wanted to share with the audience: branch metadata, fine-grained security, branching visualization, tracking individual changesets across branches, forward/reverse integration … except for one important little merge action that I forgot to show!

After creating my dev branches (from main), I also renamed the solution in those dev branches with an additional suffix to avoid confusion when loading different solutions into Visual Studio 2010. This was done in changeset 121, where I renamed the solution from WebsiteSparkles to WebsiteSparkles_dev1.

[Screenshot: changeset 121 with the solution rename]

Afterwards I made some code changes in the dev branches and pushed some explicit changesets back (Reverse Integration) to the main branch, using the cherry-pick option in the merge wizard to avoid also merging the solution rename.

As a result changeset 121 will always remain a merge candidate in the Source Control Merge Wizard.

[Screenshot: changeset 121 listed as a merge candidate]

In some cases you really want to merge changes back to main on the latest version of the development branch without cherry-picking all required changesets. To be able to do that, you need to get rid of changeset 121 as a merge candidate.

This can only be done through the command line with the tf merge /discard command, along the lines of tf merge /discard /recursive /version:C121~C121 $/WebsiteSparkles/dev1 $/WebsiteSparkles/main (the branch paths here are examples).

[Screenshot: running the tf merge /discard command]

This discard command will make sure that changeset 121 will not be a merge candidate anymore. Note that you still need to commit this action to the repository after executing the command. The discard command will only update your local workspace but won’t do an automatic check-in.

[Screenshot: pending merge change to check in after the discard]

The next time you run the merge wizard and look for merge candidates, changeset 121 won’t be listed anymore, and you can merge from the latest version of this development branch for upcoming changes.

Providing this discard command from within the source control merge wizard would be a very nice addition!


Running Coded UI Tests (from action recordings with MTLM) in Team Builds (TFS2010)

January 21, 2010

With Visual Studio 2010 (Premium/Ultimate) we are able to create several types of automated tests. Automated tests will execute a sequence of test steps and determine whether the tests pass or fail according to expected results.

Coded UI Tests provide functional testing of the user interface and validation of user interface controls.

How do you create Coded UI Tests? You could create them directly in Visual Studio, but for this blog post I want to start from an action recording in Microsoft Test and Lab Manager (MTLM). An action recording is quite useful for manual tests that you need to run multiple times and for recycling common steps in different manual tests that contain shared steps.

I created a simple test case with different test steps in MTLM to test some behavior on my website.

[Screenshot: test case definition in MTLM]

From MTLM I started a test run for this test case.

[Screenshot: test suite overview]

Before running the test, I need to enable the action recording to be sure my actions are captured for this test.

[Screenshot: enabling the action recording]

The Test Runner will give a detailed overview of the recorded actions. Afterwards you will be able to replay all these stored actions in the Test Runner.

[Screenshot: recorded actions in the Test Runner]

After saving the results of this test run (all data is associated with my test case), it’s time to open Visual Studio 2010 and create a Coded UI Test.

[Screenshot: test case attachments]

[Screenshot: creating a new Coded UI Test]

Instead of choosing the default option to record actions, I chose to use an existing action recording, after which I needed to pick the appropriate test case to link to the associated actions.

[Screenshot: picking the test case with the action recording]

After clicking OK, Visual Studio starts generating code that represents the actions that were recorded in Microsoft Test and Lab Manager. On top of that, you can add assertions on parts of the user interface in a separate Coded UI Test that you may reuse in other Coded UI Tests.

[Screenshot: adding assertions to the Coded UI Test]

Now, let’s integrate this entire UI test (MyCodedUITest) into the automated build. I created a default new build definition in which I also enabled running the automated tests.

[Screenshot: build definition with automated tests enabled]

To run tests that interact with the desktop during a Team Build, we need to modify the Build Service Host properties in the Team Foundation Administration Console to run the build service as an interactive process instead of as a Windows service.

[Screenshot: Build Service Host running as an interactive process]

That’s about it. Make sure the Build Service Host is running in the command window that pops up after starting it. Queue the build and explore the results!

[Screenshot: build and test results]

Done!

With this post I wanted to highlight the powerful integration of (automated) testing into the upcoming Visual Studio 2010 offering.


New Training offering by Sparkles + Speaking at TechDays Belgium 2010

January 19, 2010

With my new company Sparkles I don’t only provide ALM consultancy services; I’m also trying to set up advanced training courses in Belgium with local and international experts.

An exclusive partnership with IDesign is set up to bring the best training to Belgium. IDesign’s training courses are among the world’s most intensive, most comprehensive .NET training classes given by the IDesign architects who have a world-renowned reputation as industry leaders. The IDesign architects are all frequent speakers at major international software development conferences, where they present their techniques, ideas, tools and breakthroughs.

In the week of March 1, 2010, Brian Noyes (Chief Architect at IDesign) will be in Belgium (Antwerp) to deliver an intensive 5-day training on Architecting WPF Applications.

On May 3-4, 2010, I will also deliver, for the first time, a detailed training on the new Application Lifecycle Management features of Visual Studio 2010. The training class is called ALM with Visual Studio 2010.

This year Microsoft Techdays in Belgium are scheduled on March 30-31 + April 1 and I’m confirmed as a speaker. I will deliver a session on Branching and Merging with Team Foundation Server 2010.

In January I’m also starting to set up Team Foundation Server 2010 (Beta 2) at two new clients. More and more small development shops are seeing the benefits of a fully integrated development platform. Companies that were still in doubt a few years ago are now convinced by the promising upcoming release of Visual Studio 2010. The ALM train is on the rails! Very busy, but exciting times!