11/29/2012

Cloud Provider

This is the third of a series of blog posts around Cloud Computing. It gives an overview of current Cloud Computing trends and explains how to set up a private cloud environment with Microsoft's System Center products.

In this post the different cloud solutions currently available are compared. "Cloud" solutions which are not related to infrastructure are not considered in this overview, meaning Software-as-a-Service offerings and central data storage solutions like SkyDrive or iCloud are out of scope.
The following overview covers the main infrastructure and platform cloud solutions:

VMWare Cloud Foundry
  • Description: Open-source solution from VMWare, written in Ruby
  • Cloud Type: Private Cloud
  • Service Model: PaaS
  • Virtualization: VMWare ESX
  • Supported Technologies: ASP .NET, Java, Spring, Ruby on Rails, Sinatra, Node.js, Grails, Groovy, Scala, PHP, Python/Django
  • Supported Application Servers: Java Application Server
  • Supported Databases: Microsoft SQL Server, MySQL, Redis, MongoDB
  • Operating Systems: Linux, Windows, custom
  • Price Model: Not published yet (Beta)

Microsoft Cloud
  • Description: Public Cloud: Windows Azure; Private Cloud: System Center Suite
  • Cloud Type: Public and Private Cloud
  • Service Model: IaaS, PaaS
  • Virtualization: Windows Server Hyper-V, VMWare ESX, Citrix Xen
  • Supported Technologies: ASP .NET, Java, PHP, Ruby, Node.js
  • Supported Application Servers: IIS, Apache Tomcat
  • Supported Databases: Microsoft SQL Server
  • Operating Systems: Windows, Linux, custom VHD
  • Price Model: Pay-per-use

Amazon Elastic Compute Cloud (EC2) / Amazon Web Services (AWS) Elastic Beanstalk
  • Description: Amazon's cloud hosting solution
  • Cloud Type: Public Cloud
  • Service Model: IaaS, PaaS
  • Virtualization: Citrix Xen
  • Supported Technologies: Java, PHP, Ruby, Python, ASP .NET
  • Supported Application Servers: IIS, IBM WebSphere, Java Application Server, Oracle WebLogic Server, Apache
  • Supported Databases: Microsoft SQL Server, MySQL, Oracle
  • Operating Systems: Windows, Linux
  • Price Model: Pay-per-use

IBM SmartCloud
  • Description: SaaS/PaaS/IaaS solution from IBM
  • Cloud Type: Public Cloud
  • Service Model: IaaS, PaaS
  • Virtualization: IBM
  • Supported Technologies: Java, PHP
  • Supported Application Servers: IBM WebSphere
  • Supported Databases: Based on DB2 and compatible with Oracle
  • Operating Systems: Linux
  • Price Model: Price Table

Google AppScale
  • Description: Google's framework for developing highly scalable applications
  • Cloud Type: Public Cloud
  • Service Model: IaaS, PaaS
  • Virtualization: Xen, KVM, Eucalyptus
  • Supported Technologies: Java, Python, Ruby
  • Supported Application Servers: (custom)
  • Supported Databases: HBase, Hypertable, Apache Cassandra, MySQL Cluster, Redis
  • Operating Systems: Linux
  • Price Model: Quota-based Pricing

Public Cloud Solutions
Currently, the main players in Public Clouds are Microsoft and Amazon. Amazon comes from an IaaS approach and Microsoft started with PaaS. These are also the areas in which each provider leads the market.
Amazon has, in general, more experience with Cloud Computing. Its public cloud offers great flexibility and full control over the infrastructure; on the other hand, patch management has to be done by the customer. Amazon introduced AWS Elastic Beanstalk, which provides a standardized, automated deployment approach and the possibility to scale out applications and services easily. But the cloud consumer is still responsible for the underlying virtual machines and their update management.
Microsoft provides an extremely flexible PaaS solution which can be completely customized by invoking custom scripts. As long as installations and configurations are completely scripted and run in a Windows environment, there are no limitations with this approach. Microsoft even takes over the patch management for the underlying operating system.

Private Cloud Solutions
In the Private Cloud sector there are at the moment just two alternatives: Microsoft's System Center solution and VMWare's Cloud Foundry. Microsoft is the only vendor that actually supports both private and public cloud scenarios as well as a way to shift applications and services smoothly between private and public cloud data centers.
In the following posts I am going to explain in detail what Microsoft’s System Center Suite can offer and how to configure a private cloud environment.

Cloud Terminology

This is the second of a series of blog posts around Cloud Computing. It gives an overview of current Cloud Computing trends and explains how to set up a private cloud environment with Microsoft's System Center products.

This post explains the key criteria, different approaches and service models of cloud computing.

Cloud Computing Key Criteria
  • Elasticity and Flexibility
    Resources like CPU, memory and storage can be allocated dynamically, and it is possible to scale quickly both vertically and horizontally.
  • Self-Service
    The cloud is exposed to end-users in a way that lets them easily control the systems and resources they need. No human interaction is needed to create new virtual machines, deploy applications or scale them.
  • Completely Automated
    The most important criterion of cloud computing is the automation of the complete infrastructure. Systems and applications are provisioned automatically, without any human interaction.

Cloud Service Models
  • Infrastructure as a Service (IaaS)
    This service model gives the user the possibility to provision virtual machines at the operating system level and to deploy and run arbitrary software. The customer does not have any control over the underlying cloud infrastructure but can manage operating systems, storage and network components like firewalls and load balancers.
  • Platform as a Service (PaaS)
    The customer can deploy applications on PaaS cloud solutions without having access to the underlying infrastructure like operating system, network or servers. The applications are restricted to the supported programming languages and technologies. PaaS solutions provide automated deployment processes, monitoring and easy horizontal scaling features (spreading the load across multiple servers).
  • Software as a Service (SaaS)
    This model allows the customer to use applications hosted externally, like Office and Web Storage solutions. The customer does not have any access to the cloud platform itself.

In my opinion the SaaS model is not directly connected to cloud computing because it could mostly also be provided without a cloud infrastructure. If you take a look at the key criteria above, none of them really applies to Software-as-a-Service solutions. That does not mean that no SaaS applications run on a cloud platform; many do, to reduce operational overhead and gain flexibility and scalability. A cloud infrastructure is just not required to provide this kind of service.

Cloud Computing Types
  • Private Cloud
    The infrastructure runs internally within a company. The applications themselves can be exposed to the outside world, but self-service and administration happen within the company and its internal network.
  • Public Cloud
    Public Clouds are available over the internet and can be used by other companies to host their applications and services. The main public cloud solutions are Microsoft's Windows Azure, Google's App Engine and Amazon's Elastic Compute Cloud. They usually provide a pay-per-use price model where the costs depend on how much traffic and how many resources are used.
  • Hybrid Cloud
    Hybrid clouds are a mix of private cloud and public cloud infrastructures. They consist of company-internal and external infrastructure which form a federation to communicate with each other.

In the following posts I am mainly talking about private Clouds and how they can be built within a company-internal data center.

How to create a Private Cloud?

In the next couple of posts, I would like to describe the current trends around Cloud Computing and how you can actually set up a Private Cloud environment.

First of all, it is important to clarify some terms which are used in the Cloud Computing context. Cloud Computing is used in many cases as a marketing buzzword, and this causes a lot of confusion because everybody has a different picture of it.

In this post (and in my opinion also in general) the term Cloud Computing represents an approach to building up a highly flexible, scalable and completely automated infrastructure in a data center. This is mainly achieved by virtualizing and automating the complete infrastructure. Additionally, cloud computing is about making this infrastructure available as a self-service for end-users.

If you follow the advertisements around Cloud Computing you realize that the term is used more and more for any web application or web service that provides central data storage and makes it accessible from a variety of devices, like laptops, tablets, mobile phones and TVs. The TelekomCloud or Apple's iCloud are examples of central data storage, but they do not necessarily need a dynamic cloud infrastructure to serve these services.

In the following posts I am going to explain how you can build up a private cloud based on Microsoft's System Center 2012 components.

- Post 1: How to create a Cloud?
- Post 2: Cloud Terminology
- Post 3: Cloud Provider
- ...

NDepend 4.0 available

NDepend version 4.0 has been released. For all who do not know what NDepend is, take a look at my previous post.

NDepend is a great tool for static code analysis of .NET code. It is also available for Java ("JArchitect") and C++ ("CppDepend").

One of the main new features of version 4.0 is the new query syntax, which is based on Microsoft's LINQ. It provides a comprehensive way of analyzing your code.

The following example checks whether a base class uses one of its derivatives:
warnif count > 0
from baseClass in JustMyCode.Types
where baseClass.IsClass && baseClass.NbChildren > 0   // only classes that have derived types
let derivedClassesUsed = baseClass.DerivedTypes.UsedBy(baseClass)
where derivedClassesUsed.Count() > 0                  // ...and that use at least one of them
select new { baseClass, derivedClassesUsed }

This example is quite simple, but with these new query capabilities it is possible to write much more complex queries. One of the best examples can be found here: queries that highlight namespace cycles and mutually dependent namespaces. They allow checking whether the code follows a layered approach even at the namespace level. Namespace cycles or mutually dependent namespaces usually cause a higher effort when certain code parts have to be changed. The query verifies this and even suggests which namespace should not use the other one, based on which of the two namespaces uses more types of the other.


The image shows the mutually dependent namespaces. Furthermore it highlights that the namespace "TSTune.BL.DTO" should most probably not use "TSTune.BL.Logic", because 11 types of DTO are used by Logic while just 1 type of Logic is used by DTO. There is a high chance that the use of this one type is a mistake.

This example shows that the new query language of NDepend provides great possibilities to verify your code and identify problematic parts. If these checks are performed as part of the continuous integration process, such issues can be easily avoided, which in the end results in a much more maintainable solution.

9/22/2012

JSAnalyse for VS 2012 has been released

I am happy to announce a new release of JSAnalyse. You can download the latest version on the codeplex project JSAnalyse.

The new release has a couple of improvements:
- Visual Studio 2012 support
- Multiple JavaScript dependency diagrams can be created
- Enhanced Caching mechanism to support bigger object graphs
- Detects even more static references between JavaScript files

For those who do not know what JSAnalyse is, read my previous post "JSAnalyse published on codeplex".

8/27/2012

TFS 2012 Build Server Installation - Fails with error "System.FormatException: Index (zero based) must be greater than or equal to zero and less than the size of the argument list."

If you get the following error message during the TFS 2012 Build Server configuration:

"System.FormatException: Index (zero based) must be greater than or equal to zero and less than the size of the argument list."

This is a bug in the Build Server configuration tool. You can work around it by turning on your Windows Firewall.

The configuration tool tries to check the firewall and add an exception rule for the build server port, which throws an exception because the firewall is not running. Microsoft actually handles this exception, but within the catch block a warning message is written out, which unfortunately causes another exception.

Here is what Reflector shows:

Assembly: Microsoft.TeamFoundation.Build.Config.dll
Name: Microsoft.TeamFoundation.Build.Config, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
Class: BuildServiceHostUtilities
private static void RemovePermissions(Uri baseUrl, bool deleteFirewallException)
{
    if (baseUrl != null)
    {
        string permissionedUrl = GetPermissionedUrl(baseUrl);
        try
        {
            ConfigurationHelper.FreeUrlPrefix(permissionedUrl);
        }
        catch (Exception exception)
        {
            LogWarning(Resources.Format("CannotFreeUrlPrefix", new object[] { permissionedUrl, exception.Message }));
        }
        if (deleteFirewallException)
        {
            try
            {
                ConfigurationHelper.DisableFirewallException(baseUrl.Port);
            }
            catch (COMException exception2)
            {
                if (exception2.ErrorCode != -2147023143)
                {
                    LogWarning(Resources.Format("FailedDeletingPortExceptionFor", new object[] { baseUrl.Port, ExceptionFormatter.FormatExceptionForDisplay(exception2) }));
                }
            }
        }
    }
}

The RemovePermissions method in BuildServiceHostUtilities tries to remove the firewall exception for the Build Service port in the line ConfigurationHelper.DisableFirewallException(baseUrl.Port);

This causes a COMException which is just logged as a warning. So far, so good. But unfortunately the call that formats the warning message only gets two parameters passed. If we take a look at the resources of the DLL, the "FailedDeletingPortExceptionFor" text has three placeholders defined:

Resources.Format("FailedDeletingPortExceptionFor", new object[] { baseUrl.Port, ExceptionFormatter.FormatExceptionForDisplay(exception2) });

FailedDeletingPortExceptionFor=Failed to remove firewall exception {1} for port {0}. Details: {2}

This finally causes the "System.FormatException: Index (zero based) must be greater than or equal to zero and less than the size of the argument list.".
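
A minimal stand-alone repro of this behavior (the port number and exception text are made up; only the mismatch between the three placeholders and the two arguments matters):

using System;

class FormatBugRepro
{
    static void Main()
    {
        // The resource text expects three arguments ({0}, {1}, {2}),
        // but only two are passed - exactly what the configuration tool does.
        string text = string.Format(
            "Failed to remove firewall exception {1} for port {0}. Details: {2}",
            9191,
            "some exception text");

        // Never reached: string.Format throws
        // "System.FormatException: Index (zero based) must be greater than
        //  or equal to zero and less than the size of the argument list."
        Console.WriteLine(text);
    }
}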

7/31/2012

Feature Roll-Out

This is the fifteenth blog of a series of posts about the topics Continuous Integration and Continuous Delivery. It explains how important continuous integration is to deliver software in short iterations to the customer with a high quality standard.

As funny as it sounds, one of the main problems of Continuous Delivery is the permanent delivery itself. It can happen that a feature is not completely implemented yet and spans multiple releases until it is ready. The main idea of Continuous Delivery is to split big requirements into smaller chunks which still give the user new possibilities. Unfortunately, this is not always possible, and that is where the feature-flagging technique becomes important.

Feature flagging simply means that every big new piece of functionality should be built in a way that it can easily be turned on and off, as shown in the following listing:

public void PlaceOrder(Order order)
{
  var orderSystem = CreateNewInstance();
  orderSystem.Place(order);
}

public IOrderSystem CreateNewInstance()
{
  if (FeatureFlagManager.IsAvailable("NewOrderSystem"))
  {
    return new OrderSystem();
  }
  else
  {
    return new LegacyOrderSystem();
  }
}
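
The listing above uses a FeatureFlagManager without showing it. Here is a minimal sketch of such a component, assuming the flags are simply stored in the appSettings section of the configuration file (the class and method names come from the listing above; everything else is an assumption):

using System.Configuration;

public static class FeatureFlagManager
{
    // Reads a flag like <add key="Feature.NewOrderSystem" value="true" />
    // from app.config/web.config. A real implementation could also take the
    // current user or a percentage-based roll-out into account.
    public static bool IsAvailable(string featureName)
    {
        string value = ConfigurationManager.AppSettings["Feature." + featureName];

        bool enabled;
        return bool.TryParse(value, out enabled) && enabled;
    }
}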

Advantages of Feature Flagging
This approach gives a lot of advantages and great flexibility during the roll-out of a new version:
  • Features can be switched on and off, even for a certain group of users if the feature flag component has been implemented to support it.
  • A feature can be smoothly rolled out for a small group of users (like administrators, testers, people from a certain country, etc.) without affecting the other users. That way some people can test the feature in the real-world environment before it is available for public use.
  • The roll-out can be done step by step, and the effects on the whole system regarding performance or usability can be coordinated and monitored. This approach is especially useful in web applications with many users, where the load cannot realistically be simulated on a staging environment anymore.
  • If any problem occurs the old variant is just one click away and there is no need for a big rollback with possible data inconsistency or loss.
  • Furthermore, problems are identified by a small group of users and do not affect all users at once, which could otherwise cause an extreme increase in support tickets.

Problems with Feature Flagging
Of course, the trade-off of this approach is that the design of new components has to be thought through.
The code for implementing feature flagging (e.g. if clauses, factories or attributes) should not be spread across the whole code base, where it would make maintenance much more difficult. Once a feature has been completely rolled out, its flag should even be removed again to simplify the code.
The applied changes (e.g. database schema changes) have to be compatible with both code paths. This has to be considered anyhow in order to support hot deployments where the application stays online during a deployment.
Additionally, the test effort is higher because both cases have to be tested as well as the possible dependencies between these cases.

But in the end I think that feature flagging and step-by-step roll-out is a really important concept which is worth using in bigger web applications. It helps to reduce the risk of deployments dramatically.

Automated UI Testing

This is the fourteenth blog of a series of posts about the topics Continuous Integration and Continuous Delivery. It explains how important continuous integration is to deliver software in short iterations to the customer with a high quality standard.

After an automated deployment has been set up, we have the chance to test our application on the target platform. We can detect problems much earlier in the development phase and react to them. But manually testing applications is very time-consuming. Therefore, the basic key scenarios and regression tests should be automated. That gives us the possibility to execute those tests whenever changes have been made to the software (e.g. every night). The result is fast feedback about the state of the application. These tests can and should even be executed after every deployment in order to check the health of the software and identify configuration issues. This gives us reliable feedback on whether the application's main features are working or not. The result is higher customer satisfaction: even if not every bug is found upfront, at least the application does not break down after the first click and the main business can still be served. If a critical error which has not been found by the automated tests is reported by the customer, the tests should of course be extended.

Microsoft and HP provide great tools for automated UI tests. HP QuickTest is the market leader in this field and provides a mature and stable framework for automating UI tests. Microsoft’s Coded UI Tests are much newer and do not support as many UI technologies as HP QuickTest (e.g. Java and Flash applications are not supported out-of-the-box by the Microsoft Test Framework). But I would still consider the Coded UI Tests from Microsoft if you are working with the Team Foundation Server and Windows/Web Applications. The coded UI tests can be integrated into the CI build of the TFS (like Unit Tests) and therefore easily executed after deployments and scheduled by the TFS build system.

Short comparison between Microsoft Coded UI Tests and HP QuickTest:

  • Supported Platforms
    Microsoft Coded UI Tests: Windows, Web
    HP QuickTest: Windows, Web, Java, Flash, SAP, etc.
  • Test Types
    Microsoft Coded UI Tests: UI Tests, Functional Tests, Unit Tests, Performance Tests, Load Tests, Manual Tests
    HP QuickTest: UI Tests, Functional Tests
  • Maintainability
    Microsoft Coded UI Tests: Separation between object identification and test methods (complex UI Maps)
    HP QuickTest: Separation between object identification and test methods (simple object repository)
  • TFS Integration
    Microsoft Coded UI Tests: Highly integrated with test, bug and task management as well as the build system
    HP QuickTest: Plug-in needed (see HP Quality Center Synchronizer - TFS Adapter)
  • Custom Extensions
    Microsoft Coded UI Tests: Open architecture with support for writing a variety of extensions
    HP QuickTest: Mainly not supported
  • Summary
    Microsoft Coded UI Tests: Provide an integrated environment with TFS for .NET Windows and Web applications but do not support many other technologies out-of-the-box
    HP QuickTest: Should be used when many different platforms and technologies take part

Automated UI tests mainly fail and cause high maintenance effort for the following reasons:
  • The UI element identification is not separated from the test steps.
    That means that the different UI elements, like textboxes and buttons, are identified in many different places in the code because the same UI elements are used by different test cases. Usually, the identification criteria (e.g. an ID or text) change quite often. That is the reason why it is important to centralize the object identification parameters, so that changes have to be applied just once (a small sketch follows this list).
  • The test cases rely on unstable data.
    They are usually designed as end-user tests which depend on the functionality and data of all the connected systems. Of course, you can try to write tests which do not depend on the data, but that also means the tests do not cover the most important parts of your application. Therefore it is very important to think about data management before actually implementing UI tests.
  • The test cases depend on each other or a complex test setup.
    In order to minimize the maintenance effort, test cases should be independent from each other. Otherwise many or all tests fail because of one single problem.
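
As announced above, here is a hedged sketch of centralizing UI element identification in a page-object-style class. The interfaces, class and element names are made up for this example; the actual lookup would be done with whatever UI automation framework is in use:

// Hypothetical abstractions over the UI automation framework in use.
public interface IUiElement
{
    void TypeText(string text);
    void Click();
}

public interface IUiAutomation
{
    IUiElement FindById(string id);
}

// Centralizes how the UI elements of the order page are identified.
// If an ID in the markup changes, only this class has to be touched.
public class OrderPage
{
    private readonly IUiAutomation _ui;

    public OrderPage(IUiAutomation ui)
    {
        _ui = ui;
    }

    public IUiElement OrderNumberField
    {
        get { return _ui.FindById("orderNumberInput"); }
    }

    public IUiElement SubmitButton
    {
        get { return _ui.FindById("submitOrderButton"); }
    }
}

// A test step then only talks to the page object, never to raw IDs:
// var page = new OrderPage(ui);
// page.OrderNumberField.TypeText("4711");
// page.SubmitButton.Click();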

I have rarely seen well-written UI tests because they are still not taken as seriously as the productive code. It is important to plan and design the UI test code as well, because these tests can only help if they live and evolve together with the productive code. It is very important to define upfront how to handle the mentioned problems like the identification of UI elements, data management and test case dependencies.

Unit tests are not enough to ensure the quality of software. They have, indeed, the advantage of giving immediate feedback about changes because they can be executed quickly. But it is also very important to test the application from the end-user perspective using automated UI tests.
Automated end-user tests have to be performed as early as possible in the development cycle. Usually, we deploy the latest sources to a test system every night and execute the automated tests afterwards. This gives us instant feedback about the quality of the check-ins. If there are any problems, they can be investigated and fixed immediately, and not just right before the software has to be delivered to the customer. With automated deployments and automated UI tests, delivering high-quality software on a regular basis is much easier.

Automated Deployments

This is the thirteenth blog of a series of posts about the topics Continuous Integration and Continuous Delivery. It explains how important continuous integration is to deliver software in short iterations to the customer with a high quality standard.

The basis of Continuous Delivery is a completely automated deployment. Whether software is delivered to the customer or just published on a staging server for quality tests should not depend on single persons and manual clicks. How often have I heard "we cannot deploy because X is on vacation" or "the effort to deploy it now is too high". In order to deliver new features to the customer on a regular basis, automated deployments are a must-have.

Of course, depending on the complexity of the software, automated deployments can be difficult to set up. But Microsoft, for instance, already provides an extensive set of tools for that. In the web environment, MSDeploy and MSBuild are the most important ones. Besides copying the application assemblies and files, MSDeploy can automatically create up- and downgrade scripts for the databases. Further details about the features of MSDeploy and the deployment process can be found in the Enterprise Deployment Tutorial. A rough sketch of such a deployment step follows below.
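
As a hedged sketch (the executable location, package name, target server and error handling are all placeholders; a real setup would typically call MSDeploy directly from the build or release script), such a step could be as small as synchronizing a previously built Web Deploy package to the target server:

using System;
using System.Diagnostics;

public static class DeploymentStep
{
    // Synchronizes a previously built Web Deploy package to a target server.
    public static void DeployPackage(string packagePath, string targetServer)
    {
        var arguments =
            "-verb:sync " +
            "-source:package=\"" + packagePath + "\" " +
            "-dest:auto,computerName=\"" + targetServer + "\"";

        // Assumes msdeploy.exe is on the PATH of the build agent.
        using (var msdeploy = Process.Start("msdeploy.exe", arguments))
        {
            msdeploy.WaitForExit();
            if (msdeploy.ExitCode != 0)
            {
                throw new InvalidOperationException("Deployment failed, see MSDeploy output.");
            }
        }
    }
}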

In most companies there are even organizational boundaries between the software development and operations staff. It is very important that the responsibilities are clearly defined. The operations team provides the mechanism for automated deployments but should neither be needed during a regular deployment nor have any knowledge about the exact content of the deployment package. They should just be responsible for the infrastructure and the deployment process itself. The creation of the packages has to be in the hands of the software development team and, of course, is completely automated as mentioned before.

This approach has many advantages:
  • The development team is responsible for the software where they have a deep knowledge.
  • The development team does not have to investigate problems based on the filtered details from the operations team. They get full access to the tracing information for their application.
  • The operations team can focus on its core competence, the infrastructure.
  • The operations team can be easily scaled and shared because it does not have to build up application knowledge anymore but just provides managed IT services.

Continuous Delivery is not just about delivering new features to the customer. It is even more important to hand over the software regularly to the testing team. The faster they can test new features, the sooner we know about problems, bugs or even architectural issues in our application. I have seen plenty of times that the software was developed and only handed over to the test team a few days or weeks before delivery. When bugs were then found, the panic started because they not only had to be fixed but also re-deployed and re-tested. All of this could have been prevented by continuous deployment and testing from the first day of development.

This all should just explain how important automated deployments are and that they are the basis for Continuous Delivery.

Continuous Delivery

This is the twelfth blog of a series of posts about the topics Continuous Integration and Continuous Delivery. It explains how important continuous integration is to deliver software in short iterations to the customer with a high quality standard.

After writing mainly about continuous integration, I also want to spend some words on continuous delivery.

With the success of agile development processes, the iterative creation of shippable software pieces is getting more and more important. It actually sounds great to regularly deliver new features to the customer in order to get instant feedback and to understand the solution better. It is usually pretty difficult for non-technical people to understand complex systems based on hundreds of pages of a software specification. But these popular agile development processes come with other difficulties, like how to ensure the quality of the software in short development cycles and how to give the customer an easy way to accept and release features.

The previously posted concepts about continuous integration and gated check-ins are a very important basis for delivering high quality software, of course. But they do not address the issues during delivery.

In the next posts I would like to write about the following important topics to establish a working continuous delivery process:

  • Automated Deployments
  • Automated UI Testing
  • Feature Roll-Out

4/06/2012

Static Code Analysis based on NDepend

This is the eleventh blog of a series of posts about the topics Continuous Integration and Continuous Delivery. It explains how important continuous integration is to deliver software in short iterations to the customer with a high quality standard.

The static code analysis rules from Microsoft, which I introduced in the last post, are a great and easy start to find common problems in your code like memory leaks, security holes or application crashes. As soon as you want to define your own customized rules and analyze your code more deeply, you should take a look at NDepend. It is an amazing tool which is quite easy to use and gives you great information about the quality of your code by analyzing the dependencies of every single line. Besides that, NDepend comes with its own, easy SQL-like query language to select the important information from the code analysis. On top of that, it can be easily integrated into the CI build to execute the defined queries and check whether any of them are violated.

In the following post I am going to explain how to write your own queries with NDepend and integrate them into the TFS build workflow.

Set up NDepend

First of all you have to download NDepend. The installation package comes with a Visual Studio add-in which you should install. After that a new menu entry is available in Visual Studio as well as a status icon in the bottom right corner. Load your current solution and select the NDepend menu entry "Attach new NDepend Project to current VS Solution". It is important that you build your solution before you use NDepend because the analysis is based on the created assemblies. After you have attached NDepend to the solution, an analysis runs and shows the following web page:


You should spend some time on this web page and study all the information and metrics you get from NDepend. There is a Dependency Graph and a Dependency Matrix as well as more than 80 code metrics like complexity, maintainability index or lines of code and of intermediate language. You can easily identify the parts of your code which are used heavily (Type Rank metric) and should be tested more carefully because bugs in these components would have a higher impact. All the NDepend metrics are explained in detail on the page NDepend Code Metrics Definitions. The areas on the NDepend web page are defined in the CQL Query Explorer. Out-of-the-box NDepend already delivers hundreds of predefined queries to analyze your code deeply.


Write your own queries

NDepend is a powerful and complex tool to execute static code checks. But it also allows you to define your own rule set using the code query language (CQL). You can easily adapt the existing queries or create completely customized ones. Because of the SQL-like syntax they are really easy to understand.

The following query, for instance, selects all the methods which have more than 20 lines of code (comments and empty lines are not counted).
SELECT METHODS WHERE 
NbLinesOfCode > 20 
ORDER BY NbLinesOfCode DESC

Besides such general queries it is of course also possible to write specific queries for your solution. This query selects all assemblies which directly reference the data access component (DepthOfIsUsing "ASSEMBLY:DataAccess" == 1) but are not the business logic (!NameIs "BusinessLogic"). This can detect, for example, if the data access is used directly by the presentation layer.
SELECT ASSEMBLIES 
WHERE DepthOfIsUsing "ASSEMBLY:DataAccess" == 1
AND !NameIs "BusinessLogic"

Integrate NDepend into TFS Build

After you have defined all your NDepend rules, it would be great if code that violates them could not be checked in anymore. To achieve this, you first have to flag the queries that should cause an error: the query has to issue a warning if a certain threshold has been reached.

In the following example a warning is issued as soon as a single method exceeds 20 lines of code.
WARN IF Count > 0 IN 
SELECT METHODS WHERE 
NbLinesOfCode > 20 
ORDER BY NbLinesOfCode DESC

Additionally, the rule has to be marked as critical to make the build fail. There is a red button in the upper right corner:


After you have defined all your rules, their limits and their importance, the build has to be configured to fail for check-ins which do not fulfill these rules. First of all, enable the gated check-in feature so that bad code is not committed to source control. An NDepend activity is available on CodePlex and has to be integrated into the TFS 2010 workflow build (see also Integrate NDepend with TFS).


How to improve old legacy code step-by-step

All of these code checks are great if you introduce them before you actually write code. The problem is that this is not always the case. In most cases an old legacy system with a lot of code already exists and should be improved. This can usually only be done step by step.
Think about introducing a rule that every method should be a maximum of 20 statements long. With legacy code and thousands or tens of thousands of methods, introducing this rule is nearly impossible. For this scenario NDepend provides a great feature to apply the rules only to changed code. This makes it possible to define strict rules and introduce them step by step. Every check-in makes the code better than before, and even huge and complex systems can be improved over time.


Summary

NDepend is a great and extremely powerful tool to analyze your code. You can easily write your own queries and integrate them into the build. This saves a lot of time during architecture, design and code reviews because the most common errors can already be detected before the code is actually checked in. Cleaning up and repairing bad code afterwards is usually extremely difficult and can cost a lot of money.

Hint: After every code review, try to identify the common errors and write a rule which detects them in the future.

3/18/2012

Static Code Analysis based on Microsoft Rules

This is the tenth blog of a series of posts about the topics Continuous Integration and Continuous Delivery. It explains how important continuous integration is to deliver software in short iterations to the customer with a high quality standard.

In this post and the next one I want to show how static code analysis can be used to improve the code quality and how to execute it during the build or check-in process. First of all I want to show how to enable the built-in Microsoft Code Analysis. It is actually a great feature which is not well known and only rarely used.

There is a "Code Analysis" tab in the project settings where the static code analysis can be enabled and the rule sets can be selected:


There are a couple of predefined rule sets from Microsoft. At least the "Microsoft Minimum Recommended Rules" should be enabled because it includes checks for potential application crashes, security holes and other important issues. If, for instance, an IDisposable object is not released, a warning is shown by the Code Analysis during the CI Build:
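
As a hedged illustration, code along the following lines would typically be flagged by a rule such as CA2000 ("Dispose objects before losing scope"); the class and file names are made up for this example:

using System.IO;

public class ReportWriter
{
    // Flagged by Code Analysis: the StreamWriter is never disposed
    // if WriteLine throws, and not deterministically released otherwise.
    public void WriteReportBad(string path, string text)
    {
        var writer = new StreamWriter(path);
        writer.WriteLine(text);
    }

    // The using statement guarantees that Dispose is called.
    public void WriteReportGood(string path, string text)
    {
        using (var writer = new StreamWriter(path))
        {
            writer.WriteLine(text);
        }
    }
}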


The Code Analysis is a simple and fast way to enable static code checks to prevent typical errors based on rule definitions from Microsoft.

Client-side Architecture Validation

This is the ninth blog of a series of posts about the topics Continuous Integration and Continuous Delivery. It explains how important continuous integration is to deliver software in short iterations to the customer with a high quality standard.

JSAnalyse
A couple of weeks ago I already blogged about JSAnalyse, which is an extension for the Visual Studio Layer Diagram to enable architecture validation for client-side scripts. JavaScript client-side code is still not treated in the same way as server-side code, with the same quality criteria. Nowadays, many projects already have a couple of unit tests and layer validations for the server-side code but do not care about testing and validating their JavaScript code at all. That is the reason why I want to mention JSAnalyse again. It helps to define a client-side architecture and to keep the JavaScript dependencies under control.


More details about Client-Side Validation, how to use it and how it works can be found on the following pages:
Blog about JSAnalyse
JSAnalyse on CodePlex

Additionally for testing JavaScript code I would recommend using JS-Test-Driver and reading the following blog:
JavaScript Unit Testing
JS-Test-Driver

Server-side Architecture Validation

This is the eighth blog of a series of posts about the topics Continuous Integration and Continuous Delivery. It explains how important continuous integration is to deliver software in short iterations to the customer with a high quality standard.

The architecture layer diagram is one of the best features in Visual Studio. It gives an easy and powerful way to validate the defined architecture. A lot of applications started with a well-defined architecture and a lot of thought and ideas behind it. But over time, when the code gets implemented, refactorings are done and time pressure comes up, the defined architecture is not followed anymore. Reviewing the dependencies between layers and assemblies during the development phase takes a lot of time. Sometimes reviews point out that an unwanted assembly reference has been added to a project, but this reference is already heavily used and it takes a high effort to get rid of it again.

With the Visual Studio Layer Diagram validation this problem can be solved or at least reduced.

How to create a Server-Side Validation Layer Diagram?
The following article explains how to use the Validation Layer Diagram in Visual Studio 2010. It even explains how to enable the layer validation in the CI build and to reject check-ins which do not follow the defined architecture. This ensures that code which violates the layer definitions is not committed to the main branch and does not cause a lot of headache and effort to fix at a later point in time:
Favorite VS2010 Features: Layer Validation

Which layer diagram views should be created?
Here is an architecture project which defines different views on the applications and components. It is an example project which gives an idea of which different views can be created.


Usually, the following three views should be at least defined in the Architecture Layer Diagram:
  • High-Level View (Overview.layerdiagram)
  • Second Level View - (Presentation.layerdiagram, ServiceLayer.layerdiagram, Business.layerdiagram, DataAccess.layerdiagram)
  • External Components View - Restricts the access to external components (ExternalComponents.layerdiagram)

High-Level View (First Level View)
This view defines how the different layers depend on each other. This is the most important view of the application and should already be defined during the project setup, together with the assemblies. It is very important to keep this view up to date because it has a high impact on the maintainability of the solution. Here is an example of how this high-level view can look.


Second Level View
This view explains the internals of a single layer. Usually, there is at least one diagram per layer. If a layer is quite complex there can be even more diagrams. The following figure shows an example layer diagram for a Data Access Layer.


External Components View
This view is also very important because it restricts the layers / assemblies to a defined set of external libraries. It helps to keep external code isolated by defining the exact places where a library is allowed to be used. It solves the problem of API-specific calls being spread over the whole application. A good example is the Entity Framework, which should only be referenced and accessible from the Data Access component.


After defining the dependencies and configuring the gated CI build, it is not possible anymore to check in code which violates the basic architecture. The build process shows an error for a layer break even before the code is committed. In this example a call from the Web Application (Presentation Layer) directly to the Order Repository (Data Access Layer) is not allowed anymore:


3/17/2012

Code Coverage

This is the seventh blog of a series of posts about the topics Continuous Integration and Continuous Delivery. It explains how important continuous integration is to deliver software in short iterations to the customer with a high quality standard.

Code coverage is a measure which indicates what percentage of the code has been tested. There are different techniques, but it usually describes how many lines of code have been executed during the unit tests and how many have not. It does not say anything about the quality of the tests themselves. Even a high code coverage percentage does not help if the unit tests do not cover the use cases in which the component or class is actually used. But it is a good indicator to find out which parts are tested at all and which lack testing.

Enable Code Coverage
Code coverage can easily be activated via the Visual Studio menu "Test", "Edit Test Settings" and selecting the Test Settings file.

After that the Code Coverage can be enabled on the "Data and Diagnostics" tab.


The assemblies which should be instrumented have to be selected by clicking the "Configure" button. Make sure that only productive assemblies are selected and no unit test projects.


Code Coverage Check during CI Build

A code coverage check can be implemented in order to ensure that a certain amount of unit tests are written and stay in a healthy state. It can check the code coverage percentage and fail the build if the value is below a defined threshold. This also prevents tests from being removed from the build, because that would drop the code coverage value. Of course, this says nothing about the quality of the tests, but it at least makes sure that the tests are executed and grow with the code base.

The following example coverage output file is written during the build process if code coverage has been enabled. The value can be checked by reading the BlocksCovered and BlocksNotCovered nodes and comparing it to a defined threshold which decides whether the build fails or not.

<CoverageDSPriv>
  <xs:schema id="CoverageDSPriv">...</xs:schema>  
  <Module>
    <ModuleName>TSTune.CodeExamples.dll</ModuleName>
    <ImageSize>57344</ImageSize>
    <ImageLinkTime>0</ImageLinkTime>
    <LinesCovered>7</LinesCovered>
    <LinesPartiallyCovered>0</LinesPartiallyCovered>
    <LinesNotCovered>7</LinesNotCovered>
    <BlocksCovered>7</BlocksCovered>
    <BlocksNotCovered>6</BlocksNotCovered>
  </Module>
  <SourceFileNames>
    <SourceFileID>1</SourceFileID>
    <SourceFileName>OrderServiceProxy.cs</SourceFileName>
  </SourceFileNames>
  <SourceFileNames>
    <SourceFileID>2</SourceFileID>
    <SourceFileName>OrderManagement.cs</SourceFileName>
  </SourceFileNames>
</CoverageDSPriv>

In this simple example 7 of 13 blocks have been covered during the test, which is a code coverage of: 7 / (6 + 7) = 0.5385 = 53.85 %.
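
A minimal sketch of such a check could look like the following; the file name and the 70 % threshold are placeholders that would be adjusted to the real build environment, and the element names match the XML above:

using System;
using System.Linq;
using System.Xml.Linq;

public static class CoverageCheck
{
    // Reads the coverage output shown above and returns a non-zero exit code
    // (which fails the build) if the block coverage is below the threshold.
    public static int Main()
    {
        const double requiredCoverage = 0.70;                      // placeholder threshold
        XDocument coverage = XDocument.Load("data.coverage.xml");  // placeholder file name

        double covered = coverage.Descendants("BlocksCovered").Sum(n => (double)n);
        double notCovered = coverage.Descendants("BlocksNotCovered").Sum(n => (double)n);

        double ratio = covered / (covered + notCovered);
        Console.WriteLine("Block coverage: {0:P2}", ratio);

        return ratio >= requiredCoverage ? 0 : 1;
    }
}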

Coding Guidelines

This is the sixth blog of a series of posts about the topics Continuous Integration and Continuous Delivery. It explains how important continuous integration is to deliver software in short iterations to the customer with a high quality standard.

Many companies have written coding guidelines for development which define naming, layout, commenting and a lot of other conventions. Even Microsoft has a couple of MSDN articles about coding conventions and design guidelines.

Coding guidelines are really important for the readability of the code and they can reduce the maintenance effort because the developer understands the code more quickly.

Many companies invest in writing documents about coding and design guidelines, but the code does not follow most of the defined rules and the different components and classes end up with completely different styles. A document alone does not improve the code quality: the developers have to know its content and follow it, and the code has to be reviewed on a regular basis.

Usually, the guidelines document is just stored somewhere on a SharePoint site or file share. Most of the time it is also not up to date because a new version of the programming language has been released; the new language features are not described, or parts of the document are already obsolete.

This problem can be solved by using a tool like StyleCop. StyleCop checks during the build process whether the code follows the defined rules. It can check, for instance, whether all public methods are commented or every if-block uses curly brackets. StyleCop rules can be defined instead of writing and updating a coding guidelines document. If the StyleCop rules are checked during the development process, valuable review time can be saved: the reviews can focus on the architecture and design of the components instead of checking style and naming conventions.

There are two ways to check StyleCop rules during the development process: either via a check-in policy or integrated into MSBuild. I would recommend the MSBuild integration because a check-in policy has to be installed on all developer machines and kept up to date.

Integrate StyleCop into MS Build:
After downloading and installing StyleCop, there is an MSBuild targets file in the installation folder:
StyleCop\<version>\Microsoft.StyleCop.targets

Just copy the file and check it into your source control. After that it can be referenced with a relative path so that it works on all developer machines.
<Import Project="..\StyleCop\Microsoft.StyleCop.targets" />

If the StyleCop target is integrated into MSBuild, every violation is shown as a warning. If you want to enforce the rules, this might not be enough. I have seen projects with thousands of warnings in the build process. A warning is indeed not an error and the assembly still compiles, but there are reasons why warnings are shown, and that is why they should not be ignored. One possibility is to enable the build option "Treat warnings as errors". In combination with gated builds, code which does not fulfill the StyleCop rules cannot be checked in anymore.


But there is one big disadvantage in that approach: the developer cannot easily test code changes anymore because every violated StyleCop rule makes the local build fail. If, for instance, a new public method has been added but is not commented yet because it is not completely finished, this code cannot be compiled and tested. That is the reason why I would enable this option only during the continuous integration build and disable it on the local machine. This can be done using different configurations, as in the following project file:

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Local|AnyCPU' ">
  <TreatWarningsAsErrors>false</TreatWarningsAsErrors>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'TFS|AnyCPU' ">
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>

3/16/2012

JavaScript Unit Testing

This is the fifth blog of a series of posts about the topics Continuous Integration and Continuous Delivery. It explains how important continuous integration is to deliver software in short iterations to the customer with a high quality standard.

Modern applications use more and more JavaScript to provide a rich and interactive user interface. Especially with HTML 5, the amount of JavaScript code keeps growing. It surprises me that JavaScript is still not taken as seriously as most other programming languages. There is still not enough awareness that JavaScript code is an important part of an application and needs good code quality as well. I have seen projects which wrote a lot of server-side unit tests but had no quality assurance at all on the client side.

The tools have improved in the last couple of years but are still not as intuitive as they should be. For instance, Visual Studio does not support unit testing of JavaScript code out-of-the-box. But at least there are already a couple of JavaScript unit testing frameworks available.

In this post, different JavaScript test frameworks are compared, with a focus on TFS integration in order to execute the tests during the CI build.

Browser-based or Browser-less?

There are two different approaches: either the test framework uses a browser to execute the tests, or the JavaScript code is interpreted and executed by a host application.

Browser-based Frameworks: QUnit, JS-Test-Driver, ...
Browser-less Frameworks: JSTest.NET, google-js-test, Crosscheck, ...

Browser-less frameworks are usually pretty easy to execute. The integration into CI builds is also much easier because the overhead of starting and stopping a browser is not needed. But there is one big disadvantage: the execution runs in a simulated environment, so the quirks of the different browsers cannot be tested. Additionally, some features are usually not supported by these frameworks. That is the reason why I prefer browser-based frameworks.

Writing JavaScript Tests can be tricky

In general, writing JavaScript unit tests is not as easy as testing server-side code, because JavaScript code usually calls web services and interacts with the DOM of the browser. Of course, you can separate your JavaScript logic from the DOM interaction and service calls (and you should always do that!). But that does not change the fact that loading data and manipulating the DOM are the main tasks of your JavaScript code. If you only tested the pure JavaScript logic without DOM interaction, you would miss a big part of your code.

Mocking Ajax-Request

The first problem with AJAX service calls can be solved by using a mocking framework. If you are using jQuery, you just need to include the jQuery Mockjax library and you can easily redirect your AJAX calls to return the data you need for your test:

$.mockjax({
  url: 'testurl/test',
  responseText: 'Result from the test operation.'
});

This call hooks into the jQuery library and returns the given response text for all jQuery AJAX requests to the defined URL. The response can be simple text, JSON or any other content.

DOM Manipulation

The DOM interaction problem is more difficult. In almost all cases, JavaScript code reads and manipulates the browser's DOM, for example to display asynchronously retrieved data in a certain way. Handling this is also the most important task of a JavaScript unit testing framework (besides the test execution, of course).

There are different approaches to declaring the HTML markup for unit tests. Most frameworks, like QUnit for example, need a real HTML document for the test execution. The unit tests are written within this document and executed by simply loading it. The results are then shown by the testing framework within the browser as HTML output.

This approach has two big disadvantages:
  • All the tests have to work in the context of the HTML page. The JavaScript unit tests usually depend heavily on the HTML markup. If a lot of different cases have to be tested, a new HTML page has to be created each time. These pages are usually just slightly different but cause a lot of trouble and effort in test maintenance.
  • The test results are usually shown as HTML output in the browser and cannot be processed automatically. But this is exactly what is needed to fail the continuous integration build and reject the check-in.

But there is JS-Test-Driver, a tool especially made for the integration of JavaScript unit tests in CI builds as well as for an easy definition of HTML markup. It makes it much easier to execute JavaScript unit tests within a CI build and reduces the effort of writing tests.

JS-Test-Driver

JS-Test-Driver is a great Unit Testing framework, which supports inline definition of DOM elements and a seamless integration into the Continuous Integration build.

The HTML markup for unit tests is not written in a separate HTML page. It can be defined with a special DOC comment, e.g. /*:DOC += */. The HTML fragment is automatically created and can be used within your test case.

var MainTest = TestCase("MainTest");

MainTest.prototype.testMain = function() {
  /*:DOC += <div class="main"></div> */
  assertNotNull($('.main')[0]);
};

That is the reason why JS-Test-Driver is my favorite JavaScript test framework. It scales like a charm and allows you to define HTML tags within the tests. Additionally, it can be easily integrated into the build process.

Configuration of JS-Test-Driver:

The following configuration shows how to set up JS-Test-Driver. It is quite self-explanatory: the "server" declaration defines the binding for the started server, "load" defines which scripts should be available during the tests, and "test" defines where the unit tests are located. Additionally, plug-ins like the code coverage calculation can be integrated as well.

server: http://localhost:4224

load:
 - Script/Main/*.js
 - Script/Page/*.js

test:
 - Script/UnitTests/*.js

plugin:
 - name: "coverage"   
   jar: "coverage.jar"   
   module: "com.google.jstestdriver.coverage.CoverageModule"

Integrate JS-Test-Driver into Team Foundation Server Build

JS-Test-Driver starts a server and a browser instance, runs the tests for you and posts the result to the server. The result can be evaluated during the CI Build and check-ins can be even rejected when just one test-case fails. After that JS-Test-Driver is also shutting down the server and the browser.

To integrate JS-Test-Driver into the TFS build, a configuration file (like the one above) and a build target have to be created:

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- SolutionRoot is the directory where the solution file exists -->
    <SolutionRoot>$(MSBuildStartupDirectory)\..</SolutionRoot>
  </PropertyGroup>

 <Target Name="JSTestDriver">
    <PropertyGroup>
      <JSTestDriverJar>$(SolutionRoot)\JsTestDriver\JsTestDriver-1.3.4.b.jar</JSTestDriverJar>
      <JSTestDriverConfig>$(SolutionRoot)\JsTestDriver\jsTestDriver.conf</JSTestDriverConfig>
      <BrowserPath>C:\Program Files (x86)\Internet Explorer\iexplore.exe</BrowserPath>
    </PropertyGroup>

    <Exec Command='java -jar "$(JSTestDriverJar)" --port 40000 --basePath "$(SolutionRoot)" --browser "$(BrowserPath)" --config "$(JSTestDriverConfig)" --tests all --verbose' />
  </Target>

</Project>

This target starts JSTestDriver and can be easily executed from the local or TFS build:
build JSTestDriver

The screenshots show how the JSTestDriver target can be added to the TFS build workflow XAML. The MSBuild activity uses the JSTestDriver target to start the Java jar file and execute the JavaScript unit tests. If one of the tests fails, the MSBuild activity returns an error and therefore the build fails as well. If gated check-in is enabled, the code is not committed to the code base until the tests are fixed.



1/04/2012

.NET Unit Testing

This is the fourth blog of a series of posts about the topics Continuous Integration and Continuous Delivery. It explains how important continuous integration is to deliver software in short iterations to the customer with a high quality standard.

In my previous post I explained what role a CI build plays in the development process and why gated check-ins are really important to ensure a certain code quality. But this quality highly depends on the checks and tests which run during the CI build.

Writing unit tests and integrating them into the continuous integration build is essential for writing code with good quality. Always assume that your code does not work until you have proven with a unit test that it works.

Software has to be permanently adapted and changed due to new requirements. That is the nature of software development, because humans cannot fully grasp the complexity of IT systems upfront. That is also the reason why iterative and agile development processes are so successful compared to traditional waterfall and V-models.

But how to write good unit tests?
I have seen a couple of projects with totally different ideas and solutions for how to create and organize their unit tests. One main goal and very important technique for writing successful unit tests is to keep the scope of a test very small. In other words, test just a single class, or even better a single method, per test. What sounds pretty easy in theory can be challenging in practice. Introducing unit tests into existing projects which already have a net of references can be very tricky. A good practice is to improve the code step by step: every check-in has to make the code better. When you start a new project it is much easier to introduce good unit tests with a little effort and discipline. Patterns like inversion of control and dependency injection are techniques to reduce the dependencies between components without introducing more complexity. Do not try to write unit tests which test all your layers at once; this results in a high effort for building and maintaining the test data and unit tests during the software lifecycle. Better to introduce local unit tests step by step.

Here is a list of simplified best practices which can be applied in most cases to easily achieve looser coupling and therefore better testability:
1. Every time you want to call another class, add the interface of that class to the constructor of your class, store it as a field or property, and call the interface instead of the class directly. If the class you want to call does not have an interface, what stops you from creating one? Tools like Visual Studio and ReSharper even support you in doing that easily. If it is external code, just create a wrapper around it, which is a good practice for integrating external code into your application anyway.

Assume you want to test your business logic class OrderManagement, but unfortunately the business logic calls a web service through the OrderServiceProxy class. That makes testing your business logic much more difficult, because every time the web service is not accessible your unit test would fail. So we add a new interface IOrderServiceProxy and a constructor taking this interface to the OrderManagement class.

public class OrderManagement : IOrderManagement
{
    private IOrderServiceProxy OrderService { get; set; }

    public OrderManagement(IOrderServiceProxy orderService)
    {
        OrderService = orderService;
    }

    public bool ProcessOrder(Order order)
    {
        return OrderService.PlaceOrder(order);
    }
}
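
For completeness, the interface and the wrapper itself could look roughly like this (a minimal sketch; OrderServiceClient stands in for whatever generated or external service client your project actually uses):

public interface IOrderServiceProxy
{
    bool PlaceOrder(Order order);
}

public class OrderServiceProxy : IOrderServiceProxy
{
    public bool PlaceOrder(Order order)
    {
        // Delegate to the external/generated service client (hypothetical name)
        using (var client = new OrderServiceClient())
        {
            return client.PlaceOrder(order);
        }
    }
}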


2. Now you can easily test your OrderManagement class and its ProcessOrder method, because you can pass in a replacement for the OrderServiceProxy implementation and test against your dummy implementation.

public class OrderServiceProxyMock : IOrderServiceProxy
{
    public bool PlaceOrder(Order order)
    {
        return true;
    }
}

[TestClass()]
public class OrderManagementTest
{
    [TestMethod()]
    public void ProcessOrderTest()
    {
        // Create mock class
        IOrderServiceProxy orderService = new OrderServiceProxyMock();

        // Create test data
        Order order = new Order();
            
        // Create your class to test and pass your external references
        OrderManagement target = new OrderManagement(orderService);
            
        // Execute your test method
        var result = target.ProcessOrder(order);
            
        // Assertions
        Assert.IsTrue(result);
    }
}


3. You can use a mocking framework like Rhino Mocks, Typemock, JustMock, NMock, etc. to simplify testing your code and reduce the lines of code you have to write.

Rhino Mocks example:

[TestClass()]
public class OrderManagementRhinoMocksTest
{
    [TestMethod()]
    public void ProcessOrderTest()
    {
        // Create test data
        Order order = new Order();

        // Create a stub using the Rhino Mocks Arrange-Act-Assert syntax
        IOrderServiceProxy orderService = MockRepository.GenerateStub<IOrderServiceProxy>();
        orderService.Stub(x => x.PlaceOrder(order)).Return(true);
            
        // Create your class to test and pass your external references
        OrderManagement target = new OrderManagement(orderService);

        // Execute your test method
        var result = target.ProcessOrder(order);

        // Assertions
        Assert.IsTrue(result);
    }
}
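
A mocking framework also makes it easy to verify interactions instead of just return values. As a sketch (assuming the same classes as above), an additional test could check that the proxy was actually called:

[TestMethod()]
public void ProcessOrderCallsProxyTest()
{
    // Create test data and a stub that records the calls it receives
    Order order = new Order();
    IOrderServiceProxy orderService = MockRepository.GenerateStub<IOrderServiceProxy>();
    orderService.Stub(x => x.PlaceOrder(order)).Return(true);

    // Execute the method under test
    OrderManagement target = new OrderManagement(orderService);
    target.ProcessOrder(order);

    // Verify that the order was actually forwarded to the proxy
    orderService.AssertWasCalled(x => x.PlaceOrder(order));
}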


4. You can use a Dependency Injection framework to inject the implementations into the constructors. Especially in your production code you usually have just one implementation per interface, which can be mapped easily. There are a lot of Dependency Injection frameworks available, like Unity, StructureMap, Spring.NET, etc.

A dependency injection framework resolves the interfaces you placed in a constructor or property with the real implementation. Which interface maps to which implementation can either be configured in an XML file or just coded.

Unity Configuration example:

First you usually define aliases which map to fully qualified type names. You have to do that for your interfaces as well as your implementations. After that you can register a mapping from each interface to the actual implementation.

<configuration>
  <configSections>
    <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" />
  </configSections>
  <unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
    <alias alias="IOrderServiceProxy" type="TSTune.CodeExamples.ServiceAgents.IOrderServiceProxy, TSTune.CodeExamples" />
    <alias alias="OrderServiceProxy" type="TSTune.CodeExamples.ServiceAgents.OrderServiceProxy, TSTune.CodeExamples" />
    <alias alias="IOrderManagement" type="TSTune.CodeExamples.BusinessLogic.IOrderManagement, TSTune.CodeExamples" />
    <alias alias="OrderManagement" type="TSTune.CodeExamples.BusinessLogic.OrderManagement, TSTune.CodeExamples" />
    <container>
      <register type="IOrderServiceProxy" mapTo="OrderServiceProxy"/>
      <register type="IOrderManagement" mapTo="OrderManagement"/>
    </container>
  </unity>
</configuration>


After you have configured your Unity container, you have to load the configuration and initialize the container before you can use it:

IUnityContainer unityContainer = new UnityContainer();
UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
section.Configure(unityContainer);


Unity Code example:

You can also register the mappings using code, which is much easier:

IUnityContainer container = new UnityContainer();
container.RegisterType<IOrderManagement, OrderManagement>();
container.RegisterType<IOrderServiceProxy, OrderServiceProxy>();


But this approach has two disadvantages:
First of all, you have to recompile your code to exchange implementations. Secondly, a static reference has to be added to all the assemblies you want to register, because the classes have to be known during registration. This can be problematic when you use the Visual Studio Layer Diagram Validation, which I am going to explain in one of my next posts.
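
If the static references really become a problem, one option (just a sketch, not part of the approach above) is to let each assembly contribute its own mappings through a UnityContainerExtension, so the central registration code does not need to reference every single class:

using Microsoft.Practices.Unity;

// Hypothetical registration module placed inside the TSTune.CodeExamples assembly
public class CodeExamplesRegistrations : UnityContainerExtension
{
    protected override void Initialize()
    {
        // Register all mappings this assembly is responsible for
        Container.RegisterType<IOrderManagement, OrderManagement>();
        Container.RegisterType<IOrderServiceProxy, OrderServiceProxy>();
    }
}

The composition root then only adds the extension, for example with container.AddNewExtension<CodeExamplesRegistrations>(), or loads the extension type by name via reflection if even that compile-time reference is unwanted.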

Unity - How to use it:

Every time you now call unityContainer.Resolve<IOrderManagement>() you will get an instance of your OrderManagement class.

var orderManagement = unityContainer.Resolve<IOrderManagement>();
orderManagement.ProcessOrder(new Order());


5. If using a Dependency Injection framework is too much of a pain for you (which it should not be!), you can add a default constructor which wires up all implementations with their interfaces. This is called poor man's dependency injection.

public OrderManagement()
{
    OrderService = new OrderServiceProxy();
}
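
An equivalent variation (just an alternative form, not taken from the original code) chains the default constructor to the injection constructor, so the wiring stays in one place:

public OrderManagement()
    : this(new OrderServiceProxy())
{
}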


If you are using interfaces instead of concrete implementations, you lose the ability to easily navigate through your code with F12 at design time. Instead you end up looking at the interface when you want to investigate the implementation, and you have to search for the actual implementation manually. ReSharper helps you navigate directly to the implementation with Ctrl+F12.

How to integrate it into the Team Foundation Server build:

First, you should create your test lists. Usually there is a test list for CI, Nightly and maybe Manual tests.



After you have placed your unit tests in the test lists, you can set up the TFS build to execute the CI test list during the CI build. Do not forget to fail the build if the test execution fails.
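
As an alternative to test lists (just an option, not part of the setup shown above), MSTest also allows tagging tests with categories, which the build can then filter on:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderProcessingTests
{
    // Hypothetical categorization: the CI build runs only tests tagged "CI",
    // the nightly build additionally runs the tests tagged "Nightly".
    [TestMethod, TestCategory("CI")]
    public void ProcessOrder_ReturnsTrue_ForValidOrder()
    {
        // ... fast, isolated unit test ...
    }

    [TestMethod, TestCategory("Nightly")]
    public void ProcessOrder_PlacesOrder_AgainstTheRealService()
    {
        // ... slower integration test ...
    }
}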





Final important note!

Think about unit test code like production code: apply the same quality criteria. Unit test code has to be maintained together with your production code and is subject to the same changes!

This makes writing and maintaining your unit test code much easier and increases the quality of your code.

CI and Nightly Builds

This is the third post in a series about Continuous Integration and Continuous Delivery. It explains how important continuous integration is for delivering software to the customer in short iterations with a high quality standard.

In this post I am going to show how to create builds with the TFS 2010 Build Workflow Engine. To ensure code quality and enable continuous delivery, we usually have two different types of builds: CI and Nightly Builds.

CI Builds are executed during the developer's check-in. There are 3 types of CI builds:
  • Continuous Integration Build - Every check-in of the developer is built, but the code is always committed, even if the build fails. Even code which is not working correctly will be stored in the production source control.
  • Rolling Builds - Builds a set of check-ins which have been committed since the last build. It has the same disadvantage as normal CI builds, and on top of that it is not always clear who created the faulty code.
  • Gated Check-in Builds - This type of CI build only commits the code to the main source control when the build is green and every quality check has been passed successfully. This makes it possible to enforce certain criteria and forces the developer to adapt the code if even one of the rules is broken.

The task of CI builds is to ensure code quality. This works great with Gated Check-ins because they do not allow checking in anything which does not meet the defined quality criteria.


Nightly Builds are used for integration tests which take longer to execute, so it would not be feasible to run them during each check-in. Nightly Builds are triggered at a certain time. A good example are Coded UI Tests, which test the user interface by performing clicks and other actions on the controls of the screen. A usual practice is to deploy the application to the target system every night and to perform completely automated integration and user interface tests.

Overview of a possible development process to prevent broken applications:
  • CI Build (with Gated Check-in) on every code change
  • Nightly Deployments to Development System
    In order to execute integration and user interface tests it is important to establish a completely automated deployment which can run during the night. After that the automated tests can be executed and instant feedback can be given every morning. It is important that the automated integration and user interface tests cover the main functionality of the application and ensure its health. Additionally, the customer should not test on this system, because it can be broken on any given morning.
  • Weekly Deployments to Staging System
    Only when the CI build, integration and user interface tests have been passed successfully is an automated deployment to the staging system triggered. After that the automated tests should be executed on this system again, in order to ensure the health of the application and to prevent configuration errors.
    In that case the customer always gets a stable version on the staging system and can focus on reviewing the implemented requirements.
This process should help to ensure that the customer never sees a broken application and should lead to much higher satisfaction with the software quality.

1/03/2012

Why enforce so many check-in rules during the CI build?

This is the second post in a series about Continuous Integration and Continuous Delivery. It explains how important continuous integration is for delivering software to the customer in short iterations with a high quality standard.

Why enforce so many check-in rules during the CI build?

The reason seems pretty simple: I have seen over and over again in the projects I have worked on that every rule which is not enforced during the check-in process will be broken sooner or later.

Usually, this is not done on purpose, and there are multiple reasons for it. Most of the time it is due to high time pressure in the project, or because the developer is deeply focused on the current task and simply forgets about it. But rules and guidelines can also be misunderstood or just forgotten when definitions are only discussed in a meeting or sent around by mail.

Another reason for check-in rules is that this approach saves a lot of time and money, because problems are detected before they are actually integrated into the main code. During architecture and code reviews, the architects can then focus on more important things than static dependencies and code metrics.

In general, check-in rules ensure that every important change in the design of the application is an explicit decision; it never happens by accident without anyone noticing it. A good example for that is ReSharper. It is a great tool, of course, and everybody should have it. But it has a feature, for instance, which detects the namespace and assembly when you just type in a class name and automatically adds a reference to that assembly in the current project. I often catch myself adding unwanted references while I am coding and trying to solve my local problem.

All of these problems can be avoided by using good static analysis tools during the check-in process:

Another good approach:
When you find a problem in a code review, create a rule which detects this violation in the future. It is like writing unit tests for the software architecture.
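
As a rough sketch of this idea (the assembly and namespace names are assumptions for illustration only), such a rule could even be expressed as a plain unit test that runs during the CI build:

using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ArchitectureRulesTest
{
    [TestMethod]
    public void BusinessLogicMustNotReferenceThePresentationLayer()
    {
        // Hypothetical rule: the business logic assembly must never depend on the UI assembly
        var businessLogic = typeof(TSTune.CodeExamples.BusinessLogic.OrderManagement).Assembly;
        var referencedAssemblies = businessLogic.GetReferencedAssemblies().Select(a => a.Name);

        Assert.IsFalse(referencedAssemblies.Contains("TSTune.CodeExamples.Presentation"),
            "The business logic layer must not reference the presentation layer.");
    }
}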

1/02/2012

Continuous Integration and Continuous Delivery

This is the first post in a series about Continuous Integration and Continuous Delivery. It explains how important continuous integration is for delivering software to the customer in short iterations with a high quality standard.

Surely everybody has faced problems with incomplete or just plain bad specifications. Additionally, software has to be changed frequently due to new or updated requirements. That is why agile software development processes and techniques are so successful. But in my opinion a good agile software development process also implies continuous delivery to the customer. Only this makes it possible to verify whether the features match the customer's expectations. However, the software can, or rather should, only be delivered if it fulfils the quality standard.

I think (almost) everybody has already delivered a piece of software which crashed on first use. This is frustrating for the customer and also reflects badly on the development team. And that is exactly where continuous integration comes into play: it ensures a defined quality standard and makes it possible to define quality gates in order to prevent crucial software failures.

I would like to give you some hints on what is important about continuous integration and how you can set up a configurable Team Foundation Server 2010 build workflow with a lot of features and quality gates to ensure software which works and fulfils the customer's needs.

The following build workflow steps will be covered in the next posts:
I am going to explain how to set up each build step, the reason for it, as well as its pros and cons.

After that I want to focus on the Delivery Process and the challenges waiting in this area: