I am happy to announce a new release of JSAnalyse. You can download the latest version from the JSAnalyse project page on CodePlex.
The new release has a couple of improvements:
- Visual Studio 2012 support
- Multiple JavaScript dependency diagrams can be created
- Enhanced Caching mechanism to support bigger object graphs
- Detects even more static references between JavaScript files
For those who do not know what JSAnalyse is, read my previous post "JSAnalyse published on codeplex".
9/22/2012
8/27/2012
TFS 2012 Build Server Installation - Fails with error "System.FormatException: Index (zero based) must be greater than or equal to zero and less than the size of the argument list."
If the TFS 2012 Build Server configuration fails with the following error message:
"System.FormatException: Index (zero based) must be greater than or equal to zero and less than the size of the argument list."
This is a bug in the Build Server configuration tool. You can work around it by turning on the Windows Firewall.
The configuration tool tries to check the firewall and add an exception rule for the build server port, which causes an exception because the firewall service is not running. Microsoft actually handles this exception, but within the catch block they try to write out a warning message, which unfortunately causes another exception.
Here is what Reflector shows:
Assembly: Microsoft.TeamFoundation.Build.Config.dll
Name: Microsoft.TeamFoundation.Build.Config, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
Class: BuildServiceHostUtilities
private static void RemovePermissions(Uri baseUrl, bool deleteFirewallException)
{
    if (baseUrl != null)
    {
        string permissionedUrl = GetPermissionedUrl(baseUrl);
        try
        {
            ConfigurationHelper.FreeUrlPrefix(permissionedUrl);
        }
        catch (Exception exception)
        {
            LogWarning(Resources.Format("CannotFreeUrlPrefix", new object[] { permissionedUrl, exception.Message }));
        }
        if (deleteFirewallException)
        {
            try
            {
                ConfigurationHelper.DisableFirewallException(baseUrl.Port);
            }
            catch (COMException exception2)
            {
                if (exception2.ErrorCode != -2147023143)
                {
                    LogWarning(Resources.Format("FailedDeletingPortExceptionFor", new object[] { baseUrl.Port, ExceptionFormatter.FormatExceptionForDisplay(exception2) }));
                }
            }
        }
    }
}
The RemovePermissions method in BuildServiceHostUtilities tries to remove the firewall exception for the Build Service port in the line ConfigurationHelper.DisableFirewallException(baseUrl.Port);
This causes a COMException, which is just logged as a warning. So far so good. But unfortunately the call that formats the warning message only gets two parameters passed. If we take a look at the resources of the DLL, the "FailedDeletingPortExceptionFor" text has three parameters defined:
Resources.Format("FailedDeletingPortExceptionFor", new object[] { baseUrl.Port, ExceptionFormatter.FormatExceptionForDisplay(exception2) });
FailedDeletingPortExceptionFor=Failed to remove firewall exception {1} for port {0}. Details: {2}
This finally causes the "System.FormatException: Index (zero based) must be greater than or equal to zero and less than the size of the argument list.".
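For illustration, here is a minimal standalone repro of the underlying String.Format problem; the port number and the details text are just placeholder values and not taken from the actual tool:

using System;

class FormatExceptionRepro
{
    static void Main()
    {
        // Same shape as the TFS resource string: three placeholders are expected.
        const string resourceText = "Failed to remove firewall exception {1} for port {0}. Details: {2}";

        // Only two arguments are supplied, just like in RemovePermissions, so this line throws
        // "System.FormatException: Index (zero based) must be greater than or equal to zero
        // and less than the size of the argument list."
        Console.WriteLine(string.Format(resourceText, 9191, "firewall service is not running"));
    }
}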
7/31/2012
Feature Roll-Out
This is the fifteenth post in a series about Continuous Integration and Continuous Delivery. The series explains how important continuous integration is for delivering software to the customer in short iterations and with a high quality standard.
As funny as it sounds, one of the main problems with Continuous Delivery is the permanent delivery itself. A feature may not be completely implemented yet and may span multiple releases before it is ready. The main idea of Continuous Delivery is to split big requirements into smaller chunks that still give the user new possibilities. Unfortunately, this is not always possible, which is where the feature-flagging technique becomes important.
Feature flagging simply means that every big new piece of functionality is built in a way that allows it to be easily turned on and off, as shown in the following listing:
public void PlaceOrder(Order order)
{
    var orderSystem = CreateNewInstance();
    orderSystem.Place(order);
}

public IOrderSystem CreateNewInstance()
{
    if (FeatureFlagManager.IsAvailable("NewOrderSystem"))
    {
        return new OrderSystem();
    }
    else
    {
        return new LegacyOrderSystem();
    }
}
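The FeatureFlagManager itself is not part of the listing above. A minimal sketch of such a helper, assuming the flags are simply stored as appSettings entries in the configuration file, could look like this:

using System.Configuration;

// Minimal sketch: flags are read from appSettings entries such as
// <add key="Feature.NewOrderSystem" value="true" />.
public static class FeatureFlagManager
{
    public static bool IsAvailable(string featureName)
    {
        string value = ConfigurationManager.AppSettings["Feature." + featureName];
        bool enabled;
        return bool.TryParse(value, out enabled) && enabled;
    }
}

In a real application the flags would more likely be stored in a database or a central configuration service, so that they can be switched without a redeployment.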
Advantages of Feature Flagging
This approach gives a lot of advantages and great flexibility during the roll-out of a new version:
- Features can be switched on and off, even for a certain group of users if the feature flag component has been implemented to support it (a possible shape for such a component is sketched after this list).
- A feature can be rolled out smoothly for a small group of users (like administrators, testers or people from a specific country) without affecting other users. This way, some people can test the feature in the real-world environment before it becomes publicly available.
- The roll-out can be done step by step, and its effects on the whole system regarding performance or usability can be coordinated and monitored. This is especially useful in web applications with many users, where the load can no longer be realistically simulated on a staging environment.
- If any problem occurs, the old variant is just one click away and there is no need for a big rollback with possible data inconsistency or loss.
- Furthermore, problems are identified by a small group of users and do not affect all users at once, which could otherwise cause an extreme increase in support tickets.
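As mentioned in the first point, the flag component can be extended so that a feature is only enabled for certain user groups. The following is just one possible shape for such an extension; the class and property names are made up for illustration:

using System.Collections.Generic;

// Hypothetical extension: a flag can be enabled for everyone or only for
// specific user groups such as "Administrators" or "Testers".
public class FeatureFlag
{
    public string Name { get; set; }
    public bool EnabledForEveryone { get; set; }
    public HashSet<string> EnabledGroups { get; set; }

    public bool IsAvailableFor(IEnumerable<string> userGroups)
    {
        if (EnabledForEveryone)
        {
            return true;
        }

        if (EnabledGroups == null || userGroups == null)
        {
            return false;
        }

        foreach (string group in userGroups)
        {
            if (EnabledGroups.Contains(group))
            {
                return true;
            }
        }

        return false;
    }
}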
Problems with Feature Flagging
Of course, the trade-off of this approach is that the design of new components has to be thought through carefully.
The code for implementing feature flagging (e.g. if clauses, factories or attributes) should not be spread across the whole codebase, where it would make the code much more difficult to maintain. Once a feature has been completely rolled out, the flagging code should even be removed again to simplify the code.
The applied changes (e.g. a database schema change) have to be compatible with both code paths. This has to be considered anyway in order to support hot deployments, where the application stays online during a deployment.
Additionally, the test effort is higher because both cases have to be tested, as well as the possible dependencies between them.
But in the end I think that feature flagging and step-by-step roll-out is a really important concept that is worth using in bigger web applications. It reduces the risk of deployments dramatically.
Automated UI Testing
This is the fourteenth post in a series about Continuous Integration and Continuous Delivery. The series explains how important continuous integration is for delivering software to the customer in short iterations and with a high quality standard.
After an automated deployment has been set up, we have the chance to test our application on the target platform. We can discover problems much earlier in the development phase and react to them. But manually testing applications is very time-consuming. Therefore, the basic key scenarios and regression tests should be automated. That makes it possible to execute those tests whenever changes have been made to the software (e.g. every night). The result is fast feedback about the state of the application. These tests can and should even be executed after every deployment in order to check the health of the software and identify configuration issues. This gives us reliable feedback on whether the application's main features are working or not, which results in higher customer satisfaction: even if not all bugs are found upfront, at least the application does not break down after the first click and the main business can still be served. If a critical error is reported by the customer that has not been found by the automated tests, the tests should of course be extended.
Microsoft and HP provide great tools for automated UI tests. HP QuickTest is the market leader in this field and provides a mature and stable framework for automating UI tests. Microsoft’s Coded UI Tests are much newer and do not support as many UI technologies as HP QuickTest (e.g. Java and Flash applications are not supported out-of-the-box by the Microsoft test framework). But I would still consider Microsoft's Coded UI Tests if you are working with Team Foundation Server and Windows/web applications. Coded UI tests can be integrated into the CI build of the TFS (like unit tests) and can therefore easily be executed after deployments and scheduled by the TFS build system.
Short comparison between Microsoft Coded UI Tests and HP QuickTest:
| Criterion | Microsoft Coded UI Tests | HP QuickTest |
| Supported Platforms | Windows, Web | Windows, Web, Java, Flash, SAP, etc. |
| Test Types | UI Tests, Functional Tests, Unit Tests, Performance Tests, Load Tests, Manual Tests | UI Tests, Functional Tests |
| Maintainability | Separation between object identification and test methods (complex UI Maps) | Separation between object identification and test methods (simple object repository) |
| TFS Integration | Highly integrated with test, bug and task management as well as build integration | Plug-in needed (see HP Quality Center Synchronizer - TFS Adapter) |
| Custom Extensions | Open architecture with support for writing a variety of extensions | Mainly not supported |
| Summary | Provides an integrated environment with TFS and .NET Windows and web applications, but does not support many technologies out-of-the-box | Should be used when many different platforms and technologies are involved |
Automated UI tests mainly fail, and cause high maintenance effort, for the following reasons:
- The UI element identification is not separated from the test steps. That means the different UI elements, like textboxes and buttons, are identified in many different places in the code because the same UI elements are used by different test cases. Usually the identification criteria (e.g. an ID or a text) change quite often, which is why it is important to centralize the object identification parameters so that a change only has to be applied once (a framework-independent sketch follows after this list).
- The test cases rely on unstable data. They are usually designed as end-user tests that depend on the functionality and data of all the connected systems. Of course, you can try to write tests that do not depend on the data, but that also means the tests do not cover the most important parts of your application. Therefore it is very important to think about data management before actually implementing UI tests.
- The test cases depend on each other or on a complex test setup. In order to minimize the maintenance effort, test cases should be independent from each other. Otherwise many or all tests fail because of one single problem.
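As referenced in the first point above, here is a rough sketch of what centralizing the element identification could look like. It is intentionally framework-independent; the interface, class and element names are made up for illustration and are not tied to Coded UI or QuickTest:

using System;

// Hypothetical abstraction over the concrete UI automation framework.
public interface IUiDriver
{
    void TypeInto(string elementId, string text);
    void Click(string elementId);
    bool Exists(string elementId);
}

// All identification criteria live in one place, so a changed ID or caption
// only has to be updated here instead of in every test case.
public static class OrderPageMap
{
    public const string CustomerNameTextBox = "txtCustomerName";
    public const string PlaceOrderButton = "btnPlaceOrder";
    public const string ConfirmationLabel = "lblConfirmation";
}

public class PlaceOrderUiTest
{
    private readonly IUiDriver ui;

    public PlaceOrderUiTest(IUiDriver ui)
    {
        this.ui = ui;
    }

    public void PlaceOrder_WithValidCustomer_ShowsConfirmation()
    {
        ui.TypeInto(OrderPageMap.CustomerNameTextBox, "John Doe");
        ui.Click(OrderPageMap.PlaceOrderButton);

        if (!ui.Exists(OrderPageMap.ConfirmationLabel))
        {
            throw new Exception("Confirmation message was not shown.");
        }
    }
}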
I have rarely seen well-written UI tests, because they are still not taken as seriously as the productive code. It is important to plan and design the architecture of the UI test code as well, because these tests can only help if they live and evolve together with the productive code. It is very important to define upfront how to handle the problems mentioned above, such as the identification of UI elements, data management and test case dependencies.
Unit tests are not enough to ensure the quality of software. They do have the advantage of giving immediate feedback about changes because they can be executed quickly, but it is also very important to test the application from the end-user perspective using automated UI tests.
Automated end-user tests have to be performed as early as possible in the development cycle. Usually, we deploy the latest sources to a test system every night and execute the automated tests afterwards. This gives us instant feedback about the quality of the check-ins. If there are any problems, they can be investigated and fixed immediately, not just right before the software has to be delivered to the customer. With automated deployments and automated UI tests, delivering high-quality software on a regular basis is much easier.
Automated Deployments
This is the thirteenth post in a series about Continuous Integration and Continuous Delivery. The series explains how important continuous integration is for delivering software to the customer in short iterations and with a high quality standard.
The basis of Continuous Delivery is a completely automated deployment. It should not depend on individual people and manual clicks whether software is delivered to the customer or just published on a staging server for quality tests. How often have I heard "we cannot deploy because X is on vacation" or "deploying it now is too much effort". In order to deliver new features to the customer on a regular basis, automated deployments are a must-have.
Of course, depending on the complexity of the software, automated deployments can be difficult to set up. But Microsoft, for instance, already provides an extensive set of tools for that. In the web environment, MSDeploy and MSBuild are the most important ones. Besides copying the application assemblies and files, MSDeploy can automatically create up- and downgrade scripts for the databases. Further details about the features of MSDeploy and the deployment process can be found in the Enterprise Deployment Tutorial.
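As an illustration only: a deployment step could, for example, shell out to MSDeploy to push a previously created web package to a target machine. The executable path, package path and server name below are placeholders, and the exact arguments depend on your environment and package:

using System.Diagnostics;

class DeployStep
{
    static void Main()
    {
        // Placeholder paths and server name - adjust to your environment.
        var startInfo = new ProcessStartInfo
        {
            FileName = @"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe",
            Arguments = "-verb:sync " +
                        "-source:package=\"C:\\Drops\\WebApp.zip\" " +
                        "-dest:auto,computerName=\"TESTSERVER\"",
            UseShellExecute = false
        };

        using (var process = Process.Start(startInfo))
        {
            process.WaitForExit();
        }
    }
}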
In most companies there are even organizational boundaries between software development and operations staff. It is very important that the responsibilities are clearly defined. The operations team provides the mechanism for automated deployments, but should neither be needed during a regular deployment nor have to know the exact content of the deployment package. They should be responsible only for the infrastructure and the deployment process itself. The creation of the packages has to be in the hands of the software development team and, as mentioned before, is of course completely automated.
This approach has many advantages:
- The development team is responsible for the software, which it knows in depth.
- The development team does not have to investigate problems based on filtered details from the operations team; it gets full access to the tracing information for its application.
- The operations team can focus on its own competence, the infrastructure.
- The operations team can be scaled and shared easily because it no longer has to build up application knowledge and simply provides managed IT services.
Continuous Delivery is not just about delivering new features to the customer. It is just as important, if not more so, to hand over the software regularly to the testing team. The faster they can test new features, the sooner we know about problems, bugs or even architectural issues in our application. I have seen plenty of times that software was developed and handed over to the test team only a few days or weeks before delivery. When bugs were then found, the panic started, because they not only had to be fixed but also re-deployed and re-tested. All of this could have been prevented by continuous deployment and testing from the first day of development.
All of this should illustrate how important automated deployments are and that they are the basis for Continuous Delivery.
Continuous Delivery
This is the twelfth post in a series about Continuous Integration and Continuous Delivery. The series explains how important continuous integration is for delivering software to the customer in short iterations and with a high quality standard.
After having written mainly about continuous integration, I also want to spend a few words on continuous delivery.
With the success of agile development processes, the iterative creation of shippable software pieces is getting more and more important. It sounds great to regularly deliver new features to the customer in order to get instant feedback and to understand the solution better. It is usually pretty difficult for non-technical people to understand complex systems based on hundreds of pages of a software specification. But these popular agile development processes come with other difficulties, such as how to ensure the quality of the software in short development cycles and how to give the customer an easy way to accept and release features.
The concepts about continuous integration and gated check-ins from the previous posts are of course a very important basis for delivering high-quality software, but they do not address the issues during delivery.
In the next posts I would like to write about the following important topics to establish a working continuous delivery process:
- Automated Deployments
- Automated UI Testing
- Feature Roll-Out
4/06/2012
Static Code Analysis based on NDepend
This is the eleventh post in a series about Continuous Integration and Continuous Delivery. The series explains how important continuous integration is for delivering software to the customer in short iterations and with a high quality standard.
The static code analysis rules from Microsoft, which I introduced in the last post, are a great and easy way to start finding common problems in your code such as memory leaks, security holes or application crashes. As soon as you want to define your own customized rules and analyze your code in more depth, you should take a look at NDepend. It is an amazing tool that is quite easy to use and gives you great information about the quality of your code by analyzing the dependencies of every single line. Besides that, NDepend comes with its own simple SQL-like query language to select the important information from the code analysis. On top of that, it can easily be integrated into the CI build to execute the defined queries and check whether any of them are violated.
In this post I am going to explain how to write your own queries with NDepend and integrate them into the TFS build workflow.
Set up NDepend
First of all you have to download NDepend. The installation package comes with a Visual Studio add-in which you should install. After that, a new menu entry is available in Visual Studio, as well as a status icon in the bottom right corner. Load your current solution and select the NDepend menu entry "Attach new NDepend Project to current VS Solution". It is important that you build your solution before you use NDepend, because the analysis is based on the created assemblies. After you have attached NDepend to the solution, an analysis runs and opens a report web page.
You should spend some time on this report and study all the information and metrics you get from NDepend. There is a dependency graph and matrix, as well as more than 80 code metrics such as complexity, maintainability index or lines of code and intermediate language instructions. You can easily identify the parts of your code that are used heavily (Type Rank metric) and should be tested more carefully, because bugs in these components have a higher impact. All the NDepend metrics are explained in detail on the page NDepend Code Metrics Definitions. The areas on the NDepend report are defined in the CQL Query Explorer. Out of the box, NDepend already ships with hundreds of predefined queries to analyze your code deeply.
Write your own queries
NDepend is a powerful and complex tool for executing static code checks. It also allows you to define your own rule set using the Code Query Language (CQL). You can easily adapt the existing queries or create completely customized ones. Because of the SQL-like syntax, it is really easy to understand.
The following query, for instance, selects all the methods which have more than 20 lines of code (comments and empty lines are not counted).
SELECT METHODS WHERE NbLinesOfCode > 20 ORDER BY NbLinesOfCode DESC
Besides such general queries it is of course also possible to write specific queries for your solution. This query selects all assemblies which directly reference the data access component (DepthOfIsUsing "ASSEMBLY:DataAccess" == 1) but are not the business logic (!NameIs "BusinessLogic"). This can detect, for example, if the data access is used directly by the presentation layer.
SELECT ASSEMBLIES WHERE DepthOfIsUsing "ASSEMBLY:DataAccess" == 1 AND !NameIs "BusinessLogic"
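To make the intent of the rule concrete, this is the kind of direct dependency it would flag; the namespaces and class names are made up for illustration and the two types would normally live in separate assemblies:

using System.Collections.Generic;

namespace DataAccess
{
    // Stands in for the real data access component referenced by the rule.
    public class OrderRepository
    {
        public List<string> LoadAll()
        {
            return new List<string>();
        }
    }
}

namespace Presentation
{
    // Violation: the presentation layer talks to DataAccess directly
    // instead of going through the BusinessLogic assembly.
    public class OrderController
    {
        public IEnumerable<string> ShowOrders()
        {
            var repository = new DataAccess.OrderRepository();
            return repository.LoadAll();
        }
    }
}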
Integrate NDepend into TFS Build
After you have defined all your NDepend rules, it would be great if code that violates them could no longer be checked in. To achieve that, you first have to flag the queries that should cause an error. First of all, the query has to issue a warning when a certain threshold is reached.
In the following example a warning is shown if just one method exceeds 20 lines of code.
WARN IF Count > 0 IN SELECT METHODS WHERE NbLinesOfCode > 20 ORDER BY NbLinesOfCode DESC
Additionally, the rule has to be marked as critical to make the build fail. This is done with the red button in the upper right corner.
After you have defined all your rules, their limits and their importance, the build has to be configured to fail for check-ins that do not fulfill these rules. First of all, enable the gated check-in feature so that bad code is not committed to source control. An NDepend activity is available on CodePlex and has to be integrated into the TFS 2010 workflow build (see also Integrate NDepend with TFS).
How to improve old legacy code step-by-step
All of these code checks are great if you introduce them before you actually write the code. The problem is that this is not always the case. In most cases an old legacy system with a lot of code already exists and should be improved, and that can usually only be done step by step.
Think about introducing a rule that every method may be a maximum of 20 statements long. With legacy code and thousands or tens of thousands of methods, introducing this rule is nearly impossible. For this situation NDepend provides a great feature to apply the rules only to changed code. This makes it possible to define strict rules and introduce them step by step: every check-in makes the code better than before, and even huge and complex systems can be improved over time.
Summary
NDepend is a great and extremely powerful tool to analyze your code. You can easily write your own queries and integrate them into the build. This saves a lot of time during architecture, design and code reviews, because the most common errors can already be detected before the code is actually checked in. Afterwards it is usually extremely difficult, and can cost a lot of money, to clean up and repair bad code.
Hint: after every code review, try to identify the common errors and write a rule that detects them in the future.