Crap4J and other metric tools

In this session I was specifically interested in discussing metric tools that have a bias towards identifying bad code. A metric that "measures code health" is not the same as one that "finds code sickness", and for the moment I'm more interested in the latter.
 
To start the discussion I introduced CRAP4J, a free tool that tries to identify code that, if you had to inherit it, you'd probably declare crappy. The crap score is meant to act like a cholesterol test: if your cholesterol is above 200 mg/dL you need to lower it; if a method's crap score is above 30 you need to either write more tests or refactor it. This strong bias towards action is what sets CRAP4J apart from other tools that can identify the same problems but require more effort on the part of the user to read the tea leaves.
 
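For reference, the crap score is computed from just two inputs per method: cyclomatic complexity and test coverage. Below is a minimal sketch of the published formula (the class and method names are mine, not CRAP4J's), assuming coverage is expressed as a fraction between 0 and 1.

<pre>
/**
 * Minimal sketch of the published CRAP formula:
 *   CRAP(m) = comp(m)^2 * (1 - cov(m))^3 + comp(m)
 * where comp(m) is a method's cyclomatic complexity and cov(m) is its
 * test coverage as a fraction between 0 and 1.
 */
public final class CrapScore {

    /** Crap score for a single method. */
    public static double crap(int complexity, double coverage) {
        double untested = 1.0 - coverage;  // share of the method not exercised by tests
        return complexity * complexity * untested * untested * untested + complexity;
    }

    public static void main(String[] args) {
        System.out.println(crap(10, 0.0));  // 110.0 -- complex and untested, far over the threshold of 30
        System.out.println(crap(10, 0.8));  // 10.8  -- same complexity with decent coverage is fine
    }
}
</pre>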
After the discussion of crap we moved on to Dependometer, which is being developed by ValTech. Their point was that circular dependencies are a serious code smell, a problem for testing and for maintenance more generally. They use the tool to perform architecture reviews very quickly. Apparently an Eclipse plug-in version is being developed.
 
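To make the circular-dependency point concrete, here is a tiny depth-first-search sketch of my own (nothing to do with Dependometer's implementation) that reports whether a "package depends on packages" map contains a cycle.

<pre>
import java.util.*;

/** Toy circular-dependency check; real tools also report the offending cycle. */
public final class CycleCheck {

    static boolean hasCycle(Map<String, List<String>> deps) {
        Set<String> finished = new HashSet<>();  // nodes whose dependencies are fully explored
        Set<String> onPath = new HashSet<>();    // nodes on the current DFS path
        for (String node : deps.keySet()) {
            if (dfs(node, deps, finished, onPath)) return true;
        }
        return false;
    }

    private static boolean dfs(String node, Map<String, List<String>> deps,
                               Set<String> finished, Set<String> onPath) {
        if (onPath.contains(node)) return true;   // back edge: a cycle exists
        if (finished.contains(node)) return false;
        onPath.add(node);
        for (String dep : deps.getOrDefault(node, List.of())) {
            if (dfs(dep, deps, finished, onPath)) return true;
        }
        onPath.remove(node);
        finished.add(node);
        return false;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = Map.of(
                "ui", List.of("service"),
                "service", List.of("persistence"),
                "persistence", List.of("service"));  // service <-> persistence
        System.out.println(hasCycle(deps));          // true
    }
}
</pre>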
The next tool mentioned was Panopticode, which provides interesting visualizations for a whole set of metrics.
 
java2cdiff creates files that can be used by [http://www.inf.unisi.ch/faculty/lanza/codecrawler.html CodeCrawler], a language-independent metrics/reverse-engineering tool. In the post-talk discussion and web browsing we found that there is now an Eclipse plug-in that provides similar views, [http://atelier.inf.unisi.ch/~malnatij/xray.html X-Ray]. This looks very cool!
 
JUCA is an interesting attempt to estimate coverage without actually running the tests. I don't quite understand why you wouldn't just run the tests with a coverage tool, but I'm sure there must be a reason. (?)
 
One good idea that came up was to measure check-in velocity, and then to enforce code rules and refactor in the areas with the highest velocity. StatSVN was mentioned as a useful tool for measuring check-in velocity.
 
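As a rough sketch of the check-in velocity idea (my own toy, not StatSVN, and it assumes you saved the plain-text output of "svn log -v" to a file), the following counts how often each path appears in the changed-path lines and prints the ten most frequently touched files.

<pre>
import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.regex.*;

/** Toy check-in velocity counter over saved "svn log -v" output; StatSVN does this properly. */
public final class CheckinVelocity {

    // Changed-path lines in "svn log -v" output look like "   M /trunk/src/Foo.java"
    private static final Pattern CHANGED_PATH = Pattern.compile("^   [AMDR] (/\\S+)");

    public static void main(String[] args) throws IOException {
        Map<String, Integer> touches = new HashMap<>();
        for (String line : Files.readAllLines(Path.of(args[0]))) {
            Matcher m = CHANGED_PATH.matcher(line);
            if (m.find()) touches.merge(m.group(1), 1, Integer::sum);
        }
        touches.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(10)
                .forEach(e -> System.out.println(e.getValue() + "\t" + e.getKey()));
    }
}
</pre>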
It didn't come up during the session, but the new Clover has a cloud of classes that identifies very similar information to CRAP4J (the most complex and least tested code) as [http://downloads.atlassian.com/software/clover/samples/lucene/project-risks.html project risk]. Size is complexity, color is coverage. I discuss it and other ways to visualize complexity in a project in [http://www.developertesting.com/archives/month200710/20071025-VisualizingComplexityAndCoverage.html this blog].
 
== Available tools ==
  
 
* [http://www.crap4j.org/ Crap4J] (Change Risk Analysis and Predictions software metric)
* [http://www.javaworld.com/podcasts/jtech/2007/102507jtech003.html Podcast on CRAP4J] with Alberto Savoia and Andy Glover
* [http://www.atlassian.com/software/clover/ Clover] (code coverage analysis tool)
* [http://cobertura.sourceforge.net/ Cobertura] (coverage tool based on jcoverage)
* [http://clarkware.com/software/JDepend.html JDepend] (Java design quality metrics)
* [http://checkstyle.sourceforge.net/ Checkstyle] (checking Java code)
* [http://www.lattix.com Lattix] (uses the amazingly simple Design Structure Matrix style; Java, .NET, C/C++ and others)
* [http://squadlimber.com/chris/mostLikelyToBeBuggyTop10.txt mostLikelyToBeBuggy.py] (for Java; based on TODOs and comments. See the related article [http://squadlimber.com/chris/2007/04/26/sloppy-code-equals-buggy-code/ Sloppy Code Equals Buggy Code?])
* [http://sonar.hortis.ch/ Sonar] (open-source projects dashboard and quality control tool)
