And how about the correctness of the measured values? The class
comment states: "Right now only Number of MessageSends is computed
in a correct manner." At first glance, I cannot see which of the
measured values could be wrong. Any hint about incorrect computations
or possible improvements would be appreciated. The only thing I find
a bit confusing is the computation of two McCabe numbers,
#cyclomaticNumber and #cyclomaticNumber2. Unfortunately, the class
comment does not tell enough.
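(For reference: McCabe's cyclomatic complexity of a control-flow
graph with E edges, N nodes, and P connected components is
V(G) = E - N + 2P, which for a single method reduces to "number of
decision points + 1". A common reason for two variants of the metric
is whether boolean connectives such as #and:/#or: count as extra
decision points; whether that is the actual difference between
#cyclomaticNumber and #cyclomaticNumber2 would need to be checked in
the code.)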
Correctness of measured values is a big issue. The correctness of
the numbers that Moose produces is unknown. Even such simple
measurements as the number of classes or method invocations might be
wrong!
I think that Adrian is simplifying a bit too much.
What is the language that you want to analyse?
For Smalltalk, since the language is dynamically typed, nobody on
earth can be precise when talking about invocations!
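To illustrate with a minimal sketch (Square and Circle are
hypothetical classes, both assumed to implement #area):

    | shapes |
    shapes := Array with: Square new with: Circle new.
    shapes do: [:each | each area]

Without type information, "each area" could invoke Square>>area,
Circle>>area, or any other implementor of #area in the image, so the
invocation count of any one of those methods is not a single
well-defined number.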
Now, for the number of classes, I think that this is correct.
And if you take any "professional" tool, you have to ask yourself
the same question.
Example of error range in senders:
FAMIX does not model shared variables and their initializers, so any
sender occurring in an initializer is lost. If you browse senders
with VW, you get them. RBCrawler gets them as well, but filters the
senders by the result of a flow analysis with method scope (it does
the same abstract interpretation as RoelTyper, as I found out later
when comparing the two tools).
That is, three numbers with different precision for the same metric.
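For instance (a sketch only; the exact VW shared-variable definition
syntax is omitted, the point is just that the initializer expression
contains a send):

    "initializer expression of a hypothetical shared variable"
    DefaultCache := Dictionary new

Browsing senders of #new in VW finds this send, but FAMIX has no
entity in which the initializer could live, so Moose never counts it.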
Example of error range in number of classes and methods:
Fame uses 4 anonymous subclasses of Fame.MetaDescription to
instantiate primitive descriptions. FAMIX will not model them, but
Object withAllSubclasses will list them.
That is, two numbers for the same metric.
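To make the difference concrete (a sketch; "model allClasses" stands
in for whatever query your FAMIX model offers and is an assumption
here):

    famixCount := model allClasses size.
    imageCount := Object withAllSubclasses size

imageCount includes Fame's four anonymous subclasses of
Fame.MetaDescription while famixCount does not, so the two counts
differ for the very same system.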
That is, even something as simple as the number of classes is more
than one number, and thus the correct number of classes is rather a
range of possible numbers. In physics, error ranges are given as
N +/- Err; in software analysis, they could be given by making clear
what has been measured and what is missing. See the examples above.
Of course, whether the above errors matter or not will depend on
your use case. For some use cases they might be no problem; for
others they are critical, or at least annoying.
Whether you care or not also depends on your distance from the
source code. Consider for example a class blueprint. In Smalltalk,
most #new methods call an #initialize method that typically creates
new objects of a different type by calling #new again. Moose will
thus visualize the two methods as calling each other, even though
any developer can tell that they do not (see the sketch below)! For
a consultant doing an offline analysis at 10'000 feet altitude that
might be good enough, but for a developer using the tool while
working at ground level, the visualization must be precise, or they
stop using it because its results are obviously false... and this is
why RBCrawler takes the whole pain of running a flow analysis: I was
eating my own dogfood :)
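The pattern, as a minimal sketch with a hypothetical class Node
(children being an instance variable of Node):

    Node class >> new
        ^ super new initialize

    Node >> initialize
        children := OrderedCollection new

A senders-based tool sees #new sending #initialize and #initialize
sending #new, and draws the two methods as mutually recursive. A
flow analysis resolves the receiver of the inner #new to
OrderedCollection and does not.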
cheers,
AA