Hi Holger,
> The project is for company CAST that has software for software
> quality assessment.
The CAST Software company, http://www.castsoftware.com/ ?
Can you tell us more?
Because another company told me that their metrics were fuzzy too :)
>
> I know them, that's cool!
>> I have a simple question: Do you have more analysis methods and
>> metrics that are not yet published with Moose?
> You can find a pre-release of (some of) my current work here
>
> http://www.iam.unibe.ch/~akuhn/d/Kuhn-2008-WCRE-SoftwareMap.pdf
>> And how about correctness of the measured values? The class comment
>> states: "Right now only Number of MessageSends is computed in a
>> correct manner." On the first glance, I cannot see which of the
>> measured values could be wrong. Any hint on inappropriate
>> computation or improvement would be appreciated. The only thing I
>> see a bit confusing is the computation of two McCabe numbers
>> #cyclomaticNumber and #cyclomaticNumber2. Unfortunately, the class
>> comment does not tell enough.
> Correctness of measured values is a big issue. The correctness of the
> numbers that Moose produces is unknown. Even such simple measurements
> as the number of classes or method invocations might be wrong!
I think that Adrian is simplifying a bit too much.
What is the language that you want to analyse?
For Smalltalk, since the language is dynamically typed, nobody on
earth can be precise when talking about invocations (see the small
example below)!
Now for the number of classes, I think that this is correct.
Now if you take any "professional" tool, you have to ask yourself why
it would be any more precise.
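To make the invocation point concrete, here is a tiny made-up snippet
(not Moose code); the class of the receiver is only known at run time:

    "Every element answers #printString through a different method.
     Looking at the single send site below, a static analysis cannot
     tell which of those methods will actually run; at best it can
     list all methods with a matching selector as candidates."
    | things |
    things := OrderedCollection new.
    things add: 3; add: 'three'; add: 3.0.
    things do: [:each | Transcript show: each printString; cr]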
> Moose uses the FAMIX model that has the aim to be language
> independent, which can only be achieved at the cost of less
> precision. FAMIX is thus a lossy representation of software rather
> than precise, think JPEG vs PNG.
It is not only that. With certain programs you do not have the
information statically. For C++, for example, we do not have pointer
analysis tools; Moose is not that kind of tool.
> For that reason I suggested some time ago to add an error range to
> all numbers,
I do not see how this would really help. If you do not have pointer
analysis or call-flow analysis, the error range can be as meaningless
as the metrics themselves.
> and started to take a look at some of the numbers. That is why there
> are two different McCabe measurements. As far as I recall,
> #cyclomaticNumber2 is more correct than #cyclomaticNumber. I only
> looked at McCabe and found it is not correct, so I guess other
> measurements need careful review too. But alas, I did not have time
> to complete that work...
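For whoever picks up that review: McCabe's number can be computed from
the control-flow graph as E - N + 2P, or equivalently by counting
decision points plus one, and the variants disagree mostly on what
counts as a decision point (boolean operators, multi-way branches,
exception handlers). The class comment does not say which convention
each selector follows, so here is only a small made-up method (not
Moose code) showing how two conventions give two values for the same
code:

    sumOfSmallPositivesIn: aCollection
        "Counting branch points -- the whileTrue: loop and the ifTrue:
         test -- gives 2 + 1 = 3. A stricter convention that also
         counts the short-circuit and: as a decision gives 3 + 1 = 4."
        | sum index each |
        sum := 0.
        index := 1.
        [ index <= aCollection size ] whileTrue: [
            each := aCollection at: index.
            (each > 0 and: [ each < 100 ])
                ifTrue: [ sum := sum + each ].
            index := index + 1 ].
        ^ sum

Documenting which convention #cyclomaticNumber and #cyclomaticNumber2
follow would already remove most of the confusion.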
Holger, we use Moose for a lot of projects and we did not notice
problems. For basic metrics Moose is correct. Of course we may have
different definitions, but as with any non-trivial measuring tool you
have to check and calibrate it first.

Stef