Hi Stef,
Sorry, but the problem is not that there are no 64-bit vms and images
available. The problem is that the available 32-bit vm and image cannot be
pushed beyond 500MB. Even worse, as far as I understand, we are not even
sure why that is the case.
For me, a reasonable size for a Moose image containing an average-sized
Java Enterprise application is between 500MB and 1500MB. So a 32-bit
vm/image (in practice a 32-bit process can address 2GB or more) should be
perfectly able to store the whole model and still have enough free space
for computations.
Partial loading could be a solution in some cases, but we need tool support
for that. I cannot invest two weeks every time I need to script a 10-minute
analysis, trying to figure out how to partially load the information that I
"might" need. Without a full model available, the entire idea of
prototyping analyses behind Moose goes down the drain, and Moose itself
loses much of its meaning.
I think the whole point is to have all the data of the system under
analysis at hand. Whether a 10GB model is stored in the image or the needed
entities are loaded on demand is irrelevant, as long as it is transparent
for the user and the performance is not too bad.
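To make the transparency point concrete, here is a minimal sketch in Java
of what "loading on demand, invisibly to the user" could look like.
LazyEntity and loadFromStore are hypothetical names for illustration, not
actual Moose API:

    import java.util.function.Function;

    // Minimal sketch of a transparently lazy model entity: analysis
    // code just calls get() and never knows whether the value was
    // already in memory or had to be fetched from a backing store.
    class LazyEntity<T> {
        private final String id;
        private final Function<String, T> loader;
        private T value;

        LazyEntity(String id, Function<String, T> loader) {
            this.id = id;
            this.loader = loader;
        }

        synchronized T get() {
            if (value == null) {
                value = loader.apply(id); // fetched only on first access
            }
            return value;
        }
    }

    // Usage: a large model hands out cheap LazyEntity handles and only
    // materializes the entities an analysis actually touches, e.g.:
    //   LazyEntity<String> source =
    //       new LazyEntity<>("com.acme.Foo", id -> loadFromStore(id));
    //   (loadFromStore stands in for whatever backing store is used)

Making this fast enough over a 10GB model is exactly the tooling work I
mentioned above, but the contract on the user's side can stay this simple.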
Cheers,
Fabrizio
2012/7/25 <stephan@stack.nl>
Hi Doru,
When thinking about the scalability of Moose, what scenarios do you have
in mind? Up to about half a terabyte, you can fit everything in main memory
on a single machine cost-effectively. The main limitation there is
the lack of a 64-bit vm and image. As far as I understand the access
patterns involved, a main-memory-based or distributed main-memory
solution is far preferable for actually analyzing systems. What do you
hope to achieve by going to disk? When we did the data conversion project,
we thought about partitioning over multiple images but in the end managed
with partial loading.
Stephan