Partial loading could be a solution in some cases, but we need tool support
for it. I cannot invest two weeks every time I need to script a ten-minute
analysis, trying to figure out how to partially load the information that I
"might" need. Without a full model available, the whole idea of prototyping
analyses behind Moose goes down the drain, and so Moose itself loses a lot
of its meaning.
I think the whole point is to have all the data on the system under analysis
at hand. Whether a 10 GB model is stored in an image or the needed entities
are loaded on demand is not relevant, as long as it is transparent to the
user and the performance is not too bad.
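
To make "transparent" concrete, here is a minimal sketch of on-demand
loading behind a proxy (in Python rather than Smalltalk, purely for
illustration; the store object and its load method are assumptions, not
Moose API):

class LazyEntity:
    """Proxy that fetches the real entity from a backing store on first use."""

    def __init__(self, entity_id, store):
        self._id = entity_id
        self._store = store    # anything with a load(entity_id) method
        self._target = None    # the real entity, filled in lazily

    def __getattr__(self, name):
        # Only reached for attributes the proxy itself lacks, so the
        # store is hit exactly once, on the first real access.
        if self._target is None:
            self._target = self._store.load(self._id)
        return getattr(self._target, name)

Client code then uses a LazyEntity exactly like the entity itself; whether
the model sits in a 10 GB image or on disk stays invisible, which is the
property I mean above.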
Cheers,
Fabrizio
2012/7/25 <stephan(a)stack.nl>
Hi Doru,
When thinking about the scalability of Moose, what scenarios do you have
in mind? Up to about half a terabyte, you can work entirely out of main
memory on a single machine cost-effectively. The main limitation there is
the lack of a 64-bit VM and image. As far as I understand the access
patterns involved, a main-memory-based or distributed-main-memory
solution is far preferable for actually analyzing systems. What do you
hope to achieve by going to disk? When we did the data conversion project,
we thought about partitioning over multiple images but in the end managed
with partial loading.
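
Partial loading here amounts to streaming the stored model and
materialising only what a predicate selects, so memory scales with the
selection rather than with the full model. A rough sketch (Python, for
illustration only; the one-JSON-record-per-line layout is an assumed
storage format, not what we actually used):

import json

def load_partial(path, predicate):
    """Yield only the entity records the analysis actually needs."""
    with open(path) as f:
        for line in f:                # assumed: one JSON record per line
            record = json.loads(line)
            if predicate(record):
                yield record

# e.g. keep only the classes of one subsystem:
# wanted = list(load_partial("model.json",
#                            lambda r: r.get("type") == "Class"))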
Stephan
_______________________________________________
Moose-dev mailing list
Moose-dev(a)iam.unibe.ch
https://www.iam.unibe.ch/mailman/listinfo/moose-dev