Hi Fabrizio,
On Jul 25, 2012, at 4:58 PM, Fabrizio Perin wrote:
> Sorry, but the problem is not that there are no 64-bit VMs and images
> available. The problem is that the available 32-bit VM and image cannot be
> pushed beyond 500MB. Even worse, as far as I understood, we are not even
> sure why that is.
Is that Windows only? I run larger Mac images. (On Windows a 32-bit process only gets
2GB of user address space by default, and if I remember correctly the VM wants its object
memory in one contiguous block, so the practical ceiling can sit well below 4GB and
differ per platform.)
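For what it's worth, we can probe the headroom from inside the image. A minimal sketch
against the Squeak/Pharo SystemDictionary protocol of that era (bytesLeft and
garbageCollect both answer a number of free bytes):

    "Report free object memory before and after a full garbage collection."
    Transcript
        show: 'Free now: ', Smalltalk bytesLeft printString, ' bytes'; cr;
        show: 'Free after full GC: ', Smalltalk garbageCollect printString, ' bytes'; cr.

Running that on Windows and on the Mac just before an out-of-memory failure would at
least tell us whether we hit the heap or the address space.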
> For me a reasonable size for a Moose image containing an average-size Java
> Enterprise application is between 500MB and 1500MB. So a 32-bit VM and image
> should be perfectly able to store the whole model and still have enough free
> space for computations.
For me the difference between a 500MB model and a 2GB model is not really meaningful:
either way there is still a hard limit on the size of the models I can handle, so I try
to avoid loading as much as possible. A 588MB image starts in 3 seconds on my smallest
machine, so that is fast enough.
> Partial loading could be a solution in some cases, but we need tool support
> for that. I cannot invest 2 weeks every time I need to script a 10-minute
> analysis, trying to figure out how to partially load the information that I
> "might" need. Without a full model available, the entire idea of prototyping
> analyses behind Moose goes down the drain, and Moose itself loses a lot of
> its meaning.
+1
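To make the prototyping point concrete, this is the kind of throwaway query that only
stays cheap when the whole model sits in the image (a sketch against the Moose/FAMIX API;
the file name is made up):

    "Load a complete FAMIX model from an MSE export, then ask an ad-hoc question."
    | model godClasses |
    model := MooseModel new.
    model importFromMSEStream: (FileStream readOnlyFileNamed: 'mySystem.mse').
    godClasses := model allModelClasses select: [ :each | each numberOfMethods > 50 ].
    Transcript show: godClasses size printString, ' classes with more than 50 methods'; cr.

With partial loading, every query like that would first need a decision about what to load.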
> I think the whole point is to have all the data on the system under analysis
> at hand. Whether a 10GB model is stored in an image or the needed entities
> are loaded on demand is not relevant, as long as it is transparent for the
> user and the performance is not too bad.
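Transparent on-demand loading in Smalltalk usually means a doesNotUnderstand: proxy; a
minimal sketch of the idea (all names made up):

    "A stand-in that faults in the real entity on first use, then forwards to it."
    ProtoObject subclass: #LazyEntity
        instanceVariableNames: 'loadBlock target'
        classVariableNames: ''
        category: 'LazyLoading'.

    LazyEntity >> setLoadBlock: aBlock
        loadBlock := aBlock

    LazyEntity >> doesNotUnderstand: aMessage
        "First real message triggers the load; everything is forwarded afterwards."
        target ifNil: [ target := loadBlock value ].
        ^ aMessage sendTo: target

But every such fault is a round-trip to wherever the data actually lives.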
I don't understand how performance can be good with a NoSQL or RDBMS backend. GemStone
with enough RAM, or multiple Pharo images with distributed processing, yes, but copying
all that data around sounds to me like a non-starter.
Stephan