On Thu, Mar 30, 2017 at 07:15 Nicolas Anquetil <nicolas.anquetil@inria.fr> wrote:

Hi stephan,

thanks for your thoughts

(further comments below)


On 30/03/2017 13:31, Stephan Eggermont wrote:
> Hi Cyrille,
> Long time no see!
>
> On 30/03/17 10:07, Cyrille Delaunay wrote:
>> With the current memory limit of Pharo
>> and the size of the generated moose models being potentially huge,
>>
>> maybe some of you already though about (or even experimented)
>> persistence
>> solutions with query mechanisms that would instantiate famix objects
>> only “on demand”,
>>
>> in order to only have part of a model in memory when working on a
>> specific area.
>>
>> If so, I would be really interested to hear about (or play with) it :)
> The current FAMIX based models are not suitable for large models.
> The inheritance based modeling results in very large, nearly empty
> objects.
>
> Moose models tend to be highly connected and tend to be used using badly
> predictable access patterns. That makes "standard databases" a bad match,
> especially if you cannot push querying to them.
>
> We are very close to having 64bit Moose everywhere, shifting the
> problem from
> size of the model directly to speed.
"very close" seems a bit optimistic. For example, it will take some time
for windows yet
The problem is that Synectique is already having difficulties right now
and is looking for shorter term solution(s)

> As the VM uses only one native thread and
> 8-thread machines are everywhere, the best speed-up should be expected
> from
> splitting the model over multiple pharo images, and possibly over
> multiple machines.
>
interesting idea,
I am having some difficult seeing how to split a model in several parts
that would have to link somehow one to the other.

how do they link

Do you have any further thoughts on this point?

nicolas

--
Nicolas Anquetil -- MCF (HDR)
Project-Team RMod

_______________________________________________
Moose-dev mailing list
Moose-dev@list.inf.unibe.ch
https://www.list.inf.unibe.ch/listinfo/moose-dev