----- Original Message -----
| From: "Stéphane Ducasse" <stephane.ducasse(a)inria.fr>
| To: "Moose-related development" <moose-dev(a)iam.unibe.ch>
| Sent: Tuesday, August 20, 2013 1:54:04 PM
| Subject: [Moose-dev] Re: persisting moose models
| On Aug 20, 2013, at 7:47 PM, Dale K. Henrichs <dale.henrichs(a)gemtalksystems.com> wrote:
| | Doru,
|
| | I would be willing to spend time helping with a GemStone-based
| | persistence solution ... I will be at ESUG in September, so that
| | would be a great time to discuss the issues ...
|
| | I know that you are not planning on being there, but perhaps I
| | could meet with someone else who is familiar with the Moose
| | requirements ... there are several approaches that I think would
| | make sense, but it really depends upon your requirements ...
|
| Thanks, Dale.
| I will be there. Usman and Guillaume are not coming, but we can
| arrange a Skype meeting.
That would be good!
| In essence, Moose has models that are graphs of objects, like a code
| metamodel: a package contains classes, classes contain methods,
| methods access instance variables, and methods invoke other methods.
| So as soon as we program something on top of FAMIX, we navigate
| pointers in this graph. We did an experiment (mooseOnTheWeb) with
| Amber as a client and Moose as a server.
| The point is that it works when you do a query: you get back JSON
| objects, but only shallow ones. If you need to work on the graph,
| you have to do multiple queries to get from one piece of shallow
| information to the next.
| Now, in essence, a solution with GS would to me mean moving Moose to
| GS in the long term, and I'm not sure that this is the way to go.
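[Editorial sketch] The shallow-JSON problem described above can be illustrated with a small example. Python is used here only as neutral pseudocode (the real system is Pharo/Smalltalk, and every name below is hypothetical, not the FAMIX or mooseOnTheWeb API): navigating the in-memory object graph is a single expression, while a shallow-object representation forces one server round trip per hop in the graph.

```python
# Hypothetical sketch: Moose/FAMIX entities form an object graph
# (package -> classes -> methods). All names are illustrative only.

# In-memory graph: following pointers between objects is free.
class Method:
    def __init__(self, name):
        self.name = name

class Class_:
    def __init__(self, name, methods):
        self.name = name
        self.methods = methods

class Package:
    def __init__(self, name, classes):
        self.name = name
        self.classes = classes

pkg = Package("Kernel", [Class_("Point", [Method("x"), Method("y")])])
in_memory = [m.name for c in pkg.classes for m in c.methods]

# Shallow objects, as a mooseOnTheWeb-style server might return them:
# each answer carries only ids, so every hop is another query.
DB = {
    "pkg/Kernel": {"classes": ["cls/Point"]},
    "cls/Point": {"methods": ["mth/x", "mth/y"]},
    "mth/x": {"name": "x"},
    "mth/y": {"name": "y"},
}

def query(ident):
    # Stands in for one HTTP round trip to the Moose server.
    return DB[ident]

shallow = []
for cls_id in query("pkg/Kernel")["classes"]:    # round trip 1
    for mth_id in query(cls_id)["methods"]:      # round trip 2
        shallow.append(query(mth_id)["name"])    # round trips 3, 4

assert in_memory == shallow == ["x", "y"]
```

The same two-hop navigation that costs nothing in memory costs four round trips against the shallow representation, which is exactly why deep graph work over such an interface is painful.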
I'm not thinking in terms of develop-in-Pharo-and-deploy-in-GemStone for this ...
The basic problem is that you've got Moose models that are too big to fit in the
memory of Pharo ... so my basic idea is to provide a smart data store for the
Moose model and allow you to make queries against the GemStone db until the size
of the result set is small enough to fit in memory ... then the data (subgraph?)
would be transferred to Pharo, and all processing would be done completely in
Pharo from that point forward ... perhaps we would use something like Fuel to
ship these subgraphs efficiently over the wire ...

There are other "tricks" that can come into play, but I think it is worth
exploring the idea of using GemStone as a "smart datastore" where the line
between the Pharo client and the GemStone server is somewhat blurred ... Pharo
can do some hefty data processing on its own, so GemStone isn't required to do
all of the analysis ... I'm just imagining that in the face of a "too big for
memory" dataset, some sort of "data reduction queries" could be performed on the
server and the results shipped to Pharo for further analysis and visualization
... I assume that only the Moose core classes would need to be ported to
GemStone ...
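[Editorial sketch] The reduce-on-the-server, ship-the-subgraph workflow described above might look like the following. Python is again used as neutral pseudocode, with `pickle` standing in for Fuel's binary serialization; the server-side query, the size threshold, and all names are hypothetical, not a real GemStone or Fuel API. The point is the shape: the full model never leaves the server, only a reduced subgraph crosses the wire.

```python
import pickle

# Hypothetical sketch: the full model lives on the server (GemStone
# in the proposal above); the client runs a reduction query, and only
# the small result subgraph is serialized (Fuel would play pickle's
# role) and shipped to Pharo for local analysis and visualization.

full_model = {
    # Stands in for a too-big-for-client-memory Moose model.
    "classes": [{"name": f"C{i}", "loc": i * 10} for i in range(1000)]
}

def reduction_query(model, max_loc):
    """Server side: shrink the result set until it fits in memory."""
    return [c for c in model["classes"] if c["loc"] < max_loc]

# Server: run the query, then serialize the subgraph for the wire.
subgraph = reduction_query(full_model, max_loc=50)
payload = pickle.dumps(subgraph)

# Client (Pharo in the real setup): materialize and work locally,
# with no further round trips to the server.
received = pickle.loads(payload)
assert [c["name"] for c in received] == ["C0", "C1", "C2", "C3", "C4"]
```

Whether the threshold is expressed as a result-set size, a memory budget, or something richer is exactly the kind of design question a meeting at ESUG could settle.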
| May be I'm wrong.
Maybe I'm naive :)
Dale