On Sun, Jan 24, 2016 at 12:26 PM, Tudor Girba <tudor(a)tudorgirba.com> wrote:
Thanks for looking at this.
Please let us know what support you need, or what kind of experiments we
can do on our side.
What I want most of all is a test case that I can run on Mac. I'm assuming
that I can just copy the job that Vincent mentioned in his email and use
that as a test case. Do you see any issues with that? If so, how do I get
a really big MOOSE test case to run on Mac?
> On Jan 24, 2016, at 9:20 PM, Eliot Miranda <eliot.miranda(a)gmail.com> wrote:
> Hi Vincent,
> I'll take a look early this week. There's clearly a bug; the Spur
> GC is /not/ collecting those dictionaries :-( (thanks Stephan!).
> Assuming the bug is fixed times should come down (see below). It may
> be that the bug in Slang that I introduced in December has broken
> things because I don't see these symptoms in my daily work (but I use
> the most up-to-date VM version possible ;-) ). But I'm not in denial
> and look forward to using MOOSE as a good stress case.
> I do want to say that the GC is not complete. Right now we have a
> scavenger that works well, and a global GC that has a slow compaction
> algorithm, and hence there are significant pauses. For example here's
> what I see as typical in using Spur for VMMaker work:
> memory 160,432,128 bytes
> old 153,658,624 bytes (95.8%)
> young 4,838,224 bytes (3%)
> used 127,009,928 bytes (79.2%)
> free 28,126,456 bytes (17.5%)
> GCs 7,265 (?? ms between GCs)
> full 36 totalling 13,229 ms (0% uptime), avg 367.5 ms
> incr 7,229 totalling 6,546 ms (0% uptime), avg 0.9 ms
> tenures 3,589,063 (avg 0 GCs/tenure)
> (There's no uptime in the above stats because we're still
> transitioning Squeak to the 64-bit clock and there are consequently
> bugs in computing uptime).
> The plan is to add an incremental global GC so this work is broken up
> into much smaller pieces. I don't want to see 700ms pauses in global
> GC; one can't do game animation with that. So an incremental
> mark-sweep is needed. There are two nice papers we're considering,
> one from Lua and one for a truly concurrent collector. But time is
> pressing, so if anyone out there knows GC and is interested in helping
> this is a nicely self-contained project for which we'd love to have help.
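For readers unfamiliar with the technique being proposed: an incremental collector splits marking into bounded work slices so no single pause has to traverse the whole heap. Below is a minimal tri-color marking sketch in Python; the object representation, field names, and step budget are all illustrative, and this is not the Spur design (real collectors also need a write barrier so the mutator cannot hide unmarked objects behind already-marked ones).

```python
# Minimal incremental tri-color mark sketch (illustrative only).
# Colors: white = unvisited, grey = queued, black = fully scanned.

def make_obj(*refs):
    return {"refs": list(refs), "color": "white"}

def mark_increment(grey, budget):
    """Blacken up to `budget` grey objects, then yield back to the mutator."""
    done = 0
    while grey and done < budget:
        obj = grey.pop()
        for ref in obj["refs"]:
            if ref["color"] == "white":
                ref["color"] = "grey"
                grey.append(ref)
        obj["color"] = "black"
        done += 1
    return done

# Small graph: root -> a -> b, root -> c; d is unreachable (garbage).
b = make_obj(); a = make_obj(b); c = make_obj(); d = make_obj()
root = make_obj(a, c)

grey = [root]
root["color"] = "grey"
while grey:                       # each iteration is one short pause
    mark_increment(grey, budget=2)

print([o["color"] for o in (root, a, b, c)])  # all 'black' (live)
print(d["color"])                              # 'white' -> would be swept
```

The point of the budget parameter is that each call bounds the pause; the 700 ms full-GC pause becomes many sub-millisecond slices interleaved with the mutator.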
> _,,,^..^,,,_ (phone)
>> On Jan 24, 2016, at 3:43 AM, Vincent BLONDEAU <
>> I made the benchmarks with the files you provided. I get more or less the
>> same order of magnitude:
>> Version 504: 0:00:01:09.021
>> Version 1175: 0:00:02:37.507
>> However, by launching it in the time profiler (MooseModel new
>> importFromMSEStream: (StandardFileStream readOnlyFileNamed:
>> 'd:/ArgoUML-0-34.mse')), it takes
>> 504: 1 min 55
>> 1175: 4 min 25
>> Well there is a delta...
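For scale, those profiler timings amount to roughly a 2.3x slowdown (quick arithmetic in Python, converting the min:sec figures quoted above):

```python
# Profiler timings from above, converted to seconds.
t_504 = 1 * 60 + 55    # "1 min 55" under VM 504  -> 115 s
t_1175 = 4 * 60 + 25   # "4 min 25" under VM 1175 -> 265 s

print(round(t_1175 / t_504, 2))  # 2.3
```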
>> After investigation, the standard process has almost the same duration (…
>> secs for pre-Spur and 140 secs for Spur).
>> But, there is a large difference in GC time:
>> 504: not spur
>> old +144,822,000 bytes
>> young -8,293,660 bytes
>> used +136,528,340 bytes
>> free -104,186,788 bytes
>> full 1 totalling 965ms (1.0% uptime), avg 965.0ms
>> incr 3264 totalling 42,279ms (33.0% uptime), avg 13.0ms
>> tenures 2,497 (avg 1 GCs/tenure)
>> root table 0 overflows
>> 1175: spur
>> old +0 bytes
>> young +340,048 bytes
>> used +340,048 bytes
>> free -340,048 bytes
>> full 7 totalling 145,003ms (66.0% uptime), avg
>> incr 3288 totalling 30,912ms (14.0% uptime), avg 9.0ms
>> tenures 7,146,505 (avg 0 GCs/tenure)
>> root table 0 overflows
>> Total GC time
>> 504: 43 secs
>> 1175: 176 secs
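Those "Total GC time" figures are consistent with the full + incremental totals in the two reports above (a quick Python check; figures copied from the reports):

```python
# GC totals (ms) from the two reports above.
total_504_ms = 965 + 42_279        # full + incr, VM 504 (pre-Spur)
total_1175_ms = 145_003 + 30_912   # full + incr, VM 1175 (Spur)

print(round(total_504_ms / 1000))   # 43  -> "504: 43 secs"
print(round(total_1175_ms / 1000))  # 176 -> "1175: 176 secs"
```

Note that almost all of the Spur regression is in the 7 full GCs (145 s of the 176 s total), which matches Eliot's point about the slow global compaction.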
>> See the performance reports attached.
>> I let VM people take care of the issue ;)
>> -----Original Message-----
>> From: moose-dev-bounces(a)list.inf.unibe.ch
>> [mailto:email@example.com] On Behalf Of Tudor Girba
>> Sent: Sunday, 24 January 2016 09:08
>> To: Moose-related development
>> Subject: [Moose-dev] Re: mse loading looks slower :(
>> I am talking about the difference between Moose 6 images:
>> - October 7:
>> - yesterday:
>> Multiple things did change, but not in Moose. In the end, I would like to
>> understand where the slowness comes from. Maybe it comes from Spur itself,
>> maybe it comes from somewhere else.
>>>> On Jan 24, 2016, at 1:41 AM, Mariano Martinez Peck <
>>> Doru... just to be sure it is not a Pharo (image) change: when you said
>>> before and after Spur, do you mean a Pharo 5.0 exactly (just before Spur)
>>> and a Pharo JUST after it? Otherwise, the slowness may come from the
>>> difference between the two Pharos you are running.
>>> On Sat, Jan 23, 2016 at 5:55 PM, Tudor Girba <tudor(a)tudorgirba.com> wrote:
>>> I am doing some performance testing of Moose with the Spur VM on Mac.
>>> I tried to load an MSE file with ArgoUML 0.34, and on my machine it is
>>> twice as slow with Spur as before:
>>> - PreSpur: 0:00:01:07.272
>>> - Spur: 0:00:02:10.508
>>> Here is the reference file:
>>> And here is the script:
>>> [ MooseModel new
>>>     importFromMSEStream: (StandardFileStream readOnlyFileNamed:
>>>         (FileSystem workingDirectory / 'src' / 'ArgoUML-0-34' / 'ArgoUML-0-34.mse') fullName)
>>> ] timeToRun
>>> Do you get the same?
>>> "Problem solving should be focused on describing the problem in a way
>>> that makes the solution obvious."
>>> Moose-dev mailing list
>> "What is more important: To be happy, or to make happy?"