Hi Doru,
On Sun, Jan 24, 2016 at 12:42 PM, Tudor Girba <tudor(a)tudorgirba.com> wrote:
I take it that this is a Spur image. It would be great to have both a Spur
and a V3 image to compare, as similar as is easily achievable. But as long
as there's both a V3 and a Spur image...
On my Mac, this takes some 3 minutes, while before it took about 1 minute.
Let me know if you need anything else.
Cheers,
Doru
On Jan 24, 2016, at 9:33 PM, Eliot Miranda <eliot.miranda(a)gmail.com> wrote:
Hi Doru,
On Sun, Jan 24, 2016 at 12:26 PM, Tudor Girba <tudor(a)tudorgirba.com> wrote:
Hi Eliot,
Thanks for looking at this.
Please let us know what support you need, or what kind of experiments we
can do on
our side.
What I want most of all is a test case that I can run on Mac. I'm assuming
that I can just copy the job that Vincent mentioned in his email and use that
as a test case. Do you see any issues with that? If so, how do I get a really
big MOOSE test case to run on Mac?
Cheers,
Doru
> On Jan 24, 2016, at 9:20 PM, Eliot Miranda <eliot.miranda(a)gmail.com> wrote:
>
> Hi Vincent,
>
> I'll take a look early this week. There's clearly a bug; the Spur
> GC is /not/ collecting those dictionaries :-( (thanks Stephan!).
> Assuming the bug is fixed, times should come down (see below). It may
> be that the bug in Slang that I introduced in December has broken
> things because I don't see these symptoms in my daily work (but I use
> the most up-to-date VM version possible ;-) ). But I'm not in denial
> and look forward to using MOOSE as a good stress case.
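>
> (A minimal image-side check for that would be along these lines; IdentityDictionary
> is only a stand-in for whichever class is suspected of leaking, and
> Smalltalk garbageCollect triggers a full GC in both Squeak and Pharo:)
>
>     | before after |
>     before := IdentityDictionary allInstances size.
>     Smalltalk garbageCollect.   "force a full GC"
>     after := IdentityDictionary allInstances size.
>     Transcript cr; show: 'before: ', before printString, ' after: ', after printString.
>     "if the collector reclaims the suspect objects, the second count should drop"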
>
> I do want to say that the GC is not complete. Right now we have a
> scavenger that works well, and a global GC that has a slow compaction
> algorithm, and hence there are significant pauses. For example here's
> what I see as typical in using Spur for VMMaker work:
>
> memory 160,432,128 bytes
> old 153,658,624 bytes (95.8%)
> young 4,838,224 bytes (3%)
> used 127,009,928 bytes (79.2%)
> free 28,126,456 bytes (17.5%)
> GCs 7,265 (?? ms between GCs)
> full 36 totalling 13,229 ms (0% uptime), avg 367.5 ms
> incr 7,229 totalling 6,546 ms (0% uptime), avg 0.9 ms
> tenures 3,589,063 (avg 0 GCs/tenure)
>
> (There's no uptime in the above stats because we're still
> transitioning Squeak to the 64-bit clock and there are consequently
> bugs in computing uptime).
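>
> (For anyone who wants to gather the same numbers around a specific workload,
> here is a minimal sketch. It assumes Squeak's Smalltalk vmParameterAt: accessor
> and the documented indices 8 and 10 for total full-GC and incremental-GC
> milliseconds; in Pharo the equivalent accessor lives on Smalltalk vm:)
>
>     | fullMs0 incrMs0 elapsed |
>     fullMs0 := Smalltalk vmParameterAt: 8.    "total ms spent in full GCs so far"
>     incrMs0 := Smalltalk vmParameterAt: 10.   "total ms spent in scavenges/incremental GCs so far"
>     elapsed := [ "workload under test goes here" ] timeToRun.
>     Transcript
>         cr; show: 'elapsed: ', elapsed printString;
>         cr; show: 'full GC ms: ', ((Smalltalk vmParameterAt: 8) - fullMs0) printString;
>         cr; show: 'scavenge ms: ', ((Smalltalk vmParameterAt: 10) - incrMs0) printString.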
>
> The plan is to add an incremental global GC so this work is broken up
> into much smaller pieces. I don't want to see 700ms pauses in global
> GC; one can't do game animation with that. So an incremental
> mark-sweep is needed. There are two nice papers we're considering,
> one from Lua and one for a truly concurrent collector. But time is
> pressing, so if anyone out there knows GC and is interested in helping
> this is a nicely self-contained project for which we'd love to have
> volunteers.
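>
> (Just to illustrate the shape of such a collector, here is a sketch of
> bounded-work incremental marking; markStack, isMarked, setMarked and
> referencedObjectsDo: are hypothetical names, not VMMaker code:)
>
>     markIncrementWithBudget: budget
>         "Trace at most 'budget' objects from the mark stack, then hand control
>          back to the mutator. Answer true once the whole mark phase is done."
>         | work obj |
>         work := 0.
>         [ markStack isEmpty or: [ work >= budget ] ] whileFalse: [
>             obj := markStack removeLast.
>             obj referencedObjectsDo: [ :child |
>                 child isMarked ifFalse: [
>                     child setMarked.
>                     markStack addLast: child ] ].
>             work := work + 1 ].
>         ^ markStack isEmpty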
>
> _,,,^..^,,,_ (phone)
>
>> On Jan 24, 2016, at 3:43 AM, Vincent BLONDEAU <vincent.blondeau(a)polytech-lille.net>
>> wrote:
>>
>> Hi,
>>
>> I ran the benchmarks with the files you provided. I get more or less the
>> same magnitude:
>> Version 504: 0:00:01:09.021
>> Version 1175: 0:00:02:37.507
>>
>> However, by launching it in the time profiler (MooseModel new
>> importFromMSEStream: (StandardFileStream readOnlyFileNamed:
>> 'd:/ArgoUML-0-34.mse')), it takes
>> 504: 1 min 55
>> 1175: 4 min 25
>> Well, there is a delta...
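>>
>> (For reference, a profiled run along these lines should reproduce the delta;
>> this assumes MessageTally spyOn:, which ships with both Squeak and Pharo, and
>> Pharo's TimeProfiler tool can be used in the same way:)
>>
>>     MessageTally spyOn: [
>>         MooseModel new
>>             importFromMSEStream: (StandardFileStream readOnlyFileNamed:
>>                 'd:/ArgoUML-0-34.mse') ]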
>>
>> After investigation, the standard process has almost the same duration
>> (120 secs for pre-Spur and 140 secs for Spur).
>> But, there is a large difference in GC time:
>>
>> 504: not Spur (V3)
>> **Memory**
>> old +144,822,000 bytes
>> young -8,293,660 bytes
>> used +136,528,340 bytes
>> free -104,186,788 bytes
>>
>> **GCs**
>> full 1 totalling 965ms (1.0% uptime), avg 965.0ms
>> incr 3264 totalling 42,279ms (33.0% uptime), avg 13.0ms
>> tenures 2,497 (avg 1 GCs/tenure)
>> root table 0 overflows
>>
>> 1175: Spur
>> **Memory**
>> old +0 bytes
>> young +340,048 bytes
>> used +340,048 bytes
>> free -340,048 bytes
>> **GCs**
>> full 7 totalling 145,003ms (66.0% uptime), avg 20,715.0ms
>> incr 3288 totalling 30,912ms (14.0% uptime), avg 9.0ms
>> tenures 7,146,505 (avg 0 GCs/tenure)
>> root table 0 overflows
>>
>> Total GC time
>> 504: 43 secs
>> 1175: 176 secs
>>
>> See the performance reports attached.
>>
>> I let VM people take care of the issue ;)
>>
>> Cheers,
>> Vincent
>>
>> -----Original Message-----
>> From: moose-dev-bounces(a)list.inf.unibe.ch
>> [mailto:moose-dev-bounces@list.inf.unibe.ch] On Behalf Of Tudor Girba
>> Sent: dimanche 24 janvier 2016 09:08
>> To: Moose-related development
>> Subject: [Moose-dev] Re: mse loading looks slower :(
>>
>> Hi,
>>
>> I am talking about the difference between Moose 6 images:
>> - October 7:
>> https://ci.inria.fr/moose/job/moose-6.0/504/artifact/moose-6.0.zip
>> - yesterday:
>> https://ci.inria.fr/moose/job/moose-6.0/1175/artifact/moose-6.0.zip
>>
>> Multiple things did change, but not in Moose. In the end, I would like to
>> understand where the slowness comes from. Maybe it comes from Spur itself,
>> but maybe it comes from somewhere else.
>>
>> Cheers,
>> Doru
>>
>>
>>
>>>> On Jan 24, 2016, at 1:41 AM, Mariano Martinez Peck <marianopeck(a)gmail.com>
>>>> wrote:
>>>
>>> Doru... just to be sure it is not a Pharo (image) change: when you said
>>> "before and after Spur", do you mean a Pharo 5.0 from just before Spur and
>>> a Pharo from just after it? Otherwise, the slowness may come from the
>>> difference between the two Pharos you are running.
>>>
>>> Cheers,
>>>
>>> On Sat, Jan 23, 2016 at 5:55 PM, Tudor Girba <tudor(a)tudorgirba.com> wrote:
>>> Hi,
>>>
>>> I am doing some performance testing of Moose with the Spur VM on Mac.
>>>
>>> I tried to load an MSE file with ArgoUML 0.34 and on my machine it loads
>>> twice as slowly with Spur as before:
>>> - PreSpur: 0:00:01:07.272
>>> - Spur: 0:00:02:10.508
>>>
>>> Here is the reference file:
>>> https://dl.dropboxusercontent.com/u/18323746/Tmp/ArgoUML-0-34.mse.zip
>
> And here is the script:
>
>     [ MooseModel new
>         importFromMSEStream: (StandardFileStream readOnlyFileNamed:
>             (FileSystem workingDirectory / 'src' / 'ArgoUML-0-34' /
>             'ArgoUML-0-34.mse') fullName) ] timeToRun
>
> Do you get the same?
>
> Cheers,
> Doru
>
>
> --
> www.tudorgirba.com
> www.feenk.com
>
> "Problem solving should be focused on describing the problem in a way
> that makes the solution obvious."
>
>
>
>
>
>
>
>
> --
> Mariano
> http://marianopeck.wordpress.com
--
www.tudorgirba.com
www.feenk.com
"What is more important: To be happy, or to make happy?"
_______________________________________________
Moose-dev mailing list
Moose-dev(a)list.inf.unibe.ch
https://www.list.inf.unibe.ch/listinfo/moose-dev
<report504.txt>
<report1175.txt>
--
_,,,^..^,,,_
best, Eliot
_______________________________________________
Moose-dev mailing list
Moose-dev(a)list.inf.unibe.ch
https://www.list.inf.unibe.ch/listinfo/moose-dev