Good point on both remarks. Obviously if the image size can be shrunk down
it is an advantage, and most likely it will boost performance as well, making
the Pharo experience snappier. That was my experience using Cuis (
) . About the benchmarks: maybe use
a common Pharo command as a reference point (like Transcript >> open) and
then do the benchmark relative to that, so the benchmark does not rely on
absolute values that may differ from machine to machine.
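Roughly what I have in mind, as a sketch (the reference command, the workload
and the numbers are only illustrative):

    | reference measured |
    "Time the common reference command once (Transcript open, as mentioned above)."
    reference := Time millisecondsToRun: [ Transcript open ].
    "Time the code we actually want to benchmark."
    measured := Time millisecondsToRun: [ 1000 timesRepeat: [ Dictionary new ] ].
    "Report the cost relative to the reference instead of in absolute milliseconds."
    Transcript show: 'relative cost: ', (measured / (reference max: 1)) asFloat printString; cr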
On Thu, Jul 30, 2015 at 12:28 PM Peter Uhnák <i.uhnak(a)gmail.com> wrote:
On Thu, Jul 30, 2015 at 10:51 AM, Dimitris Chloupis <kilon.alios(a)gmail.com> wrote:
Frankly I don't mind big images or big data, nor do I share the
obsession with shrinking things down to a few MBs at a time when we are talking in
TBs.
I usually do it because the time to save an image increases with image size
(so instead of being ~instant for 60 MB it takes a couple of seconds for 600 MB)...
since I have an HDD.
I was wondering whether it would be worth the effort, beyond the unit tests
that check the behaviour of the code, to also have benchmark tests that must
pass specific standards, so that a specific method must perform specific tasks
within a strict time limit. This way the CI could alert us not only to unit
tests that fail but also to benchmarks that fail, automagically.
We already sort of have that:
TestAsserter>>should: aBlock notTakeMoreThanMilliseconds: anInteger
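For example, in a TestCase subclass (a made-up test; the workload and the 300 ms
limit are arbitrary):

    MyPerformanceTest >> testSortingFinishesInTime
        "Fail the test if the block takes longer than the given wall-clock limit."
        self
            should: [ (1 to: 100000) asArray shuffled sort ]
            notTakeMoreThanMilliseconds: 300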
Although I see a problem: each machine will have different
performance. So it would need to establish a baseline and then base the
execution time on that (e.g. execution shouldn't take more than 300% of the
baseline).
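Something along these lines, maybe (just a sketch; the test method and the helper
selectors are made up, not an existing API):

    MyPerformanceTest >> testQueryNotSlowerThanThreeTimesBaseline
        | baseline measured |
        "First time a reference operation on this machine to establish the baseline."
        baseline := Time millisecondsToRun: [ self referenceOperation ].
        "Then time the code under test and compare against a relative limit (300% here)."
        measured := Time millisecondsToRun: [ self operationUnderTest ].
        self assert: measured <= (baseline * 3)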
Peter