You should run those benchmarks longer, perhaps 600 times instead of
300, to get a more stable result. I loaded your most recent package
into a clean image and got similar results to yours, with the
current non-converting version being slightly faster. However, in my
development image (with all of the changes I have made since my last
release), the converting version is slightly faster, and both
versions are overall faster. I haven't been able to work much on the
parsers and tokenizer yet, but it appears they are still largely
string-based, so I am not sure that making changes like this is a good
idea at this point.
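
For reference, a minimal Pharo-style sketch of what raising the
iteration count from 300 to 600 might look like. The parser class
names and the sample input below are purely hypothetical stand-ins;
the actual benchmark harness is not shown in this thread.

    | iterations input nonConvertingTime convertingTime |
    iterations := 600.    "raised from 300 for a more stable measurement"
    input := 'some representative source to parse'.    "hypothetical sample input"

    "Time each variant over the full iteration count; timeToRun answers a Duration."
    nonConvertingTime := [ iterations timesRepeat: [ NonConvertingParser parse: input ] ] timeToRun.
    convertingTime := [ iterations timesRepeat: [ ConvertingParser parse: input ] ] timeToRun.

    Transcript
        show: 'non-converting: ', nonConvertingTime asString; cr;
        show: 'converting: ', convertingTime asString; cr.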
Ok, I will increase it to 600, and leave the conversion question for
later.
Alexandre
--
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel
http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.