Hi all,
I am trying to implement a small "island parser" in PetitParser, but I have a problem.
The following is the most self-contained example I can think of;
it extracts the word 'island' from any surrounding context.
I have the class IslandSyntax, subclass of PPCompositeParser, with the following methods:
===
IslandSyntax>>start
^world end
IslandSyntax>>world
world := PPUnresolvedParser new.
world def: (island , world) / island / (water , world) / water.
^ world
IslandSyntax>>island
^'island' asParser
IslandSyntax>>water
^((island not), #any asParser) ==> #second
===
Then, I have IslandParser a subclass of IslandSyntax:
===
IslandParser>>island
^super island ==> [:result | result inspect]
===
If I open a workspace and do:
IslandParser new parse: 'blablablaislandblablabla'.
The inspect on island is called only once,
while if I do:
IslandParser new parse: 'island'.
the inspector is called twice, probably because in the "world" production
I first put (island , world) and then (island).
Is there a way to avoid this double calling?
Am I doing anything wrong here?
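One reformulation I tried as a quick sketch (using the standard #plus combinator) seems to avoid the re-parse, but I am not sure it is the intended solution:
===
IslandSyntax>>world
	^ (island / water) plus
===
With this, world is no longer recursive, so the PPUnresolvedParser is not needed and start can stay ^ world end.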
Thank you!
Alberto
I am redirecting this to the Moose list; please ask there or on the
Pharo list in the future.
> One thing I've noticed is the error messages (PPFailure). I like that
> it tells you what is wrong and where. What I don't like is how it
> decides to tell you that.
This can be customized.
> For instance, take your PetitSQL package. If you do:
>   PPSqlGrammer new parse: 'select * form table'
> it will tell you that 'UPDATE' is expected at 0. I'd much rather it
> determine what the best match was and tell you it failed there. If
> you change the #command method in that class to read:
>   command
>       ^ createCommand / deleteCommand / insertCommand / updateCommand / selectCommand
> and then run the above, it will instead tell you that 'FROM' expected
> at 9, which is what I would really like it to do.
The choice always reports the last error. Earlier versions of
PetitParser used to report the error that consumed the most input, but
I changed it because that was less predictable and less efficient than
the current implementation. You can create your own choice parser and
return the deepest failure if you prefer the old behavior.
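A rough sketch of such a choice could look like this (untested; it assumes that failing alternatives leave the stream position untouched, as with the stock choice, and that PPFailure answers #position):

PPChoiceParser subclass: #PPDeepestFailureChoiceParser
	instanceVariableNames: ''
	classVariableNames: ''
	poolDictionaries: ''
	category: 'PetitParser-Extensions'

PPDeepestFailureChoiceParser>>parseOn: aStream
	| deepest result |
	deepest := nil.
	parsers do: [ :each |
		result := each parseOn: aStream.
		result isPetitFailure
			ifFalse: [ ^ result ].
		"remember the failure that got furthest into the input"
		(deepest isNil or: [ result position > deepest position ])
			ifTrue: [ deepest := result ] ].
	^ deepest

You would then build the critical choices with this class instead of the default one (I think PPListParser class provides #withAll: for passing in the alternatives).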
> Is this possible out of the box? If not, can you give me some
> guidance on how I could make it work this way?
What I typically do is to insert failures at particular choices in the
grammar, for example:
PPSqlGrammer>>command
^ createCommand / deleteCommand / insertCommand / selectCommand /
updateCommand / (PPFailingParser message: 'Command expected')
Lukas
--
Lukas Renggli
www.lukas-renggli.ch
Hi Sebastian,
I am not aware of anyone who has actually done that. However, there are two possibilities if you want to use Moose for your VASmalltalk code:
1. Migrate the Smalltalk importer, FAMIX, and Fame (from Pharo) or Meta (from VW) to VASmalltalk.
This will enable you to export an MSE file and load it on the other side.
2. Get your code loaded (it does not have to work, it just has to load) into Pharo or VW.
This will enable you to use the importer from those platforms.
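For the MSE route, the loading part on the Pharo side is only a couple of lines. Just as a sketch (the exact selectors might differ a bit between Moose versions, and 'mySystem.mse' is of course a placeholder name):

| model |
model := MooseModel new.
model importFromMSEStream: (FileStream readOnlyFileNamed: 'mySystem.mse').
model install.	"register the model so it shows up in the Moose Panel"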
Regarding CodeCity, you should know that it works on the VW version only (which is Moose 3.2). The current Moose version is 4.2 and is available only in Pharo.
Cheers,
Tudor
On 8 Dec 2010, at 20:02, Sebastian Heidbrink wrote:
> Hello Tudor,
>
> there's one little question I have. I wasn't able to find instructions or hints on this on the web.
>
> Do you know a solution, or somebody who might know, for handling VASmalltalk code with Moose and CodeCity?
> At ESUG in Amsterdam somebody mentioned that there is a way to import VASmalltalk, but that some "adapters" are needed, or need some tweaking?
> I'm not really sure anymore.
>
> I would be very grateful if you could provide me with some hints.
>
> Cheers!
> Sebastian
--
www.tudorgirba.com
"Being happy is a matter of choice."
FYI
Cheers,
Doru
Begin forwarded message:
>
> LDTA 2011 Call for Papers and Tool Challenge Submissions
>
> 11th International Workshop on
> Language Descriptions, Tools, and Applications
>
> www.ldta.info
>
> Saarbrücken, Germany
> March 26 & 27, 2011
> an ETAPS workshop
>
> LDTA is an application and tool-oriented workshop focused on
> grammarware - software based on grammars in some form. Grammarware
> applications are typically language processing applications and
> traditional examples include parsers, program analyzers, optimizers
> and translators. A primary focus of LDTA is grammarware that is
> generated from high-level grammar-centric specifications and thus
> submissions on parser generation, attribute grammar systems,
> term/graph rewriting systems, and other grammar-related
> meta-programming tools, techniques, and formalisms are encouraged.
>
> LDTA is also a forum in which theory is put to the test, in many cases
> on real-world software engineering challenges. Thus, LDTA also
> solicits papers on the application of grammarware to areas including,
> but not limited to, the following:
> - program analysis, transformation, generation, and verification,
> - implementation of Domain-Specific Languages,
> - reverse engineering and re-engineering,
> - refactoring and other source-to-source transformations,
> - language definition and language prototyping, and
> - debugging, profiling, IDE support, and testing.
>
> This year LDTA will also be putting theory, as well as techniques and
> tools, to the test in a new way - in the LDTA Tool Challenge. Tool
> developers are invited to participate in the Challenge by developing
> solutions to a range of language processing tasks over a simple but
> evolving set of imperative programming languages. Tool challenge
> participants will present highlights of their solution during a
> special session of the workshop and contribute to a joint paper on the
> Tool Challenge and proposed solutions to be co-authored by all
> participants after the workshop.
>
> Note that LDTA is a well-established workshop similar to other
> conferences on (programming) language engineering topics such as SLE
> and GPCE, but is solely focused on grammarware.
>
> Paper Submission
> ----------------
> LDTA solicits papers in the following categories.
>
> - research papers: original research results within the scope of LDTA
> with a clear motivation, description, analysis, and evaluation.
>
> - short research papers: new innovative ideas that have not been
> completely fleshed out. As a workshop, LDTA strongly encourages
> these types of submissions.
>
> - experience report papers: description of the use of a grammarware
> tool or technique to solve a non-trivial applied problem with an
> emphasis on the advantages and disadvantages of the chosen approach
> to the problem.
>
> - tool demo papers: discussion of a tool or technique that explains
> the contributions of the tool and what specifically will be
> demonstrated. These papers should describe tools and applications
> that do not fit neatly into the specific problems in the Tool
> Challenge.
>
> Each submission must clearly state in which of these categories it
> falls and not be published or submitted elsewhere. Papers are to use
> the standard LaTeX article style and the authblk style for
> affiliations; a sample is provided at www.ldta.info.
> Research and experience papers are limited to 15 pages, tool
> demonstration papers are limited to 10 pages, and short papers are
> limited to 6 pages. The final version of the accepted papers will,
> pending approval, be published in the ACM Digital Library and will
> also be made available during the workshop.
>
> Please submit your abstract and paper using EasyChair at
> http://www.easychair.org/conferences/?conf=ldta2011.
>
> The authors of each submission are required to give a presentation at
> LDTA 2011 and tool demonstration paper presentations are intended to
> include a significant live, interactive demonstration.
>
> The authors of the best papers will be invited to write a journal
> version of their paper which will be separately reviewed and, assuming
> acceptance, be published in journal form. As in past years this will
> be done in a special issue of the journal Science of Computer
> Programming (Elsevier Science).
>
> Invited Speaker
> ---------------
> Rinus Plasmeijer, Radboud University Nijmegen, The Netherlands
>
> Important Dates
> ---------------
> Abstract submission: Dec. 15, 2010
> Full paper submission: Dec. 22, 2010
> Author notification: Feb. 01, 2011
> Camera-ready papers due: TBD
> LDTA Workshop: March 26-27, 2011
>
> LDTA Tool Challenge
> -------------------
>
> The aim of the LDTA Tool Challenge is to foster a better
> understanding, among tool developers and tool users, of relative
> strengths and weaknesses of different language processing tool
> techniques as well as different implementations and realizations of
> those techniques. Tool developers are invited to participate in the
> Tool Challenge and demonstrate their solution to the problems during a
> special session of LDTA 2011.
>
> The problems in the LDTA Tool Challenge Problem Set can be viewed as
> points in a two dimensional space: one dimension specifying language
> processing tasks and the second dimension specifying the set of
> languages to which these tasks are to be applied. Along the task
> dimension are several traditional language processing tasks such as
> parsing, pretty printing, semantic analysis, optimization, and code
> generation. The language dimension is comprised of a simple, but
> evolving, suite of imperative programming languages. These two
> dimensions form a problem space in which various techniques and
> implementations will find problems in which they excel and others in
> which they find some challenges; no single technique or tool is
> expected to be optimal for all problems. Thus, this is a challenge
> and not a competition; no winner is declared. The full description of
> the problem set can be found in the LDTA Tool Challenge Problem Set
> document on the LDTA web page at ( http://www.ldta.info ).
>
> The Tool Challenge is open to developers of all kinds of grammarware
> tools and techniques. To participate, tool developers must submit the
> following by March 5, 2011: the names of the participants, the name
> of their tool or technique, and a presentation title and abstract.
> The abstract should specify on what aspects of the problem set the tool
> was applied, where it excelled and where no solution was offered
> and/or the solution was considered less than optimal. We expect these
> to be only a few paragraphs in length.
>
> This information is used for scheduling purposes only and is not used
> for evaluation, as all tool developers interested in participating are
> welcome and will be given an opportunity to present their solution at
> the workshop. Submission of this information indicates a commitment
> to attend LDTA and to participate in the workshop. This information
> will be listed in the program.
>
> Authors of submissions that appear to be outside of the scope of LDTA
> will be contacted to discuss the relevance of their work to the
> workshop. Of course, tool developers who question whether their work
> falls within the scope of LDTA are encouraged to contact the PC chairs
> early on for clarification.
>
> After the workshop a joint paper will be written by participants and
> submitted to a journal, most likely Science of Computer Programming.
> It is separate from the proceedings of the workshop and any special
> journal issue for the workshop.
>
> Program Committee
> -----------------
> Emilie Balland, INRIA, France
> Anya Helene Bagge, University of Bergen, Norway
> Paulo Borba, Federal University of Pernambuco, Brazil
> John Boyland, University of Wisconsin, USA
> Claus Brabrand, IT University of Copenhagen, Denmark (co-chair), brabrand(a)itu.dk
> Jim Cordy, Queen's University, Canada
> Kyung-Goo Doh, Hanyang University, Ansan, South Korea
> Giorgios Robert Economopoulos, University of Southampton, UK
> Laurie Hendren, McGill University, Canada
> Nigel Horspool, University of Victoria, Canada
> Roberto Ierusalimschy, Pontifícia Universidade Católica do Rio de Janeiro, Brazil
> Johan Jeuring, Utrecht University, The Netherlands
> Shane Markstrum, Bucknell University, USA
> Sukyoung Ryu, Korea Advanced Institute of Science and Technology, Korea
> Joao Saraiva, Universidade do Minho, Portugal
> Sylvain Schmitz, École Normale Supérieure de Cachan, France
> Sibylle Schupp, Hamburg University of Technology, Germany
> Eli Tilevich, Virginia Tech, USA
> Eric Van Wyk, University of Minnesota, USA (co-chair), evw(a)cs.umn.edu
> Eelco Visser, Delft University of Technology, The Netherlands
>
>
> Organizing Committee
> --------------------
> Emilie Balland, INRIA, France
> Giorgios Robert Economopoulos, University of Southampton, UK
--
www.tudorgirba.com
"Not knowing how to do something is not an argument for how it cannot be done."
Hi!
Between 2 and 4 times faster than on a non-jitted VM. No big surprise.
Report produced on 2010-12-07T12:10:45-03:00
System version Pharo-1.1.1-- of 12 September 2010 update 11414
Benchmark ManyNode (simple rendering of nodes) :
100 nodes => 2 ms
200 nodes => 4 ms
300 nodes => 4 ms
400 nodes => 8 ms
500 nodes => 10 ms
600 nodes => 11 ms
700 nodes => 12 ms
800 nodes => 12 ms
900 nodes => 17 ms
1000 nodes => 16 ms
1600 nodes => 29 ms
3200 nodes => 53 ms
6400 nodes => 105 ms
Benchmark ManyEdges (simple rendering of edges) :
10 edges => 0 ms
20 edges => 2 ms
30 edges => 6 ms
40 edges => 10 ms
50 edges => 15 ms
60 edges => 16 ms
70 edges => 211 ms
80 edges => 35 ms
90 edges => 39 ms
100 edges => 51 ms
200 edges => 217 ms
300 edges => 976 ms
Benchmark ManyInnerNodes :
5 nodes => 45 ms
10 nodes => 614 ms
15 nodes => 2948 ms
Benchmark Displaying ManyInnerNodes :
5 nodes => 78 ms
10 nodes => 623 ms
15 nodes => 6569 ms
Benchmark Displaying ManyInnerNodesAndEdges :
1 nodes => 4 ms
2 nodes => 124 ms
3 nodes => 2520 ms
4 nodes => 29783 ms
Benchmark Displaying elementAt :
100 nodes => 2 ms
500 nodes => 4 ms
1000 nodes => 2 ms
1500 nodes => 4 ms
2000 nodes => 4 ms
2500 nodes => 6 ms
Benchmark many small nodes :
2000 nodes => 1346 ms
Benchmark edges bounds :
500 nodes => 71 ms
Benchmark subnodes lookup :
20000 nodes => 2376 ms
--
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.
Hi!
Doru fixed the mess with the configuration. I do not know what went wrong with my attempts. The configuration must become simpler to manage in the future. There is a lot of redundancy. For example:
- ConfigurationOfMoose loads Shout, Mondrian, and DSM
- Mondrian loads Shout
- DSM loads Moose
This all should get simpler.
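One direction would be to let ConfigurationOfMoose reference only the sub-projects' own configurations and let those pull in their dependencies, so that Shout is declared in exactly one place. Just a sketch in Metacello style (the repository URLs, version name, and package names are placeholders):

ConfigurationOfMoose>>baseline42: spec
	<version: '4.2-baseline'>
	spec for: #common do: [
		spec blessing: #baseline.
		spec repository: 'http://www.squeaksource.com/Moose'.	"placeholder"
		spec
			project: 'Mondrian' with: [
				spec
					className: 'ConfigurationOfMondrian';
					repository: 'http://www.squeaksource.com/Mondrian' ];	"placeholder"
			project: 'DSM' with: [
				spec
					className: 'ConfigurationOfDSM';
					repository: 'http://www.squeaksource.com/DSM' ].	"placeholder"
		"Shout is not listed here: it should come in only transitively, through Mondrian's own configuration."
		spec package: 'Moose-Core' with: [ spec requires: #('Mondrian' 'DSM') ] ]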
Doru, what is the list of actions we need to take? How can we be sure that the problem we had does not appear again?
Alexandre
--
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.
It is best to send PetitParser questions to the Moose or Pharo list.
To make your parser fail, you have to ensure that it consumes the
input until the end of the stream, using #end:
identifier := (#letter asParser , #letter asParser star) flatten end.
identifier parse: 'ffff:gggg'
Also note that you can replace
p , p star
with
p plus
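Putting both together, something like:

identifier := #letter asParser plus flatten end.
identifier parse: 'ffff'.          "answers 'ffff'"
identifier parse: 'ffff:gggg'.     "answers a PPFailure because of the $:"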
Lukas
On 6 December 2010 10:08, Alain Plantec <alain.plantec(a)univ-brest.fr> wrote:
> Hi Lukas,
>
> identifier := (#letter asParser , #letter asParser star) flatten.
> identifier parse: 'ffff:gggg'
>
> returns 'ffff'.
> shouldn't it raise an error because of the $: ?
>
> Maybe you would prefer me to ask this kind of question somewhere else;
> just let me know.
>
> Cheers
> Alain
>
>
--
Lukas Renggli
www.lukas-renggli.ch