Hello.
I am currently looking at how to render the GLMDashboard in Glamour-Seaside. I
have already tried several things, but my main problem is this:
In the #renderOn: method of my SGLDashboardPresenter, if I write:
html div
	with: [ self render: (self firstColumnPanesFrom: self browser) last on: html ]
Hi list,
Loading Glamour on Pharo 1.2.2-12353 gives me the following warning:
This package depends on the following classes:
SubscriptionRegistry
You must resolve these dependencies before you will be able to load these definitions:
SubscriptionRegistry>>glmSubscriptions
I did:
Gofer new
	squeaksource: 'Glamour';
	package: 'ConfigurationOfGlamour';
	load.
(Smalltalk at: #ConfigurationOfGlamour) perform: #loadDefault
Proceeding gives more warnings:
This package depends on the following classes:
SubscriptionRegistry
You must resolve these dependencies before you will be able to load these definitions:
SubscriptionRegistry>>hasHandlerFor:
SubscriptionRegistry>>lookFor:
SubscriptionRegistry>>lookFor:ifNone:
SubscriptionRegistry>>unsubscribeForEvent:
Proceeding loads the package, but then executing 'GLMBasicExamples open' fails.
Thanks!
Status: New
Owner: ----
Labels: Type-Defect Priority-Medium
New issue 682 by step...(a)stack.nl: The basic examples don't show the usage
of format
http://code.google.com/p/moose-technology/issues/detail?id=682
The Glamour examples should be complete. There is currently no example
showing the usage of format:
Add a method to GLMBasicExamples:
formatAsWords
	"self new formatAsWords openOn: (1 to: 100)"
	<glmBrowser: 'Format' input: '(1 to: 100)'>
	| browser |
	browser := GLMTabulator new.
	browser row: #list.
	browser showOn: #list; using: [
		browser tree
			format: [ :x | x asWords ];
			display: [ :x | x ] ].
	^ browser
Glamour-Examples-StephanEggermont.187
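For completeness, a minimal list-based variant of the same #format: usage (not part of the issue; it assumes the GLMTabulator API used above and that #format: is also available on list presentations):
| browser |
browser := GLMTabulator new.
browser row: #list.
browser showOn: #list; using: [
	browser list
		format: [ :each | each asWords ] ].
browser openOn: (1 to: 100)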
Status: New
Owner: ----
Labels: Type-Defect Priority-Medium
New issue 683 by vonbecm...(a)gmail.com: MessageNotUnderstood: receiver
of "sender" is nil
http://code.google.com/p/moose-technology/issues/detail?id=683
Moose: 4.5
Pharo image: Pharo1.3a#13258
Virtual machine used: Croquet Closure Cog VM [CoInterpreter
VMMaker-oscog.51]
Platform Name: unix
Class browser used (if applicable): OBSystemBrowserAdaptor
Steps to reproduce:
1. Select a model.
2. Press the left button and select Visualize >> Overview pyramid.
Actual Result:
MessageNotUnderstood: receiver of "sender" is nil
Expected Result:
to visualize "Overview Pyramid"
Attachments:
PharoScreenshot.1.png 87.7 KB
PharoDebug.log 34.8 KB
Hi Damien,
I am replying to the Moose list because this might be interesting to other people too.
> I have some question/remarks about PetitParser:
>
> - it is not clear what PPParser>match* are used for. After reading the
> source code of the implementors, it looks like you want to know if two
> parsers are equal. Why would you do that? Why are the methods called
> match* and not equal*?
I guess you are referring to the extension methods in the package
'PetitAnalyzer'? The methods #matches:, #matchesIn:,
#matchingRangesIn:, ... are part of the core package 'PetitParser' and
are well commented (I think).
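For instance (return values as I remember them; the method comments in the core package are authoritative):
#digit asParser plus matches: '123'.
" --> true, the input can be parsed "
#digit asParser plus matchesIn: 'a12b3'.
" --> a collection of all matches found inside the input "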
The methods in the package 'PetitAnalyzer' are called match*, because
this is not an equality operation. They do not only support the
comparison of two parsers, but can also compare patterns with parser
instances (essentially this is a little Prolog engine, very similar to
the refactoring engine). The matching and rewriting of parsers is
explained in my PhD (http://scg.unibe.ch/archive/phd/renggli-phd.pdf)
in Section "6.2.5 Declarative Grammar Rewriting".
For example:
" matches a sequence of any two parsers that are the same "
any := PPPattern any.
pattern := any , any.
pattern asParser match: $a asParser , $b asParser inContext: Dictionary new.
" --> false, because $a and $b are different "
pattern asParser match: $a asParser , $a asParser inContext: Dictionary new.
" --> true, because $a and $a are the same "
If the match is successful, the patterns are bound to the matching
parsers. In the example above the dictionary would contain an entry:
any -> $a asParser
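Put together as a workspace snippet (same API as above), you can then ask the context for the binding:
| any pattern context |
any := PPPattern any.
pattern := any , any.
context := Dictionary new.
pattern asParser match: $a asParser , $a asParser inContext: context.
" --> true "
context at: any.
" --> the parser the pattern was bound to, here a parser for $a "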
There are many tests in PPSearcherTest. You can also see fancy
patterns in PPRewriterTest and PPOptimizer.
> - PPParser>>matchList* do never refer to self (but to call themselves
> recursively).
This looks correct to me: the recursion is there to walk the graph of parsers.
> - Why is #def: not defined in PPUnresolvedParser? You implemented it
> in PPParser. It might be useful for other parsers, but do you have an
> example?
PPParser is a superclass of PPUnresolvedParser, therefore you can send
#def: to any instance of PPUnresolvedParser.
PPUnresolvedParser and #def: are nice for quickly hacking something
ugly together; better not to (over)use them ...
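For example, a typical quick hack with a forward reference looks like this (an untested sketch; the grammar is made up for illustration):
| expr |
expr := PPUnresolvedParser new.
expr def: ($( asParser , expr , $) asParser) / #digit asParser plus.
expr parse: '((42))'
" --> the nested parse result, or a failure if the input does not match "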
Lukas
--
Lukas Renggli
www.lukas-renggli.ch
Hi all,
We are working on developing an Architecture Description Language (ADL) in
Moose. The objective is to define a language that allows one to specify the
different components of an architecture so that these entities can be
manipulated directly (analysis, visualization, etc.). The architecture
definition will be used, for example, to check rule conformance. However, we
do not want to restrict ourselves to any particular usage of the ADL.
Today, we have implemented a preliminary version of ADLFamix by implementing
modules that actually contain MooseGroups. Based on these modules, we can
now write Arki queries for rule checking, for example. Now we are
contemplating the next step, because different people implement different
things in an ADL. Some describe rules that specify the connectors that can
exist between modules. However, this approach ties connectors to rules, and
we cannot define a connector without any associated rules. Connectors can be
defined separately, or they can be inferred from the Famix associations of
the Famix entities contained in modules. Also, rules can be built into
modules so that each one has its own repository (however, it is not always
possible to associate a rule with any particular module).
The purpose of this mail is to get feedback from the people in the group
about an ADL in Moose and its features. We are thinking in terms of, but not
limited to:
1) fundamental features (modules, connectors, rules, ??)
2) objectives: rule checking, architecture inference (e.g. reconstructing
plug-ins from Java models in Moose), ??
3) ??
thanx
Moosecians @ RMod
Hi,
I like this good energy around getting the metrics properly integrated into Moose. Currently, there are two overlapping mechanisms:
1. the Fame properties denoted by <MSEProperty: ...>. These are used both for import/export and for the UI
2. the Moose-specific properties denoted by <property: ...>. These are used only in the UI
Of these, we should eliminate the second one by transforming all <property:> annotations into <MSEProperty:> ones.
To get the incentives aligned, I now changed the MooseFinder to only work with the Fame properties. So, basically, at this point we have no reason to keep the old <property:> annotations.
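To make the target concrete, here is roughly the shape a metric should have once it is declared only with the Fame pragmas (the selector and the computation are illustrative, not copied from the actual code):
numberOfMethods
	<MSEProperty: #numberOfMethods type: #Number>
	<derived>
	<MSEComment: 'The number of methods of the entity'>
	^ self lookUpPropertyNamed: #numberOfMethods
		computedAs: [ self methods size ]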
Next are the <navigation:> properties. I suspect we want to declare them as <MSEProperty:> as well, and mark them as <derived>. But this is a second step.
Cheers,
Doru
--
www.tudorgirba.com
"Every now and then stop and ask yourself if the war you're fighting is the right one."
All tests are green again.
Cheers,
Doru
Begin forwarded message:
> From: admin(a)moosetechnology.org
> Date: 16 July 2011 10:10:18 CEST
> To: tudor(a)tudorgirba.com, simon.denier(a)gmail.com, cy.delaunay(a)gmail.com, alexandre(a)bergel.eu, stephane.ducasse(a)inria.fr, jannik.laval(a)inria.fr
> Subject: Jenkins build is back to normal : moose-latest-dev #496
>
> See <http://hudson.moosetechnology.org/job/moose-latest-dev/496/>
>
>
--
www.tudorgirba.com
"Value is always contextual."
Hi Alexandre,
Is the defaultMinimal configuration up-to-date?
I tried it, and it loads a version of ConfigurationOfHealthReportProducer that does not exist... So I am wondering whether I can rely on this minimal configuration in Mondrian.
Regards,
Veronica
Hi guys
In the saved MSE file, I could not find metrics like LOC that cannot be computed by Moose itself.
Are metrics saved? I remember writing tests to make sure that this was correct.
In addition, it would be good to be able to select which metrics get exported to other formats.
Stef