24 Practical and joyful research with Moose
Researchers hunt for new ideas that advance the current state of the art. In a practically oriented field, these ideas need to be materialized so that they can be tested against real situations and compared effectively with related work.
Thus, tools are required to support research. In many fields, like chemistry, tools cannot be manufactured by the scientists themselves; they need to be created by a team of engineers. In software engineering, the situation is significantly different: the knowledge needed to build the software tools required for experiments is within the reach of the scientists, as they are typically software engineers themselves.
This constitutes a unique opportunity. It should follow that researchers build tools, because having them readily available creates better conditions for innovation:
- A new tool can be built on top of existing tools to embody a new idea at a lower cost, thus allowing for faster innovation cycles.
- Having related work easily accessible and applicable makes comparison with other approaches or case studies easier.
- Having robust tools allows researchers to apply them to industrial case studies, thus strengthening the link between research and practice.
Unfortunately, the practice of building tools to support research is not widespread among software engineering researchers. There are at least two reasons why this is the case:
- Researchers are typically rewarded only for scientific results. There are no direct brownie points awarded for tools. Thus, the tendency is not to spend significant effort on something that might not return the investment.
- Even if a researcher does have the will to invest in tool development, the cost of building a robust tool typically exceeds the effort they can invest alone. This is due to the high cost of testing, fixing bugs, and improving scalability and performance.
So, on the one hand, tools enable better research and researchers could, in principle, build them; on the other hand, the tools do not get built because of the high cost and the lack of direct incentives. How can we reconcile these factors? What should the process be that gives researchers the tools they require, while still keeping development costs within a manageable limit?
Over the past couple of years, I have advocated a simple idea: share the engineering work among researchers. In other words, move research into the open source space. Rather than pushing researchers to work alone on small-scale prototypes, we should strive to bring them together and build a larger platform in which these smaller tools can coexist.
Moose is a platform for software analysis research that shows that such an idea can be put into practice. The longevity of the project (it started in 1996 and is still actively used and developed) shows that the engineering effort can be sustained beyond the initial investment. In fact, the largest payoff is visible now, as a large part of the previous work can be used directly in new contexts, thus shortening the innovation cycle.
That is not to say that all the work invested in Moose is now accessible and valuable. On the contrary, many parts died, and most parts have changed significantly. What we created with Moose is a marketplace in which tools and ideas live and die, depending on what the community is willing to invest in at each moment. Following this process, two things happen:
- Interesting tools attract stakeholders, and stakeholders offer incentives to keep these tools developed and maintained.
- Reliable tools offer a strong basis for further research.
Several technical characteristics played an important part in the success of Moose.
First, Moose has a modular architecture: a small core around which a conglomerate of cooperating tools is built. Each tool provides a distinct capability and uses other tools to accomplish its tasks. This model fits the research context well, in which each researcher has their own distinct territory and collaborates loosely with peers.
Second, another successful trait is the focus on fast, even aggressive, prototyping while keeping data at the center. Smalltalk plays an important role here, as it offers a powerful scripting language. At the same time, generic engines like Glamour or Mondrian make it possible to quickly explore various facets of the data. This makes the just-give-it-a-try attitude both possible and profitable, as more ideas can be tried in a shorter time.
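To give a flavor of such scripting, the sketch below shows a Mondrian-style script that draws every class of a model as a rectangle whose width and height follow the number of attributes and methods, in the spirit of the System Complexity view. It is only an illustration: it assumes a FAMIX model has already been imported into the variable `model`, and the exact selectors may vary between Mondrian and Moose versions.

```smalltalk
"Sketch of a Mondrian script (selectors approximate; 'model' is assumed
 to hold an already imported FAMIX model)."
| view |
view := MOViewRenderer new.
view shape rectangle
	width: [ :each | each numberOfAttributes ];
	height: [ :each | each numberOfMethods ].
view nodes: model allModelClasses.
view edgesFrom: #superclass.
view treeLayout.
view open
```

A dozen such lines, typed interactively in a workspace, are often enough to decide whether a visualization idea is worth pursuing further.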
Third, Moose is not concerned with a single aspect of analysis; it takes a holistic approach that deals with multiple issues, starting from data extraction and meta-modeling and continuing all the way to data mining algorithms and even interactive tools. This approach has at least two benefits: we get more synergies and collaborations, and we get better integration, as the tools interact with each other within the same platform.
But more important than technology is the humane face of Moose. Great ideas come from a state of play in which the current state of affairs is taken apart and put together in a new way. That is why Moose is not a fixed thing, but a continuously moving target defined by what the community decides to play with. The only steady concern is to offer, at every moment, a set of well-engineered toys and the proper environment for a joyful and perhaps successful play. All the rest is left to us, the players.
If you feel like playing, come and join us. Let’s discover the next game. Together.