Best,
Timo
-------- Forwarded Message --------
Dear Timo Kehrer,
Thanks again for serving on the PC of ICSME 2025.
This is a rather long email regarding the paper assignments,
review instructions and review criteria, but please read it in
full as it will make your job as a reviewer easier.
We have received 188 submissions. 21 papers were withdrawn by the
full paper deadline, and we removed another 18 submissions for
which the authors did not submit a full paper. Out of the
remaining 149 papers, we desk-rejected 3 papers for violating the
submission guidelines. This leaves 146 papers for review. Given
the PC size, we have assigned 4-5 papers per PC member. You will
find your paper assignment at the end of this email.
1) FORMAT AND DOUBLE-BLIND REQUIREMENTS
Papers must strictly adhere to the two-column IEEE conference
proceedings format. Also, papers must be in PDF, anonymized, and
must not exceed 10 pages (including figures and appendices) plus
up to 2 pages that contain ONLY references.
We have already screened papers regarding these requirements. If
you identify any missed cases, please report them to us as soon as
possible.
For minor format violations (e.g., slightly different font or
template), we have decided to handle these during the camera-ready
stage. Please note such font/template violations in your reviews,
but do not reject a paper because of them.
Some papers are shorter than 10 pages. While these are shorter
than the typical full paper length, our CfP does not have a
minimum page length. Therefore, please judge these papers as full
papers (i.e., whether the provided content contains all the
necessary information for a full paper) rather than forming your
opinion solely based on their length.
Also, please report any missed cases of Conflicts of Interest to
us immediately. We ask you NOT to actively try to determine the
authors of the papers.
2) REVIEW FORM
Following the tradition of ICSME and other major conferences, our
goal is not to reject papers, but to accept them. Therefore,
please be *positive* and provide *constructive* reviews. To
support you in preparing quality reviews and to provide
transparent reviews to authors, the review form on EasyChair
includes the following fields (for your convenience we also
include the descriptions of fields in the form on EasyChair):
* Overall recommendation: Reject, Weak reject, Weak accept,
Accept, Accept & nominate for distinguished paper (only use
‘Accept & nominate for distinguished paper’ if you believe a
paper is outstanding and should be considered for a Distinguished
Paper award)
* Reviewer’s confidence: Expert, Knowledgeable, Some familiarity,
No familiarity
* Paper summary: Brief summary of the paper (as you understand it)
* Strengths (bullet list): Summarize the key strengths of the
paper, in light of the review criteria.
* Weaknesses (bullet list): Summarize the key weaknesses of the
paper, in light of the review criteria.
* Artifacts: Artifact provided and in line with what is declared
in the paper; Artifact provided, but not in line with what is
declared in the paper; Artifact not provided, but authors provide
a convincing argument as to why it cannot be provided; Artifact
not provided, without explanation (or with a non-convincing
explanation)
* Summary comments for authors: Please provide a summary of your
assessment to justify your overall recommendation. You may
elaborate on the strengths and weaknesses and refer to the
evaluation criteria (see below). Please do not mention your
overall recommendation in the comments.
* Questions for the authors: Please fill in this section ONLY if
answers to these questions will affect your score/decision for
this paper. Otherwise, clearly indicate "No Questions."
* Confidential remarks for the program committee
* Review criteria: see below
3) REVIEW CRITERIA
For ICSME 2025 we use structured reviews, i.e., there will be a
field for each review criterion on the review form. For your
convenience, we also explain each criterion in the review form
itself.
3.1) Originality and novelty
The extent to which the contribution is sufficiently original and
is clearly explained with respect to the state of the art. Note
that originality is not about providing surprising or unexpected
results or the complexity of a proposed solution, but about how
the work advances the body of knowledge. If a paper lacks
important references, we ask reviewers to provide suggestions, but
to avoid self-citations. When a reviewer’s own work is extremely
relevant, they should always contact the PC co-chairs and suggest
other related work as potential alternatives. Replications that
bring new knowledge are welcome.
Do *not* reject papers just because:
– the work is preliminary (especially if an idea is very novel)
– the work is incremental (we always build on top of previous
research)
– it reports negative results
– the idea is simple
– you think some other technique that doesn’t exist yet would be
better
3.2) Importance of contribution (significance)
The extent to which the paper’s contributions can impact the field
of software maintenance and evolution, and, if needed, under which
assumptions.
Do *not* reject papers just because:
– it does not include empirical evidence of impact (potential
impact is more important)
– you don’t like the topic or don’t find it interesting (would
someone in the community?)
– the impact is not immediate for the software industry; the
immediate impact may be in the software engineering research
community (e.g., a methodological contribution)
3.3) Soundness (proper use of research methods)
The extent to which the paper’s contributions and/or innovations
address its research questions and are supported by rigorous
application of appropriate research methods. Papers should employ
rigor in their research methodology (including choosing
appropriate methods and procedures). Soundness is relative to
claimed contributions (e.g., if a paper finds a correlation, and
that is a notable discovery, do not critique it for not also
demonstrating causality).
Do *not* reject papers just because:
– the methods are not the ones you would have selected (are they
appropriate?)
– the results may not generalize (papers should clearly explain
assumptions and scope of contribution)
Furthermore:
– Balance the novelty with the extent of the evaluation (very
novel papers may be more preliminary).
– Avoid applying criteria for quantitative methods to qualitative
methods or industrial studies (e.g., critiquing a case study for a
“small N”).
– Avoid critiquing the lack of a statistically significant
difference for case study research; such a lack can be a
discovery, too (but must be supported by enough statistical
power).
– Avoid asking for the paper to do more than it claims if the
demonstrated claims are sufficiently publishable (e.g., “I would
publish this if it had also demonstrated knowledge transfer”).
– Avoid relying on inexpert, anecdotal judgements (e.g., “I don’t
know much about this but I played with it once and it didn’t
work”).
– Do take into account the effort it took to run the study; this
contributes to the value of results.
3.4) Evaluation (where applicable)
The paper’s claimed contributions are supported by empirical
evidence. Papers claiming improvements (e.g., over state of the
art or baseline techniques) should also design proper empirical
evaluations to confirm that such improvements are achieved. Judge
papers based on the match between claims and evaluation.
Do *not* reject papers just because:
– further experiments would be possible, if the included ones are
sufficient to support the claims
– some hypothetical baseline was not considered, if such a
baseline is not available and is not easily re-implementable
– alternative design choices or implementations could provide even
larger improvements
– there is no empirical evaluation in a paper claiming a purely
theoretical contribution (e.g., the authors demonstrate
theoretically an improved computational complexity)
3.5) Quality of presentation
The quality of writing, including clear descriptions, adequate use
of the English language, absence of major ambiguity, clearly
readable figures and tables, and adherence to the formatting
instructions provided. Papers should be clear and concise,
comprehensible to diverse audiences, contain sufficient information
to understand how an innovation works and to understand how data
was obtained, analyzed, and interpreted.
Do *not* reject papers just because:
– it has easily fixable spelling and grammar issues.
– it does not use all of the page count
– it does not follow a particular paper structure or order of
sections
Avoid asking for more detail unless you are certain there is
space; if there is not enough space, provide concrete suggestions
for what to cut.
3.6) Comparison to related work
The extent to which the submission adequately reviews the prior
literature.
Do *not* reject papers just because:
– minor prior related work is missing (unless the missed work
would have altered the study)
– non-peer-reviewed work (including arXiv preprints, theses, blog
posts, or tech reports) or work accepted after the submission
deadline is not cited (you can point these out to the authors if
relevant, but they alone are not grounds to reject)
3.7) Replicability (if relevant)
The extent to which the paper provides sufficient detail on
methods and experiments, and shares information and artifacts that
are practical and reasonable to share, to support replication and
reproducibility.
4) AUTHOR QUESTIONS AND EARLY ACCEPTS/REJECTS
When filling in the review form, please use the “Questions for the
Authors” field wisely and ask questions *only* if you think that
the answers will affect your scores and decision about the paper.
Otherwise, clearly indicate "No Questions."
Also, author responses will only be required for papers where
answers to reviewers’ questions can affect the outcomes. For
papers whose decision is already clear, we will send out direct
accept/reject notifications at the beginning of the author
response period.
5) REVIEW QUALITY
ICSME aims for an inclusive and transparent review process. We
encourage reviewers to be open, positive and professional:
* Review authorship: PC members were invited because of their
expertise. Therefore, we expect PC members to author their
reviews, asking for sub-reviewers only for additional feedback.
This means that reviewers may solicit help from others. However,
reviewers should rewrite the review in their own words, and adjust
the scores accordingly. The opinions should be represented as the
PC member’s opinions, not those of a sub-reviewer.
* Be clear about what is missing: Even if, in the view of a
reviewer, a paper does not meet the standards required for
acceptance, we encourage reviewers to highlight what, in their
opinion, would be necessary to make it acceptable for ICSME (while
acknowledging that ICSME submissions are subject to the
limitations of conference papers regarding lengths, etc.).
* Ethical issues: PC members should inform PC co-chairs if they
detect any evidence related to plagiarism, concurrent submission,
etc.
* Update reviews: Reviews can be updated at any time; we
encourage reviewers to follow the reviews submitted for the papers
assigned to them and to make adjustments even before the official
discussion period.
In summary, quality reviews are:
* Constructive, explicitly identifying the merits of the work, as
well as feasible ways of addressing any of its weaknesses.
* Insightful, going deeply into the topic and the research
methods.
* Organized, helping the authors clearly understand the reviewer’s
opinions of strengths and weaknesses of the work.
* Impartial, demonstrating a commitment to the reviewing criteria
of the conference, and not personal interests, speculation, or
bias.
To protect authors' rights and research confidentiality, ICSME
2025 does not currently allow the use of Generative AI or
AI-assisted technologies such as ChatGPT or similar services for
peer review.
Reviewers, as well as authors and organizers, are expected to
uphold the IEEE Code of Conduct.
6) LEAD REVIEWERS / DISCUSSION LEADS
You will be designated as the lead reviewer for at most 2 of the
papers you review. We ask lead reviewers to do the following:
* Check reviews for any issues (e.g., quality, length) and work
with reviewers to improve if needed; if necessary, bring issues to
our attention.
* Kickstart and moderate discussion among the reviewers.
* Build consensus where possible.
* Make a recommendation on the paper.
* Write a meta-review to summarize the discussion among reviewers
and explain the rationale behind their final recommendation (do
not disclose reviewer identities in the meta-review).
7) REVIEWING TIMELINE
* Assignment of papers: March 17, 2025
* First half of reviews due: April 2, 2025
* Second half of reviews due: April 22, 2025
* First review discussion period: April 23, 2025 - April 29, 2025
* Early decisions due: April 30, 2025
* Early decisions notification: May 7, 2025
* Author response period: May 8, 2025 - May 15, 2025
* Final discussion period: May 16, 2025 - May 27, 2025
Thank you again for helping us make ICSME 2025 a success. We
appreciate that reviewing is hard work. If you have any questions,
please do not hesitate to contact us.
Matthias Galster and Dan Hao
ICSME 2025 Research Track Program Co-Chairs
==============
Paper Assignment
==============
(Should the assignment in this email not match the paper
assignment in EasyChair, please reach out and report the
mismatch.)
---------------------------------------------
(79) ADPP: Automated Data-centric Program Partitioning
(81) Towards Better Understanding of Code Changes: An Algorithm to
Remove Unintended Moves in GumTree
(117) Understanding Commercial Low-code Application Bugs
(122) Configurable Ensembles for Software Similarity: Challenging
the Notion of Universal Metrics
(146) Predicting Clone-proneness: An Exploratory Study with Deep
Learning Models