FYI, please consider this workshop, co-located with EDCC (European Dependable Computing Conference) next year.
ACQUIRE 2026
The 1st International Workshop on AI Code QUality, Integrity & REliability
Tue 7 April 2026, Canterbury, UK
https://acquire-workshop.github.io/2026/index.html
Large Language Models are already writing, reviewing, and repairing code, yet the community lacks a rigorous, shared basis for judging whether these AI-produced artifacts are dependable throughout real software lifecycles. Today’s emphasis on benchmark accuracy obscures risks that matter in practice: silent hallucinations, insecure toolchains and prompts, brittle behavior under distribution shift, opaque provenance, and evidence that cannot be audited or reproduced. ACQUIRE’26 responds to this gap by convening AI and Software Engineering researchers and practitioners to refocus the conversation from raw performance to verifiable quality, grounded in auditable taxonomies and metrics, assurance cases, and transparent, reproducible evidence. The workshop’s aim is to make AI-for-code not just powerful but trustworthy and dependable, encompassing all aspects of software quality, including security, maintainability, correctness, and performance.
We welcome contributions that address the following areas:
- Quality & assurance: taxonomies, auditable metrics, conformance profiles, and safety/assurance cases for LLMs and agents on code tasks
- Security of models & supply chain: threat models spanning models, data, prompts, and toolchains, with provenance, SBOM/AI-BOM, signing, and attestation
- Robustness in practice: hallucination detection/mitigation, shift- and fault-tolerance, vulnerability detection and patch quality, and CI/CD gating with runtime guards
- Evidence & reproducibility: open benchmarks, standardized reporting for datasets, metrics, prompts, agents, and protocols, and certification-oriented evaluation

Cross-cutting themes include human-AI collaboration (uncertainty display, attribution) and the impact of AI on maintainability and technical debt.

We welcome empirical studies, methods, tools, and experience reports, especially those that deliver auditable evidence, align on taxonomies and reporting schemas, advance provenance and attestation practices, and demonstrate robust, reproducible evaluation under real-world and adversarial conditions.
Submissions reporting negative results or unexpected findings are also welcome, as they offer valuable insights.