Call for Papers
CSET 2023 is sponsored by the USC Information Sciences Institute. Due to continued COVID-19-related travel uncertainty, the workshop will be held in hybrid format on Monday, August 7, preceding the USENIX Security Symposium 2023. The workshop proceedings will be published through the ACM Digital Library.
- Paper submissions due: May 22, 2023 (extended)
- Notification to authors: June 23, 2023
- Final paper files due: July 10, 2023
- Workshop: Monday, August 7, 2023 - hybrid participation
What is CSET all about? For 15 years, the Workshop on Cyber Security Experimentation and Test (CSET) has been an important and lively space for discussing cybersecurity topics related to reliability, validity, reproducibility, transferability, ethics, and scalability in practice, in research, and in education. We particularly encourage submissions that employ a scientific approach to cybersecurity and demonstrably grow community resources.
For CSET '23, we solicit exciting work across a broad range of areas relevant to security and privacy. A list of topics of interest (broadly interpreted):
- Cybersecurity research methods: designing and conducting large and complex system evaluations in the context of cybersecurity, e.g., experiences and discussions of qualitative and quantitative methods, and the models and tools that facilitate research.
- Measurement and metrics: measurement required to understand the performance and optimization of cybersecurity systems, and defining appropriate metrics, e.g., valid metrics, test cases, and benchmarks; the effectiveness of metrics in evaluation; and the impact of measurement on evaluation.
- Data sets: collection, analysis, and interpretation of data for cybersecurity, e.g., what makes a good data set and how data sets compare; methods to collect, generate, or derive new ones; and the historical impact of a data set.
- Experimental infrastructure: testbeds, simulations, emulations, and virtualizations, e.g., designing and deploying testbeds, tools for improving testbed operation and maintenance, and usage and impact studies of experimentation infrastructure.
- Education: surveys, tools, and techniques for cybersecurity education and training, e.g., evaluating and presenting educational approaches to cybersecurity, including approaches that leverage data sets, utilize testbeds, or promote awareness of research methods and sound measurement approaches.
- High-consequence cyber systems: rigorous experimental methodologies, e.g., appropriate design of experiments, parameter sampling strategies, quantifying uncertainty with confidence intervals, ensuring reproducibility, and validating models.
- Ethics in cybersecurity: the tussle between security and privacy, e.g., experiences balancing stakeholder considerations, protecting participant privacy, frameworks for evaluating the ethics of cybersecurity experiments, and illustrative examples of ethical experimental conduct that upholds the code outlined at http://cpsr.org/issues/ethics/cei/
- Evaluating real-world security controls: measurements of deployed security frameworks, e.g., evaluation methodologies for real-world security performance, modeling of user-related characteristics (behavior, demographics), and cybersecurity and social media.
Sharing Research Artifacts
CSET is focused on advancing the state of the art and the state of the practice in cybersecurity in practice, research, and education. Authors are strongly encouraged to share artifacts whenever possible in order to enable transparency (e.g., analysis or validation) and to facilitate the accumulation of resources for the cybersecurity community. Sharing will be taken into account by reviewers; however, it is not a requirement for acceptance. If the research presented in a paper produced research artifacts (e.g., code, data), authors should include in the paper an artifact-sharing statement describing whether some or all of the artifacts will be made available to the community, and if so, how they will be shared (e.g., via a website). This statement should be present both at submission and in the final version of the paper.
Submission Length Options
Page length limits vary by the type of submission:
- Short Paper: Submissions must be no longer than four pages. Short papers should provide enough context and background for the reader to understand the contribution. We envision that short papers will be preliminary work or extended work papers, but this is not a hard requirement.
- Long Paper: Submissions must be no longer than eight pages. We envision that long papers will be the more traditional type of CSET research paper, but this is not a hard requirement.
Submissions should use the (2-column) ACM proceedings template. The page length limits include the space allowed for all tables and figures. References and appendices are excluded from the page limits; however, reviewers are not required to view the appendices when evaluating submissions.
During the submission process, authors will be asked to categorize their submission as either a research paper, position paper, experience paper, preliminary work paper, or extended work paper. While reviewers can see this categorization, the review process for all submission types will be identical. For all submissions, the program committee will give greater weight to papers that lend themselves to interactive discussion among workshop attendees.
- Research Paper: make a novel contribution in line with one of the topics of interest.
- Position Paper: discuss new or provocative ideas of interest to the CSET community.
- Experience Paper: describe activities and recount lessons learned (e.g., from experiments or deployments) that might help researchers in the future.
- Preliminary Work Paper: describe early results from interesting and new ideas. We anticipate that such works-in-progress papers may eventually be extended as full papers for publication at a conference.
- Extended Work Paper: expand upon unpublished aspects of a previous work (published in any venue). We welcome papers that provide a compelling addition to a previously developed approach, method, tool, measurement, benchmark, data set, simulation/emulation, evaluation results, etc. Note that submissions in this category can extend the work of others (e.g., using a previously published tool in a new way). Submissions in this category should clearly explain which sections are novel compared to prior work.
Anonymization and the Review Process
The review process will be double-blind; all submissions should be anonymized so as not to reveal the authors’ names or affiliations during the review process.
Ethics & Human Subjects
Submissions that have potential ethical implications must include a section on ethics. Any submission that presents research or results related to human subjects (including their data) must include a statement describing review by the appropriate institutional review board(s).
Respectful and Inclusive Terminology
Please refer to the ACM’s list of charged terminology and use alternatives in your submissions.
All anonymized papers must be submitted in PDF format via the submission system https://cset23.hotcrp.com/. Please do not email submissions.
At least one author from every accepted paper must register for the workshop and present the paper.
Fraud and dishonesty are prohibited, including: simultaneous submission of the same work to multiple venues, submission of previously published work, and plagiarism.
Papers accompanied by nondisclosure agreement forms will not be considered.
Please see the Committees page.