SPGAI-2026: Secure and Private System Design for Generative & Agentic AI
Long Beach Convention Center, Long Beach, CA, United States, July 26, 2026
| Conference web page | https://www.qlou.org/SPGAI/ |
| Submission link | https://easychair.org/conferences/?conf=spgai2026 |
| Abstract registration deadline | May 25, 2026 |
| Submission deadline | May 25, 2026 |
The SPGAI 2026 Workshop, co-hosted with DAC, confronts a dual frontier: hardening AI agents and the artifacts they generate against new and largely unmapped attack surfaces, while harnessing those same agents as a force multiplier for vulnerability discovery, secure-by-construction generation, automated compliance, and trustworthy verification. The program is organized around three intertwined themes and welcomes contributions across EDA, chip and SoC security, quantum and neuromorphic computing, electro-optic co-design, and embedded and cyber-physical systems.
Submission Guidelines
The workshop invites regular papers of 2 to 6 pages in IEEE Conference format (IEEE templates); the page limit excludes references. Submissions are not anonymized: author names, affiliations, and acknowledgments should be included in the paper. We encourage submissions committed to open and reproducible research, including datasets, models, and methods. Papers with open-source implementations will be highlighted at the workshop. Submissions will be handled via EasyChair; the submission link will be posted on the workshop website.
List of Topics
- Secure, Private, and Trustworthy AI and Agents
- Privacy-preserving training and fine-tuning of foundation models for design (federated, differentially private, and confidential computing approaches)
- Defenses against prompt injection, jailbreaks, and tool misuse in agentic design workflows
- Secure multi-agent orchestration, sandboxing, and least-privilege tool use for design automation
- Trust, alignment, and reliability of LLMs and agents in safety- and security-critical design loops
- Confidentiality of proprietary IP, netlists, and source code during LLM-aided generation
- Inference-time defenses, guardrails, and verifiable reasoning for design agents
- Robustness against data poisoning, model extraction, and supply-chain attacks on generative design pipelines
- Securing the Artifacts of AI and Agents
- Detection, attribution, and watermarking of LLM- and agent-generated RTL, HLS, layouts, and software
- Hardware Trojans, backdoors, and malicious patterns introduced (intentionally or unintentionally) by generative AI
- Verification, formal methods, and testing tailored to AI-generated designs
- Provenance, traceability, and reproducibility of AI-generated design artifacts
- Copyright, licensing, and IP-leakage analysis of generated code and designs
- Secure release and open-sourcing practices for AI-generated artifacts and benchmarks
- Privacy leakage and memorization analysis in LLM-generated designs
- Generative and Agentic AI for Security, Privacy, and Trustworthiness
- LLM- and agent-driven vulnerability discovery, fuzzing, and exploit generation for hardware and software
- Generative AI for secure-by-construction RTL, HLS, EDA scripting, and physical design
- Agentic workflows for security verification, side-channel analysis, and threat modeling
- LLMs for cryptographic protocol design, post-quantum migration, and confidential computing
- AI-aided regulatory compliance, audit, and policy enforcement for design and IT automation
- Generative and agentic AI for privacy-enhancing technologies (PETs)
- LLM-aided security of emerging technologies: quantum, neuromorphic, and electro-optic systems
- Datasets, benchmarks, and competitions for evaluating secure and private generative/agentic design
Committees
Organizing Committee
- Qian Lou
- Mengxin Zheng
- Hongyi " Michael" Wu
Publication
Authors of accepted papers will have the opportunity to give an oral presentation or present a poster at the workshop. Selected authors may also be invited to co-author a Systematization of Knowledge (SoK) paper on the topic together with the workshop organizers.
Venue
The workshop will be held on Sunday, July 26, 2026, co-located with DAC 2026, in Long Beach, CA.
Contact
All questions about submissions should be emailed to Qian Lou (qian.lou@ucf.edu).
