Work Packages
Research to implementation
CodeGuard is structured around five interconnected work packages that span research, practical implementation, knowledge transfer, and broad dissemination.
Five pillars of the project
The CodeGuard project is organized into five work packages, each addressing a critical aspect of safe AI code assistant integration.
Knowledge Building on LLM Issues and Model Comparison
This work package builds in-depth knowledge of the risks and limitations of LLM-based code generation. The researchers analyze typical vulnerabilities such as hardcoded secrets, hallucinated packages, and insecure authentication, and compare LLMs such as Codex, GPT-4.5, GPT-5, Claude 3.5, Gemini 2.5, Code LLaMA, and DeepSeek R1 in terms of security, reliability, and usability. The findings are compiled into an overview report with recommendations for secure LLM integration.
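Two of the vulnerability classes named above, hardcoded secrets and hallucinated packages, can be caught with simple pattern checks on generated code. The sketch below is purely illustrative: the regex, the secret keywords, and the tiny package allow-list are assumptions for demonstration, not the project's actual tooling.

```python
import re

# Toy detector for two vulnerability classes in LLM-generated code:
# hardcoded credentials and imports of unknown (possibly hallucinated) packages.
SECRET_PATTERN = re.compile(
    r"""(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*["'][^"']{8,}["']"""
)
KNOWN_PACKAGES = {"requests", "numpy", "flask"}  # stand-in for a real package index

def find_hardcoded_secrets(code: str) -> list[str]:
    """Return lines that appear to assign a literal credential."""
    return [line for line in code.splitlines() if SECRET_PATTERN.search(line)]

def find_unknown_imports(code: str) -> list[str]:
    """Return imported top-level names absent from the known package index."""
    names = re.findall(r"^\s*import\s+(\w+)", code, flags=re.MULTILINE)
    return [n for n in names if n not in KNOWN_PACKAGES]

sample = 'import reqeusts\napi_key = "sk-0123456789abcdef"\n'
print(find_hardcoded_secrets(sample))  # flags the api_key line
print(find_unknown_imports(sample))    # flags the typo-squatted "reqeusts"
```

In practice such checks are one layer among many; the comparison report weighs how often each model produces code that trips them.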
Translation Research into Prompt Engineering and RAG
This work package investigates how secure, context-aware prompts can be designed to improve the quality of LLM output. Guidelines and templates for secure prompt engineering are developed, and various prompt strategies are tested experimentally. In addition, the researchers examine how Retrieval-Augmented Generation (RAG) can be integrated via tools such as LlamaIndex, so that LLMs gain access to current, domain-specific information during code generation.
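The core RAG pattern is: retrieve the most relevant domain documents, then prepend them to the prompt before the model generates code. The sketch below shows that pattern with a deliberately naive word-overlap retriever; a real pipeline would use embedding-based retrieval via a library such as LlamaIndex, and the documents and prompt wording here are invented examples.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (toy retriever;
    a real system would use embedding similarity instead)."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prepend retrieved, domain-specific context to the user's task."""
    ctx = "\n".join(f"- {c}" for c in context)
    return ("Use only the context below; do not invent APIs.\n"
            f"Context:\n{ctx}\n\nTask: {query}")

docs = [
    "internal auth library: verify the token with auth.verify_token before any DB access",
    "deployment guide: containers run as a non-root user",
    "style guide: use snake_case for function names",
]
prompt = build_prompt("generate a login handler with token verification",
                      retrieve("login token verification", docs))
print(prompt)
```

Because the retrieved context names the in-house `auth.verify_token` helper, the model is steered toward the organization's own authentication flow rather than a hallucinated one.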
Translation Research into Guardrails, Static Analysis, and Tracking
WP3 focuses on validating and securing LLM output through analysis tools and guardrails. Static analysis tools such as CodeQL and SAST/SCA scanners are evaluated, and integration templates are developed for CI/CD pipelines. The work package also investigates how interactions between developers and LLMs can be logged and analyzed, with the aim of making generated code traceable throughout the development process. Ultimately, dashboards will provide insight into the security, efficiency, and impact of LLM usage.
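Making generated code traceable starts with recording each developer-LLM exchange and tagging the output so it can be found again later. The minimal sketch below is an assumption about what such a logger could look like (the field names and the fingerprint-as-comment convention are invented for illustration), not the project's actual tracking design.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_interaction(log: list, prompt: str, generated_code: str, model: str) -> str:
    """Record one developer-LLM exchange; return a short fingerprint that can
    be embedded as a comment in the generated code for later tracing."""
    fingerprint = hashlib.sha256(generated_code.encode()).hexdigest()[:12]
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        "code_sha256_prefix": fingerprint,
    })
    return fingerprint

audit_log: list[dict] = []
fp = log_interaction(audit_log, "write a password check",
                     "def check(pw): ...", "example-model")
print(f"# llm-generated: {fp}")        # tag to attach to the committed code
print(json.dumps(audit_log[0], indent=2))
```

Aggregating such records over time is what would feed the dashboards: which model produced which code, how often scanners flagged it, and how much of it survived review.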
Knowledge Transfer via Workshops
This work package organizes interactive, hands-on workshops for companies. Using practical examples and demonstrators, participants learn how to deploy LLMs safely in their development processes. The workshops cover topics such as safe prompt formulation, the use of analysis tools, and the interpretation of LLM output, and are supported by practice material and feedback rounds to maximize impact.
Public Dissemination via Conferences, Blogs, and White Papers
Finally, WP5 ensures broad dissemination of the project results. Blog articles and white papers are published, and presentations are given at relevant conferences and events. In addition, a community of practice is established in which companies, researchers, and policymakers exchange experiences with safe LLM integration. This community supports the long-term consolidation, dissemination, and valorization of the project results.