Safe AI integration
for software development
CodeGuard enables software companies in Flanders to safely, efficiently, and responsibly integrate AI code assistants into their software development processes.
What we do
Risk Awareness
Make software companies aware of the security risks associated with the use of AI code assistants based on Large Language Models in software development.
Mitigation Strategies
Develop strategies such as prompt engineering, retrieval-augmented generation (RAG), and guardrails via static and dynamic analysis.
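As an illustration of the first of these strategies, a minimal prompt-engineering sketch might wrap a developer's request in explicit security constraints before it reaches the model. The `build_secure_prompt` helper and the rule text below are illustrative assumptions, not a CodeGuard deliverable:

```python
# Illustrative sketch of security-focused prompt engineering.
# SECURITY_RULES and build_secure_prompt are hypothetical names,
# not part of any specific CodeGuard tool.

SECURITY_RULES = (
    "Follow these rules when generating code:\n"
    "- Never build SQL queries by string concatenation; use parameterized queries.\n"
    "- Validate and sanitize all external input.\n"
    "- Do not hard-code secrets, tokens, or passwords.\n"
)

def build_secure_prompt(task: str) -> str:
    """Wrap a developer's request with explicit security constraints."""
    return f"{SECURITY_RULES}\nTask: {task}\nReturn only the code."

prompt = build_secure_prompt("Write a function that looks up a user by email.")
```

The resulting prompt can then be sent to any code-generating LLM; the point is simply that the security constraints travel with every request rather than being left to the model's defaults.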
Knowledge Transfer
Demonstrate strategies through workshops, demonstrators, and hands-on training to reach companies with practical, actionable guidance.
Knowledge Dissemination
Provide broad digital knowledge dissemination via mailing lists, blog articles, and social media.
Our focus
LLMs and Knowledge Transfer
Model Knowledge
Focus on knowledge transfer regarding the use of popular LLMs such as Codex, Gemini, Claude, Llama, and DeepSeek.
Practical Strategies
Develop and demonstrate strategies such as prompt engineering, retrieval-augmented generation (RAG), and the use of guardrails via static and dynamic analysis to improve the safety and quality of generated code.
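To make the guardrail idea concrete, one simple static-analysis check could parse LLM-generated Python and flag calls to dangerous built-ins before the code is accepted. The denylist and the `find_dangerous_calls` helper below are illustrative assumptions, sketched with Python's standard `ast` module:

```python
import ast

# Hypothetical guardrail: scan generated Python for denylisted calls.
# DENYLIST and find_dangerous_calls are illustrative, not a real CodeGuard API.
DENYLIST = {"eval", "exec", "compile", "__import__"}

def find_dangerous_calls(source: str) -> list[str]:
    """Return the names of denylisted functions called in the given code."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        # Only direct calls by name are checked in this minimal sketch.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DENYLIST:
                hits.append(node.func.id)
    return hits

generated = "result = eval(user_input)"
violations = find_dangerous_calls(generated)
```

A real guardrail would combine such checks with established scanners and dynamic analysis, but even this small gate shows how generated code can be vetted automatically before it enters a codebase.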
Industry Reach
Reach software companies through workshops, demonstrators, and hands-on training, with concrete outcomes including recommendations, exploratory studies, R&D trajectories, and the integration of safe LLM practices into existing development processes.
About CodeGuard
CodeGuard is a project that aims to enable software companies in Flanders to safely, efficiently, and responsibly integrate AI code assistants into their software development processes. This should lead to increased productivity, improved code quality, and reduced security risks when using AI in software development.
Goals
- Make software companies aware of the security risks associated with the use of AI code assistants based on Large Language Models in software development.
- Provide concrete knowledge on how to identify and mitigate these risks.
- Develop and demonstrate strategies for safe integration of LLMs, including prompt engineering, retrieval-augmented generation (RAG), and guardrails via static and dynamic analysis.
- Transfer scientific knowledge to software companies and contribute to the continuous improvement of secure code generation with AI.
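The retrieval-augmented generation strategy mentioned in the goals can be sketched minimally: relevant secure-coding guidelines are retrieved for a given task and prepended to the prompt, so the model generates code with the right context at hand. The tiny in-memory knowledge base and keyword-overlap retrieval below are illustrative assumptions; a production setup would use embedding-based search over a curated corpus:

```python
# Minimal RAG sketch (illustrative assumptions only): a small in-memory
# "knowledge base" of secure-coding guidelines, searched by keyword match.
GUIDELINES = {
    "sql": "Use parameterized queries; never concatenate user input into SQL.",
    "password": "Hash passwords with a slow, salted algorithm such as bcrypt.",
    "file": "Validate file paths to prevent directory traversal.",
}

def retrieve(task: str) -> list[str]:
    """Return guidelines whose topic keyword appears in the task description."""
    task_lower = task.lower()
    return [text for topic, text in GUIDELINES.items() if topic in task_lower]

def build_rag_prompt(task: str) -> str:
    """Prepend retrieved guidelines to the task before sending it to an LLM."""
    context = "\n".join(retrieve(task))
    return f"Relevant guidelines:\n{context}\n\nTask: {task}"

prompt = build_rag_prompt("Write SQL to fetch a user by email.")
```

Because the guidelines are injected per request, updating the knowledge base immediately changes the guidance every generated snippet receives, without retraining or fine-tuning the model.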