Prompt injection, tool misuse, and uncontrolled tool interactions create attack vectors in agent-based systems and LLM-powered applications, with consequences ranging from data exfiltration to unauthorized system access. These are real risks that software architects and developers should understand and address.
In our 2-day training course “Agentic Software Security,” we’ll teach you the fundamentals you need to know.
The training will enable you to understand attack vectors such as direct and indirect prompt injection, insecure tool interactions, supply chain risks (e.g., tool poisoning or rug pulls), and cross-context effects; to methodically assess these risks during the development and integration of agentic applications; and to mitigate them with targeted security measures.
This training is designed for software architects and developers who want to plan, develop, and deploy AI-powered applications with a clear focus on security throughout their entire lifecycle.
Understand attack vectors related to GenAI and how to assess them using threat modeling.
Implement targeted protective measures such as guardrails, sandboxing, and proper authentication and authorization.
Develop a strategy for the secure operation of MCP-based agent systems within the organization, from onboarding through usage to offboarding.
Embed security practices into architecture, development, and operations, regardless of frameworks or programming languages.
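As a small taste of the guardrails topic above: a minimal sketch of an input guardrail that screens user text for common injection phrasings before it reaches an agent. The pattern list and function name are illustrative only; production guardrails layer classifiers, policy engines, and output filtering on top of simple checks like this.

```python
import re

# Illustrative patterns that often appear in direct prompt-injection attempts.
# A real guardrail would not rely on a static regex list alone.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your )?(system prompt|hidden instructions)",
    r"disregard (the )?(above|earlier) (rules|instructions)",
]

def guardrail_check(user_input: str) -> bool:
    """Return True if the input looks safe, False if it matches a known injection pattern."""
    lowered = user_input.lower()
    return not any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

In the training, such checks are treated as one layer of a defense-in-depth design, alongside sandboxing of tool execution and strict authentication and authorization.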