CVE-2026-42208 exploited within 36 hours of disclosure, exposing LiteLLM credentials, risking cloud account compromise.
Flaws in OpenEMR's platform — used by more than 100,000 healthcare providers — enabled database compromise, remote code ...
New research exposes how prompt injection in AI agent frameworks can lead to remote code execution. Learn how these ...
Read more about Agentic AI red teaming could become essential for securing future AI systems: Here's why on Devdiscourse ...
TL;DR AI risk doesn’t live in the model. It lives in the APIs behind it. Every AI interaction triggers a chain of API calls across your environment. Many of those APIs aren’t documented or tracked.
Making headlines everywhere is the CopyFail Linux kernel vulnerability, which allows local privilege escalation (LPE) from any user to root privileges on most kernels and distributions. Local ...
How indirect prompt injection attacks on AI work - and 6 ways to shut them down ...
Security leaders must adapt large language model controls such as input validation, output filtering and least-privilege access for artificial intelligence systems to prevent prompt injection attacks.
A North Korean APT has crafted malicious software packages to appeal to AI coding agents, while ‘slopsquatting’ shows the ...
Security researchers have discovered 10 new indirect prompt injection (IPI) payloads targeting AI agents with malicious instructions designed to achieve financial fraud, data destruction, API key ...