Hacking AI and Retraining LLMs

The API Hour - A podcast by Christine Bevilacqua

Artificial Intelligence is transforming every industry, but with that transformation comes new security risks. In this episode of The API Hour, host Dan Barahona interviews Robert Herbig, Senior Engineer at SEP and instructor of the APIsec University course, Building Security into AI, to explore the emerging world of AI attacks, data poisoning, and model tampering.

From poisoned stop sign datasets to prompt injections that trick LLMs into revealing dangerous information, this episode is packed with eye-opening examples of how AI can be manipulated, and what builders and security teams can do to defend against it.

What You’ll Learn

- Data poisoning in action: how mislabeled stop signs and manipulated datasets can cause catastrophic AI failures (see the sketch after the timestamps below)
- Watering hole attacks & typosquatting: why malicious datasets and libraries pose a hidden risk
- Prompt injection & jailbreaking: real-world cases where LLMs were manipulated into revealing restricted information
- Black box vs. white box attacks: what attackers can infer just by observing model confidence scores
- Retraining & RAG: how AI models ingest new information and why continuous updates create new vulnerabilities
- The API connection: why exposing models via APIs ties AI security directly to API security best practices

Episode Timestamps

00:45 – Stop signs, stripes, and poisoned training data
07:00 – Data poisoning in Gmail spam detection
17:00 – SEO hacks and AI summaries: a new frontier for attackers
22:00 – Typosquatting and malicious packages
25:00 – Pliny the Liberator and “memetic viruses” in training data
33:00 – Black box vs. white box attacks on computer vision models
43:00 – Prompt injection and roleplay exploits
52:00 – APIs and AI security: two sides of the same coin
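
To make the data-poisoning idea concrete, here is a minimal, hypothetical Python sketch of a label-flipping attack on a toy traffic-sign dataset. Everything in it (the STOP and SPEED_LIMIT class ids, the poison_fraction parameter, the stand-in feature vectors) is an illustrative assumption, not a detail taken from the episode.

```python
# Minimal, hypothetical sketch of label-flipping data poisoning on a toy
# traffic-sign dataset. Names (STOP, SPEED_LIMIT, poison_fraction) are
# illustrative assumptions, not details from the episode.
import random

STOP, SPEED_LIMIT = 0, 1  # toy class ids

def poison_labels(dataset, poison_fraction=0.05, seed=42):
    """Return a copy of the dataset with a small fraction of STOP
    labels flipped to SPEED_LIMIT.

    A classifier trained on the poisoned copy can learn to misread
    some stop signs, the failure mode discussed in the episode.
    """
    rng = random.Random(seed)
    poisoned = []
    for image, label in dataset:
        if label == STOP and rng.random() < poison_fraction:
            label = SPEED_LIMIT  # attacker-controlled mislabel
        poisoned.append((image, label))
    return poisoned

# Toy usage: 1,000 synthetic "images" (stand-in feature vectors), all stop signs.
clean = [([0.0] * 8, STOP) for _ in range(1000)]
dirty = poison_labels(clean)
flipped = sum(1 for _, label in dirty if label == SPEED_LIMIT)
print(f"{flipped} of {len(dirty)} labels were flipped")  # roughly 5% with defaults
```

The point of the sketch is how little an attacker has to touch: flipping a few percent of labels leaves the dataset looking normal at a glance while steering the trained model toward a targeted failure.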