Welcoming Llama Guard 4 on Hugging Face Hub
Meta has released Llama Guard 4, a 12B-parameter multimodal safety classifier that detects and filters unsafe content in both text and image inputs, as well as in model outputs, for production LLM deployments. The model …
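Safety classifiers in the Llama Guard family take a chat-style conversation and respond with a safety verdict. The sketch below shows how a multimodal moderation request might be structured before being rendered through a chat template; the Hub model id and the helper function are illustrative assumptions, not an official API.

```python
# Minimal sketch of a moderation request for a multimodal safety classifier
# such as Llama Guard 4. The model id below is an assumption for illustration.
MODEL_ID = "meta-llama/Llama-Guard-4-12B"  # assumed Hub id

def build_moderation_messages(user_text, image_url=None):
    """Build a chat-style message list with text and an optional image part.

    This mirrors the multi-part content format used by multimodal chat
    templates: each message carries a list of typed content blocks.
    """
    content = [{"type": "text", "text": user_text}]
    if image_url is not None:
        content.append({"type": "image", "url": image_url})
    return [{"role": "user", "content": content}]

# Example: a request covering both text and image input.
messages = build_moderation_messages(
    "Describe what is shown in this picture.",
    image_url="https://example.com/photo.png",
)
```

In a real deployment, these messages would be rendered with the model's chat template and passed through the classifier, which responds with a verdict such as "safe" or "unsafe" plus the violated category labels, in the style of earlier Llama Guard releases.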
Related MITRE ATLAS techniques:
- AML.T0054 - LLM Jailbreak
- AML.T0051 - LLM Prompt Injection
- AML.T0043 - Craft Adversarial Data