Gen AI Security Researcher
ActiveFence
- R&D
- Ramat Gan, IL
- Full-time
Description
As a GenAI Security Researcher, you'll dive deep into the challenges of AI safety, conducting red-teaming operations to identify and address vulnerabilities in generative AI systems and their infrastructure, ensuring AI models are robust, secure, and future-proof.
As a Security Researcher, you will:
- Conduct sophisticated black-box red-teaming operations to uncover vulnerabilities in generative AI models and infrastructure.
- Design new techniques to bypass the latest AI security mechanisms.
- Evaluate and strengthen the security of AI systems, identifying weaknesses and collaborating to implement improvements.
- Work with cross-functional teams to automate security testing processes and establish best practices.
- Stay ahead of emerging trends in AI security, ethical hacking, and cyber threats to ensure we’re at the cutting edge.
Requirements
Must Have
- 3+ years in offensive cybersecurity, with a focus on web application and API security, OR advanced Ph.D. candidacy with a proven record of research in AI/cybersecurity
- Strong programming and scripting skills (e.g., Python, JavaScript) relevant to AI security.
- In-depth understanding of AI technologies, particularly generative models such as GPT and DALL-E.
- Solid knowledge of AI vulnerabilities and mitigation strategies.
- Excellent problem-solving, analytical, and communication skills.
Preferred Skills That Set You Apart:
- Certifications in offensive cybersecurity (e.g., OSWA, OSWE, OSCE3, SEC542, SEC522) OR a Master's degree or higher in Computer Science with a focus on Data Science or AI.
- Experience in end-to-end product development, including infrastructure and system design.
- Proficiency in cloud development.
- Familiarity with AI security frameworks, compliance standards, and ethical guidelines.
- Ability to thrive in a fast-paced, rapidly evolving environment.
About ActiveFence
ActiveFence is a trust, safety, and security company built for the AI era. We safeguard the communicative technologies people use to create, collaborate, and interact—whether with each other or with machines.
In a world where AI has fundamentally changed the nature of risk, ActiveFence provides end-to-end coverage across the entire AI lifecycle. We support frontier model labs, enterprises, and UGC platforms with a comprehensive suite of solutions: from model hardening evaluations and pre-deployment red-teaming to runtime guardrails and ongoing drift detection.