Minimum qualifications
Bachelor’s degree in Computer Science, a similar technical field of study, or equivalent practical experience.
5 years of experience working with Large Language Models (LLMs) from various providers (e.g., OpenAI, Google, Hugging Face) and fine-tuning techniques.
Experience in developing and deploying Artificial Intelligence (AI) agents, including agent architectures, planning, and multi-agent systems.
Experience with frameworks and libraries for LLM orchestration and agent development (e.g., LangChain, LlamaIndex, Haystack).
Nice to haves
Master’s degree or PhD in Engineering, Computer Science, or another related technical field.
Experience with one or more general-purpose programming languages, including but not limited to C/C++, Go, Python, or JavaScript.
Ability to communicate fluently in English to collaborate with your team.
What you'll be doing
Design, develop, test, deploy, maintain, and improve software.
Manage individual project priorities, deadlines, and deliverables.
Research, design, and implement Artificial Intelligence (AI)-driven solutions leveraging Large Language Models (LLMs) for various security-related use cases (e.g., natural language understanding of threat intelligence, automated report generation, intelligent query processing).
Develop and deploy autonomous and semi-autonomous AI agents capable of performing complex tasks, such as automated malware analysis, threat hunting, or incident response support.
Evaluate and integrate LLMs and agentic frameworks, considering performance, cost, and security implications.
Perks and benefits
A dynamic work environment at Google Cloud with opportunities for growth and a variety of projects.
Contribute to making the internet a safer place at Google Threat Intelligence (GTI).
Be part of a team developing cutting-edge technologies for threat detection and analysis.