A passion for learning and growing, and enthusiasm for getting things done, are key to success on this team
2+ years of experience building complex distributed systems
Be a team player - help others, be respectful of one another, and do your part to make the workday fun and impactful
Experience with one of the following is required:
Working on a compiler
Working on a database, data warehouse, data processing engine (such as Spark), or ETL application
Working on a scalable and distributed system in public or private clouds
We mainly write code in Python, C++, and Java, but expertise in these languages is not a prerequisite
What You'll Be Doing:
Be part of the story of developing an industry-leading platform for executing AI/ML and data engineering code that works seamlessly with Snowflake's Data Cloud
Work across functions and teams - we don't only work on code that we own; we collaborate with other parts of Snowflake every day
Learn about and contribute to query engine internals, code execution environments, performance debugging, building highly scalable and maintainable systems, and much more
Data engineering through Snowpark has significant market opportunity and strong customer demand, making this an ideal area for high impact
Our team culture is a priority - transparency, knowledge sharing, fun events, and helping each other are all part of our work environment
This is also a great opportunity to work with and learn from some of the most skilled engineers in the industry, and across sites: our engineers are based in Poland, Germany, and the United States
Nice to Haves:
Knowledge of Python, C++, or Java
Hands-on experience with the Snowflake platform
Familiarity with Spark-like data engineering environments