As a Staff Software Engineer, you own new initiatives, designing and building world-class platforms to measure and optimize ad performance. You ensure industry-leading scalability and reliability of mission-critical systems that process billions of real-time transactions a day. You apply state-of-the-art technologies, frameworks, and strategies to address complex challenges in big-data processing and analytics.
What will you do?
- Architect, design, and build application pipelines that handle tens of TB per day, serve thousands of clients, and support advanced analytics workloads
- Mentor a great team of motivated engineers by reviewing, understanding, and providing actionable feedback on code, architecture designs, and infrastructure deployments created and maintained by other engineers
- Explore the technological landscape for new ways of producing, processing, and analyzing data in order to gain insights into both our users and our product features
- Design, develop, and test data-driven products, features, and APIs that scale
- Continuously improve the quality of deliverables and SDLC processes
- Operate production environments, investigate issues, assess their impact, and come up with feasible solutions
- Understand business needs and work with product owners to establish priorities
- Bridge the gap between Business / Product requirements and technical details
- Work in multi-functional agile teams with end-to-end responsibility for product development and delivery
Who you are:
- Lead by example: design, develop, and deliver quality solutions
- Love what you do, are passionate about crafting clean code, and have a solid foundation with 8+ years of programming experience in object-oriented design and/or functional programming
- Have a deep understanding of distributed-system technologies, standards, and protocols, plus 3+ years of experience working with distributed systems such as Hadoop, BigQuery, Spark, and the Kafka ecosystem (Kafka Connect, Kafka Streams), and building data pipelines at scale
- Have hands-on experience building low-latency, high-throughput APIs, and are comfortable consuming external platform APIs
- Excellent SQL query-writing skills and a strong understanding of data
- Care about agile software processes, data-driven development, reliability, and responsible experimentation
- Genuine desire to automate decision making, processes, and workflows
- Experience working with workflow orchestration tools such as Luigi or Airflow
- Experience in the DevOps domain: working with build servers, Docker, and container clusters (Kubernetes)
- Experience mentoring and growing a diverse team of talented data engineers
- B.S./M.S. in Computer Science or a related field
- Excellent communication skills and a team-player mindset
Extra credit for experience with:
- Columnar data stores
- Google BigQuery
- Spark Streaming or other live stream-processing technologies
- Cloud environments, particularly Google Cloud Platform
- Container technologies - Docker / Kubernetes
- Ad serving technologies and standards