Senior Data Engineer (Kafka) at Akido Labs

Posted on: 05/21/2022

Location: (REMOTE)

Contract


Tags: scala spark aws kafka python kubernetes

As a Senior Data Engineer on the Akido Applications Engineering team, you will build data infrastructure and pipelines, as well as the architecture and optimizations, that support critical business functions and help teams get access to the data they need faster. Your domain expertise and computer science instincts will help solve a range of problems facing individual modules and multi-system integrations.

What you'll do...

* Perform data engineering and migrations across various cloud and database resources
* Work closely with our product management and design teams to define feature specifications and build products for our healthcare and government customers
* Collaborate with Engineering, DevOps, and Platform teams to build front-end and mobile experiences that integrate with back-end microservices and APIs
* Architect efficient and reusable data ingestion and data management practices
* As a senior individual contributor, take a leading role in building and improving Engineering @ Akido: proactively seek out process improvements and performance optimizations, model high standards, mentor junior developers, and ensure high quality across our codebases and operational excellence

What you have...

* 8+ years of data engineering experience with Python and a JVM language (e.g., Java, Scala)
* Experience building and supporting business-critical Kafka infrastructure at scale, including standing up new Kafka clusters and guiding them through production implementation
* A focus on high-quality code, including automated testing and coding best practices
* Experience with messaging/queuing or stream-processing systems
* Experience architecting distributed systems with infrastructure automation, monitoring, and alerting

Nice to haves

* Hands-on experience working with large datasets, pipelines, and warehouses using technologies such as Snowflake and Kubernetes
* Experience with AWS, Spark, and Debezium (change data capture)

This is a remote-based contract role (Pacific time zone preferred).