Data Engineer
About the Position
Quicklizard, a leader in dynamic pricing solutions, is looking for a Data Engineer to join our growing Data Science team. This role sits at the core of our AI-based pricing platform and is instrumental in shaping the data infrastructure that powers decision-making for some of the world’s top retailers.
Our platform optimizes pricing strategies across millions of SKUs, enabling clients to make real-time, data-driven decisions. Serving customers in over 40 markets across industries ranging from consumer electronics to fashion and beauty, Quicklizard offers a unique blend of startup agility and large-scale impact.
Your Role and Impact
As a Data Engineer, you'll have end-to-end ownership, from system architecture and software development to operational excellence. Specifically, you will:
● Design and implement scalable machine learning pipelines with Airflow, enabling efficient parallel execution (an illustrative sketch follows this list).
● Enhance our data infrastructure by refining database schemas, developing and improving APIs for internal systems, overseeing schema migrations, managing data lifecycles, optimizing query performance, and maintaining large-scale data pipelines.
● Implement monitoring and observability, using AWS Athena and QuickSight to track performance, model accuracy, and operational KPIs, and to drive alerting.
● Build and maintain data validation pipelines to ensure incoming data quality and proactively detect anomalies or drift.
● Collaborate closely with software architects, DevOps engineers, and product teams to deliver resilient, scalable, production-grade machine learning pipelines.
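To make the Airflow responsibilities above concrete, here is a minimal, illustrative sketch of the kind of DAG this role owns: independent per-client extract tasks fan out in parallel, then converge on a shared validation step. The DAG id, task names, and callables are hypothetical and not part of Quicklizard's actual codebase; the sketch assumes Airflow 2.4+.

```python
# Illustrative sketch only: hypothetical DAG, task names, and callables.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_prices(client_id: str) -> None:
    """Placeholder extract step for a single client (hypothetical)."""
    print(f"extracting pricing data for {client_id}")


def validate_batch() -> None:
    """Placeholder data-quality check run after all extracts finish (hypothetical)."""
    print("running validation checks")


with DAG(
    dag_id="pricing_pipeline_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    validate = PythonOperator(task_id="validate_batch", python_callable=validate_batch)

    # Independent per-client extracts carry no cross-dependencies, so the
    # scheduler can run them in parallel; all of them feed the validation task.
    for client_id in ["client_a", "client_b", "client_c"]:
        extract = PythonOperator(
            task_id=f"extract_{client_id}",
            python_callable=extract_prices,
            op_kwargs={"client_id": client_id},
        )
        extract >> validate
```

This fan-out/fan-in shape is what "efficient parallel execution" typically looks like in practice: tasks with no mutual dependencies run concurrently, and downstream steps only start once every upstream task has finished.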
Requirements
What We're Looking For
To qualify for this role, candidates should bring the following qualifications and experience:
● A Bachelor's degree or higher in Computer Science, Software Engineering, or a closely related technical field, demonstrating strong analytical and coding skills.
● 3+ years of experience as a data engineer, software engineer, or in a similar role, with a track record of using data to drive business results.
● Strong Python skills, with experience building modular, testable, and production-ready code.
● Solid understanding of SQL, including indexing best practices, and hands-on experience working with large-scale data systems (e.g., Spark, Glue, Athena).
● Practical experience with Airflow or similar orchestration frameworks, including designing, scheduling, maintaining, troubleshooting, and optimizing data workflows (DAGs).
● A solid understanding of data engineering principles: ETL/ELT design, data integrity, schema evolution, and performance optimization.
● Familiarity with AWS cloud services, including S3, Lambda, Glue, RDS, and API Gateway.
Nice-to-Haves
● Experience with MLOps practices such as CI/CD, model and data versioning, observability, and deployment.
● Familiarity with API development frameworks (e.g., FastAPI).
● Knowledge of data validation techniques and tools (e.g., Great Expectations, data drift detection); a brief validation sketch follows this list.
● Exposure to AI/ML system design, including pipelines, model evaluation metrics, and production deployment.
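As a flavor of the data validation work mentioned above, the following sketch shows simple batch checks in plain pandas: completeness, a sanity check, and a crude drift heuristic. Column names, thresholds, and the drift rule are hypothetical; in practice a tool like Great Expectations would formalize such expectations.

```python
# Illustrative sketch only: hypothetical columns, thresholds, and drift rule.
import pandas as pd


def validate_prices(batch: pd.DataFrame, reference_mean: float, tolerance: float = 0.2) -> list[str]:
    """Return a list of human-readable validation failures for a pricing batch."""
    failures = []

    # Completeness: required columns must be present and non-null.
    for column in ("sku", "price", "currency"):
        if column not in batch.columns:
            failures.append(f"missing column: {column}")
        elif batch[column].isna().any():
            failures.append(f"null values in column: {column}")

    # Sanity: prices must be strictly positive.
    if "price" in batch.columns and (batch["price"] <= 0).any():
        failures.append("non-positive prices found")

    # Crude drift check: flag batches whose mean price moves too far from a reference.
    if "price" in batch.columns and reference_mean > 0:
        drift = abs(batch["price"].mean() - reference_mean) / reference_mean
        if drift > tolerance:
            failures.append(f"mean price drifted by {drift:.0%} (threshold {tolerance:.0%})")

    return failures
```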
Join Us
We’re building the future of retail pricing. You’ll become part of a passionate, multidisciplinary team working on high-impact projects where data is central to our success. You’ll have ownership, autonomy, and significant influence on how we scale our data platform, enabling smarter pricing decisions worldwide.