Senior DataOps Engineer - Columbus

Headquarters: Barcelona / Madrid
URL: http://dlocal.com

Why should you join dLocal?

dLocal enables the biggest companies in the world to collect payments in 40 emerging-market countries. Global brands rely on us to increase conversion rates and simplify payment expansion. As both a payments processor and a merchant of record, we enable our merchants to make inroads into the world’s fastest-growing markets. By joining us, you will be part of an amazing global team working in a flexible, remote-first culture with travel, health, and learning benefits. You will collaborate with 1,000+ teammates from over 30 nationalities and build an international career that impacts millions of people’s daily lives.

What’s the opportunity?

As a Senior DataOps Engineer, you will shape the foundation of our data platform. You will design and evolve scalable infrastructure on Kubernetes, operate Databricks as our primary data platform, enable data governance and reliability at scale, and ensure our data assets are clean, observable, and accessible.

What will I be doing?

  • Architect and evolve scalable infrastructure to ingest, process, and serve large volumes of data efficiently, using Kubernetes and Databricks as core building blocks.
  • Design, build, and maintain Kubernetes-based infrastructure, owning deployment, scaling, and reliability of data workloads.
  • Operate Databricks as our primary data platform, managing workspaces, cluster configuration, job orchestration, and ecosystem integration.
  • Improve existing frameworks and pipelines to ensure performance, reliability, and cost-efficiency across batch and streaming workloads.
  • Build and maintain CI/CD pipelines for data applications, automating testing, deployment, and rollback.
  • Implement release strategies such as blue/green, canary, and feature flags for data services and platform changes.
  • Establish robust data governance practices including contracts, catalogs, access controls, and quality checks.
  • Develop frameworks to transform raw datasets into clean, well-modeled assets for analytics and reporting.
  • Define and track SLIs/SLOs for critical data services covering freshness, latency, availability, and quality.
  • Implement monitoring, logging, tracing, and alerting for data workloads and platform components.
  • Participate in on-call rotations, manage incidents, and conduct postmortems for continuous improvement.
  • Investigate and resolve complex data and platform issues with a focus on root-cause analysis.
  • Maintain high standards for code quality, testing, and documentation to ensure reproducibility and observability.
  • Collaborate with the Data Enablement team, BI, and ML stakeholders to evolve the data platform.
  • Stay current with industry trends and emerging technologies in DataOps and DevOps.

What skills do I need?

  • Bachelor’s degree in Computer Engineering, Data Engineering, Computer Science, or a related field, or equivalent practical experience.
  • Proven experience in data engineering, platform engineering, or backend software development, ideally in cloud-native environments.
  • Deep expertise in Python and/or SQL with strong skills in building data or platform tooling.
  • Experience with distributed data processing frameworks such as Apache Spark (Databricks experience preferred).
  • Solid understanding of cloud platforms, especially AWS and/or GCP.
  • Hands-on experience with containerization and orchestration tools like Docker and Kubernetes (EKS/GKE/AKS).
  • Proficiency in Infrastructure-as-Code technologies such as Terraform, Pulumi, or CloudFormation.
  • Experience implementing CI/CD pipelines using tools like GitHub Actions, GitLab CI, Jenkins, CircleCI, ArgoCD, or Flux.
  • Experience in monitoring and observability using tools like Prometheus, Grafana, Datadog, or CloudWatch.
  • Experience with incident management, including on-call rotations and postmortem analysis.
  • Strong analytical and problem-solving skills across infrastructure, networking, and application layers.
  • Ability to work autonomously and collaboratively.

Nice to have:

  • Experience with Apache Airflow or similar orchestration tools.
  • Familiarity with modern data and table formats such as Parquet, Delta Lake, or Iceberg.
  • Experience as a Databricks admin/developer managing workspaces, clusters, and jobs.
  • Exposure to data quality, data contracts, or data observability tools and practices.

What do we offer?

  • Flexibility: Enjoy flexible schedules driven by performance.
  • Fintech Industry: Work in a dynamic, ever-evolving environment that boosts creativity.
  • Referral Bonus Program: Refer a suitable candidate and get rewarded.
  • Learning & Development: Access a Premium Coursera subscription.
  • Language Classes: Enjoy free English, Spanish, or Portuguese classes.
  • Social Budget: Receive a monthly budget to connect with your team in person or remotely.
  • dLocal Houses: Rent a house anywhere in the world for a week of coworking with your team.

What happens after you apply?

Our Talent Acquisition team is committed to providing an excellent candidate experience. We will review your application and keep you informed via email at every step of the process.

To apply: https://weworkremotely.com/remote-jobs/dlocal-senior-dataops-engineer

Source: weworkremotely

Published: 2026-02-02 15:09:25
