Senior DataOPS Engineer

PTC

  • Bucharest
  • Permanent
  • Full-time
  • 21 days ago
Our world is transforming, and PTC is leading the way. Our software brings the physical and digital worlds together, enabling companies to improve operations, create better products, and empower people in all aspects of their business.

Our people make all the difference in our success. Today, we are a global team of nearly 7,000, and our main objective is to create opportunities for our team members to explore, learn, and grow – all while seeing their ideas come to life and celebrating the differences that make us who we are and the work we do possible.

As a Senior DataOPS Engineer, you will architect, build, and operate highly reliable data platforms and pipelines across CDOPS. You will be responsible for ensuring operational excellence, strong observability, secure and compliant deployments, and production-grade performance for data ingestion, transformation, and analytics systems. In addition to building robust data infrastructure, you will play a critical role in enabling enterprise-grade semantic models, ensuring BI teams can develop performant, governed, and reusable analytical assets.

This role requires deep expertise across data engineering, Linux systems, container orchestration, CI/CD automation, monitoring, and DataOPS best practices. You will play a key part in designing scalable batch and real-time pipelines, maintaining cloud and on-prem data infrastructure, modernizing our orchestration stack, and mentoring engineers across the organization.

Your work will directly empower our BI and AI teams through stable, well-instrumented, high-performance data systems.

Day-to-Day Responsibilities

Data Platform Architecture & Engineering
  • Design, develop, and maintain data pipelines, focusing on reliability, performance, and idempotency.
  • Build, optimize, and operationalize complex workflows using Apache Airflow, including scalable configurations, robust retries, SLAs, backfills, and data quality enforcement.

Platform Reliability, Operations & Automation
  • Architect and run containerized data applications using Docker and Docker Compose, including secure health checks and environment-based configuration overlays.
  • Operate and debug complex containers and Linux hosts (CPU steal, I/O wait, kernel parameters, networking, DNS behavior).
  • Build CI/CD workflows for data pipelines, container deployments, and data applications.

Monitoring, Observability & DataOPS Excellence
  • Design and operate full observability stacks using Prometheus and Grafana, including exporters for containers, Postgres, and internal services.
  • Lead incident response, root-cause analysis, runbooks, and reliability improvements across CDOPS data platforms.

Collaboration & Cross-Functional Support
  • Partner with Data Engineering, BI, AI, and Infrastructure teams to design robust data flows and operational solutions.
  • Provide technical leadership in code reviews, architectural discussions, and design documentation.
  • Mentor junior engineers in DataOPS practices and production engineering craftsmanship.

Best Practices & Continuous Improvement
  • Champion DataOPS standards across the organization, including automation, observability, testing, governance, and secure operations.
  • Maintain operational checklists, system diagrams, and platform documentation.
  • Drive internal process improvements: eliminating toil, improving scalability, enhancing cost-efficiency, and reducing deployment friction.
  • Stay current on new technologies in data orchestration, data ingestion, container platforms, warehouse optimization, monitoring ecosystems, and cloud infrastructure.

Preferred Skills & Knowledge
  • Expert-level SQL and strong proficiency in Python for data and systems automation.
  • Deep expertise in Docker (multi-stage builds, Compose, debugging), Linux internals, and Bash tooling.
  • Strong experience with Airflow architecture, executor selection, scaling, and DAG best practices.
  • Skilled in operating observability stacks (Prometheus, Grafana), exporters, and metric governance.
  • Understanding of modern data modeling best practices.
  • Strong knowledge of Git-based workflows, CI/CD automation, and infrastructure-as-code.
  • Exposure to Microsoft Fabric, Snowflake, dbt, or Terraform is a plus.
  • Excellent communication skills, structured thinking, and the ability to influence best practices across teams.

Preferred Experience
  • Operating data platforms using Snowflake or similar systems.
  • Running Airflow in production at scale (Celery or Kubernetes executors).
  • Designing monitoring architectures with Prometheus and Grafana.
  • Experience in SaaS environments.
  • Managing multi-container applications, secure secrets handling, and production troubleshooting.

Basic Qualifications
  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or equivalent experience.
  • 5+ years of hands-on experience in Data Engineering, DevOps, or DataOPS roles.
  • English fluency.
  • Proven experience designing and operating production-grade data pipelines and containerized applications.
  • Strong understanding of cloud environments, Linux systems, and data pipeline orchestration.
  • Demonstrated success in supporting cross-functional data teams.

Life at PTC is about more than working with today’s most cutting-edge technologies to transform the physical world. It’s about showing up as you are and working alongside some of today’s most talented industry leaders to transform the world around you.

If you share our passion for problem-solving through innovation, you’ll likely become just as passionate about the PTC experience as we are. Are you ready to explore your next career move with us?

We respect the privacy rights of individuals and are committed to handling Personal Information responsibly and in accordance with all applicable privacy and data protection laws. Review our Privacy Policy here.
