Posted 1 day ago
Data Engineer - GCP
If the recruiter contacts you, you will be able to learn the salary.
About the job
Details
Contract type: Permanent
Schedule: Full-time
Workspace: Hybrid
Benefits
- Legally required benefits
- Benefits above the legal minimum
Description
We are looking for a Data Engineer who thrives on building robust, scalable, production-grade pipelines in GCP. This role owns the complete data journey, from source ingestion to consumption, ensuring efficiency, traceability, and high quality throughout every layer of the platform. You will lead pipeline development efforts, implement monitoring and validation frameworks, and collaborate closely with data analysts and engineers to deliver reliable, high-performance data flows.
What will you be doing?
- Build and maintain high-performance data pipelines using GCP-native tools, covering batch and streaming needs.
- Collaborate with senior data engineers to define modular and scalable data architecture.
- Own the transformation logic, version control, and performance tuning of all pipelines.
- Implement monitoring, logging, and data validation mechanisms to ensure trust and reliability in data flows.
- Empower analysts by ensuring data is discoverable, accessible, and consumption-ready.
- Review code, provide technical mentorship, and elevate engineering standards across the team.
- Identify and execute on opportunities to modernize and streamline the platform.
- Take ownership of data quality and pipeline robustness for both day-to-day operations and long-term scaling.
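To illustrate the "monitoring, logging, and data validation" responsibility above, here is a minimal sketch of a batch validation gate, the kind of check run before promoting data downstream. All names (`validate_batch`, the sample fields) are hypothetical, not part of any specific stack mentioned in the posting.

```python
# Hypothetical sketch: validate a batch of rows before it is promoted
# downstream. Returns a list of error strings; empty means the batch passes.

def validate_batch(rows, required_fields):
    """Check that the batch is non-empty and every row has the required fields."""
    errors = []
    if not rows:
        errors.append("batch is empty")
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if row.get(f) is None]
        if missing:
            errors.append(f"row {i} missing fields: {missing}")
    return errors

batch = [
    {"user_id": 1, "event": "click"},
    {"user_id": None, "event": "view"},
]
print(validate_batch(batch, ["user_id", "event"]))
# → ["row 1 missing fields: ['user_id']"]
```

In a real GCP pipeline this kind of check would typically run as a task in an orchestrator (e.g. Airflow/Composer) and feed its result into logging and alerting.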
What are we looking for?
- 4–6 years of experience in dedicated data engineering roles (not data science or general software engineering), with at least 2 years in technical or team leadership.
- Advanced proficiency in BigQuery, SQL, and GCP services like Cloud Functions and Cloud Storage.
- Hands-on experience with Python, pipeline orchestration (Airflow, Composer), and transformation tools like dbt or Dataform.
- Strong understanding of data modeling, pipeline performance optimization, and cloud-native design patterns.
- Familiarity with CI/CD workflows (Git, GitLab) and automation/testing best practices.
- Experience with structured and semi-structured formats (JSON, Parquet) and streaming (Pub/Sub).
- Strong commitment to code quality, performance, and documentation.
- A problem-solving mindset and ownership over delivery, stability, and monitoring.
We also value:
- Deep expertise in query optimization and cloud storage performance tuning.
- Proven track record of implementing automated data validation and monitoring systems.
- GCP certification is a plus, but hands-on experience is a must.
- Experience creating reusable transformation logic for cross-domain use cases.
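"Reusable transformation logic for cross-domain use cases" can be pictured as small, composable transform functions chained over rows, so the same building blocks serve many datasets. This is a generic sketch with hypothetical function names, not a reference to any particular codebase.

```python
# Hypothetical sketch: small, composable row transforms that can be
# reused and chained across different data domains.
from functools import reduce

def normalize_keys(row):
    """Trim and lowercase column names so downstream logic is uniform."""
    return {k.strip().lower(): v for k, v in row.items()}

def drop_nulls(row):
    """Remove fields with missing values."""
    return {k: v for k, v in row.items() if v is not None}

def apply_transforms(rows, transforms):
    """Apply each transform in order to every row."""
    return [reduce(lambda r, t: t(r), transforms, row) for row in rows]

raw = [{" User_ID ": 7, "Event": None}]
print(apply_transforms(raw, [normalize_keys, drop_nulls]))
# → [{'user_id': 7}]
```

Tools like dbt or Dataform apply the same idea at the SQL layer, packaging shared transformations as models that multiple domains select from.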
Why join us?
You'll be part of a dynamic, fast-growing team where your work will have a direct impact on process optimization and data-driven decision-making. If you're passionate about data engineering and ready to take your skills to the next level, this is your chance to apply!
ID: 20524652