The Data Engineer collaborates with a multidisciplinary Agile team to build high-quality data pipelines that drive analytic solutions. This role focuses on generating insights from connected data, enabling advanced data-driven decision-making capabilities.
How you will contribute:
Design, develop, optimize, and maintain data architecture and pipelines that adhere to ETL principles and business goals.
Define data requirements, gather and mine large-scale structured and unstructured data, and validate data using various tools in the Big Data environment.
Support Data Scientists in data sourcing and preparation to visualize data and synthesize insights of commercial value.
Collaborate with analytics and business teams to improve data models that feed business intelligence tools and dashboards, increasing data accessibility and fostering data-driven decision making across the organization.
Implement processes and systems to ensure data reconciliation, monitor data quality, and ensure production data is accurate and available for key stakeholders, downstream systems, digital products, and business processes.
Write unit, integration, and performance test scripts, and contribute to data engineering documentation.
Perform data analysis to troubleshoot and resolve data-related issues.
Work closely with Agile Scrum teams, including frontend and backend engineers, product managers, scrum masters, and quality engineers, to deliver integrated and scalable data products.
Collaborate with enterprise teams, including Enterprise Architecture, Security, and Enterprise Data Backbone Engineering, to design and develop data integration patterns and models supporting various digital products.
Partner with DevOps and the Cloud Center of Excellence to deploy data pipeline solutions in the Takeda AWS environments, meeting security and performance standards.
Support and align with Data Trustees, Data Stewards, and Master Data Management functions following Data Governance principles.
Implement statistical data quality procedures on new data sources by applying rigorous iterative data analytics.
Skills and qualifications:
Education Requirements:
Bachelor's or Master's degree in Computer Science, Data Science, Information Design, or a related field
Experience Requirements:
3+ years of proven experience as a Data Visualization Engineer, Data Engineer, Data Analyst, or a similar role.
Proficiency in data visualization tools such as Power BI (required) or QlikView (nice to have), and familiarity with programming languages like Python or R for data manipulation and analysis.
Advanced knowledge of SQL, including writing complex queries and managing databases effectively.
Strong expertise in data integration, data modeling, and modern database technologies (GraphQL, SQL, NoSQL, Python, PySpark) and AWS cloud technologies (e.g., DMS, Lambda, Databricks, SQS, Step Functions, Data Streaming).
Understanding of data architecture and models (Data lake, Erwin).
Proficient in programming languages such as Python, Scala, Java, and C++, tailored for scalable data engineering solutions.
Experienced in system integration, ensuring seamless data flow and functionality across various platforms.
Strong analytical and problem-solving skills with the ability to interpret complex data and translate it into actionable insights for decision making.
Good communication and presentation skills, with the ability to effectively convey technical information to both technical and non-technical stakeholders.
Detail-oriented with a strong focus on data accuracy and integrity.
Knowledge of testing tools and practices (unit testing, integration testing, E2E testing).
Knowledge of SDLC procedures, version control (e.g., Git, GitHub, GitLab), and familiarity with CI/CD pipelines (e.g., Jenkins, GitHub Actions).
Proficient in working within Agile Scrum teams and using JIRA and Confluence.