As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You will play a crucial role in managing and optimizing data infrastructure to support the organization's data needs.

Roles & Responsibilities:
- Act as a subject matter expert (SME); collaborate with and manage the team to deliver.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Design and develop data pipelines to extract, transform, and load data.
- Ensure data quality and integrity by implementing data validation and cleansing processes.
- Optimize data storage and retrieval processes for efficient data access.
- Collaborate with cross-functional teams to understand data requirements and provide technical expertise.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Python (Programming Language), AWS, Glue, and PySpark.
- Good-to-Have Skills: Google BigQuery, Datastream, Dataflow, Aurora PostgreSQL, and handling ETL jobs; experience with big data technologies such as Hadoop or Spark.
- Strong understanding of data architecture principles and best practices.
- Experience with data modeling and database design.
- Hands-on experience with ETL (Extract, Transform, Load) processes and tools.
- Familiarity with cloud platforms and services such as AWS or Azure.
- Knowledge of data governance and data security practices.
- Experience with data visualization tools such as Tableau or Power BI.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Python (Programming Language).
- This position is based at our Bengaluru office.
- A BE degree is required.