Senior Data Engineer
Punjab, Haryana, Himachal Pradesh
12 - 17 Lakh/Year (Annual salary)
Long-term (Duration)
Onsite: Bengaluru, Karnataka, India
Categories
Data Engineer (Software and Web Development)
Cloud Architects (Software and Web Development)
DevOps Engineers (Software and Web Development)
Data Scientist (Software and Web Development)
Database Administrator (Software and Web Development)
Must-have Skills
- Data Modeling - 4 Years
- ETL (Extract, Transform, Load) - 4 Years
- Data Pipelines - 4 Years
Intermediate - Azure - 4 Years
Intermediate - Azure Databricks - 4 Years
Intermediate - Azure Synapse Analytics - 3 Years
Intermediate - Python - 3 Years
Intermediate - Java (All Versions) - 2 Years
Intermediate - Data Warehousing - 3 Years
Intermediate - SQL - 3 Years
Intermediate - NoSQL - 3 Years
Intermediate - Big Data - 3 Years
Intermediate - Tableau - 2 Years
Intermediate - Power BI - 2 Years
Intermediate - Data Engineer - 6 Years
Position Overview
As a Data Engineer, you will play a crucial role in designing, building, and maintaining our data architecture. You will be responsible for ensuring the availability, integrity, and efficiency of our data pipelines, enabling our organization to make informed, data-driven decisions.

Responsibilities
- Data pipeline development: Design, build, and maintain scalable data pipelines for ingesting, processing, and transforming data from various sources, ensuring data quality, scalability, and reliability.
- Data modelling: Develop and maintain data models, architecture patterns, schemas, and structures that support the needs of data analysts, data scientists, and other stakeholders.
- ETL (Extract, Transform, Load): Create and optimize ETL processes to extract data from diverse sources, transform it into usable formats, and load it into data warehouses or other storage solutions.
- Data governance, quality, and validation: Establish and enforce data governance policies, standards, procedures, and best practices, and implement data quality checks, validation processes, and error handling to ensure the accuracy and consistency of data.
- Performance tuning: Continuously monitor and optimize the performance of data pipelines to meet business requirements and scalability needs.
- Data security: Implement and maintain data security measures to protect sensitive information and ensure compliance with data privacy regulations.
- Collaboration: Work closely with data analysts, data scientists, and other stakeholders to understand their data requirements and provide support in data access and availability.
- Documentation: Maintain thorough documentation of data engineering processes, data models, and data dictionaries.
- Stay informed: Keep up to date with emerging trends and technologies in data engineering to ensure our data infrastructure remains cutting-edge.

Qualifications
- Bachelor's degree in computer science, information technology, or a related field
- A minimum of 7 years of demonstrable experience as a Senior Data Engineer or in a similar position, with hands-on experience in designing and implementing data lakes, data warehouses, data modelling, ETL processes, and data transformation pipelines
- Experience with Azure cloud-based data platforms (e.g., Azure Synapse Analytics, Fabric, Databricks); Microsoft Fabric and Lakehouse experience is an added advantage for this role
- Experience in real-time data ingestion and streaming
- Proficiency in programming languages such as Python, Java, or Scala
- Strong knowledge of database systems (SQL and NoSQL), data warehousing, and big data technologies
- Knowledge of data governance principles, data quality management, and regulatory compliance
- Experience with orchestration services (e.g., Azure Data Factory (ADF), Azure Logic Apps, Azure Functions, Databricks Jobs, Airflow)
- Excellent communication, leadership, and collaboration skills
- Ability to work in a fast-paced, dynamic environment and lead multiple projects and teams to deliver high-quality results
- Technical expertise:
  - Programming languages: Python, Java, Scala, and R
  - Data management & databases: Oracle, SAP, SQL, NoSQL, Data Warehousing
  - Big data technologies: Apache Hadoop, Spark, Kafka, etc.
  - Cloud platforms: Experience with Microsoft Fabric, Azure (Synapse Analytics, Databricks, Machine Learning, AI Search, Functions, etc.), and Databricks on Azure
  - Data governance: Experience with data governance tools such as Ab Initio, Informatica, Collibra, Purview
  - Data visualization: Familiarity with Tableau, Power BI
  - DevOps & MLOps: CI/CD principles, Docker, MLflow, Kubernetes
Benefits
- Competitive salary and benefits package
- Opportunity to work on cutting-edge technology projects
- Collaborative and innovative work environment
- Professional growth and training opportunities