Senior Data Engineer
--Punjab, Haryana, Himachal Pradesh--
12 - 17 Lakh/Year (Annual salary)
Long term (Duration)
Onsite Bengaluru, Karnataka, India
Categories
Data Engineer (Software and Web Development)
Cloud Architects (Software and Web Development)
DevOps Engineers (Software and Web Development)
Data Scientist (Software and Web Development)
Database Administrator (Software and Web Development)
Must-have Skills
- Data Modeling - 4 Years
- ETL (Extract, Transform, Load) - 4 Years
- Data Pipelines - 4 Years
Intermediate - Azure - 4 Years
Intermediate - Azure Databricks - 4 Years
Intermediate - Azure Synapse Analytics - 3 Years
Intermediate - Python - 3 Years
Intermediate - Java (All Versions) - 2 Years
Intermediate - Data Warehousing - 3 Years
Intermediate - SQL - 3 Years
Intermediate - NoSQL - 3 Years
Intermediate - Big Data - 3 Years
Intermediate - Tableau - 2 Years
Intermediate - Power BI - 2 Years
Intermediate - Data Engineer - 6 Years
Position Overview
As a Data Engineer, you will play a crucial role in designing, building, and maintaining our data architecture. You will be responsible for ensuring the availability, integrity, and efficiency of our data pipelines, enabling our organization to make informed, data-driven decisions.
Responsibilities
- Data pipeline development: Design, build, and maintain scalable data pipelines for ingesting, processing, and transforming data from various sources, ensuring data quality, scalability, and reliability (see the illustrative sketch after this list).
- Data modelling: Develop and maintain data models, architecture patterns, schemas, and structures that support the needs of data analysts, data scientists, and other stakeholders.
- ETL (Extract, Transform, Load): Create and optimize ETL processes to extract data from diverse sources, transform it into usable formats, and load it into data warehouses or other storage solutions.
- Data governance, quality, and validation: Establish and enforce data governance policies, standards, procedures, and best practices, and implement data quality checks, validation processes, and error handling to ensure the accuracy and consistency of data.
- Performance tuning: Continuously monitor and optimize the performance of data pipelines to meet business requirements and scalability needs.
- Data security: Implement and maintain data security measures to protect sensitive information and ensure compliance with data privacy regulations.
- Collaboration: Work closely with data analysts, data scientists, and other stakeholders to understand their data requirements and provide support in data access and availability.
- Documentation: Maintain thorough documentation of data engineering processes, data models, and data dictionaries.
- Stay informed: Keep up to date with emerging trends and technologies in data engineering to ensure our data infrastructure remains cutting-edge.
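For illustration only, a minimal sketch of the kind of ingestion-and-transformation pipeline described above, assuming PySpark on Azure Databricks with a Delta Lake target; the paths, table names, and columns are hypothetical placeholders, not part of this role's actual codebase.

```python
# Illustrative sketch only: a small extract-transform-load flow in PySpark.
# Assumes an Azure Databricks / Delta Lake environment; all names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: ingest raw CSV files from a landing zone.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/mnt/landing/orders/"))

# Transform: deduplicate, standardise types, and apply basic quality rules.
clean = (raw
         .dropDuplicates(["order_id"])
         .filter(F.col("order_id").isNotNull())
         .withColumn("order_date", F.to_date("order_date"))
         .withColumn("amount", F.col("amount").cast("double")))

# Validate: fail fast if required values are missing.
bad_rows = clean.filter(F.col("amount").isNull()).count()
if bad_rows > 0:
    raise ValueError(f"{bad_rows} rows failed the amount validation check")

# Load: publish the curated table to the lakehouse / warehouse layer.
(clean.write
      .format("delta")
      .mode("overwrite")
      .saveAsTable("analytics.orders_clean"))
```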
Qualifications
- Bachelor's degree in computer science, information technology, or a related field
- A minimum of 7 years of demonstrable experience as a Senior Data Engineer or in a similar role, with hands-on experience in designing and implementing data lakes, data warehouses, data modelling, ETL processes, and data transformation pipelines
- Experience with Azure cloud-based data platforms (e.g., Azure Synapse Analytics, Fabric, Databricks); Microsoft Fabric and Lakehouse experience is an added advantage for this role
- Experience in real-time data ingestion and streaming
- Proficiency in programming languages such as Python, Java, or Scala
- Strong knowledge of database systems (SQL and NoSQL), data warehousing, and big data technologies
- Knowledge of data governance principles, data quality management, and regulatory compliance
- Experience with orchestration services (e.g., Azure Data Factory (ADF), Azure Logic Apps, Azure Functions, Databricks Jobs, Airflow); a sample orchestration sketch follows the Technical Expertise list
- Excellent communication, leadership, and collaboration skills
- Ability to work in a fast-paced, dynamic environment and lead multiple projects and teams to deliver high-quality results
Technical Expertise:
- Programming languages: Python, Java, Scala, and R
- Data management & databases: Oracle, SAP, SQL, NoSQL, data warehousing
- Big data technologies: Apache Hadoop, Spark, Kafka, etc.
- Cloud platforms: Experience with Microsoft Fabric, Azure (Synapse Analytics, Databricks, Machine Learning, AI Search, Functions, etc.), Databricks on Azure
- Data governance: Experience with data governance tools such as Ab Initio, Informatica, Collibra, Purview
- Data visualization: Familiarity with Tableau, Power BI
- DevOps & MLOps: CI/CD principles, Docker, MLflow, Kubernetes
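For illustration only, a minimal sketch of how a daily run of a pipeline like the one above could be orchestrated, here using Apache Airflow (one of the orchestration services listed in the qualifications); the DAG id, schedule, and task callables are hypothetical placeholders.

```python
# Illustrative sketch only: a three-step daily ETL DAG in Apache Airflow 2.x.
# All ids, schedules, and callables are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw files from the landing zone")


def transform():
    print("apply data-quality checks and transformations")


def load():
    print("publish curated tables to the warehouse")


with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Run the steps strictly in sequence: extract, then transform, then load.
    t_extract >> t_transform >> t_load
```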
Benefits
- Competitive salary and benefits package
- Opportunity to work on cutting-edge technology projects
- Collaborative and innovative work environment
- Professional growth and training opportunities