Senior Data Engineer
- Industry Other
- Category Production / Maintenance / Quality
- Location Kathmandu, Nepal
- Expiry date Feb 13, 2026 (Expired)
Job Description
About the Role
We are looking for a skilled Data Engineer to design, build, and operate scalable data pipelines and cloud-native data platforms. You will work closely with analytics, product, and engineering teams to enable reliable reporting, analytics, and AI/ML use cases.
This role is hands-on and focused on building and owning production-grade data pipelines in Azure, with an emphasis on data quality, performance, and maintainability.
What You’ll Do
Data Pipelines & Processing
- Design, develop, and maintain robust batch and near-real-time data pipelines
- Build and optimize ETL/ELT workflows for structured and semi-structured data
- Ingest data from multiple sources (databases, APIs, files, and cloud services)
- Ensure pipelines are reliable, scalable, and easy to operate in production
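For illustration, the kind of incremental (watermark-based) batch extraction this role involves could be sketched as follows; the in-memory source, column names, and function are hypothetical examples, not part of any specific stack used here.

```python
from datetime import datetime, timezone

# Hypothetical in-memory "source" standing in for a database, API, or file feed.
SOURCE_ROWS = [
    {"id": 1, "updated_at": datetime(2026, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2026, 1, 5, tzinfo=timezone.utc)},
    {"id": 3, "updated_at": datetime(2026, 1, 9, tzinfo=timezone.utc)},
]

def extract_incremental(rows, watermark):
    """Return only rows changed since the last successful run,
    plus the new watermark to persist for the next run."""
    fresh = [r for r in rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in fresh), default=watermark)
    return fresh, new_watermark

# A run with a watermark of Jan 2 picks up only rows 2 and 3.
batch, wm = extract_incremental(
    SOURCE_ROWS, datetime(2026, 1, 2, tzinfo=timezone.utc)
)
print([r["id"] for r in batch])  # [2, 3]
```

Persisting the returned watermark between runs is what makes the load incremental and safe to re-run.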
Data Modeling & Warehousing
- Design and maintain data models for analytics and reporting (fact/dimension, star/snowflake schemas)
- Build and optimize data warehouse tables and analytical datasets
- Partner with analytics and BI teams to deliver clean, well-documented datasets
Cloud & Platform Engineering
- Implement and manage workflows using Azure Data Factory (ADF) and Apache Airflow
- Deploy and manage data workloads using Azure Container Apps and Azure Function Apps
- Monitor pipelines, troubleshoot failures, and continuously improve performance and cost efficiency
- Apply best practices for cloud security, scalability, and cost management
Collaboration & Quality
- Work closely with analytics, product, and engineering teams to support business and AI/ML use cases
- Implement basic data quality checks, validation, and monitoring
- Document data pipelines, data models, and operational processes
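As a minimal sketch of the basic data-quality checks mentioned above (null checks and key uniqueness), assuming batches arrive as lists of dict rows; the function and column names are illustrative only.

```python
def run_quality_checks(rows, required_cols, unique_key):
    """Run basic data-quality checks on a batch of dict rows.
    Returns a list of human-readable failure messages (empty = all checks pass)."""
    failures = []
    # Null/missing-value check on required columns.
    for i, row in enumerate(rows):
        for col in required_cols:
            if row.get(col) is None:
                failures.append(f"row {i}: missing '{col}'")
    # Uniqueness check on the key column.
    keys = [row.get(unique_key) for row in rows]
    if len(keys) != len(set(keys)):
        failures.append(f"duplicate values in key column '{unique_key}'")
    return failures

rows = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 2, "amount": None},  # null amount -> failure
    {"order_id": 2, "amount": 7.5},   # duplicate key -> failure
]
issues = run_quality_checks(
    rows, required_cols=["order_id", "amount"], unique_key="order_id"
)
print(issues)
```

In production these checks would typically run inside the pipeline (e.g. as a validation step before loading) and feed monitoring or alerting rather than a print statement.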
What We’re Looking For (Required Skills & Experience)
• Strong understanding of data warehousing concepts (fact/dimension modeling, star and snowflake schemas)
• Hands-on experience building and operating ETL/ELT pipelines
• Proficiency in Python and SQL (writing production-quality code and queries)
• Experience working with large-scale datasets
• Solid experience with Azure data services, including: Azure Data Factory (ADF), Apache Airflow, Azure Container Apps, Azure Function Apps
• Good understanding of cloud security, performance optimization, and cost management
• Strong problem-solving, debugging, and operational mindset
Nice to Have
• Experience with CI/CD for data pipelines
• Familiarity with data governance, data quality frameworks, or monitoring tools
• Experience supporting analytics, BI, or AI/ML workloads
• Exposure to infrastructure-as-code (Terraform, ARM, or similar)
What We Offer
• Opportunity to work on modern, cloud-native data platforms
• Exposure to large-scale, real-world data problems
• A collaborative, growth-oriented engineering environment
• Ownership and influence over data architecture and best practices