Job description
As a Cloud Data Engineer, you will:
- Work on project assignments to design and build data and analytics applications using private/public cloud IaaS, PaaS, and SaaS platforms on Azure and AWS.
- Design and implement data pipelines using Azure technologies such as Data Lake Storage Gen2, Azure Data Factory, SQL pools, Azure Stream Analytics, Synapse Analytics, etc.
- Optimize data pipelines for performance and scalability using Azure Databricks.
- Use ETL/ELT (Extract, Transform, Load / Extract, Load, Transform) technologies and optimize data storage and data quality.
- Document source table details and the appropriate transformation rules that need to be applied to the target tables/reports.
- Work closely with the business, IT teams, project managers, architects, developers, and modelers.
- Translate business requirements into code and deliver the required results.
- Explore various cloud services and build and demonstrate PoCs to implement the best solutions.
- Contribute to the feasibility study, analysis, design, build, test, and deploy phases of modernization programs that migrate legacy data systems from mainframes to Azure/AWS cloud systems.
- Understand data models, data storage models, and data migration to manage data within the organization, on small to large-sized projects.
- Work on the design and development of cloud-native, microservices-based application solutions and the resolution of technical issues.
- Identify, communicate, and mitigate risks, assumptions, issues, and decisions throughout the full project lifecycle.
We are looking for candidates with experience in the following areas:
- Hands-on experience in building data lakes, analytics, and visualizations using Azure/AWS private/public cloud IaaS, PaaS, and SaaS services.
- Expertise in cloud storage services (data lakes, containers) and data pipelines to ingest and orchestrate large volumes of data.
- Experience in building data lakes and relational data stores on multi-cloud architectures.
- Experience in developing Azure-based cloud solutions using ADF, Databricks, Synapse, SQL and Spark pools, Cosmos DB, Stream Analytics, Power BI reporting, and DevOps/CI/CD on Azure.
- Experience in SQL/PL-SQL, on-prem/cloud ETL tools, and visualization and analytics tools such as Power BI and Tableau.
- Experience working with mainframe technologies, including COBOL and/or PL/I programming, and system- or application-level experience with IBM CICS, IMS, and JCL subsystems.
- Proficiency in IBM Db2 for z/OS, IMS DB, or mainframe-format data files (VSAM/QSAM).
- Strong verbal and written communication skills and the ability to work effectively across internal and external organizations.
Eligibility Criteria:
- Bachelor's or Master's degree in Computer Science or a related field.
- Minimum of 5 years of experience as a data engineer with strong expertise in Azure.
- Proficiency in Azure, ADLS, ADF, Synapse Analytics, and Databricks.
- Experience in implementing lakehouse architectures and Delta Lake.
- Knowledge of LakehouseIQ is advantageous.
- Experience with reporting/visualization tools such as Power BI and Tableau.
- Good SQL/PL-SQL programming skills, plus PySpark and Spark SQL, including writing complex procedures and queries for DataFrames and notebooks.
- Hands-on experience with Agile software development.
- Good problem-solving skills.
Job Types: Full-time, Permanent
Salary: £33,500.00-£35,500.00 per year
Schedule:
- Monday to Friday
Supplemental pay types:
- Performance bonus
Ability to commute/relocate:
- Aldershot: reliably commute or plan to relocate before starting work (required)
Education:
- Bachelor's (required)
Work Location: Hybrid remote in Aldershot
Application deadline: 05/08/2023