Job description
Work you'll do/Responsibilities
You will determine processes and automation tools to reduce IT spend and increase efficiencies on multiple projects within the Healthcare domain.
You will collaborate with DevOps teams to implement CI/CD pipelines, automated deployments, and infrastructure as code (IaC) practices for AWS-based solutions, and you will document design, development, and deployment processes as well as create technical specifications and user guides for the solutions you build.
Your role will be to design, develop, and deploy cloud-based solutions for data processing, analytics, and integration using cloud services and big data technologies, collaborating with architects, data engineers, and business stakeholders to understand requirements and translate them into technical solutions.
You will implement data ingestion, transformation, and storage processes using AWS services such as S3, Glue, Athena, Redshift, and EMR; implement security, data governance, and compliance measures to ensure data integrity and protection in AWS-based solutions; and develop and optimize data pipelines using Snowpark, SnowSQL, Hadoop, and PySpark to extract, transform, and load data efficiently.
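For candidates less familiar with the terminology, the extract-transform-load (ETL) work described above follows a common three-stage pattern. The snippet below is a minimal, library-free sketch of that pattern; the record fields and cleaning rule are hypothetical, and in practice the same stages would run as PySpark or Snowpark DataFrame operations against S3 and Snowflake rather than over in-memory lists:

```python
# Minimal sketch of an extract-transform-load (ETL) pipeline.
# Hypothetical records and rules for illustration only; a production
# pipeline would use PySpark/Snowpark DataFrames reading from S3 or
# Snowflake instead of Python lists.

def extract(raw_rows):
    """Extract: parse raw CSV-like strings into records."""
    return [dict(zip(("patient_id", "charge"), row.split(","))) for row in raw_rows]

def transform(records):
    """Transform: cast types and drop malformed rows."""
    cleaned = []
    for rec in records:
        try:
            cleaned.append({"patient_id": rec["patient_id"].strip(),
                            "charge": float(rec["charge"])})
        except (KeyError, ValueError):
            continue  # skip rows that fail validation
    return cleaned

def load(records):
    """Load: aggregate into a summary here, rather than write to a warehouse."""
    return {"rows": len(records), "total_charge": sum(r["charge"] for r in records)}

summary = load(transform(extract(["p1,100.0", "p2,not-a-number", "p3,50.5"])))
print(summary)  # {'rows': 2, 'total_charge': 150.5}
```

The same shape scales up directly: in PySpark the transform stage becomes DataFrame column casts and filters, and the load stage becomes a write to Parquet on S3 or a warehouse table.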
You will conduct performance tuning and optimization of data processing and analytics workflows to maximize efficiency and scalability, and work with cross-functional teams to troubleshoot and resolve issues related to data processing, data integration, and analytics solutions.
You will communicate regularly with Engagement Managers (Directors), project team members, and representatives from various functional and/or technical teams, escalating any matters that require additional attention and consideration from engagement management.
The Team
As a part of the US Strategy & Analytics Offering Portfolio, the AI & Data Operations offering provides managed AI, Intelligent Automation, and Data DevOps services across the advise-implement-operate spectrum.
Qualifications
Required
- 5+ years' experience as a Cloud Data Engineer
- 5+ years' hands-on experience with Snowpark, SnowSQL, Hadoop, and PySpark
- 5+ years' experience with AWS services such as S3, Glue, Athena, Redshift, EMR, Lambda, and CloudFormation
- 5+ years' experience in Python with a focus on data processing and analytics
- 5+ years' experience in the healthcare domain
- 5+ years' experience in consulting
- Strong knowledge and hands-on experience in designing, developing, and deploying scalable solutions on cloud platforms
- Expertise in SQL and database technologies for data manipulation and querying
- Bachelor's degree or equivalent experience
- Limited immigration sponsorship may be available
Preferred
- Familiarity with data modeling, data warehousing, and data integration concepts
- Experience with DevOps practices, CI/CD pipelines, and infrastructure as code (IaC) using tools like Jenkins, Git, and Terraform
- Strong analytical and problem-solving skills, with the ability to troubleshoot and resolve complex technical issues
- Familiarity with agile development methodologies and experience working in Agile teams
- Ability to travel 10%, on average, based on the work you do and the clients and industries/sectors you serve
- Bachelor's degree, preferably in Computer Science, Information Technology, Computer Engineering, or related IT discipline; or equivalent experience
- Analytical and decision-making responsibilities
- Ability to manage multiple projects and prioritize tasks into manageable work products
- Ability to operate independently or with minimal supervision
- Excellent communication skills
- Ability to deliver technical demonstrations