Job description
MedeAnalytics is a leader in healthcare analytics, providing innovative solutions that enable measurable impact for healthcare payers and providers. With the most advanced data orchestration in healthcare, payers and providers count on us to deliver actionable insights that improve financial, operational, and clinical outcomes. To date, we’ve helped uncover millions of dollars in savings annually.
As a senior member of the data engineering team, the Principal Data Engineer will be the key technical expert developing and overseeing Mede's data product build and operations. This role will build data pipelines into various source systems, land data in the MedeAnalytics Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. A key component will be driving a strong vision for how data engineering can proactively create a positive impact on the business. This role will also help lead the development of large, complex data applications in public cloud environments, directly shaping the design, architecture, and implementation of Mede's flagship data products.
You will work closely with process owners, product owners, and business users in a hybrid environment spanning in-house, on-premises data sources as well as cloud and remote systems. Hands-on experience with very large-scale, AWS-based data lake and data initiatives is required for this role.
Responsibilities:
- Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company
- Own day-to-day data collection, transport, maintenance/curation of, and access to the Data Lake/data repository
- Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders
- Increase awareness about available data and democratize access to it across the company
- Active contributor to code development in projects and services
- Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products
- Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance
- Responsible for implementing best practices around systems integration, security, performance and data management
- Empower the business by creating value through increased adoption of data, data science, and business intelligence
- Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions
- Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners
- Develop and optimize procedures to productionize data science models
- Define and manage SLAs for data products and processes running in production
- Support large-scale experimentation done by data scientists.
- Prototype new approaches and build solutions at scale
- Research state-of-the-art methodologies
- Create documentation for learnings and knowledge transfer
- Create and audit reusable packages or libraries
Qualifications
- Bachelor’s degree preferred; experience building solutions in the healthcare space is a plus
- Fluent with AWS cloud services; AWS Certification is a plus
- 6+ years of overall technology experience that includes at least 4+ years of hands-on software development, data engineering, and systems architecture
- 4+ years of experience with data lake infrastructure, data warehousing, and data analytics tools; Snowflake experience required
- 4+ years of experience in SQL optimization and performance tuning, plus development experience in programming languages such as Python, PySpark, and Scala
- 2+ years of cloud data engineering experience on Oracle OCI and/or AWS
- Experience integrating multi-cloud services with on-premises technologies
- Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines
- Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations
- Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets
- Experience with MPP database technologies such as AWS Redshift, Vertica, or Snowflake
- Experience running and scaling applications on cloud infrastructure and container orchestration platforms like Kubernetes
- Experience with version control systems like GitHub and with deployment and CI/CD tools
- Experience with AWS Glue, Azure Data Factory, Databricks, and machine learning tools
- Working knowledge of agile development, including DevOps and DataOps concepts
- Strong change management skills; comfortable with change, especially the change that comes with company growth
Benefits Include:
- Incredible Medical, Dental, Vision benefits - Effective on the first of the month after your start
- FREE single healthcare coverage!
- Company paid Basic Life & AD&D Insurance, STD/LTD
- ROBUST Employee Assistance Program (EAP)
- 401k with company match
- 9 paid holidays AND 3 floating holidays = 12 total!
- Paid time off accrual
- Employee Referral Bonus
- Professional Development
- and more!
This job description reflects management’s assignment of essential functions. Flexibility is expected as MedeAnalytics grows, and tasks may be adjusted or delegated as needed.
At MedeAnalytics we deeply value each and every one of our committed, inspired and passionate employees. If you're looking to make an impact doing work that matters, you're in the right place. Help us shape the future of healthcare by joining #TeamMede.