Please apply at this link: https://procogia.bamboohr.com/jobs/view.php?id=26&source=aWQ9MjE%3D
We are looking for software engineers who want to bring their passion for infrastructure to building world-class infrastructure products. You will build libraries and distributed services to support a data analytics team reliably and at scale, using both on-premises and cloud environments. We want to provide cutting-edge, reliable, easy-to-use infrastructure for ingesting and processing data, and to help the teams that build data-intensive applications succeed. You will work with many cross-functional teams and lead the planning, execution, and success of technical projects with the ultimate purpose of improving the customer experience.
ProCogia is a data consulting firm whose mission is to empower organizations to achieve sustainable advantage through data solutions. We invest in our consultants and want to see them progress in their skill sets and careers. ProCogia constantly looks to innovate and to find the latest breakthroughs we can bring to our clients.
You will be responsible for the data infrastructure used for analytics and machine learning within our clients' teams. This infrastructure stores, processes, and serves hundreds of terabytes of data for millions of users. The team's goal is to ensure reliability and performance at the highest level. Responsibilities will include:
Manage the team's data infrastructure, supporting both internal customers within the team and external customers across the enterprise
Take part in designing and building our next-generation data storage and processing infrastructure to push our services to the next level
Diagnose, fix, improve, and automate complex issues across the entire stack to ensure maximum uptime and performance
Collaborate across the team on proper use/integration of our platform
Write code and documentation, participate in code reviews, and mentor other engineers
Qualifications include:
Passion for data infrastructure and a deep engineering background
Experience with distributed systems such as big data processing, streaming, or storage engines (e.g., Apache Hadoop, Apache Spark, Apache Kafka, Apache Hudi), cloud environments (e.g., AWS, GCP), or resource management systems (e.g., Apache Mesos, Kubernetes)
Experience with alerting, monitoring, and remediation automation in a large-scale distributed environment
Extensive programming experience in Java, Scala, or Python
Interest or knowledge in using public or private Kubernetes frameworks for scaling data and services infrastructure
B.S., M.S., or Ph.D. in Computer Science, Computer Engineering, or equivalent practical experience