Our client is an R&D and innovation lab in downtown Toronto, responsible for transmitting billions of bytes of secure electronic data at dizzying speeds. Their goal is to make commerce more accessible and convenient; in 2017 they launched their first app in the Canadian/North American market, which helps users organize and pay bills in one simple location. Not only does the app send you reminders so that you never miss a payment, but it also gives you 3% cash back on popular retail brand gift cards. They support their parent company, a mobile payments and financial services company that currently serves 300 million customers.
Their engineers work on a small, diverse, and tight-knit team committed to the end consumer, leveraging their expertise in technology to build lasting, secure, and efficient solutions. These creative and incredibly talented engineers provide customized and confidential experiences for their consumers and users, and employees are encouraged to take charge of their innovative ideas and execute them with passion and vigour.
Role: The ‘Data as a Service’ (DaaS) team operates under a charter of capturing, storing, and processing data reliably at scale. The DaaS team makes this data available to a large set of products used for internal and external services. The core infrastructure that powers this platform operates at a speed, scale, and complexity that few others can claim. The issues the team faces with large-scale data storage, low-latency retrieval, high-volume requests, and high availability are common yet complex. To help solve these challenges, they are looking for the best engineering talent to join their rewarding environment. The team is in a hyper-growth phase right now, making this a stellar opportunity to make an impact. The Senior DevOps Engineer will be a core contributor on the Data Platform team and help deliver the world-class Data Platform our client needs to grow its data products.
Must Have Skills:
• 3+ years of DevOps or system administration experience using Chef, Puppet, Ansible, or SaltStack for system configuration, or high-quality shell scripting for systems management (error handling, idempotency, configuration management)
• Strong AWS experience; familiarity with other platforms such as Google Cloud, Aliyun, or Apache Mesos is an asset
• Experience with managing and automating configuration of MySQL database clusters.
• Adept at troubleshooting and administering Linux systems, diagnosing networking issues, and fine-tuning instrumentation and alerting systems.
• Proven knowledge of systems programming (bash and shell tools) and/or at least one scripting language (Python, Ruby, Perl).
• Preferably, experience operating a high-load data pipeline and exposure to technologies such as Kafka, Kinesis, Spark, S3, and Redshift.
• Experience with securing distributed systems.
• An understanding of the purpose of reasonable security techniques and their trade-off with operational efficiency.
• Adaptability and a focus on the simplest, most efficient, and most reliable solutions.
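The idempotency called for in the scripting requirement above means a script can run repeatedly without changing the result beyond the first run. A minimal sketch of the idea, using a hypothetical sysctl setting and a temporary file as placeholders:

```shell
#!/usr/bin/env bash
# Sketch of idempotent shell-based configuration management.
# The setting and file below are hypothetical examples.
set -euo pipefail

ensure_line() {
  # Append a line to a file only if it is not already present,
  # so repeated runs leave the file unchanged (idempotency).
  local line="$1" file="$2"
  grep -qxF -- "$line" "$file" 2>/dev/null || printf '%s\n' "$line" >> "$file"
}

# Running twice yields exactly one copy of the setting.
conf="$(mktemp)"
ensure_line "net.ipv4.ip_forward = 1" "$conf"
ensure_line "net.ipv4.ip_forward = 1" "$conf"
```

Tools like Chef, Puppet, Ansible, and SaltStack build this same guarantee into their resource model, which is why the posting treats them and disciplined shell scripting as alternatives.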
Responsibilities:
• Create a resilient and highly operable production environment for the Data Platform, with 24×7 availability, high performance, scalability, and zero-downtime releases in the AWS environment.
• Manage large MySQL database clusters and NoSQL systems such as Cassandra.
• Manage regional deployments and set up disaster recovery for Kafka data pipelines, systems, and stores in the AWS environment.
• Collaborate with Engineers to create a continuous delivery environment and processes.
• Instrument and monitor the health and availability of services, with fault detection, alerting, triage and recovery (automated and manual).
• Write scripts and runbooks to automate procedures.
• Share an on-call rotation and handle service incidents.
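The monitoring and runbook duties above boil down to codifying checks that either pass or page someone. A minimal sketch of such a check, with a hypothetical disk-usage threshold and alert action standing in for a real alerting integration:

```shell
#!/usr/bin/env bash
# Sketch of an automated health check a runbook script might codify.
# The threshold and the echo-based "alert" are hypothetical placeholders.
set -euo pipefail

disk_alert() {
  # Return 0 (healthy) if usage of the given mount is below the
  # threshold percentage, 1 (alert) otherwise.
  local mount="$1" threshold="$2"
  local used
  used="$(df --output=pcent "$mount" | tail -1 | tr -dc '0-9')"
  [ "$used" -lt "$threshold" ]
}

# Example triage step: flag the root volume when it is over 90% full.
disk_alert / 90 || echo "ALERT: root volume over 90% used"
```

In practice a check like this would feed an alerting system rather than echo to a terminal, and the automated-recovery path would live alongside the manual runbook it replaces.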