DevOps Engineer - Big Data - GCP

Job Title: DevOps Engineer - Big Data - GCP
Contract Type: Contract
Location: London, England
Salary: £550.00 - £750.00 per day
Start Date: ASAP
Reference: HQ00091022_1552411956
Contact Name: John Wright
Contact Email: j.wright@lawrenceharvey.com
Job Published: March 12, 2019 17:32

Job Description

CONTRACT OPPORTUNITY: DevOps Engineer - Big Data - Google Cloud Platform (GCP) - Central London - Initial 6-month contract - Competitive daily rate (Dependent on experience)

One of my leading clients, a global organisation based in Central London, is currently recruiting DevOps Engineers (ideally GCP) with Big Data experience to join them on-site for an initial 6-month contract. My client is looking to migrate their Big Data Platform from on-premises Hadoop to Google Cloud Platform (GCP).

The DevOps Engineer provides expert guidance and delivers through self and others to:
- Integrate data from several sources in the Big Data Programme as needed for analysis and for commercial actions;
- Build applications that make use of large volumes of data and produce outputs that enable commercial actions generating incremental value;
- Deliver and implement core capabilities (frameworks, platform, development infrastructure, documentation, guidelines and support) to speed up Local Markets delivery in the Big Data Programme, assuring quality, performance and alignment of component releases in the platform with the Group technology blueprint;
- Support local markets and Group functions in deriving business value from the operational data.

Core competencies, knowledge and experience required:
- Expert level experience in designing, building and managing applications to process large amounts of data in a Hadoop/DevOps (GCP) ecosystem;
- Extensive experience with performance tuning applications on Hadoop/GCP and configuring Hadoop/GCP systems to maximise performance;
- Experience building systems to perform real-time data processing using Spark Streaming and Kafka, or similar technologies (see the sketch after this list);
- Experience with the common SDLC, including SCM, build tools, unit testing, TDD/BDD, continuous delivery and agile practices;
- Experience working in large-scale multi-tenancy Hadoop environments;
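
As a minimal illustration of the Spark Streaming and Kafka requirement above, the Scala sketch below reads a Kafka topic with Spark Structured Streaming and produces a windowed count. This is not the client's code: the broker address ("broker:9092"), topic name ("events") and the one-minute count are placeholders chosen for illustration, and it assumes a Spark build with the spark-sql-kafka connector on the classpath.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object StreamingSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-streaming-sketch")
          .getOrCreate()

        // Read a stream of records from a Kafka topic (placeholder broker and topic).
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()
          .selectExpr("CAST(value AS STRING) AS value", "timestamp")

        // Simple one-minute windowed count as a stand-in for real commercial logic.
        val counts = events
          .groupBy(window(col("timestamp"), "1 minute"))
          .count()

        counts.writeStream
          .outputMode("complete")
          .format("console")
          .start()
          .awaitTermination()
      }
    }

In a real deployment the console sink would be replaced by a durable sink such as a table or object storage.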

Technical / professional qualifications required:
- Expert level experience with Hadoop/GCP ecosystem (Spark, Hive/Impala, HBase, Yarn); desirable experience with Cloudera distribution;
- Strong software development experience in Scala and Python programming languages; other functional languages desirable;
- Experience with Unix-based systems, including bash programming
- Experience with columnar data formats (e.g. Parquet or ORC; see the sketch after this list);
- Experience with other distributed technologies such as Cassandra, Solr/Elasticsearch, Flink or Flume would also be desirable.
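
As a brief illustration of working with columnar formats in this stack, the Scala sketch below writes and reads a small Parquet dataset with Spark. The sample data, column names and the /tmp path are illustrative placeholders only, assuming a local SparkSession rather than the client's cluster.

    import org.apache.spark.sql.SparkSession

    object ColumnarSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("columnar-format-sketch")
          .master("local[*]") // local run for illustration only
          .getOrCreate()
        import spark.implicits._

        // Write a small DataFrame as Parquet, a columnar format common on Hadoop and GCP.
        val df = Seq(("uk", 10L), ("de", 7L)).toDF("market", "events")
        df.write.mode("overwrite").parquet("/tmp/events_parquet")

        // Reading back benefits from column pruning and predicate pushdown.
        spark.read.parquet("/tmp/events_parquet")
          .filter($"events" > 5)
          .show()
      }
    }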

Key performance indicators:
- Delivery of integrated use cases from Local Markets that add value to the business through the Big Data Programme;
- Development of core frameworks to speed up and facilitate integration of Local Markets developments in the Big Data Programme;
- Speed of on-boarding data sources and use cases for EU Hub markets

Is this exciting opportunity of interest? If so, please respond with your CV and I will be in touch ASAP.

Lawrence Harvey is acting as an Employment Business in regard to this position. Visit our website www.lawrenceharvey.com and follow us on Twitter for all live vacancies @lawharveyjobs