SP Group - Data Engineer

Location: Singapore
Business sector: Data Engineering with Machine Learning Fundamentals
Job reference: 468921
Start date: 16 March 2022
SP Group is a leading utilities group in the Asia Pacific, empowering the future of energy with low-carbon, smart energy solutions for its customers. It owns and operates electricity and gas transmission and distribution businesses in Singapore and Australia, and sustainable energy solutions in Singapore and China.

 
The Role 

You will be part of the Digital Technology Team and together, you will innovate, create, and deploy digital products that empower more than 3,800 employees within SP Group and improve the quality of life for the 1.5 million commercial, industrial and residential customers that SP Group serves. We build solutions that enable sustainable, high-quality lifestyles, help consumers save energy and cost, and support national goals for a sustainable, liveable city. Now, imagine the impact you can create.
 
Singapore Power (SP Group) is looking for a Data Engineer to join the Digital Team at SP Digital (https://www.spdigital.io/) to design and build data applications and services for operational and new innovation needs. We are a brand-new team in Singapore Power, spearheading the digitization and data transformation efforts, and we operate like a startup within the organization, moving with speed and delivering with quality.
 
The Data Engineering team is part of the Data & AI organization within SP Digital that focuses on the creation, automation and maintenance of high-velocity data pipelines to support the use and consumption of data for decision-making and innovation across SP Group business units. The team also serves as subject matter experts on the running and maintenance of the in-house enterprise data lake, as well as the setup and operation of the enterprise data infrastructure.
 

What You’ll Be Doing 

  • Create and maintain multiple robust, high-performance data processing pipelines across cloud, private data centre and hybrid data ecosystems
  • Assemble large, complex data sets from a wide variety of data sources 
  • Collaborate with Data Scientists, Machine Learning Engineers, Business Analysts and business users to derive actionable insights and reliable forecasts on customer acquisition, operational efficiency and other key business performance metrics
  • Develop, deploy and maintain multiple microservices, REST APIs and reporting services
  • Design and implement internal processes to automate manual workflows, optimize data delivery and redesign infrastructure for greater scalability
  • Establish expertise in designing, analyzing and troubleshooting large-scale distributed systems 


Skills You’ll Need

  • Experience building and operating large-scale data lakes and data warehouses
  • Experience with Hadoop ecosystem and big data tools, including Spark and Kafka 
  • Experience with stream-processing systems, including Spark Streaming
  • Advanced working experience with relational SQL and NoSQL databases, including Hive, HBase and Postgres
  • Deep understanding of SQL and the ability to optimize data queries
  • Experience with object-oriented and functional scripting languages: Python, Java, Scala, etc.
  • A successful history of manipulating, processing and extracting value from large, disconnected datasets
  • Experience applying modern development principles (Scrum, TDD, continuous integration and code reviews)


Bonus: 

  • Experience with ETL tools such as Talend Big Data, Apache NiFi, etc.
  • Experience working with Hortonworks Data Platform or Cloudera Data Platform 
  • Experience with Metadata Management tools 
  • Exposure to Data Governance processes and tools 
  • Proven ability in supporting and working with cross-functional teams in a dynamic environment