Data Engineer

Level

Senior

Location

Jakarta

Apply Before

30 Nov 2024

Description

We are looking for a savvy data engineer to join our team of data heroes. You will be responsible for designing and building big data architecture pipelines for data lakehouses in the cloud, as well as optimizing and productionizing machine learning and predictive models. The ideal candidate is an experienced software engineer and data wrangler who enjoys building complex platforms from the ground up using the latest cloud technologies. You will cooperate with data architects and data scientists on large data projects for the biggest international brands, as well as build an internal platform framework to ensure consistent and optimal delivery. You should be a versatile self-starter eager to roll out next-gen data architectures, comfortable supporting multiple technologies, teams, solutions, and clients, and a great team player able to work within our international team with a positive, startup-minded attitude.

Responsibilities:

  • Design and implement data ingestion and processing of various data sources using public cloud (MS Azure, AWS, GCP) big data technologies like Databricks, AWS Glue, Azure Data Factory, Redshift, Kafka, Azure Event Hubs, AWS Step Functions, AWS Lambda, Azure Functions, etc.
  • Collaborate with Business Intelligence consultants and assemble large, complex data sets that meet functional and non-functional business requirements for the data lakehouse.
  • Support data scientist / analyst teams in deployment and optimization of AI / Machine Learning models and other data algorithms in services like AWS SageMaker or Azure ML.
  • Develop data pipelines to provide actionable insights into marketing automation, customer acquisition, and other key business areas.
  • Develop DevOps automation of continuous development / test / deployment processes.
  • Document implemented data pipelines and logic in a structured manner in Confluence, and plan your activities in Jira using an Agile methodology.
  • Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs, like optimizing existing data delivery, re-designing infrastructure for greater scalability, etc.
  • Support pre-sales by proposing technical solutions and accurate effort estimates.

Qualifications

  • Experience in building and productionizing big data architectures, pipelines and data sets.
  • Understanding of data concepts and patterns spanning big data, data lakes, lambda architecture, stream processing, DWH, and BI & reporting.
  • At least 2 years of experience in a Data Engineer role, with hands-on experience using the following software/tools:
  • Experience with big data tools like Hadoop, Spark, Kafka, etc.
  • Experience with object-oriented / functional / scripting languages like Python, Scala, Java, R, C++, Bash, PowerShell, etc.
  • Experience with MS Azure (Databricks, Data Factory, Data Lake, Azure SQL, Event Hub, etc.) or AWS (Glue, EC2, EMR, RDS, Redshift, SageMaker, etc.) cloud services.
  • Experience implementing large-scale, data/event-oriented pipelines and workflows using ETL tools.
  • Extensive working experience with relational databases (MS SQL, Oracle, Postgres, Snowflake, etc.) and NoSQL databases (Cassandra, MongoDB, Elasticsearch, Redis, etc.).

Other skills:

  • Strong analytical skills related to working with structured and unstructured datasets.
  • Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management.
  • Experience in setting up and using CI/CD automation tools like Azure DevOps, AWS CodePipeline, etc.
  • A precise, well-organized person with good communication skills, who can adapt to changing circumstances and is not afraid to take responsibility for their work, will do great in this role.
  • Deep hands-on development experience in MS Azure or AWS environments.
  • Past experience delivering business intelligence projects using tools like Power BI, Tableau, Qlik Sense, or Keboola.
  • Working knowledge of message queuing, stream processing, and highly scalable real-time data processing using technologies like Storm, Spark Streaming, etc.
  • Experience with data pipeline / workflow management tools like AWS Glue, Azure Data Factory, Airflow, AWS Step Functions, NiFi, etc.

Benefits

  • A device will be provided by the Company.