Big Data Engineer
Job Details
Vacancies
1 position
Experience Required
No experience required
Job Description
- Develop and maintain the Huawei Big Data stack: extract structured and unstructured data from different sources and build a structured data model. Daily work involves Hadoop, Spark, Hive, and related tools and technologies.
- Design and develop data and features for Huawei's search and recommendation systems.
- Develop reports from the data based on business requirements. Develop and maintain systems for self-service analytics and other visualization needs.
- Maintain and manage the daily running of ETL jobs and fix data-pipeline issues; focus on cost, compute, and storage optimization on a day-to-day basis.
- Ensure data quality, privacy, and security as mandated by the company, and take necessary actions.
Skills / Qualifications:
- Bachelor's or Master's degree in Computer Science or a related field, with a coding and development background in database queries and related products.
- Proficiency in coding, algorithms, and data structures is required to support the technical responsibilities of the role. Familiarity with programming languages such as Python, Java, and Scala would be advantageous.
- Strong working experience with Hive, ETL, SQL, and data modeling; proven ability to write complex SQL/Hive queries optimized to run on huge data sets with minimal resource usage.
- Working experience in large-scale data pipeline development for recommendation and search is preferred.
- Strong working experience in big data technologies such as Apache Spark, Hive, Hadoop and Linux.
- Strong capability to consolidate complex data sets from different sources and present them to different data consumers.
- Strong working experience in data visualization and report generation, using tools such as Tableau, Power BI, or FineBI.
- Working experience with large-scale data processing frameworks such as Spark, Flink, and Kafka.
- Solve data integration problems, utilizing optimal ETL patterns, frameworks, and query techniques, sourcing from structured and unstructured data sources.
- Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts.
- Responsible for maintaining data pipelines and ensuring data accuracy, validity, and quality to minimize data-related issues prior to deployment or exposure.
Interested candidates, please click "APPLY" to begin your job search journey.
We regret to inform you that only shortlisted candidates will be notified.
By sending us your personal data and curriculum vitae (CV), you are deemed to consent to PERSOL Singapore Pte Ltd and its affiliates to collect, use and disclose your personal data for the purposes set out in the Privacy Policy available at https://www.persolsingapore.com/policies. You acknowledge that you have read, understood, and agree with the Privacy Policy.
PERSOL Singapore Pte Ltd
• RCB No. 200007268E • EA License No. 01C4394
• Registration ID: Heah Sian Wei R23117518