Responsibilities:
- Design, build, and manage data pipelines and batch or real-time data systems.
- Create extract, transform, and load (ETL) procedures that support the extraction and manipulation of data from various sources.
- Maintain and improve the data infrastructure needed for accurate data extraction, transformation, and loading from a range of data sources.
- Automate data ingestion, aggregation, and ETL processing workflows.
- Transform raw data from OLAP databases into datasets usable by both technical and non-technical stakeholders.
- Collaborate with data scientists and functional leaders from various business units to implement machine learning models.
- Apply data controls to maintain data privacy, security, compliance, and quality for assigned areas of ownership.
- Monitor data system performance and apply optimization strategies.
- Use quality control methods to ensure data accuracy, integrity, privacy, security, and compliance.

Qualifications:
- More than two years of relevant work experience
- Expertise in relational databases, database architecture, and advanced SQL
- Outstanding command of data pipeline and workflow management
- Proficiency in developing and implementing machine learning models
- Strong mathematical, analytical, and problem-solving abilities
- Familiarity with the Kubernetes container platform
- Outstanding organizational and communication abilities
- Demonstrated ability to work both independently and collaboratively
- Prior experience with data streaming systems (e.g., Apache Kafka, MySQL, Apache Pinot, etc.)
- Previous experience with distributed computing and big data sets (e.g., Hadoop/Spark, Presto/Trino, Superset, etc.)
- Expertise in programming languages such as Go, Java, Python, etc.

Interested applicants may email their resume, a copy of their identity card, a photo, and relevant supporting certificates to hr@winson-group.com. Salary is negotiable; suitable candidates will be invited to interview. All personal data collected will be kept confidential and used for recruitment purposes only.