As part of our team, you will apply your engineering skills and passion to developing modern architectures that enable our data-driven digital business.

Data Engineers are responsible for the design, architecture and support of the systems, services and applications required for the collection, storage, processing and analysis of all forms of data, enabling data-driven decisions and outcomes within the organization.

Responsibilities:

  • Working collaboratively with other engineers, data scientists, analytics teams, and business product owners in an agile environment:
    • Architect, build and support the operation of our cloud and on-premises enterprise data infrastructure and tools
    • Design robust, reusable and scalable data-driven solutions and data pipeline frameworks to automate the ingestion, processing and delivery of both structured and unstructured data, in batch and real-time streaming form
    • Build data APIs and data delivery services to support critical operational processes, analytical models and machine learning applications
    • Assist in selection and integration of data related tools, frameworks and applications required to expand our platform capabilities
    • Understand and implement best practices in management of enterprise data, including master data, reference data, metadata, data quality and lineage

Qualifications:

  • Bachelor's degree in a technical field (e.g. Computer Science, Math, Engineering) or related experience
  • 2+ years of combined experience in data engineering, data analysis, data warehousing, data integration or business intelligence in a similarly sized organization
  • 2+ years of experience architecting, building and administering big data and real-time streaming analytics architectures in both on-premises and cloud environments (AWS, Azure, Google Cloud), leveraging technologies such as Hadoop, Spark, S3, EMR, Aurora, DynamoDB, Redshift, Neptune, Cosmos DB
  • 1+ years of experience architecting, building and administering large-scale distributed applications
  • 1+ years of experience with Linux operations and development, including basic commands and shell scripting
  • 2+ years of experience with execution of DevOps methodologies and Continuous Integration/Continuous Delivery within a large-scale data delivery environment
  • Software development experience in at least two of the following languages: Java, Python, Scala, Node.js
  • Expertise in using SQL for data profiling, analysis and extraction

Preferred Qualifications:

  • Master's degree in a technical field (e.g. Computer Science, Math, Engineering) or related experience
  • 2+ years of experience with NoSQL implementations (MongoDB, Cassandra, HBase)
  • 3+ years of experience in implementing serverless architecture leveraging AWS Lambda or similar technology
  • 1+ years of experience with data visualization tools such as Tableau or Power BI
  • Solid understanding of the Hadoop ecosystem (e.g. HDFS, MapReduce, HBase, Pig, Sqoop, Spark, Hive)
  • 2+ years of experience with data warehousing architecture and implementation, including hands-on experience developing ETL processes (Informatica, SSIS, etc.)
  • Relevant technology or platform certification (e.g. AWS Certified, Microsoft Certified)

Behavioral & Leadership Competencies:

  • Detail-oriented and results-driven, with a strong customer focus
  • Ability to work effectively within a team environment
  • Problem-solving and effective technical communication skills
  • Ability to meet tight deadlines, multi-task and prioritize workload
  • Willingness to learn and keep pace with the latest advances in the field, grasping new technologies rapidly as needed to progress varied initiatives

Working Environment:

  • Remote or at our office based in Hanoi, Vietnam
  • Due to the nature of the role, work outside of normal business hours may occasionally be required

To apply for this job, email your details to contact@aigenexpert.com