Experience: 4+ years
Job Type: Full Time
English Level: B2
Jaxel is seeking a Senior Big Data Engineer to join our cutting-edge team.
Responsibilities
Design, build, and maintain reliable and scalable big data pipelines and infrastructure
Develop ETL/ELT processes for data ingestion, transformation, and integration
Optimize and manage distributed data processing using tools like Spark, Hadoop, or Flink
Collaborate with data scientists, analysts, and other engineers to understand data needs and deliver solutions
Ensure data quality, governance, and security best practices are followed
Monitor and troubleshoot data workflows and system performance
Write clean, maintainable code and keep documentation up to date
Requirements
4+ years of experience in Big Data engineering or related roles
Proficiency with big data technologies such as Apache Spark, Hadoop, Hive, Presto, or Flink
Strong programming skills in Python, Scala, or Java
Experience with data orchestration tools such as Apache Airflow, Luigi, or AWS Step Functions
Solid understanding of distributed systems and data architecture patterns
Experience with data storage technologies (e.g., HDFS, S3, Delta Lake, Parquet)
Familiarity with cloud data platforms like AWS, Google Cloud Platform (GCP), or Azure
Proficiency with SQL and both relational and NoSQL databases
Strong problem-solving and debugging skills
Nice to Have
Experience with real-time data processing (e.g., Kafka Streams, Apache Beam)
Knowledge of data modeling and warehouse design (e.g., Snowflake, Redshift, BigQuery)
Exposure to DevOps practices and CI/CD for data pipelines
Understanding of data privacy and compliance (GDPR, HIPAA, etc.)