About the Role
We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic Data Engineering team on a contract basis. In this role, you will play a key part in designing, building, maintaining, and optimizing scalable data pipelines and infrastructure. Your expertise will ensure our data is reliable, performant, and ready for analytics, data science, and business intelligence initiatives.
This is an exciting opportunity for a proactive, solutions-oriented professional who thrives in fast-paced environments and enjoys collaborating across diverse teams.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines using modern ELT/ETL tools and technologies.
- Build and optimize data models on cloud data warehouses (Snowflake), ensuring integrity, performance, and cost-effectiveness.
- Collaborate with analysts, data scientists, and business stakeholders to translate requirements into technical solutions.
- Implement and manage orchestration workflows (Airflow, Prefect, etc.) for timely and accurate data delivery.
- Develop and enforce data quality frameworks to ensure reliability and consistency.
- Optimize pipeline and query performance across large datasets.
- Contribute to data platform architecture and design, applying modern engineering principles.
- Ensure compliance with data governance, security, and regulatory frameworks.
- Document processes, troubleshoot issues, and drive continuous improvement.
- Mentor junior team members and promote best practices where applicable.
Must-Have Skills & Experience
- 5+ years of proven experience as a Senior Data Engineer (or similar role).
- Strong expertise with dbt for data transformation and modeling.
- Hands-on experience with Snowflake (data modeling, performance tuning, cost optimization, and security).
- Solid experience with Databricks, including:
  - Apache Spark (PySpark and/or Scala)
  - Familiarity with R on Databricks for analytics
- Advanced SQL skills for complex, optimized queries.
- Proficiency in Python or Scala.
- Experience with orchestration tools (Airflow, Prefect, Dagster).
- Familiarity with major cloud platforms (AWS, Azure, GCP).
- Strong knowledge of ETL/ELT principles and dimensional modeling.
- Experience with Git or other version control systems.
To be considered for the role, click the 'apply' button. For more information about this and other opportunities, please contact Yash Kumar Jain on 03 8680 4235 or email ykumarjain@paxus.com.au and quote the above job reference number.
Paxus values diversity and welcomes applications from Indigenous Australians, people from diverse cultural and linguistic backgrounds and people living with a disability. If you require an adjustment to the recruitment process, including the application form in an alternate format, please contact me on the above contact details.