About the Role
Are you passionate about real-time data and cutting-edge streaming technologies? We're looking for a Streaming Data Engineer
to build and optimise high-performance data pipelines that process millions of events per second. If you thrive in fast-paced environments and love solving complex challenges, this is the role for you.
What You'll Do 🔥
- Architect, build, and optimize real-time data streaming pipelines using Kafka, Flink, Spark Streaming, or similar technologies.
- Design scalable, low-latency solutions to process massive volumes of data.
- Work closely with software engineers, data scientists, and DevOps teams to integrate streaming solutions into our platform.
- Monitor and tune pipeline performance, ensuring fault tolerance, security, and efficiency.
- Apply data governance, compliance, and engineering best practices across real-time data systems.
What You Bring 💡
- Proven expertise in real-time data streaming and event-driven architectures.
- Strong hands-on experience with Kafka, Flink, Spark Streaming, or similar technologies.
- Proficiency in Java, Scala, or Python for data engineering.
- Deep understanding of distributed computing, messaging systems, and cloud platforms (AWS, Azure, GCP).
- Familiarity with SQL and NoSQL databases for real-time analytics.
- Experience with Docker, Kubernetes, and CI/CD pipelines is a plus.
To be considered for this role, click the 'Apply' button. For more information about this and other opportunities, please contact Yash Kumar Jain on 03 8680 4235 or email ykumarjain@paxus.com.au, quoting the above job reference number.
Paxus values diversity and welcomes applications from Indigenous Australians, people from diverse cultural and linguistic backgrounds, and people living with a disability. If you require an adjustment to the recruitment process, including the application form in an alternative format, please contact me using the details above.