Systems Research Engineer - Data and ML Infrastructure (Remote)
About the Role
We are seeking a talented Systems Research Engineer - Data and ML Infrastructure to join our innovative team at Feedzai. This remote position allows you to work from anywhere while contributing to cutting-edge technology that safeguards global commerce. As a Systems Research Engineer, you will be at the forefront of developing and implementing advanced machine learning and data processing systems.
What You'll Do
- Execute the full software development life cycle, writing well-designed and testable code.
- Conduct state-of-the-art research on topics including Data Processing, Machine Learning, and Graphs.
- Communicate findings and results to internal and external stakeholders, including writing scientific papers and patents.
- Design and develop new Cloud Native services, focusing on Data and ML Infrastructure.
- Collaborate with Product and Data Science teams to build distributed systems that support high-load operations.
Requirements
- BSc/MSc degree in Computer Science or a similar technical degree.
- 4+ years of experience in developing backend services.
- Good understanding of distributed systems, multi-threading, and object-oriented design principles.
- Strong programming skills, preferably in Java (or any JVM language) and Python.
- Ability to digest state-of-the-art research and translate it into product requirements.
Nice to Have
- Knowledge of Big Data technologies such as Spark, Kafka, and EMR/Dataflow.
- Familiarity with workflow orchestration tools such as Airflow or Flyte.
What We Offer
- Competitive salary range of $90,000 - $130,000.
- Flexible remote work environment.
- Access to continuous learning and professional development opportunities.
- Collaborative and inclusive team culture.
- Opportunity to work on innovative projects that make a real impact.
This role offers a unique opportunity to work on cutting-edge technology in a collaborative remote environment. Feedzai is a leader in AI-driven risk management.
Who Will Succeed Here
- Proficiency in Java and Python, with a proven ability to implement scalable machine learning algorithms in a distributed-system environment, leveraging frameworks like Apache Spark and tools such as Apache Kafka.
- A strong problem-solving mindset and the autonomy to tackle complex data processing challenges in a remote work setup, demonstrating self-motivation and adaptability to changing project requirements.
- Experience with cloud-based data infrastructures such as AWS EMR and orchestration tools like Airflow, coupled with a proactive approach to optimizing data workflows and enhancing system performance.