Job Description
About NewCombin
Our name, NewCombin, refers to the “new combination” of seasoned professionals with extensive experience and young people with a strong disruptive imprint.
It is no coincidence: Marcelo, Nicolás and Ian, the founding partners, are a father, a son and a friend united by the ambition to empower organizations through technology.
Since 2017 we have sought to combine these qualities in our professional profiles to deliver unique, high-quality solutions to our clients, so that we can nurture each other and learn together along the way.
If you are interested in taking on a challenge that:
- Allows you to work with multiple industries and learn from each of them
- Boosts your English by working with teams located around the world
- Allows you to see the direct impact of your work on the users of the solution you created…
This position is for you!
In this role, you’ll get to:
- Design and build automated and secure ingestion pipelines (SFTP, cloud storage, server-to-server authentication).
- Process large-scale datasets (terabytes) in CSV, Parquet, Excel, JSON, and custom formats.
- Implement high-performance transformations and processing using local analytical engines (DuckDB or equivalent).
- Develop extensible scripts and AI-assisted automation workflows to support new data sources.
- Implement data integrity validations (completeness, time coverage, file validation).
- Build infrastructure compatible with AWS, Azure, and/or GCP.
- Operate and maintain production pipelines with a focus on reliability and resilience.
- Respond to ingestion failures and changes in data formats.
- Optimize performance and reduce downtime risks in audit-time-sensitive environments.
- Collaborate with architects, domain specialists, and audit teams.
- Translate business rules into technical ingestion logic.
- Participate in technical conversations with clients when needed.
- Contribute to the system’s evolution and long-term roadmap.
We are looking for you if you:
- Have 6+ years of experience in Data Engineering or Pipeline Engineering.
- Have strong proficiency in Python (or another equivalent scripting language).
- Have experience building ingestion systems for large, messy, and fragmented datasets.
- Have experience with parallelizing ingestion and processing workflows.
- Have knowledge and hands-on experience with DuckDB or equivalent embedded analytical engines.
- Have experience with SFTP, secure data transfers, and authentication mechanisms.
- Have experience with at least one cloud provider: AWS, Azure, or GCP.
- Have experience with Infrastructure as Code (Terraform or similar).
- Are fluent in English (direct communication with a US-based team and stakeholders)
- Are based in LATAM
Desirable skills
- Experience with financial auditing systems, royalties, or compliance.
- Knowledge of the music, media, or streaming data domain.
- Experience working with multi-source and highly fragmented data ecosystems.
- Previous experience in client-facing roles.
- Background in DevOps or Platform Engineering.
- Use of AI tools applied to data engineering workflows.
We offer
- Excellent work environment
- 100% remote
- Long-term relationship
- Opportunity to work with global teams