Sr. Data Engineer

Yalochat

Data Science
Latambarcem, Goa, India
Posted on Friday, October 13, 2023

Yalo

Hi! This is Yalo! We are on a mission to bring conversational commerce to the world...

Remember what it was like to interact with businesses that knew and understood you, that could recommend exactly what you needed, and that could get you what you wanted with a simple message? Yep... neither do we. That is why at Yalo we are marrying the scale of digital commerce with the personalization and simplicity of conversations to help companies delight their users.

We know that traditional SaaS companies focus on first-world problems... we don't! Having started in Latin America, our roots are in emerging markets, so we care about bringing amazing experiences to a population that has traditionally been underserved, such as the small shop owner in Brazil who is ordering online for the first time.

What are we looking for?

We are seeking a skilled Senior Data Engineer with a deep understanding of data structures and formats, experience building ETLs, and a good understanding of data pipeline technologies such as Kafka or Snowplow. This person will also be responsible for the integrity and security of the data models they build, and will help maintain a semantic layer.

What are the responsibilities for this role?

  • Design, build, and maintain batch or real-time data pipelines in production.
  • Maintain and optimize the data infrastructure required for accurate extraction, transformation, and loading of data from a wide variety of data sources.
  • Build and maintain Kafka and Snowplow pipelines.
  • Develop ETL (extract, transform, load) processes to help extract and manipulate data from multiple sources.
  • Help to design and maintain a semantic layer.
  • Automate data workflows such as data ingestion, aggregation, and ETL processing (a sketch of one such workflow follows this list).
  • Transform raw data in the data warehouse into consumable datasets for both technical and non-technical stakeholders.
  • Partner with data scientists and data analysts to deploy machine learning and data models in production.
  • Build, maintain, and deploy data products for analytics and data science teams on the GCP platform.
  • Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures.
  • Monitor data systems performance and implement optimization strategies.
  • Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership.
  • Collaboration: Work closely with cross-functional teams, product managers, and stakeholders to ensure the delivery of high-quality software.
  • Continuous Learning: Stay updated with the latest trends and technologies in data systems, ensuring that our systems remain state-of-the-art.
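
To make the day-to-day concrete, here is a minimal sketch of the kind of batch workflow this role would automate, assuming Airflow orchestrating a BigQuery transformation on GCP. The DAG, project, dataset, and table names are all hypothetical, not a description of Yalo's actual pipelines.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

    # Hypothetical daily batch job: aggregate raw events into a consumable table.
    with DAG(
        dag_id="daily_orders_etl",
        start_date=datetime(2023, 10, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        build_daily_orders = BigQueryInsertJobOperator(
            task_id="build_daily_orders",
            configuration={
                "query": {
                    "query": """
                        SELECT DATE(event_ts) AS order_date,
                               shop_id,
                               COUNT(*) AS orders
                        FROM `my-project.raw.order_events`  -- hypothetical source table
                        GROUP BY order_date, shop_id
                    """,
                    "useLegacySql": False,
                    "destinationTable": {
                        "projectId": "my-project",
                        "datasetId": "analytics",
                        "tableId": "daily_orders",
                    },
                    "writeDisposition": "WRITE_TRUNCATE",
                }
            },
        )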

Job Requirements (Must have)

  • Bachelor’s/Master’s degree in Computer Science, Information Systems, or a related field.
  • Minimum of 5 years of data engineering experience, ideally in cloud environments, and a good understanding of microservices and APIs.
  • Working knowledge of Kafka pipelines (see the consumer sketch after this list).
  • Strong experience in designing and building ETL models and data workflows.
  • Working knowledge of designing and implementing a BI semantic layer.
  • Strong foundation in data structures, algorithms, and software design.
  • Advanced SQL skills and experience with relational databases and database design.
  • Experience with the BigQuery cloud data warehouse and similar solutions such as Snowflake and Databricks.
  • Working knowledge of object-oriented languages (e.g., Python, Java).
  • Strong proficiency in data pipeline and workflow management tools (e.g., Airflow).
  • Strong project management and organizational skills.
  • Excellent problem-solving and communication skills.
  • Proven ability to work independently and with a team.
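
As a concrete illustration of the Kafka requirement above, here is a minimal consumer sketch in Python using the kafka-python library. The topic, broker address, and event schema are all hypothetical assumptions for the example.

    import json

    from kafka import KafkaConsumer  # pip install kafka-python

    # Hypothetical consumer that validates raw JSON events before warehouse loading.
    consumer = KafkaConsumer(
        "raw-events",                        # topic name is illustrative
        bootstrap_servers="localhost:9092",  # broker address is illustrative
        group_id="event-quality-checks",
        auto_offset_reset="earliest",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    for message in consumer:
        event = message.value  # assumes each message is a JSON object
        # Basic integrity check: skip events missing required keys.
        if not {"event_id", "event_ts", "payload"} <= event.keys():
            continue  # in practice, route to a dead-letter topic instead
        print(event["event_id"])  # downstream loading would happen here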

Nice to have:

  • Expertise in open table formats such as Hudi, Iceberg, and Delta.
  • Expertise with Snowplow pipelines.
  • Expertise in databases like Druid, Pinot, and Elasticsearch.
  • Collaborative project experience in Data Governance.

What do we offer?

  • Unlimited PTO policy
  • Competitive compensation within the market range
  • Remote work available (±3 hours from CT)
  • Flexible hours (driven by results)
  • Start-up environment
  • International teamwork
  • Your career here is limited only by you

We care.

We keep it simple.

We make it happen.

We strive for excellence.

At Yalo, we are dedicated to creating a workplace that embodies our core values: caring, initiative, excellence, and simplicity. We believe in the power of diversity and inclusivity, where everyone's unique perspectives, experiences, and talents contribute to our collective success. As we embrace and respect our differences, we strive to create something extraordinary for the benefit of all.
We are proud to be an Equal Opportunity Employer, providing equal opportunities to individuals regardless of race, color, religion, national or ethnic origin, gender, sexual orientation, gender identity or expression, age, disability, protected veteran status, or any other legally protected characteristic. Our commitment to fairness and equality is a fundamental pillar of our company.


At Yalo, we uphold a culture of excellence. We constantly challenge ourselves to go above and beyond, delivering remarkable results and driving innovation. We encourage each team member to take initiative and make things happen, empowering them to bring their best ideas forward and contribute to our shared goals.