Engineer II (Data)

Welcome to Ross Stores, Inc., where our differences make us stronger. At Ross and dd’s, inclusion is a way of life. We care about our Associates and the communities we serve, and we value their differences. We are committed to building diverse teams and an inclusive culture. We respect and celebrate the diversity of backgrounds, identities, and ideas of those who work and shop with us. Come join us as we continue our diversity, equality, and inclusion journey!

The Data Engineer II will work in the Ross Data Technology team to build and deploy scalable, high-performance data pipelines needed to meet the company's growth trajectory and analytic needs. The role requires deep technical expertise in building out data solutions from start to finish, including data acquisition, modeling, transformations (ELT/ETL), data architecture, orchestration, and data consumption.

The Data Engineer II will work closely with our leadership and technical teams to understand business and technical requirements and deliver robust, reusable, and scalable solutions that drive key business decisions. This scope of work includes the development of data pipelines on our data lake and EDW platforms.

Responsibilities
  • Develop data pipelines for use on our analytics platforms. Development tasks include ingestion, acquisition, modeling, data transformation, orchestration, consumption, access patterns, and deployment of code.
  • Perform basic data/business analyst functions such as requirements gathering, technical design, and testing documentation. Work directly with our internal business partners and cross-functional technical teams in a collaborative manner.
  • Research and development tasks, including buildout of "proof of technology" efforts for new data software components.
  • Participate as requested in other corporate initiatives that may involve ad-hoc data requests and new data needs.
  • Perform administration and performance-tuning tasks for the data pipeline technical stack.
  • Provide ongoing maintenance, enhancement, and support of delivered data pipeline solutions so that they continue to meet business needs.
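The responsibilities above span the full pipeline lifecycle, from acquisition through transformation to consumption. As a minimal illustration only (not the team's actual stack), that flow can be sketched in plain Python against a CSV source; the field names `sku` and `qty` are hypothetical:

```python
import csv
import io

def ingest(source):
    """Acquisition: read raw records from a CSV source (any file-like object)."""
    return list(csv.DictReader(source))

def transform(rows):
    """Transformation/conforming: trim whitespace, drop rows missing the
    (hypothetical) required 'sku' key, and cast 'qty' to an integer."""
    cleaned = []
    for row in rows:
        row = {k: v.strip() for k, v in row.items()}
        if not row.get("sku"):
            continue  # conform: every record needs a key
        row["qty"] = int(row["qty"])
        cleaned.append(row)
    return cleaned

def load(rows, target):
    """Consumption stand-in: append conformed rows to a target
    (a plain list here; a warehouse table in a real pipeline)."""
    target.extend(rows)
    return len(rows)

# End-to-end run against an in-memory CSV source
raw = io.StringIO("sku,qty\nA100, 3\n,9\nB200,5\n")
warehouse = []
loaded = load(transform(ingest(raw)), warehouse)
```

In production this shape would be implemented with the orchestration and ingestion tooling named later in the posting, but the ingest/transform/load separation is the same.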

Competencies
  • Planning
  • Communication
  • Listening
  • Problem Solving
  • Customer Focus
  • Approachability
  • Dealing with Ambiguity
  • Motivating Others

Qualifications
  • 5+ years of in-depth experience developing and deploying data pipelines for analytics use cases and workloads
  • Experience working with immutable data storage (HDFS, S3, ADLS)
  • Experience building data models from scratch (Kimball, star-schema, 3NF)
  • Experience building ingestion processes using an ingestion tool; Sqoop, StreamSets, and/or Attunity preferred
  • Experience developing solutions for a multi-terabyte data warehouse platform (Snowflake, Redshift, Azure Synapse, Netezza, Greenplum, etc.)
  • Experience performing basic data analysis functions and working directly with end users throughout the SDLC
  • Experience with agile SDLC processes (sprints, scrums, etc.)
  • Experience performing common data preparation tasks: munging, cleaning, profiling, conforming, basic statistics, etc.
  • Advanced SQL experience, including hand-writing code
  • Scripting experience with shell and/or Python
  • Strong analytical, diagnostic, and problem-solving skills related to database management and data analysis.
  • Strong documentation skills to facilitate support of ongoing operational responsibilities.
  • Must possess excellent written and oral communication skills and the ability to clearly define projects, objectives, goals, schedules, and assignments.
  • Strong emphasis on unit testing and delivering defect-free code.
  • Must be able to work with ambiguous direction and undefined requirements, while driving towards a clear deliverable
  • Must be able to work independently and take complete ownership of a deliverable.
  • Bachelor's degree or equivalent relevant experience. Must be able to display requisite knowledge, experience, and technical aptitude.
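Several of the qualifications above describe routine data-preparation work: munging, cleaning, profiling, and basic statistics. A minimal profiling sketch in Python, using hypothetical store records as input (records are assumed to arrive as dicts):

```python
from collections import Counter

def profile_column(rows, column):
    """Basic profiling statistics for one column across a set of records:
    row count, null/missing count, distinct-value count, and the most
    frequent value."""
    values = [row.get(column) for row in rows]
    present = [v for v in values if v not in (None, "")]
    return {
        "rows": len(values),
        "nulls": len(values) - len(present),
        "distinct": len(set(present)),
        "top": Counter(present).most_common(1)[0][0] if present else None,
    }

# Usage with hypothetical store records
records = [
    {"store": "CA-01", "dept": "shoes"},
    {"store": "CA-01", "dept": ""},
    {"store": "TX-07", "dept": "home"},
]
stats = profile_column(records, "dept")
```

Output like this (row counts, null rates, cardinality) is typically what drives the conforming and cleaning decisions made earlier in a pipeline.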

Preferred Qualifications
  • Experience building pipelines using modern ETL platforms such as Talend, dbt, or Azure Data Factory
  • Experience developing on a cloud-native data warehouse platform such as Snowflake, Redshift, or Azure Synapse
  • Experience building and deploying pipelines using cloud-native tooling and platforms (Azure, AWS, GCP, etc.)
  • Experience administering Hive or CDP platforms
  • Knowledge of retail operations and/or retail supply chain operations and retail data models (strongly preferred)

Physical Requirements
  • Job requires the ability to work in an office environment, primarily on a computer.
  • Requires sitting, standing, walking, hearing, talking on the telephone, attending in-person meetings, typing, and working with paper/files, etc.
  • Consistent timeliness and regular attendance.
  • Vision requirements: Ability to see information in print and/or electronically.


This job description is a summary of the primary duties and responsibilities of the job and position. It is not intended to be a comprehensive or all-inclusive listing of duties and responsibilities. Contents are subject to change at management's discretion.

Ross is an equal employment opportunity employer. We consider individuals for employment or promotion according to their skills, abilities and experience. We believe that it is an essential part of the Company's overall commitment to attract, hire and develop a strong, talented and diverse workforce. Ross is committed to complying with all applicable laws prohibiting discrimination based on race, color, religious creed, age, national origin, ancestry, physical, mental or developmental disability, sex (which includes pregnancy, childbirth, breastfeeding and medical conditions related to pregnancy, childbirth or breastfeeding), veteran status, military status, marital or registered domestic partnership status, medical condition (including cancer or genetic characteristics), genetic information, gender, gender identity, gender expression, sexual orientation, as well as any other category protected by federal, state or local laws.