DAT-025 Research Engineer

Toronto (On-site)

The Research Engineer will join a growing Research Engineering team and report to the Head of Data and Analytics. Their primary responsibility is to develop a robust, scalable firm-wide data warehouse that ingests, processes, validates, analyzes, and reports on financial market and internal data. The data warehouse will support various research teams in data exploration, online analytical processing, and data-driven decision support.

The ideal candidate is an entrepreneurial self-starter who is passionate about continuous learning in a rapidly evolving software/data engineering space. Extreme technical competence, intellectual curiosity, and attention to detail are essential, as are flexibility and comfort working in a growing organization.

This position will be based in Toronto and work closely with teams in New York and London.

RESPONSIBILITIES:

  • Collaborate with stakeholders to evaluate business needs and objectives; analyze, organize, and combine raw data from multiple sources
  • Design and model complex, large-scale (TB+) datasets in a standardized, resilient, and scalable manner
  • Explore and improve solutions to enhance data discoverability, security, exportability, understandability, and reliability
  • Develop bespoke analytical applications to facilitate data accessibility and adoption within investment research
  • Create custom solutions to continuously monitor data pipelines; develop and improve the underlying cloud infrastructure and platforms; evaluate and perform due diligence on new technologies before introducing them to the firm
  • Own development projects end to end, from establishing business requirements through design and development, testing and release management, to system monitoring and maintenance

QUALIFICATIONS:

  • 2+ years of professional experience working with financial data from major vendors (e.g., S&P, Bloomberg, FactSet)
  • Technical expertise in data models and data analysis, with an in-depth understanding of SQL and ETL (extract, transform, load) processes
  • Strong technical, numerical, and analytical skills
  • Hands-on experience with any of the following is a plus:
      • Interacting with data at scale (e.g., Spark, Pandas)
      • Exposure to and previous experience with market data (e.g., S&P, Bloomberg, FactSet)
      • CI/CD best practices and tooling (e.g., Jenkins, Cloud Build)
      • Cloud architecture and platforms (e.g., AWS, GCP)
      • Containerization and workflow orchestration (e.g., Docker, Kubernetes, Airflow)
      • Infrastructure-as-Code concepts and tools (e.g., Ansible, Terraform)
      • Contributions to the open-source community
  • Excellent interpersonal and communication skills
  • Track record of continuous learning and proactive knowledge sharing
  • Highest degree of integrity, professionalism and confidentiality
  • Honors degree from a reputable university