Financial Data Science Association

(201) 984-3456

Deep Data Delivery Standards

The Deep Data Delivery Standards are a public good that any asset owner or asset manager is explicitly permitted to use in contracting third party data providers. Third Party Data Providers can now submit for recognition as Bronze, Silver, or Gold standard in Deep Data Delivery by following the instructions at the end of this page.

“As a provider who has always been committed to high quality data that is valued by clients, we welcome this market-based approach to recognising data quality. We look forward to discussing the application of these principles further as our dataset expands to include the broader and deeper range of indicators that will be available in the Vigeo Eiris database.”

Peter Webster (EIRIS, now part of Vigeo Eiris)

“We welcome the initiative for the Deep Data Delivery Standards and Principles and are intending to submit for recognition of our Deep Data status.”

Jillis Herpers (MSCI ESG)

“We strongly appreciate this initiative aimed at driving quality and transparency of ESG data sets and are supportive of the specifications’ objectives. A decision to submit for bronze, silver or gold recognition still depends on details and definitions specifying the concrete requirements for some of the standards.”

Robert Hassler (oekom research)

“RepRisk welcomes the Deep Data Delivery Standards and intends to submit for recognition of our own Deep Data status.”

Dr. Philipp Aeby (RepRisk)

“We welcome the Deep Data Delivery Standards and are intending to submit for recognition of our Deep Data status.”

Hendrik Bartel (TruValue Labs)

Version 1.0, launched at RI Europe June 22nd 2016

Deep data sets are expected to be delivered …

  1. … with a minimum of 5 years of historical data on at least 30 independent indicators per data set (e.g. credit rating, ESG data), whereby any data point that is not delivered as reported at the respective point in time should be flagged as backfilled;
  2. … with 98% value-weighted market coverage wherever a market (e.g. equity index) is claimed to be covered;
  3. … with an assurance that ratings will be reconsidered for at least 8.25% of the companies covered in the average month of the following year;
  4. … including consistent, accurate identifiers (e.g. ISINs) for 99% of the firms covered in every month of current and historical data coverage;
  5. … in machine-readable format (e.g. CSV, XML) and with proper documentation of the data structure [1];
  6. … with an assurance of individual rating independence, meaning that none of the rated entities in the respective market (e.g. equity index) financially contributed to their rating or paid for access solely to their own rating;
  7. … with an assurance of organizational rating independence, meaning that whenever rating agencies win entities as new clients which they also rate, an independent analysis [2] is conducted if these new clients receive statistically significantly [3] higher ratings than in the year before, and any biases found in this analysis will be addressed within 12 months;
  8. … with an assurance that all research or rating reports in the following year will indicate the names and office locations of all analysts substantially involved in the analysis [4], as well as the extent to which their data sources exceed those self-reported by the rated entity [5];
  9. … with an assurance that all research or rating reports in the following year will include a logbook detailing any errata, where applicable, as well as the dates and roles of participants in communication with the rated companies;
  10. … accompanied by the ratio of the rating agency’s research costs to total costs [6], or the ratio of research head count to total head count, in the most recent financial year.

1. Proper documentation may be understood as documenting each row and column of the data structure, including relevant definitions, third party providers and methodological descriptions, as well as any relevant adjustments to definitions, third party providers and methodologies that might have occurred over time.
2. Independent analysis may be understood as an analysis by a third person or third party that was neither directly involved in the client acquisition process nor the rating process.
3. The analysis may use 1%, 5% or 10% as common statistical significance levels.
4. Whenever a rating process is fully automated, a rating agency may indicate the data scientist(s) substantially involved in designing the rating process.
5. This extent may be communicated by classifying data that is self-reported by the rated entity as representing (i) all, (ii) most, (iii) about half, (iv) some or (v) none of the information underlying the assessment.
6. Research costs or head count includes staff costs of researchers, costs of staff providing or supplies needed for research, and IT related to research (i.e. data processing and data delivery).
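For readers assessing a candidate delivery against the quantitative thresholds in principles 1–4, the tests reduce to a handful of simple ratios. The sketch below is purely illustrative and not part of the Standards; the function names and the way the inputs are aggregated are assumptions about how a data set might be summarised before checking.

```python
# Illustrative threshold checks for principles 1-4 of the
# Deep Data Delivery Standards. Function names and input
# aggregates are hypothetical, not prescribed by the Standards.

def check_history(years_of_history: float, n_indicators: int) -> bool:
    """Principle 1: >= 5 years of history on >= 30 independent indicators."""
    return years_of_history >= 5 and n_indicators >= 30

def check_market_coverage(covered_cap: float, total_cap: float) -> bool:
    """Principle 2: >= 98% value-weighted coverage of a claimed market,
    measured here as covered market capitalisation over total."""
    return covered_cap / total_cap >= 0.98

def check_rating_refresh(avg_reconsidered_per_month: float,
                         companies_covered: int) -> bool:
    """Principle 3: ratings reconsidered for >= 8.25% of covered
    companies in the average month of the following year."""
    return avg_reconsidered_per_month / companies_covered >= 0.0825

def check_identifiers(firms_with_valid_id: int, firms_covered: int) -> bool:
    """Principle 4: accurate identifiers (e.g. ISINs) for >= 99% of
    firms; this must hold in every month of coverage, so the check
    would be applied month by month."""
    return firms_with_valid_id / firms_covered >= 0.99

if __name__ == "__main__":
    # Example: 6 years of history on 35 indicators, 98.5% of index
    # market cap covered, 9 of 100 companies re-rated in the average
    # month, and ISINs for 99.2% of firms passes all four checks.
    assert check_history(6, 35)
    assert check_market_coverage(covered_cap=985.0, total_cap=1000.0)
    assert check_rating_refresh(avg_reconsidered_per_month=9,
                                companies_covered=100)
    assert check_identifiers(firms_with_valid_id=992, firms_covered=1000)
    print("all checks passed")
```

Note that principles 5–10 are assurances and disclosures rather than numeric thresholds, so they are verified by documentation and signed letters rather than by computation.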

The volunteers who developed the standard …

Industry Advisory Board:

  • Larry Abele, Auriel Equities, UK
  • Stephanie Aument, Calvert, USA [Co-Chair of IAB]
  • Andre Bertolotti, Quotient, USA
  • Herb Blank, S-Network Global Indexes, USA
  • Emir Borovac, Nordea, Sweden
  • Iordanis Chatziprodromou, Swiss Re, Switzerland
  • Stephen Freedman, UBS, USA
  • Carlota Garcia-Manas, Church of England, UK
  • Tomasz Godziek, Bank J Safra Sarasin, Switzerland
  • Stephanie Hansen, Vanguard, USA
  • James Hodson, Financial Data Science Association, UK/USA
  • Pernille Jessen, Unipension, Denmark
  • Lloyd Kurtz, Nelson Capital, USA
  • Erica Lasdon, Calvert, USA
  • Andy Mason, Standard Life Investments, UK
  • Christopher McKnett, State Street Global Advisors, USA
  • Agnes Neher, ReFine Research Project, UK
  • Meryam Omi, Legal & General Investment Management
  • Akila Prabhakar, Mariner Investment Group, USA
  • Eli Reisman, SASB, USA [Co-Chair of IAB]
  • Frédéric Samama, Amundi, France
  • Alex Struc, PIMCO, UK
  • Geoff Trukenbrod, CFO of Obama for America 2012, USA
  • Maurice Versaevel, PGGM, NL
  • Michael Viehs, Hermes Fund Managers, UK [Co-Chair of IAB]
  • Ian Woods, AMP Capital, Australia

Academic Advisory Board:

  • Daniel Beunza, London School of Economics, UK
  • Damian Borth, German Centre for Artificial Intelligence (DFKI), Germany
  • Christel Dumas, ICHEC Brussels Management School, Belgium
  • Nadja Guenster, University of Muenster, Germany
  • Andreas Hoepner, ICMA Centre of Henley Business School, UK [Chair of AAB]
  • Bouchra M’Zali, Universite du Quebec a Montreal, Canada
  • Joakim Sandberg, Mistra Centre for Sustainable Markets, SSE, Sweden
  • Thomas Syrmos, University of Piraeus, Greece
  • Anna Young-Ferris, University of Sydney, Australia
  • Stefan Zeume, University of Michigan Ross School of Business, USA

Note to asset managers and asset owners:

The Deep Data Delivery Standards are a public good that any asset owner or asset manager is explicitly permitted to use in contracting third party data providers. Should you have any questions regarding the Standards, please contact the advisory board chairs.

Note to third party data providers:

You can submit for recognition from July 1st 2016 onwards. This is expected to involve: (1) Submitting an example data set to an academic of your choice for the academic to verify adherence to principles 1-5. (2) Submitting a signed letter of assurance to principles 6-9 including one of the ratios requested in principle 10. The first group of third party research providers is expected to be recognised at the 3rd Investment Innovation Benchmark Summit in Stockholm on September 2nd 2016.
