Quality Control: How to check for errors in your investment performance
Sean P. Gilligan, CFA, CPA, CIPM and Jocelyn Mullis, CFA, CIPM
May 5, 2021
Recent investment performance calculation mistakes at the Pennsylvania Public School Employees’ Retirement System (“PSERS”) have highlighted the importance of quality control reviews and raised questions about where risk exists, how these risks can be mitigated, and what role independent verifications should play in the quality control process.
What happened at PSERS?1
An error in the return calculation for Pennsylvania’s $64 billion state public school employee retirement plan has had serious implications for its beneficiaries and those involved in the calculation mistake.
In 2010, the plan, which was already underfunded, entered into a risk-sharing arrangement under which employees hired after 2011 would pay more into the plan if the return (an average time-weighted return) over a specified period fell below the actuarial value of assets (AVA) return of 6.36%.
In December 2020, the board announced that the plan had achieved a return of 6.38%, a mere 2 basis points above the minimum threshold. But in March the board changed its tune, announcing that the calculation was incorrect and the 100,000 or so employees hired since 2011 (and their employers) should have actually paid more into the plan.
What’s worse, PSERS also announced that the FBI is investigating the organization, although details of the probe have not yet been released.
According to PSERS, the consultant that had calculated the return came forward and admitted to the calculation error. But the board also said that it is looking into a potential cover-up by its staff. From what we know, at least three independent consultants were involved in providing data used for the calculations, calculating the returns, and verifying the returns. So, with all these experts involved, how could this happen, and what can your firm do to avoid a similar situation?
Key issues to address in an investment performance quality control process
Firms should develop sound quality control processes to help identify errors before results are published. Often these processes either do not exist or are insufficient to identify issues. Following a robust quality control process that considers the key risks involved and then finds ways to mitigate these risks greatly increases the accuracy of presented investment performance.
Although we do not yet know the cause of the errors found in the PSERS case, we can highlight the primary reasons errors occur in investment performance reporting. Most errors found in published performance results are caused by:
- Key Issue #1 – Issues in the underlying data (e.g., incorrect or missing prices, unreconciled data, missing transactions, misclassified expenses, or failing to accrue income on fixed-income securities)
- Key Issue #2 – Mistakes in calculations (e.g., manual calculations that fail to match the intended methodology)
- Key Issue #3 – Errors in reporting (e.g., publishing numbers that do not match the calculated results)
A robust quality control process should specifically address all three of these areas.
Considerations when designing a robust quality control process
Key Issue #1 – Issues in the underlying data
As they say, garbage in, garbage out. It is important to ask questions confirming the validity of data before it is used to calculate performance. Specifically, consider how the data used in the calculations is gathered, prepared, and reconciled before the calculations are completed. Is there a formal sign-off from the operations team confirming that the data is ready for use? Has an operations manager reviewed the data before this confirmation is made?
While deadlines to get performance published can be tight, taking the time to ensure that the underlying data is final and ready to use before performance is calculated can prevent headaches later on.
The following is a list of issues to look for when testing data validity:
- Outlier performance – Portfolios performing differently than their peers may indicate a data issue or that the portfolio is mislabeled (i.e., tagged to a different strategy than it is invested in).
- Differences between ending and beginning market values – Generally, we expect a portfolio’s market value at the end of one month and the beginning of the next month to be equal (unless using a system where external cashflows are recorded between months and differences like this are expected). Flagging differences can help identify data issues.
- Offsetting decrease/increase in market value – Market values that suddenly increase or decrease and then return to the original value may have an incorrect price or transaction that should be researched.
- Gaps in performance – A portfolio whose performance suddenly stops and then restarts may have missing data.
- 0% returns – The portfolio may have been liquidated and may no longer be under the firm’s discretionary management.
- Very low market values – The portfolio may have closed and is only holding a small residual balance, which should be excluded from the firm’s discretionary management.
- Net-of-fee returns higher than gross-of-fee returns – Seeing net returns that are higher than gross returns could indicate a data issue unless there are fee reversals you are aware of (e.g., performance fee accruals where previously accrued fees are adjusted back down).
- Gross-of-fee and net-of-fee returns are equal – If gross-of-fee and net-of-fee returns are always equal for a fee-paying portfolio, it is likely that the management fees are paid from an outside source (paid by check or out of a different portfolio). Returns labeled as net-of-fee in a case like this should be treated as gross-of-fee returns.
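Several of the checks above lend themselves to automation. The sketch below is illustrative only and uses hypothetical field names ('begin_mv', 'end_mv', etc.); a real implementation would be adapted to your system's actual data model.

```python
# Illustrative sketch of automated data-validity checks.
# Field names ('period', 'begin_mv', 'end_mv', 'gross_ret', 'net_ret')
# are hypothetical; adapt them to your own portfolio records.

def flag_data_issues(months, tol=0.01):
    """Scan a list of monthly records and return human-readable flags.

    Each record is a dict with keys 'period', 'begin_mv', 'end_mv',
    'gross_ret', and 'net_ret' (returns as decimals, e.g. 0.0125 for 1.25%).
    """
    flags = []
    # Ending market value of one month should equal the beginning
    # market value of the next (unless cash flows are booked between months).
    for prev, curr in zip(months, months[1:]):
        if abs(prev['end_mv'] - curr['begin_mv']) > tol:
            flags.append(f"{curr['period']}: begin MV does not match prior end MV")
    for m in months:
        # 0% returns may indicate a liquidated portfolio.
        if m['gross_ret'] == 0 and m['net_ret'] == 0:
            flags.append(f"{m['period']}: 0% return - portfolio may have liquidated")
        # Net returns above gross returns warrant research (e.g. fee reversals).
        if m['net_ret'] > m['gross_ret']:
            flags.append(f"{m['period']}: net exceeds gross - check for fee reversals")
    return flags
```

Flags like these do not prove an error exists; they simply queue up items for the operations team to research before sign-off.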
Key Issue #2 – Mistakes in calculations
Mistakes happen, but there are ways to reduce their frequency and impact. First, you’ll want to consider how manual your performance calculations are as well as the experience of the person completing the calculations.
Let’s face it, Excel is probably the most widely used tool in performance measurement, especially at smaller firms. While many firms find Excel to be a user-friendly tool for calculating performance statistics, it has its limitations. Studies have suggested that up to 90% of spreadsheets contain errors, and spreadsheets with many formulas are even more likely to contain mistakes. Whether the mistake is a formula that was not dragged down properly or a reference to the wrong cell, the fundamental problem is that users often do not check their work or lack carefully outlined procedures for confirming accuracy.
Although this may seem obvious, having a second set of eyes on a spreadsheet can save you from the embarrassing headache of having to explain errors in performance calculations. It is even better if this review is a multi-layered process. Having someone review details as well as someone to do a high-level “gut-check” to make sure the calculations and results make sense can reduce this risk. Depending on the size of your firm, this may be easier to accomplish with a third-party consultant, where you serve as a final layer of review.
Having this final “gut-check” can help prevent avoidable errors prior to publication. We find that this final “gut-check” is best performed by someone who knows the strategy intimately rather than a performance or compliance analyst, as these individuals may be too focused on the calculation details to take a step back and consider whether the returns make sense for the strategy and are in line with expectations.
If you use software to calculate performance, you can significantly reduce the risk of manual error, but due diligence should still be performed from time to time to manually prove out the accuracy of the calculations completed in the program. This does not need to be done every time but should be conducted when introducing a new software system and any time changes are made to the program.
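One simple way to spot-check a system's output is to recompute a return by hand using the standard time-weighted approach: geometrically link the sub-period returns between external cash flows. A minimal sketch, with made-up numbers standing in for the system's sub-period figures:

```python
def link_returns(subperiod_returns):
    """Geometrically link sub-period returns into a cumulative
    time-weighted return (all returns expressed as decimals)."""
    cumulative = 1.0
    for r in subperiod_returns:
        cumulative *= (1.0 + r)
    return cumulative - 1.0

# Hypothetical sub-period returns between external cash flows,
# e.g. pulled from the performance system for one portfolio:
monthly = [0.012, -0.004, 0.021]

# Compare this figure against the system's reported cumulative return.
twr = link_returns(monthly)
```

If the hand calculation and the system disagree beyond rounding, either the system's methodology differs from what you intended or there is a data or configuration issue to research.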
Key Issue #3 – Errors in reporting
It may seem silly, but many performance reporting errors come from transposing strategy and benchmark returns in presentations or placing the return of one strategy in the factsheet of another. Therefore, it is important to consider how the final performance figures make it from the system or spreadsheet into the performance presentations. Are they typed? Copied and pasted? Or are the performance reports generated directly out of a system? It is not enough to complete the calculations correctly; the final reports must also be accurate, so adding a step to review them is crucial.
A similar review process to the one described above can really make a difference, but ultimately, understanding the vulnerabilities of your performance reporting will help you design quality control procedures that address any exposure.
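Where report figures are typed or pasted by hand, one concrete control is a reconciliation step comparing published figures back to the calculation source. A hypothetical sketch (the strategy names and tolerance are illustrative):

```python
def reconcile_report(calculated, published, tol=0.0001):
    """Compare published returns against the calculated source values.

    Both arguments map strategy names to returns (as decimals);
    any difference beyond `tol` is reported for review.
    """
    mismatches = []
    for strategy, calc_ret in calculated.items():
        pub_ret = published.get(strategy)
        if pub_ret is None:
            mismatches.append(f"{strategy}: missing from published report")
        elif abs(pub_ret - calc_ret) > tol:
            mismatches.append(
                f"{strategy}: published {pub_ret:.4f} vs calculated {calc_ret:.4f}"
            )
    return mismatches
```

A check like this catches exactly the transposition errors described above, since swapped strategy and benchmark figures will not tie back to the source.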
Calculations completed by external performance consultants
Whether performance is calculated internally or by a third-party performance consultant, the same key issues should be considered when designing the quality control process. Due diligence should be done on the performance consulting firm to evaluate the level of experience the firm has with calculating investment performance and what kind of quality control process they follow prior to providing results to your firm. This information will help you determine what reliance you can place on their procedures and what your firm should still check internally.
For example, outsourcing performance calculations to an individual or single-person firm likely necessitates a more in-depth review, since that individual cannot put a second set of eyes on the results before providing them to your firm. However, even larger performance consulting firms with robust quality control processes may not have intimate knowledge of your strategies, meaning that, at a minimum, a final “gut-check” should be done by your firm prior to publication.
Reliance on independent performance verification firms to find errors
Many firms that hire performance verification firms rely on their verifier to be their quality control check; however, this may not be a good practice for a variety of reasons. If this is a common practice at your firm, you may want to check the scope of your engagement before relying too heavily on your verifier to find errors.
Verification is common for firms that claim compliance with the Global Investment Performance Standards (GIPS®). But even firms that claim compliance with the GIPS standards and receive a firm-wide verification are required to disclose that verification “does not provide assurance on the accuracy of any specific performance report.”
This is because verifiers are primarily focused on the existence and implementation of policies and procedures. While their review may help identify errors that exist in the sample selected for testing, it specifically does not certify the accuracy of presented results. While the verification process is valuable and often does turn up errors that need to be corrected, regardless of the scope of your engagement, a robust internal quality control process is likely still warranted.
Firms that are not GIPS compliant may engage verification firms for various types of attestation or review engagements like strategy exams or other non-GIPS performance reviews. In these situations, the scope of the engagement may be customized to meet the needs (and budget) of the firm seeking verification. A clear understanding of exactly what is in-scope and specifically what the verifier is opining on when issuing their report is key.
Situations where the engagement entails a detailed attestation tracing input data back to independent sources, confirming that calculations are carried out consistently, and verifying that published results match the calculations, allow for heavy reliance on the verifier as part of your quality control process.
Alternatively, when the scope merely consists of a high-level review confirming the appropriateness of the calculation methodology, a much more robust internal quality control process should be applied.
Knowing the scope of the engagement your firm has established with the verification firm is an important element in determining how much reliance can be placed on their review and findings, which can then be incorporated into the design of your own internal quality control procedures.
Mistakes happen in investment performance reporting, but a robust quality control process can greatly mitigate this risk. Understanding the risks that exist, designing processes to test these risk areas, and understanding the role and engagement scope of all consultants involved are essential to designing quality control procedures that work for your firm – and hopefully ones that will help you avoid situations like what happened at PSERS.
If you are not sure where to begin, we have tools and services available to help. Longs Peak uses proprietary software to calculate and analyze performance. Our software helps flag possible data issues and outlier performers and also produces performance reports directly from our performance system.
In addition, our performance consultants are available to work with your team to help identify potential vulnerabilities in your performance reporting process and can help you develop better quality control procedures, where needed.
If you would like to learn more about our quality control process or any of the services we offer (like data and outlier testing) to help improve the accuracy and reliability of investment performance, contact us or email Sean Gilligan directly at firstname.lastname@example.org.
1 For more information on PSERS, please see this article from the Philadelphia Inquirer.