Quality Control: How to check for errors in your investment performance

Sean P. Gilligan, CFA, CPA, CIPM
Managing Partner
May 5, 2021
15 min

Recent investment performance calculation mistakes at Pennsylvania Public School Employees’ Retirement System (“PSERS”) have highlighted the importance of quality control reviews and raise questions about where risk exists, how those risks can be mitigated, and what role independent verifications should play in the quality control process.

What happened at PSERS?1

An error in the return calculation for Pennsylvania’s $64 billion state public school employee retirement plan has had serious implications for its beneficiaries and those involved in the calculation mistake.

In 2010, the plan, which was already underfunded, entered into a risk-sharing agreement under which employees hired after 2011 would pay more into the plan if the average time-weighted return over a specified period fell below the actuarial value of assets (AVA) return of 6.36%.

In December 2020, the board announced that the plan had achieved a return of 6.38%, a mere 2 basis points above the minimum threshold. But in March the board changed its tune, announcing that the calculation was incorrect and the 100,000 or so employees hired since 2011 (and their employers) should have actually paid more into the plan.

What’s worse, PSERS also announced that the FBI is investigating the organization, although details of the probe have not yet been released.

According to PSERS, the consultant that had calculated the return came forward and admitted to the calculation error. But the board also said that it is looking into a potential cover-up by its staff. From what we know, at least three independent consultants were involved in providing the data used for the calculations, calculating the returns, and verifying the returns. So, with all these experts involved, how could this happen, and what can your firm do to avoid a similar situation?

Key issues to address in an investment performance quality control process

Firms should develop sound quality control processes to help identify errors before results are published. Often these processes either do not exist or are insufficient to identify issues. Following a robust quality control process that considers the key risks involved and then finds ways to mitigate these risks greatly increases the accuracy of presented investment performance.

Although we do not yet know the cause of the errors found in the PSERS case, we can highlight a few primary reasons errors occur in investment performance reporting. Primarily, errors found in published performance results are caused by:

  • Key Issue #1 – Issues in the underlying data (e.g., incorrect or missing prices, unreconciled data, missing transactions, misclassified expenses, or failing to accrue fixed income)
  • Key Issue #2 – Mistakes in calculations (e.g., manual calculations that fail to match the intended methodology)
  • Key Issue #3 – Errors in reporting (e.g., publishing numbers that do not match the calculated results)

A robust quality control process should specifically address all three of these areas.

Considerations when designing a robust quality control process

Key Issue #1 – Issues in the underlying data

As they say, garbage in, garbage out. It is important to ask and address questions confirming the validity of data before it is used to calculate performance. Specifically, consider how the data used in the calculations is gathered, prepared, and reconciled before completing the calculations. Is there any formal signoff from the operations team confirming that the data is ready for use? Has a review of the data been conducted by an operations manager prior to this confirmation being made?

While deadlines to get performance published can be tight, taking the time to ensure that the underlying data is final and ready to use before performance is calculated can prevent headaches later on.

The following is a list of issues to look for when testing data validity (a brief sketch of how a few of these checks might be automated follows the list):

  • Outlier performance – Portfolios performing differently than their peers may indicate a data issue or that the portfolio is mislabeled (i.e., tagged to a different strategy than it is invested in).
  • Differences between ending and beginning market values – Generally, we expect a portfolio’s market value at the end of one month and the beginning of the next month to be equal (unless using a system where external cashflows are recorded between months and differences like this are expected). Flagging differences can help identify data issues.
  • Offsetting decrease/increase in market value – Market values that suddenly increase or decrease and then return to the original value may have an incorrect price or transaction that should be researched.
  • Gaps in performance – A portfolio whose performance suddenly stops and then restarts may have missing data.
  • 0% returns – The portfolio may have liquidated and may no longer be under the firm’s discretionary management.
  • Very low market values – The portfolio may have closed and is only holding a small residual balance, which should be excluded from the firm’s discretionary management.
  • Net-of-fee returns higher than gross-of-fee returns – Seeing net returns that are higher than gross returns could indicate a data issue unless there are fee reversals you are aware of (e.g., performance fee accruals where previously accrued fees are adjusted back down).
  • Gross-of-fee returns and net-of-fee returns are equal – If gross-of-fee and net-of-fee returns are always equal for a fee-paying portfolio, it is likely that the management fees are paid from an outside source (paid by check or out of a different portfolio). The returns labeled as net-of-fee in a case like this should be treated as gross-of-fee returns.
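
To make a few of these checks concrete, here is a minimal sketch of how some of the flags above might be automated against a table of monthly portfolio records. The column names and the residual-balance threshold are hypothetical assumptions, not a prescribed data model.

```python
import pandas as pd

def flag_data_issues(monthly: pd.DataFrame, residual_threshold: float = 1_000.0) -> pd.DataFrame:
    """Flag common data-validity issues in monthly performance records.

    Expects one row per portfolio per month with hypothetical columns:
    'portfolio', 'month', 'begin_mv', 'end_mv', 'gross_return', 'net_return'.
    """
    df = monthly.sort_values(["portfolio", "month"]).copy()

    # Beginning market value should generally equal the prior month's ending value.
    prior_end = df.groupby("portfolio")["end_mv"].shift(1)
    df["mv_break"] = prior_end.notna() & (df["begin_mv"] != prior_end)

    # Net-of-fee returns should not exceed gross-of-fee returns
    # (unless there is a known fee reversal).
    df["net_exceeds_gross"] = df["net_return"] > df["gross_return"]

    # Gross and net always equal may mean fees are paid from an outside source.
    df["gross_equals_net"] = df["gross_return"] == df["net_return"]

    # A 0% return may indicate a portfolio that has liquidated.
    df["zero_return"] = df["gross_return"] == 0

    # Very low market values may be residual balances that should be excluded.
    df["residual_balance"] = df["end_mv"] < residual_threshold

    flags = ["mv_break", "net_exceeds_gross", "gross_equals_net",
             "zero_return", "residual_balance"]
    return df[df[flags].any(axis=1)]
```

Flags like these are prompts for research rather than conclusions; each exception still needs to be explained or corrected before the data is signed off as ready for use.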

Key Issue #2 – Mistakes in calculations

Mistakes happen, but there are ways to reduce their frequency and impact. First, you’ll want to consider how manual your performance calculations are as well as the experience of the person completing the calculations.

Let’s face it, Excel is probably the most widely used tool in performance measurement, especially for smaller firms. While many firms find Excel to be a user-friendly tool for calculating performance statistics, it has its limitations. Studies have shown that up to 90% of spreadsheets contain errors, and spreadsheets with many formulas are even more likely to contain mistakes. Whether it’s a formula that wasn’t dragged down properly or a reference to the wrong cell, the fundamental problem is that users do not check their work or lack carefully outlined procedures for confirming accuracy.

Although this may seem obvious, having a second set of eyes on a spreadsheet can save you from the embarrassing headache of having to explain errors in performance calculations. It is even better if this review is a multi-layered process. Having someone review details as well as someone to do a high-level “gut-check” to make sure the calculations and results make sense can reduce this risk. Depending on the size of your firm, this may be easier to accomplish with a third-party consultant, where you serve as a final layer of review.

Having this final “gut-check” can help prevent avoidable errors prior to publication. We find that this final “gut-check” is best performed by someone who knows the strategy intimately rather than a performance or compliance analyst, as these individuals may be too focused on the calculation details to take a step back and consider whether the returns make sense for the strategy and are in line with expectations.

If you use software to calculate performance, you can significantly reduce the risk of manual error, but due diligence should still be performed from time to time to manually prove out the accuracy of the calculations completed in the program. This does not need to be done every time but should be conducted when introducing a new software system and any time changes are made to the program.
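
As a simple illustration of what manually proving out a system figure can look like, the sketch below recomputes a time-weighted return by geometrically linking sub-period returns around an external cash flow and compares it to the system's output. The market values, the assumption that flows occur at the start of a sub-period, the system figure, and the tolerance are all hypothetical; the point is only to have an independent recalculation to compare against.

```python
def subperiod_return(begin_mv, end_mv, cash_flow=0.0):
    """Return for a sub-period, assuming external flows occur at the start of the period."""
    return (end_mv - begin_mv - cash_flow) / (begin_mv + cash_flow)

def linked_twr(subperiod_returns):
    """Geometrically link sub-period returns into a time-weighted return."""
    growth = 1.0
    for r in subperiod_returns:
        growth *= 1.0 + r
    return growth - 1.0

# Hypothetical month with one external contribution mid-month.
periods = [
    subperiod_return(begin_mv=1_000_000, end_mv=1_020_000),                    # before the flow
    subperiod_return(begin_mv=1_020_000, end_mv=1_090_000, cash_flow=50_000),  # after a 50k contribution
]
manual_twr = linked_twr(periods)

system_twr = 0.0391   # figure pulled from the performance system (hypothetical)
tolerance = 0.0005    # acceptable rounding difference (hypothetical)

if abs(manual_twr - system_twr) > tolerance:
    print(f"Investigate: manual {manual_twr:.4%} vs system {system_twr:.4%}")
else:
    print(f"Within tolerance: manual {manual_twr:.4%} vs system {system_twr:.4%}")
```

A spot check like this does not need to cover every portfolio, but running it when a system is first adopted, and again whenever the configuration changes, provides evidence that the software matches the intended methodology.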

Key Issue #3 – Errors in reporting

It may seem silly, but many performance reporting errors come from transposing strategy and benchmark returns in presentations or placing the return of one strategy in the factsheet of another. Therefore, it is important to consider how the final performance figures make it from the system or spreadsheet into the performance presentations. Are they typed? Copy and pasted? Or are the performance reports generated directly out of a system? It’s not enough to complete the calculations correctly, the final reports must also be accurate, so adding a step to review this is crucial.
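
One way to guard against transposition and copy-paste mistakes is a final reconciliation between the figures that appear in each report and the figures in the calculation system. The sketch below uses hypothetical dictionaries and values purely for illustration; in practice the "published" side might be parsed from factsheet templates or report files.

```python
calculated = {  # source of truth from the performance system (hypothetical values)
    "Large Cap Growth": {"strategy_return": 0.0812, "benchmark_return": 0.0775},
    "Core Fixed Income": {"strategy_return": 0.0231, "benchmark_return": 0.0210},
}

published = {   # figures as they appear in the draft factsheets (hypothetical values)
    "Large Cap Growth": {"strategy_return": 0.0775, "benchmark_return": 0.0812},  # transposed!
    "Core Fixed Income": {"strategy_return": 0.0231, "benchmark_return": 0.0210},
}

def reconcile(calculated, published, tolerance=0.0001):
    """List every published figure that does not match the calculated figure."""
    exceptions = []
    for strategy, calc_values in calculated.items():
        for field, calc_value in calc_values.items():
            pub_value = published.get(strategy, {}).get(field)
            if pub_value is None or abs(pub_value - calc_value) > tolerance:
                exceptions.append((strategy, field, calc_value, pub_value))
    return exceptions

for strategy, field, calc_value, pub_value in reconcile(calculated, published):
    print(f"{strategy} / {field}: calculated {calc_value:.2%} but published {pub_value:.2%}")
```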

A similar review process to the one described above can really make a difference, but ultimately, understanding the vulnerabilities of your performance reporting will help you design quality control procedures that address any exposure.

Calculations completed by external performance consultants

Whether performance is calculated internally or by a third-party performance consultant, the same key issues should be considered when designing the quality control process. Due diligence should be done on the performance consulting firm to evaluate the level of experience the firm has with calculating investment performance and what kind of quality control process they follow prior to providing results to your firm. This information will help you determine what reliance you can place on their procedures and what your firm should still check internally.

For example, outsourcing performance calculations to an individual or single-person firm likely necessitates a more in-depth review since this individual would not have the ability to have a second set of eyes on the results prior to providing them to your firm. However, even larger performance consulting firms with robust quality control processes may not have intimate knowledge of your strategies, meaning that, at a minimum, a final “gut-check” should be done by your firm prior to publication.

Reliance on independent performance verification firms to find errors

Many firms that hire performance verification firms rely on their verifier to be their quality control check; however, this may not be a good practice for a variety of reasons. If this is a common practice at your firm, you may want to check the scope of your engagement before relying too heavily on your verifier to find errors.

Verification is common for firms that claim compliance with the Global Investment Performance Standards (GIPS®). But even firms that claim compliance with the GIPS standards and receive a firm-wide verification are required to disclose that, “…Verification does not provide assurance on the accuracy of any specific performance report.”

This is because verifiers are primarily focused on the existence and implementation of policies and procedures. While their review may help identify errors in the sample selected for testing, it specifically does not certify the accuracy of presented results. The verification process is valuable and often does turn up errors that need to be corrected, but regardless of the scope of your engagement, a robust internal quality control process is likely still warranted.

Firms that are not GIPS compliant may engage verification firms for various types of attestation or review engagements like strategy exams or other non-GIPS performance reviews. In these situations, the scope of the engagement may be customized to meet the needs (and budget) of the firm seeking verification. A clear understanding of exactly what is in-scope and specifically what the verifier is opining on when issuing their report is key.

Situations where the engagement entails a detailed attestation tracing input data back to independent sources, confirming that calculations are carried out consistently, and verifying that published results match the calculations, allow for heavy reliance on the verifier as part of your quality control process.

Alternatively, when the scope merely consists of a high-level review confirming the appropriateness of the calculation methodology, a much more robust internal quality control process should be applied.

Knowing the scope of the engagement your firm has established with the verification firm is an important element in determining how much reliance can be placed on their review and findings, which can then be incorporated into the design of your own internal quality control procedures.

Key take-aways

Mistakes happen in investment performance reporting, but a robust quality control process can greatly mitigate this risk. Understanding the risks that exist, designing processes to test these risk areas, and understanding the role and engagement scope of all consultants involved are essential to designing quality control procedures that work for your firm – and hopefully ones that will help you avoid a situation like what happened at PSERS.

If you are not sure where to begin, we have tools and services available to help. Longs Peak uses proprietary software to calculate and analyze performance. Our software helps flag possible data issues and outlier performers and also produces performance reports directly from our performance system.

In addition, our performance consultants are available to work with your team to help identify potential vulnerabilities in your performance reporting process and can help you develop better quality control procedures, where needed.

Questions?

If you would like to learn more about our quality control process or any of the services we offer (like data and outlier testing) to help improve the accuracy and reliability of investment performance, contact us or email Sean Gilligan directly at sean@longspeakadvisory.com.

1 For more information on PSERS, please see this article from the Philadelphia Inquirer.


From Compliance to Growth: How the GIPS® Standards Help Investment Firms Unlock New Opportunities

For many investment managers, the first barrier to growth isn’t performance—it’s proof.
When platforms, consultants, and institutional investors evaluate new strategies, they’re not just asking how well you perform; they’re asking how you measure and present those results.

That’s where the GIPS® standards come in.

More and more investment platforms and allocators now require firms to comply with the GIPS standards before they’ll even review a strategy. For firms seeking to expand their reach—whether through model delivery, SMAs, or institutional channels—GIPS compliance has become a passport to opportunity.

The Opportunity Behind Compliance

Becoming compliant with the GIPS standards is about more than checking a box. It’s about building credibility and transparency in a way that resonates with today’s due diligence standards.

When a firm claims compliance with the GIPS standards, it demonstrates that its performance is calculated and presented according to globally recognized ethical principles—ensuring full disclosure and fair representation. This helps level the playing field for managers of all sizes, giving them a chance to compete where it matters most: on results and consistency.

In short, GIPS compliance doesn’t just make your reporting more accurate—it makes your firm more credible and discoverable.

Turning Complexity Into Clarity

While the benefits are clear, the process can feel overwhelming. Between defining the firm, creating composites, documenting policies and procedures, and maintaining data accuracy, many teams struggle to find the time or expertise to get it right.

That’s where Longs Peak comes in.

We specialize in simplifying the process. Our team helps firms navigate every step—from initial readiness and composite construction to quarterly maintenance and ongoing training—so that compliance becomes a seamless part of operations rather than a burden on them.

As one of our clients put it, “Longs Peak helps us navigate GIPS compliance with ease. They spare us from the time and effort needed to interpret what the requirements mean and let us focus on implementation.”

Real Firms, Real Impact

We’ve seen firsthand how GIPS compliance can transform firms’ growth trajectories.

Take Genter Capital Management, for example. As David Klatt, CFA, and his team prepared to expand into model delivery platforms, managing composites in accordance with the GIPS standards became increasingly complex. With Longs Peak’s customized composite maintenance system in place, Genter gained the confidence and operational efficiency they needed to access new platforms and relationships—many of which require firms to be GIPS compliant as a baseline.

Or consider Integris Wealth Management. After years of wanting to formalize their composite reporting, they finally made it happen with our support. As Jenna Reynolds from Integris shared:

“When I joined Integris over seven years ago, we knew we wanted to build out our composite reporting, but the complexity of the process felt overwhelming. Since partnering with Longs Peak in 2022, they’ve been instrumental in driving the project to completion. Our ongoing collaboration continues to be both productive and enjoyable.”

These are just two examples of what happens when compliance meets clarity—firms gain time back, confidence grows, and new business doors open.

Why It Matters—Compliance as a Strategic Advantage

At Longs Peak, we believe compliance with the GIPS standards isn’t a cost—it’s an investment.

By aligning your firm’s performance reporting with the GIPS standards, you gain:

  • Access to platforms and institutions that require GIPS compliant firms.
  • Credibility and trust in an increasingly competitive landscape.
  • Operational efficiency through consistent data and documented processes.
  • Scalability to support multiple strategies and distribution channels.

Simply put: compliance fuels confidence—and confidence drives growth.

Simplifying the Complex

At Longs Peak, we’ve helped over 250 firms and asset owners transform how they calculate, present, and communicate their investment performance. Our goal is simple: make compliance with the GIPS standards practical, transparent, and aligned with your firm’s growth goals.

Because when compliance works efficiently, it doesn’t slow your business down—it helps it reach further.

Ready to turn compliance into a growth advantage?

Let’s talk about how we can help your firm simplify the complex.

📧 hello@longspeakadvisory.com
🌐 www.longspeakadvisory.com

Performance reporting has two common pitfalls: it’s backward-looking, and it often stops at raw returns. A quarterly report might show whether a portfolio beat its benchmark, but it doesn’t always show why or whether the results are sustainable. By layering in risk-adjusted performance measures—and using them in a structured feedback loop—firms can move beyond reporting history to actively improving the future.

Why a Feedback Loop Matters

Clients, boards, and oversight committees want more than historical returns. They want to know whether:

  • performance was delivered consistently,
  • risk was managed responsibly, and
  • the process driving results is repeatable.

A feedback loop helps firms:

  • define expectations up front instead of rationalizing results after the fact,
  • monitor performance relative to objective appraisal measures,
  • diagnose whether results are consistent with the manager’s stated mandate, and
  • adjust course in real time so tomorrow’s outcomes improve.

With the right discipline, performance reporting shifts from a record of the past to a tool for shaping the future.

Step 1: Define the Measures in Advance

A useful feedback loop begins with clear definitions of success. Just as businesses set key performance indicators (KPIs) before evaluating outcomes, portfolio managers should define their performance and risk statistics in advance, along with expectations for how those measures should look if the strategy is working as intended.

One way to make this tangible is by creating a Performance Scorecard. The scorecard sets out pre-determined goals with specific targets for the chosen measures. At the end of the performance period, the manager completes the scorecard by comparing actual outcomes against those targets. This creates a clear, documented record of where the strategy succeeded and where it fell short.

Some of the most effective appraisal measures to include on a scorecard are:

  • Jensen’s Alpha: Did the manager generate returns beyond what would be expected for the level of market risk (beta) taken?
  • Sharpe Ratio: Were returns earned efficiently relative to volatility?
  • Max Drawdown: If the strategy claims downside protection, did the worst loss align with that promise?
  • Up- and Down-Market Capture Ratios: Did the strategy deliver the participation levels in up and down markets that were expected?

By setting these expectations up front in a scorecard, firms create a benchmark for accountability. After the performance period, results can be compared to those preset goals, and any shortfalls can be dissected to understand why they occurred.
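
As a rough illustration, the sketch below computes the four appraisal measures named above from a hypothetical monthly return series and marks each against a preset scorecard target. The targets, return data, and risk-free rate assumption are placeholders; an actual scorecard would use the measures and thresholds defined in advance for your own strategy.

```python
import numpy as np

def jensens_alpha(portfolio, benchmark, rf=0.0):
    """Alpha versus the CAPM expectation, using realized beta against the benchmark."""
    cov = np.cov(portfolio, benchmark)
    beta = cov[0, 1] / cov[1, 1]
    return np.mean(portfolio) - (rf + beta * (np.mean(benchmark) - rf))

def sharpe_ratio(portfolio, rf=0.0, periods_per_year=12):
    """Annualized Sharpe ratio from periodic returns (simple sqrt-of-time scaling)."""
    excess = np.asarray(portfolio) - rf
    return np.mean(excess) / np.std(excess) * np.sqrt(periods_per_year)

def max_drawdown(portfolio):
    """Worst peak-to-trough decline of the cumulative growth of the return series."""
    wealth = np.cumprod(1 + np.asarray(portfolio))
    return np.min(wealth / np.maximum.accumulate(wealth) - 1)

def capture_ratio(portfolio, benchmark, up=True):
    """Cumulative portfolio return over cumulative benchmark return in up (or down) months."""
    mask = benchmark > 0 if up else benchmark < 0
    return (np.prod(1 + portfolio[mask]) - 1) / (np.prod(1 + benchmark[mask]) - 1)

# Hypothetical monthly returns for one evaluation period.
port = np.array([0.02, -0.01, 0.03, 0.01, -0.02, 0.025])
bench = np.array([0.015, -0.02, 0.025, 0.012, -0.03, 0.02])

scorecard = {  # measure: (preset target, actual result, which direction is better)
    "Jensen's alpha":   (0.00, jensens_alpha(port, bench), "higher"),
    "Sharpe ratio":     (2.00, sharpe_ratio(port), "higher"),
    "Max drawdown":     (-0.05, max_drawdown(port), "higher"),
    "Downside capture": (0.80, capture_ratio(port, bench, up=False), "lower"),
}

for measure, (target, actual, better) in scorecard.items():
    met = actual >= target if better == "higher" else actual <= target
    print(f"{measure}: target {target:+.3f}, actual {actual:+.3f} -> {'met' if met else 'shortfall'}")
```

Each "shortfall" line then becomes the starting point for the reflection described in Step 2.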

Step 2: Create Accountability Through Reflection

This structured comparison of expected versus actual results is the heart of the feedback loop.

If the Sharpe Ratio is lower than expected, was excess risk taken unintentionally? If the Downside Capture Ratio is higher than promised, did the strategy really offer the protection it claimed?

The key is not just to measure, but to reflect. Managers should ask:

  • Were deviations intentional or unintentional?
  • Were they the result of security selection, risk underestimation, or process drift?
  • Do changes need to be made to avoid repeating the same shortfall next period?

The scorecard provides a simple framework for this reflection, turning appraisal statistics into active learning tools rather than static reporting figures.

Step 3: Monitor, Diagnose, Adjust

With preset measures in place, the loop becomes an ongoing process:

  1. Review results against the expectations that were defined in advance.
  2. Flag deviations using alpha, Sharpe, drawdown, and capture ratios.
  3. Discuss root causes—intentional, structural, or concerning.
  4. Refine the investment process to avoid repeating the same shortcomings.

This approach ensures that managers don’t just record results—they use them to refine their craft. The scorecard becomes the record of this process, creating continuity over multiple periods.

Step 4: Apply the Feedback Loop Broadly

When applied consistently, appraisal measures—and the scorecards built around them—support more than internal evaluation. They can be used for:

  • Manager oversight: Boards and trustees see whether results matched stated goals.
  • Incentive design: Bonus structures tied to pre-defined risk-adjusted outcomes.
  • Governance and compliance: Demonstrating accountability with clear, documented processes.

How Longs Peak Can Help

At Longs Peak, we help firms move beyond static reporting by building feedback loops rooted in performance appraisal. We:

  • Define meaningful performance and risk measures tailored to each strategy.
  • Help managers set pre-determined expectations for those measures and build them into a scorecard.
  • Calculate and interpret statistics such as alpha, Sharpe, drawdowns, and capture ratios.
  • Facilitate reflection sessions so results are compared to goals and lessons are turned into process improvements.
  • Provide governance support to ensure documentation and accountability.

The result is a sustainable process that keeps strategies aligned, disciplined, and credible.

Closing Thought

Markets will always fluctuate. But firms that treat performance as a feedback loop—not a static report—build resilience, discipline, and trust.

A well-structured scorecard ensures that performance data isn’t just about yesterday’s story. When used as feedback, it becomes a roadmap for tomorrow.

Need help creating a Performance Scorecard? Reach out if you want us to help you create more accountability today!

When you're responsible for overseeing the performance of an endowment or public pension fund, one of the most critical tools at your disposal is the benchmark. But not just any benchmark—a meaningful one, designed with intention and aligned with your Investment Policy Statement (IPS). Benchmarks aren’t just numbers to report alongside returns; they represent the performance your total fund should have delivered if your strategic targets were passively implemented.

And yet, many asset owners still find themselves working with benchmarks that don’t quite match their objectives—either too generic, too simplified, or misaligned with how the total fund is structured. Let’s walk through how to build more effective benchmarks that reflect your IPS and support better performance oversight.

Start with the Policy: Your IPS Should Guide Benchmark Construction

Your IPS is more than a governance document—it is the road map that sets strategic asset allocation targets for the fund. Whether you're allocating 50% to public equity or 15% to private equity, each target signals an intentional risk/return decision. Your benchmark should be built to evaluate how well each segment of the total fund performed.

The key is to assign a benchmark to each asset class and sub-asset class listed in your IPS. This allows for layered performance analysis—at the individual sub-asset class level (such as large cap public equity), at the broader asset class level (like total public equity), and ultimately rolled up at the Total Fund level. When benchmarks reflect the same weights and structure as the strategic targets in your IPS, you can assess how tactical shifts in weights and active management within each segment are adding or detracting value.

Use Trusted Public Indexes for Liquid Assets

For traditional, liquid assets—like public equities and fixed income—benchmarking is straightforward. Widely recognized indexes like the S&P 500, MSCI ACWI, or Bloomberg U.S. Aggregate Bond Index are generally appropriate and provide a reasonable passive alternative against which to measure active strategies managed using a similar pool of investments as the index.

These benchmarks are also calculated using time-weighted returns (TWR), which strip out the impact of cash flows—ideal for evaluating manager skill. When each component of your total fund has a TWR-based benchmark, they can all be rolled up into a total fund benchmark with consistency and clarity.

Think Beyond the Index for Private Markets

Where benchmarking gets tricky is with illiquid asset classes like private equity, real estate, or private credit. These don’t have public market indexes since they are private market investments, so you need a proxy that still supports a fair evaluation.

Some organizations use a peer group as the benchmark, but another approach is to use an annualized public market index plus a premium. For example, you might use the 7-year annualized return of the Russell 2000 (lagged by 3 months) plus a 3% premium to account for illiquidity and risk.

Using the 7-year average rather than the current-period return removes public market volatility that may not be relevant for the private market comparison. The 3-month lag is used if your private asset valuations are updated when received rather than posted back to the valuation date. The purpose of the 3% premium (or whatever premium you decide is appropriate) is to account for the excess return you expect from private investments above public markets to make the liquidity risk worthwhile.

By building in this hurdle, you create a reasonable, transparent benchmark that enables your board to ask: Is our private markets portfolio delivering enough excess return to justify the added risk and reduced liquidity?
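
As a rough sketch of the mechanics (not a recommendation of any particular index, lag, or premium), the snippet below annualizes a trailing seven years of monthly public index returns, drops the most recent three months to mirror lagged private valuations, and adds a fixed premium. The simulated return series and the function name are hypothetical.

```python
import numpy as np

def private_markets_proxy(monthly_index_returns, lag_months=3, years=7, premium=0.03):
    """Annualized public-index return over a trailing window, lagged, plus an illiquidity premium.

    `monthly_index_returns` is a chronological sequence of monthly returns for a
    public index (e.g., a small-cap index); the most recent `lag_months`
    observations are dropped to mirror lagged private asset valuations.
    """
    lagged = monthly_index_returns[:-lag_months] if lag_months else monthly_index_returns
    window = lagged[-years * 12:]                       # trailing 7-year window
    cumulative = np.prod(1 + np.asarray(window))        # cumulative growth over the window
    annualized = cumulative ** (12 / len(window)) - 1   # geometric annualization
    return annualized + premium

# Hypothetical: roughly 8 years of simulated monthly index returns.
rng = np.random.default_rng(0)
index_returns = rng.normal(0.007, 0.05, size=96)
print(f"Private markets hurdle: {private_markets_proxy(index_returns):.2%}")
```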

Roll It All Up: Aggregated Benchmarks for Total Fund Oversight

Once you have individual benchmarks for each segment of the total fund, the next step is to aggregate them—using the strategic asset allocation weights from your IPS—to form a custom blended total fund benchmark.

This approach provides several advantages:

  • You can evaluate performance at both the micro (asset class) and macro (total fund) level.
  • You gain insight into where active management is adding value—and where it isn’t.
  • You ensure alignment between your strategic policy decisions and how performance is being measured.

For example, if your IPS targets 50% to public equities split among large-, mid-, and small-cap stocks, you can create a blended equity benchmark that reflects those sub-asset class allocations, and then roll it up into your total fund benchmark. Rebalancing of the blends should match the rebalancing frequency of the total fund.
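
A minimal sketch of this roll-up, using hypothetical IPS weights and a single period of hypothetical benchmark returns, might look like the following. A real implementation would rebalance the blend weights on the same schedule as the total fund and geometrically link the blended returns through time.

```python
# Hypothetical strategic targets from the IPS and one period of benchmark returns.
ips_weights = {
    "Large Cap Equity": 0.30,
    "Mid Cap Equity":   0.10,
    "Small Cap Equity": 0.10,
    "Fixed Income":     0.35,
    "Private Equity":   0.15,
}

benchmark_returns = {
    "Large Cap Equity": 0.062,   # e.g., a large-cap index such as the S&P 500
    "Mid Cap Equity":   0.054,
    "Small Cap Equity": 0.047,
    "Fixed Income":     0.018,   # e.g., Bloomberg U.S. Aggregate Bond Index
    "Private Equity":   0.081,   # e.g., lagged public index plus a premium
}

def blended_benchmark(weights, returns):
    """Weighted-average benchmark return for one period, checking the weights sum to 100%."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "IPS weights must sum to 100%"
    return sum(weights[k] * returns[k] for k in weights)

# Roll up a public-equity sub-blend and the total fund benchmark.
equity_keys = ["Large Cap Equity", "Mid Cap Equity", "Small Cap Equity"]
equity_weight = sum(ips_weights[k] for k in equity_keys)
equity_blend = sum(ips_weights[k] / equity_weight * benchmark_returns[k] for k in equity_keys)

print(f"Public equity blend: {equity_blend:.2%}")
print(f"Total fund benchmark: {blended_benchmark(ips_weights, benchmark_returns):.2%}")
```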

What If There's No Market Benchmark?

In some cases, especially for highly customized or opportunistic strategies like hedge funds, there simply may not be a meaningful market index to use as a benchmark. In these cases, it is important to consider what hurdle would indicate success for this segment of the total fund. Examples of what some asset owners use include:

  • CPI + Premium – a simple inflation-based hurdle
  • Absolute return targets – such as a flat 7% annually
  • Total Fund return for the asset class – not helpful for evaluating the performance of this segment, but still useful for aggregation to create the total fund benchmark

While these aren’t perfect, they still serve an important function: they allow performance to be rolled into a total fund benchmark, even if the asset class itself is difficult to benchmark directly.

The Bottom Line: Better Benchmarks, Better Oversight

For public pension boards and endowment committees, benchmarks are essential for effective fiduciary oversight. A well-designed benchmark framework:

  • Reflects your strategic intent
  • Provides fair, consistent measurement of manager performance
  • Supports clear communication with stakeholders

At Longs Peak Advisory Services, we’ve worked with asset owners around the globe to develop custom benchmarking frameworks that align with their policies and support meaningful performance evaluation. If you’re unsure whether your current benchmarks are doing your IPS justice, we’re here to help you refine them.

Want to dig deeper? Let’s talk about how to tailor a benchmark framework that’s right for your total fund—and your fiduciary responsibilities. Reach out to us today.