Quality Control: How to check for errors in your investment performance

Sean P. Gilligan, CFA, CPA, CIPM
Managing Partner
May 5, 2021
15 min

Recent investment performance calculation mistakes at Pennsylvania Public School Employees’ Retirement System (“PSERS”) have highlighted the importance of quality control reviews and raised questions about where risk exists, how these risks can be mitigated, and what role independent verifications should play in the quality control process.

What happened at PSERS?¹

An error in the return calculation for Pennsylvania’s $64 billion state public school employee retirement plan has had serious implications for its beneficiaries and those involved in the calculation mistake.

In 2010, the plan, which was already underfunded, entered into a risk-sharing agreement under which employees hired after 2011 would pay more into the plan if the return (average time-weighted return) over a specified period fell below the actuarial value of assets (AVA) return of 6.36%.

In December 2020, the board announced that the plan had achieved a return of 6.38%, a mere 2 basis points above the minimum threshold. But in March the board changed its tune, announcing that the calculation was incorrect and the 100,000 or so employees hired since 2011 (and their employers) should have actually paid more into the plan.
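
To put that thin margin in perspective, here is a minimal sketch of the kind of threshold test involved, using hypothetical annual time-weighted returns chosen so the result lands just above the threshold, and a simplified annualization rather than PSERS’s actual data or methodology; an error of a few basis points in any input is enough to flip the outcome.

```python
# Minimal sketch of an average-return threshold test, using hypothetical
# annual time-weighted returns (not PSERS's actual data or exact methodology).

def annualized_return(annual_returns):
    """Geometrically link annual returns and annualize over the period."""
    growth = 1.0
    for r in annual_returns:
        growth *= 1.0 + r
    return growth ** (1.0 / len(annual_returns)) - 1.0

threshold = 0.0636  # AVA return the average must meet or exceed

# Hypothetical annual returns for the measurement period
returns = [0.081, -0.021, 0.134, 0.075, 0.052, 0.096, 0.010, 0.088, 0.067]
avg = annualized_return(returns)
print(f"Annualized return: {avg:.2%}  Threshold test: {'pass' if avg >= threshold else 'fail'}")
```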

What’s worse, PSERS also announced that the FBI is investigating the organization, although details of the probe have not yet been released.

According to PSERS, a consultant that had calculated the return came forward and admitted to the calculation error. But the board also said that it is looking into a potential cover-up by its staff. From what we know, at least three independent consultants were involved in providing data used for the calculations, calculating the returns, and verifying the returns. So, with all these experts involved, how could this happen, and what can your firm do to avoid a similar situation?

Key issues to address in an investment performance quality control process

Firms should develop sound quality control processes to help identify errors before results are published. Often these processes either do not exist or are insufficient to identify issues. Following a robust quality control process that considers the key risks involved and then finds ways to mitigate these risks greatly increases the accuracy of presented investment performance.

Although we do not yet know the cause of the errors found in the PSERS case, we can highlight a few primary reasons errors occur in investment performance reporting. Primarily, errors found in published performance results are caused by:

  • Key Issue #1 – Issues in the underlying data (e.g., incorrect or missing prices, unreconciled data, missing transactions, misclassified expenses, or failing to accrue fixed income interest)
  • Key Issue #2 – Mistakes in calculations (e.g., manual calculations that fail to match the intended methodology)
  • Key Issue #3 – Errors in reporting (e.g., publishing numbers that do not match the calculated results)

A robust quality control process should specifically address all three of these areas.

Considerations when designing a robust quality control process

Key Issue #1 – Issues in the underlying data

As they say, garbage in, garbage out. It is important to ask and address questions confirming the validity of data before it is used to calculate performance. Specifically, consider how the data used in the calculations is gathered, prepared, and reconciled before completing the calculations. Is there any formal signoff from the operations team confirming that the data is ready for use? Has a review of the data been conducted by an operations manager prior to this confirmation being made?

While deadlines to get performance published can be tight, taking the time to ensure that the underlying data is final and ready to use before performance is calculated can prevent headaches later on.

The following is a list of issues to look for when testing data validity; a sketch of automating several of these checks follows the list:

  • Outlier performance – Portfolios performing differently than their peers may indicate a data issue or that the portfolio is mislabeled (i.e., tagged to a different strategy than it is invested in).
  • Differences between ending and beginning market values – Generally, we expect a portfolio’s market value at the end of one month and the beginning of the next month to be equal (unless using a system where external cashflows are recorded between months and differences like this are expected). Flagging differences can help identify data issues.
  • Offsetting decrease/increase in market value – Market values that suddenly increase or decrease and then return to the original value may have an incorrect price or transaction that should be researched.
  • Gaps in performance – A portfolio whose performance suddenly stops and then restarts may have missing data.
  • 0% returns – The portfolio may have liquidated and may no longer be under the firm’s discretionary management.
  • Very low market values – The portfolio may have closed and is only holding a small residual balance, which should be excluded from the firm’s discretionary management.
  • Net-of-fee returns higher than gross-of-fee returns – Seeing net returns that are higher than gross returns could indicate a data issue unless there are fee reversals you are aware of (e.g., performance fee accruals where previously accrued fees are adjusted back down).
  • Gross-of-fee returns and net-of-fee returns are equal – If gross-of-fee and net-of-fee returns are always equal for a fee-paying portfolio, it is likely that the management fees are paid from an outside source (paid by check or out of a different portfolio). The returns labeled as net-of-fee in a case like this should be treated as gross-of-fee returns.
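
Below is a minimal sketch of how several of these checks could be automated; the column names, sample data, and thresholds are hypothetical placeholders rather than a prescribed implementation.

```python
# A minimal sketch of automated data-validity flags, assuming monthly portfolio
# data with the (hypothetical) columns shown below.
import pandas as pd

data = pd.DataFrame({
    "portfolio": ["A", "A", "A", "B", "B", "B"],
    "month": pd.to_datetime(["2024-01-31", "2024-02-29", "2024-03-31"] * 2),
    "begin_mv": [1_000_000, 1_020_000, 1_045_000, 500_000, 480_000, 1_200],
    "end_mv":   [1_020_000, 1_041_000, 1_060_000, 480_000, 1_200, 1_210],
    "gross_return": [0.020, 0.021, 0.014, -0.040, 0.000, 0.008],
    "net_return":   [0.019, 0.022, 0.014, -0.041, 0.000, 0.007],
})

flags = []
for pf, grp in data.sort_values("month").groupby("portfolio"):
    # Beginning value should match the prior month's ending value.
    mismatch = grp["begin_mv"].iloc[1:].values != grp["end_mv"].iloc[:-1].values
    if mismatch.any():
        flags.append((pf, "begin/end market value mismatch"))
    if (grp["net_return"] > grp["gross_return"]).any():
        flags.append((pf, "net return exceeds gross return"))
    if (grp["gross_return"] == grp["net_return"]).all():
        flags.append((pf, "gross and net always equal (fees paid externally?)"))
    if (grp["gross_return"] == 0).any():
        flags.append((pf, "0% return (possible liquidation)"))
    if (grp["end_mv"] < 10_000).any():
        flags.append((pf, "very low market value (residual balance?)"))

for pf, issue in flags:
    print(pf, "-", issue)
```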

Key Issue #2 – Mistakes in calculations

Mistakes happen, but there are ways to reduce their frequency and impact. First, you’ll want to consider how manual your performance calculations are as well as the experience of the person completing the calculations.

Let’s face it, Excel is probably the most widely used tool in performance measurement, especially for smaller firms. While many firms likely find Excel to be a user-friendly tool for calculating performance statistics, it has its limitations. Studies have shown that up to 90% of spreadsheets contain errors, and spreadsheets with many formulas are even more likely to contain mistakes. Whether the cause is a formula that was not dragged down properly or a reference to the wrong cell, the fundamental problem is that users do not check their work or follow carefully outlined procedures for confirming accuracy.

Although this may seem obvious, having a second set of eyes on a spreadsheet can save you from the embarrassing headache of having to explain errors in performance calculations. It is even better if this review is a multi-layered process. Having someone review details as well as someone to do a high-level “gut-check” to make sure the calculations and results make sense can reduce this risk. Depending on the size of your firm, this may be easier to accomplish with a third-party consultant, where you serve as a final layer of review.

Having this final “gut-check” can help prevent avoidable errors prior to publication. We find that this final “gut-check” is best performed by someone who knows the strategy intimately rather than a performance or compliance analyst, as these individuals may be too focused on the calculation details to take a step back and consider whether the returns make sense for the strategy and are in line with expectations.

If you use software to calculate performance, you can significantly reduce the risk of manual error, but due diligence should still be performed from time to time to manually prove out the accuracy of the calculations completed in the program. This does not need to be done every time but should be conducted when introducing a new software system and any time changes are made to the program.
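
One way to prove out a system’s results is to independently link sub-period returns between external cash flows and compare the result to the system’s reported figure. The sketch below uses hypothetical valuations, assumes flows occur at the start of each sub-period, and applies a one-basis-point tolerance; your methodology and tolerance may differ.

```python
# A minimal sketch of proving out a system-calculated time-weighted return by
# geometrically linking sub-period returns between external cash flows.
# The valuations, flows, and system return below are hypothetical.

def subperiod_return(begin_mv, end_mv, flow=0.0):
    """Sub-period return; external flows are assumed to occur at the start of
    the sub-period (adjust if your methodology differs)."""
    return end_mv / (begin_mv + flow) - 1.0

# (begin_mv, end_mv, external flow at start of sub-period)
subperiods = [
    (1_000_000, 1_018_000, 0.0),
    (1_018_000, 1_066_500, 50_000.0),   # contribution
    (1_066_500, 1_052_300, -25_000.0),  # withdrawal
]

linked = 1.0
for begin_mv, end_mv, flow in subperiods:
    linked *= 1.0 + subperiod_return(begin_mv, end_mv, flow)
manual_return = linked - 1.0

system_return = 0.0271  # figure reported by the performance system
tolerance = 0.0001      # 1 basis point

diff = abs(manual_return - system_return)
print(f"Manual: {manual_return:.4%}  System: {system_return:.4%}  "
      f"{'OK' if diff <= tolerance else 'INVESTIGATE'}")
```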

Key Issue #3 – Errors in reporting

It may seem silly, but many performance reporting errors come from transposing strategy and benchmark returns in presentations or placing the return of one strategy in the factsheet of another. Therefore, it is important to consider how the final performance figures make it from the system or spreadsheet into the performance presentations. Are they typed? Copied and pasted? Or are the performance reports generated directly out of a system? It is not enough to complete the calculations correctly; the final reports must also be accurate, so adding a step to review this is crucial.
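
This reconciliation step can be as simple as comparing every figure placed in the final report against the calculated source values, including a check for transposed strategy and benchmark returns. The sketch below uses hypothetical strategy names and returns.

```python
# A minimal sketch of a final reconciliation step: compare the figures typed or
# pasted into a factsheet against the calculated source values.
# Names and numbers are hypothetical.

calculated = {
    "Large Cap Growth": {"strategy": 0.0712, "benchmark": 0.0655},
    "Core Fixed Income": {"strategy": 0.0231, "benchmark": 0.0210},
}

factsheet = {
    "Large Cap Growth": {"strategy": 0.0655, "benchmark": 0.0712},  # transposed!
    "Core Fixed Income": {"strategy": 0.0231, "benchmark": 0.0210},
}

for strategy, calc in calculated.items():
    pub = factsheet[strategy]
    for label in ("strategy", "benchmark"):
        if abs(pub[label] - calc[label]) > 1e-6:
            print(f"{strategy}: published {label} return {pub[label]:.2%} "
                  f"does not match calculated {calc[label]:.2%}")
    if pub["strategy"] == calc["benchmark"] and pub["benchmark"] == calc["strategy"]:
        print(f"{strategy}: strategy and benchmark returns appear transposed")
```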

A similar review process to the one described above can really make a difference, but ultimately, understanding the vulnerabilities of your performance reporting will help you design quality control procedures that address any exposure.

Calculations completed by external performance consultants

Whether performance is calculated internally or by a third-party performance consultant, the same key issues should be considered when designing the quality control process. Due diligence should be done on the performance consulting firm to evaluate the level of experience the firm has with calculating investment performance and what kind of quality control process they follow prior to providing results to your firm. This information will help you determine what reliance you can place on their procedures and what your firm should still check internally.

For example, outsourcing performance calculations to an individual or single-person firm likely necessitates a more in-depth review since this individual would not have the ability to have a second set of eyes on the results prior to providing them to your firm. However, even larger performance consulting firms with robust quality control processes may not have intimate knowledge of your strategies, meaning that, at a minimum, a final “gut-check” should be done by your firm prior to publication.

Reliance on independent performance verification firms to find errors

Many firms that hire performance verification firms rely on their verifier to be their quality control check; however, this may not be a good practice for a variety of reasons. If this is a common practice at your firm, you may want to check the scope of your engagement before relying too heavily on your verifier to find errors.

Verification is common for firms that claim compliance with the Global Investment Performance Standards (GIPS®). But even firms that claim compliance with the GIPS standards and receive a firm-wide verification are required to disclose that “…Verification does not provide assurance on the accuracy of any specific performance report.”

This is because verifiers are primarily focused on the existence and implementation of policies and procedures. While their review may help identify errors that exist in the sample selected for testing, it specifically does not certify the accuracy of presented results. While the verification process is valuable and often does turn up errors that need to be corrected, regardless of the scope of your engagement, a robust internal quality control process is likely still warranted.

Firms that are not GIPS compliant may engage verification firms for various types of attestation or review engagements like strategy exams or other non-GIPS performance reviews. In these situations, the scope of the engagement may be customized to meet the needs (and budget) of the firm seeking verification. A clear understanding of exactly what is in-scope and specifically what the verifier is opining on when issuing their report is key.

When the engagement entails a detailed attestation that traces input data back to independent sources, confirms that calculations are carried out consistently, and verifies that published results match the calculations, heavy reliance can be placed on the verifier as part of your quality control process.

Alternatively, when the scope merely consists of a high-level review confirming the appropriateness of the calculation methodology, a much more robust internal quality control process should be applied.

Knowing the scope of the engagement your firm has established with the verification firm is an important element in determining how much reliance can be placed on their review and findings, which can then be incorporated into the design of your own internal quality control procedures.

Key take-aways

Mistakes happen in investment performance reporting, but a robust quality control process can greatly mitigate this risk. Understanding the risks that exist, designing processes to test these risk areas, and understanding the role and engagement scope of all consultants involved are essential to designing a quality control procedure that works for your firm – and hopefully one that will help you avoid situations like what happened at PSERS.

If you are not sure where to begin, we have tools and services available to help. Longs Peak uses proprietary software to calculate and analyze performance. Our software helps flag possible data issues and outlier performers and also produces performance reports directly from our performance system.

In addition, our performance consultants are available to work with your team to help identify potential vulnerabilities in your performance reporting process and can help you develop better quality control procedures, where needed.

Questions?

If you would like to learn more about our quality control process or any of the services we offer (like data and outlier testing) to help improve the accuracy and reliability of investment performance, contact us or email Sean Gilligan directly at sean@longspeakadvisory.com.

¹ For more information on PSERS, please see this article from the Philadelphia Inquirer.


Mission-driven institutions are entrusted with something larger than capital. They are entrusted with purpose.

Endowments, foundations, and long-term investment pools exist to support education, healthcare, research, environmental initiatives, religious or cultural programs, community development, and countless other causes—often for generations.

That long-term horizon changes how investment performance should be reported. Because when an institution thinks in decades instead of quarters, investment performance is not just about what happened recently; it is about whether the portfolio is structured to sustain spending, preserve purchasing power, and remain aligned with its mission through full market cycles.

Many institutions rely entirely on their investment managers to calculate and present investment performance. That’s common, but it’s not always sufficient.

Performance Oversight Is Not the Same as Performance Results

Investment managers are responsible for generating returns. Boards and oversight committees are responsible for evaluating those results.

Those responsibilities are distinct.

Oversight is a fiduciary duty. It is not passive, and it cannot rely solely on the information created by the party being evaluated. Effective oversight requires independence, consistency, and clarity.

When the same party both manages assets and determines how performance is calculated and presented, the lines between management and oversight can blur—even when intentions are sound and calculations are technically accurate.

In some situations, reporting may not be:

  • Consistent across managers
  • Based on uniform calculation methodologies
  • Presented in a format designed for governance review
  • Structured to facilitate long-term policy evaluation

Consider a board reviewing results from three different managers. Each reports strong performance, but one calculates returns net-of-fees, another presents gross results, and a third uses slightly different valuation timing.

At first glance, the numbers appear comparable. In reality, they may not be measuring the same thing.
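
One way to illustrate the problem is to restate each manager’s reported return on a common net-of-fee basis before comparing them. The sketch below uses hypothetical managers, fees, and returns, and it deliberately ignores valuation-timing differences, which also need to be resolved.

```python
# A minimal sketch of putting manager-reported returns on a common net-of-fee
# basis before comparing them. Manager names, fees, and returns are hypothetical.

managers = [
    {"name": "Manager A", "reported": 0.082, "basis": "net",   "annual_fee": 0.0075},
    {"name": "Manager B", "reported": 0.091, "basis": "gross", "annual_fee": 0.0100},
    {"name": "Manager C", "reported": 0.088, "basis": "gross", "annual_fee": 0.0060},
]

for m in managers:
    if m["basis"] == "net":
        net = m["reported"]
    else:
        # Approximate net return by deducting the annual fee geometrically.
        net = (1.0 + m["reported"]) * (1.0 - m["annual_fee"]) - 1.0
    print(f"{m['name']}: reported {m['reported']:.2%} ({m['basis']}), "
          f"comparable net {net:.2%}")
```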

Some larger institutions maintain internal performance teams or engage independent performance professionals to standardize reporting, organize data across managers, and present results in accordance with established best practices—often aligning reporting with their Investment Policy Statement and/or recognized frameworks such as the Global Investment Performance Standards (GIPS® standards).

But many of these organizations operate lean. They may not have dedicated performance measurement expertise or the infrastructure required to consolidate, normalize, and present results in a governance-ready format.

In those cases, boards are often reviewing manager-produced materials that were designed primarily for client communication—not institutional oversight. Performance reporting for these institutions should be designed to serve the governing body—not simply to showcase results.

Why This Matters for Mission-Based Institutions

Boards of endowments and foundations are often composed of dedicated volunteers, philanthropists, community leaders, and subject-matter experts. They bring vision, experience, and commitment to the institution’s mission—but not always a deep understanding of investment management and reporting.

That makes investment performance clarity essential. When reporting is unclear, oversight weakens—not because trustees lack commitment, but because the information is not presented in a way that supports meaningful evaluation.

When reporting is structured and tied directly to policy benchmarks, risk parameters, and spending objectives, trustees know what questions to ask. Conversations remain focused on long-term sustainability and mission impact.

A Practical Framework for Strong Performance Reporting

Boards of mission-driven institutions are often operating at the governance level and should evaluate their reporting structure against four questions:

1. Is performance calculated independently?

Independent calculation or oversight reduces potential conflicts and strengthens fiduciary governance. In institutional investing, separating portfolio management from performance oversight is widely viewed as a best practice.

2. Is the methodology consistent across managers?

Multi-manager portfolios require uniform return calculation, fee treatment, and valuation policies to ensure comparability. Without consistency, “relative performance” becomes difficult to interpret.

One practical way institutions address this challenge is by complying with and requiring their managers to comply with the GIPS® standards.

The GIPS standards are a globally recognized framework administered by CFA Institute designed to promote fair representation and full disclosure in the calculation and presentation of investment performance.

Endowments and foundations that adopt the GIPS standards for their own performance calculations—and require the same of the managers they hire—send a powerful message to their boards and stakeholders that the institution is committed to transparency in how results are calculated and presented.  

3. Is reporting aligned with policy benchmarks?

Boards should see performance relative to long-term policy objectives, not just absolute returns. And this information should be shown at the level at which it is managed. Simply reporting that “the portfolio returned 8%” does not answer the real governance question.

A portfolio can have a positive year and still fail to meet its strategic role within the overall allocation.

For example:

  • Did the equity allocation meet its return objective relative to its benchmark?
  • Did the diversifying strategies provide the downside protection they were intended to deliver?
  • Did fixed income serve its role as a stabilizer?
  • Did alternative investments justify their complexity and liquidity constraints?

Even if the overall portfolio met its expected return, boards should understand how it got there. Reviewing performance by allocation allows boards to evaluate whether each segment is fulfilling its mandate, not just whether the total return looks acceptable.

When reported this way, it becomes easier to see where the portfolio is meeting expectations and where it may be falling short.
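
A simple illustration of this kind of allocation-level reporting follows; the sleeve names, weights, and returns are hypothetical, and the weighted totals assume static weights for the period.

```python
# A minimal sketch of governance-level reporting by allocation: compare each
# sleeve's return to its policy benchmark rather than looking only at the total.
# Sleeve names, weights, and returns are hypothetical; weights are assumed
# constant for the period (a simplification).

sleeves = [
    {"name": "Public Equity",   "weight": 0.55, "return": 0.104, "benchmark": 0.112},
    {"name": "Fixed Income",    "weight": 0.25, "return": 0.038, "benchmark": 0.031},
    {"name": "Diversifiers",    "weight": 0.10, "return": 0.012, "benchmark": 0.025},
    {"name": "Private Markets", "weight": 0.10, "return": 0.143, "benchmark": 0.120},
]

total_return = sum(s["weight"] * s["return"] for s in sleeves)
total_benchmark = sum(s["weight"] * s["benchmark"] for s in sleeves)

print(f"Total portfolio: {total_return:.2%} vs policy benchmark {total_benchmark:.2%}")
for s in sleeves:
    excess = s["return"] - s["benchmark"]
    status = "meeting mandate" if excess >= 0 else "falling short"
    print(f"  {s['name']}: {s['return']:.2%} vs {s['benchmark']:.2%} "
          f"({excess:+.2%}, {status})")
```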

4. Is communication designed for governance?

Once performance is aligned to policy benchmarks, reporting should help trustees interpret what the results mean without requiring them to operate at the manager or security-selection level.

Reports should help answer key questions:

  • Are we meeting long-term objectives?
  • How are managers performing relative to their mandates?
  • Is risk aligned with the investment policy?
  • Are we preserving capital appropriately given our spending needs?
  • Did managers follow investment guidelines that align with our institution’s mission?

If any of these areas underperform, governance-level reporting should prompt clear, high-level discussion: Why did this occur? Was the result consistent with expectations? What steps, if any, are being considered to address issues going forward? If shortfalls persist, boards may need to evaluate whether the strategy or manager remains appropriate.

This kind of oversight strengthens outcomes by reinforcing accountability. Performance reporting should be communicated in plain language and should distill complex data into clear, actionable insights. When this occurs, it enables boards to move from procedural review toward informed, effective governance.

From Calculation to Communication

Accurate returns are the starting point. Clear communication is the outcome.

When performance calculation, oversight, and presentation are thoughtfully structured, board discussions become more strategic and less reactive. Boards gain confidence in their oversight, managers operate within clearer expectations, and the institution stays focused on its purpose.

A Closing Thought

Mission-driven institutions think in decades, not quarters. Their performance reporting should reflect that same discipline. Investment oversight is not just about generating returns; it is about ensuring those returns are measured, understood, and aligned with the institution’s long-term purpose.

Clear reporting strengthens governance.
Strong governance protects sustainability.
And sustainability protects the mission.

If you’ve been around the Global Investment Performance Standards (GIPS®) long enough, you know that governance is one of those topics everyone agrees is important, but far fewer firms can clearly explain what good governance with the GIPS standards actually looks like day to day.

Most firms don’t fail at GIPS compliance because they misunderstand a technical requirement. They struggle because ownership is unclear, decisions are informal, or key knowledge lives in one person’s head. When that person leaves (or when the firm grows) things start to break.

So, let’s simplify this.

Below is a practical, real-world view of what good governance looks like when complying with the GIPS standards—not in theory, not in a policy document that no one reads, but in how well-run firms actually operate.

Start with the Right Mindset: Governance Is About Sustainability

At its core, GIPS compliance exists to answer one question:

Can this firm consistently calculate, maintain, and present performance fairly and accurately—regardless of growth, staff changes, or market stress?

The GIPS standards are built on the principles of fair representation and full disclosure, but governance is what turns those principles into repeatable behavior. Good governance doesn’t mean more paperwork or compliance headaches. It means clear accountability, documented decisions, and controls that actually get used.

1. Clear Ownership (It’s Rarely Just One Person)

One of the most common governance risks we see is a “GIPS compliance department of one” where critical knowledge, decisions, and processes are concentrated with a single individual. While this can work in the short term, it creates challenges around continuity, oversight, and scalability as the firm grows or changes.

Good governance starts by clearly defining:

  • Who owns GIPS compliance overall
  • Who performs monthly/quarterly/annual tasks
  • Who reviews and approves key inputs/outputs
  • Who resolves judgment calls
  • Who ensures the program also complies with other relevant regulations

In practice, this often looks like:

  • A GIPS compliance committee or designated governance group
  • Representation from performance, compliance, operations, and senior management
  • Defined escalation paths for gray areas (e.g., discretion, composite changes, error corrections)

When a firm isn’t large enough to support a formal committee, outsourcing to a GIPS compliance consultant or a provider of managed services can be an effective alternative. These individuals can help you design policies, create procedures, and essentially manage governance for you.

But even if you are big enough, having an independent third party on your GIPS compliance committee can provide an objective, well-informed perspective formed by experience across many firms and a deep understanding of what works well in practice.

2. Policies and Procedures That Reflect Reality

Every GIPS compliant firm has GIPS standards policies and procedures (GIPS standards P&P). Well-governed firms actually use them.

Strong GIPS compliance governance means your GIPS standards P&P:

  • Include procedures your firm actually follows instead of only stating policies
  • Reflect how performance is really calculated
  • Clearly document firm-specific elections and judgments
  • Are updated when the business changes (for new products, systems, asset classes)

 

Think of your GIPS standards P&P as the firm’s operating manual for performance, not a static compliance artifact. If someone new joined your performance team tomorrow, they should be able to follow your policies and procedures to calculate performance and arrive at the same results. If not, governance needs work.

3. Formalized Review and Oversight

Good governance includes independent review, even if it’s internal.

In practice, this often means (one such check is sketched in code after this list):

  • Secondary review of composite membership decisions
  • Review of significant cash flow thresholds and discretion determinations
  • Approval of new composites and composite definition changes
  • Oversight of error identification and correction
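
As one concrete example, the sketch below flags accounts whose external cash flows exceed a hypothetical significant cash flow threshold so a reviewer can confirm the composite treatment; the threshold and data are placeholders, not a recommended policy.

```python
# A minimal sketch of one such control: flag composite accounts whose external
# cash flow exceeds the firm's significant cash flow threshold for the period so
# a reviewer can confirm the composite-membership treatment. The 10% threshold
# and account data are hypothetical.

threshold = 0.10  # significant cash flow threshold as % of beginning market value

accounts = [
    {"account": "1001", "begin_mv": 2_500_000, "net_external_flow": 400_000},
    {"account": "1002", "begin_mv": 1_200_000, "net_external_flow": -50_000},
    {"account": "1003", "begin_mv": 800_000,   "net_external_flow": 90_000},
]

for a in accounts:
    ratio = abs(a["net_external_flow"]) / a["begin_mv"]
    if ratio > threshold:
        print(f"Account {a['account']}: flow is {ratio:.1%} of beginning value "
              f"- review composite treatment per policy")
```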

 

This is where governance protects firms from subtle but costly mistakes, especially those that show up during verification and increase the complexity and scope of these engagements. In an ideal situation, these internal reviews should catch issues before they become problems.

As a provider of managed services, Longs Peak helps firms identify performance outliers, accounts that are breaking composite rules, and other data anomalies. This review significantly reduces the risk of erroneous data ending up in your performance results and later being caught in verification. If you are not able to do this internally, we strongly recommend outsourcing this effort.

4. Governance Extends to Marketing and Distribution

One area that has become increasingly important is the intersection of GIPS compliance, the SEC marketing rule, and how you manage the distribution of marketing materials.

Well-governed firms:

  • Control who can distribute GIPS Reports and how they are distributed
  • Ensure Marketing understands what is and is not an advertisement that meets the requirements of the GIPS standards
  • Coordinate GIPS compliance requirements with broader regulatory rules, including the SEC marketing rule
  • Have a clear process for tracking distribution

 

This alignment helps firms avoid inconsistencies between factsheets, pitchbooks, and GIPS Reports—one of the fastest ways to lose credibility with prospects and regulators.

Some clients prefer not to mention GIPS compliance at all in their marketing (i.e., on their factsheets and pitchbooks) until a client is clearly interested in one of their strategies. Once someone meets the definition of a prospect (as outlined in your GIPS standards P&P), the requirement to deliver a GIPS Report is triggered, and these firms find the smaller list of prospects easier to maintain. For others, having everything in one document, including required GIPS compliance information and disclosures, is easier to manage than maintaining separate documents.

There is no “right” way to manage this, but in either case, having a clear process for tracking how these materials are distributed is key.

5. Documentation of Decisions (Not Just Results)

Here’s a subtle but critical point: Good governance for your GIPS compliance program documents decisions, not just outcomes.

Why was that composite redefined?
Why was this benchmark changed?
Why was this model fee selected?

Strong governance creates an audit trail that:

  • Supports sound reasoning (which aids in the verification process or even regulatory exams later on)
  • Reduces key person risk
  • Makes future reviews faster and less stressful

 

This is especially valuable when firms grow, merge, or experience turnover. Clear documentation allows others to step in seamlessly and continue critical functions without disruption. More importantly, it enables independent parties, such as a regulator or your verifier, to understand, assess, and validate how you are calculating and presenting performance that may not be immediately intuitive.

6. Governance Is Ongoing, Not a One-Time Project

The best-governed firms don’t “set and forget” their GIPS compliance program. They revisit governance when:

  • New strategies launch
  • Systems or custodians change
  • Regulations evolve
  • The firm’s structure changes

In other words, governance evolves with the business—because performance reporting doesn’t exist in a vacuum.

Even for firms that are not regularly launching new strategies or changing systems or structure, an annual review of your GIPS compliance program and governance framework is critical. This review helps confirm that practices have remained consistent, while also providing an opportunity to reflect on whether you are satisfied with your verifier, assess whether new regulations require updates, and reconsider how composites are managed or described.

The best time to do this is at year-end so that if you decide something should be changed, you can do that proactively for the upcoming year, rather than having to fix it retroactively.

What Good GIPS Compliance Governance Really Buys You

When GIPS compliance governance is working well, firms experience:

  • A structured, intentional process for validation of your performance results
  • A framework that supports consistency and transparency over time
  • Fewer surprises or last-minute scrambles during verification or regulatory review
  • Greater confidence from regulators and verifiers that you are following established policies and procedures
  • Lower operational and reputational risk

 

Most importantly, it creates trust internally and externally. Good GIPS compliance governance isn’t about being perfect. It’s about being intentional.

Clear ownership. Thoughtful documentation. Real oversight. Those are the firms that don’t just claim compliance, they live it.

Why “Net” Is Not a One-Size-Fits-All Answer

If you’ve worked in the investment industry, you’ve probably heard some version of this question:

“Should we show net or gross performance—or both?”

On the surface, the answer seems straightforward. The rules tell us what’s required. Compliance boxes get checked. End of story.

But in practice, presenting net and gross performance is rarely that simple.

How you calculate it, how you present it, and how you disclose it can materially change how investors interpret your results. This article goes beyond the rulebook to explore the practical considerations firms face when deciding how to present net and gross returns in a manner that is clear, helpful, and in compliance with requirements.

Let’s Start with the Basics (Briefly)

At a high level, for separate account strategies:

  • Gross performance reflects returns before investment management fees
  • Net performance reflects returns after investment management fees have been deducted

Both gross and net performance are typically net of transaction costs, but gross of administrative fees and expenses. When dealing with pooled funds, net performance is also reduced by administrative fees and expenses, but here we are focused on separate account strategies, typically marketed as composite performance.
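
As a simple illustration of that relationship, the sketch below deducts a hypothetical asset-based management fee from hypothetical monthly gross returns (one common model-fee approach); actual net calculations depend on how and when fees are charged.

```python
# A minimal sketch of the gross/net relationship for a separate account: gross
# returns are reduced by the investment management fee (applied monthly here),
# while both remain net of transaction costs. Figures are hypothetical.

monthly_gross = [0.012, -0.004, 0.018, 0.007, 0.011, -0.002,
                 0.009, 0.015, -0.006, 0.010, 0.013, 0.008]
annual_fee = 0.0075          # 75 bps asset-based management fee (hypothetical)
monthly_fee = annual_fee / 12

gross_growth = 1.0
net_growth = 1.0
for r in monthly_gross:
    gross_growth *= 1.0 + r
    net_growth *= (1.0 + r) * (1.0 - monthly_fee)

print(f"Gross return: {gross_growth - 1:.2%}")
print(f"Net return:   {net_growth - 1:.2%}")
print(f"Cost of management: {gross_growth - net_growth:.2%} of beginning value")
```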

Simple enough. But that definition alone doesn’t tell the full story—and it’s where many misunderstandings begin.

Why Net Performance Is the Investor’s Reality

From an investor’s perspective, net performance is what actually matters. It represents the return they keep after paying the manager for active management.

That’s why modern regulations and best practices increasingly emphasize net returns. Investors don’t experience gross returns. They experience net outcomes.

And let’s be honest: if an investor chooses an active manager instead of a low-cost index fund or ETF tracking the same benchmark, the expectation is that the active approach should deliver something extra—after fees. Otherwise, it becomes difficult to justify paying for that active management.

Why Gross Performance Still Has a Role

If net returns are what investors actually receive, why do firms still talk about gross performance at all?

Because gross performance tells a different, but complementary, story: what the strategy is capable of before fees, and what investors are paying for that capability.

The gap between gross and net returns represents the cost of active management. Put differently, it answers a question investors are implicitly asking:

How much return am I giving up in exchange for this manager’s expertise?

Viewed this way, gross returns help investors assess:

  • Whether the strategy is adding value before fees
  • How much of the performance is driven by skill: security selection, asset allocation or portfolio construction
  • Whether fees are the primary drag—or whether the strategy itself is struggling

When gross and net returns are shown together, they create transparency around both skill and cost. When shown without context, they can easily obscure the economic tradeoff.

Gross-of-fee returns are also most important when marketing to institutional investors, who often have the power to negotiate the fee they will pay and know they will likely pay a lower fee than most of your clients have paid in the past. Their detailed analysis is more accurate when it starts with your gross-of-fee returns and adjusts for the fee they expect to negotiate, rather than relying on net-of-fee returns based on fees charged historically.

The Real-World Gray Areas Firms Struggle With

How to Present Gross Returns

Gross returns are pretty straightforward. They are typically calculated before investment management or advisory fees but after transaction costs such as commissions and spreads.

For firms that comply with the GIPS® Standards, things can get more nuanced—particularly for bundled fee arrangements. In those cases, firms must make reasonable allocations to separate transaction costs from the bundled fee. But if that separation cannot be done reliably, gross returns must be reduced by the entire bundled fee. [1]

Once you move from gross to net returns, however, the conversation becomes less straightforward. We’ve had managers question, “Why show net performance at all?” This is especially the case when fees vary across clients or historical fees no longer reflect what an investor would pay today. Others complain that the “benchmark isn’t net-of-fees,” making net-of-fee comparisons inherently imperfect. These concerns highlight why presenting net returns isn’t just a mechanical exercise. In the sections that follow, we’ll unpack these challenges and walk through how to present net-of-fee performance in a way that remains meaningful, transparent, and fit for its intended audience.

How to Present Net Returns

This is where judgment and documentation matter most.

Not all “net” returns are created equal. Even under the SEC Marketing Rule, there is no single mandated definition of net performance—only a requirement that net performance be presented. Under the GIPS Standards, net-of-fee returns must be reduced by investment management fees.

In practice, firms may deduct:

  • Advisory fees (asset-based investment management fees)
  • Performance-based fees
  • Custody fees
  • Transaction costs

Two net-return series can look comparable on the surface while reflecting very different assumptions underneath. This lack of transparency is one of the main reasons institutional investors often require managers to be GIPS compliant—it simplifies comparison by requiring consistency in the assumptions used and how they are presented, along with additional disclosure when more fees are deducted in the calculation than required.

And context matters. A higher fee may be perfectly reasonable if it reflects broader services such as tax or financial planning, holistic portfolio construction, or access to specialized strategies. The problem isn’t the fee itself; it’s failing to use a fee scenario that is relevant to the user of the report.

Deciding Between Actual vs Model Fees

The next hurdle is deciding whether to use actual fees or a model fee when calculating net returns. Historically, firms most often relied on actual fees, viewing them as the best representation of what clients actually experienced. But that approach raises an important question: are those historical fees still relevant to what an investor would pay today? If the answer is no, a model fee may provide a more representative picture of current expected outcomes. Under the SEC marketing rule, there are cases where firms are required to use a model fee when the anticipated fee is higher than actual fees charged.

This consideration becomes even more important for strategies or composites that include accounts paying little or no fee at all. While the GIPS Standards and the SEC Marketing Rule are not perfectly aligned on this topic, they agree in principle—net performance should be meaningful, not misleading, and should reflect what an actual fee-paying investor would reasonably expect to pay. Thus, many firms opt to present model fee performance to avoid violating the marketing rule’s general prohibitions. [2]

Additional SEC guidance published on Jan 15, 2026 on the Use of Model Fees reinforced that the decision to use model vs actual fees is context-dependent. While the marketing rule allows net performance to be calculated using either actual or model fees, there are cases where the use of actual fees may be misleading. The SEC emphasized flexibility and that while both fee types are allowed, what’s appropriate depends on the facts and circumstances of the situation, including the clarity of disclosures and how fee assumptions are explained.

Which Model Fee Should Be Used?

Most firms offer multiple fee structures, typically based on account size, but sometimes also on investor type (institutional versus retail clients). That variability makes fee selection a key decision when presenting net performance.

If you plan to use a single performance document for broad or mass marketing, best practice—and what the SEC Marketing Rule effectively requires—is to calculate net returns using the highest anticipated fee that could reasonably apply to the intended audience. This helps ensure the presentation is not misleading by overstating what an investor might take home.
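
As a rough illustration, the sketch below selects the highest fee from a hypothetical tiered schedule and applies it to hypothetical gross returns to produce model-fee net performance; the schedule, returns, and geometric fee treatment are assumptions, not a prescribed method.

```python
# A minimal sketch of selecting the model fee for a broadly distributed
# presentation: use the highest fee that could reasonably apply to the intended
# audience. The fee schedule and gross returns below are hypothetical.

fee_schedule = {          # annual fee by account size tier
    "under_1m": 0.0100,
    "1m_to_10m": 0.0080,
    "over_10m": 0.0060,
}

model_fee = max(fee_schedule.values())   # highest anticipated fee for mass marketing

annual_gross = [0.091, -0.034, 0.127, 0.058]
annual_net = [(1.0 + r) * (1.0 - model_fee) - 1.0 for r in annual_gross]

for year, (g, n) in enumerate(zip(annual_gross, annual_net), start=1):
    print(f"Year {year}: gross {g:.2%}  model-fee net {n:.2%}")
```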

A common pushback is: “But the highest fee isn’t relevant to this type of investor.” And that may be true. In those cases, firms have a few defensible options:

  • Create separate versions of the presentation tailored to different investor types, or
  • Present multiple fee tiers within the same document, clearly explaining what each tier represents

Either approach can work—but only if disclosures are explicit and easy to understand. When multiple fee structures are shown, clarity isn’t optional; it’s essential.

In practice, many firms maintain separate retail and institutional versions of factsheets or pitchbooks. That approach is perfectly reasonable, but it comes with operational risk. If this becomes standard practice, firms need strong internal controls to ensure the right presentation reaches the right audience. That means:

  • Clear internal policies
  • Consistent naming and version control
  • Training marketing and sales teams on when each version may be used

This often requires coordination between marketing and compliance, because getting the fee right is only part of the equation. Making sure the presentation is used appropriately is just as important to ensuring net performance remains meaningful, compliant, and credible.

Which Statistics Can Be Shown Gross-of-Fees?

Since the introduction of the SEC Marketing Rule, there has been significant debate about whether all statistics must be presented net-of-fees—or whether certain metrics can still be shown gross-of-fees. Helpful clarity arrived in an SEC FAQ released on March 19, 2025, which confirmed that not all portfolio characteristics need to be presented net-of-fees. The examples cited included risk statistics such as the Sharpe and Sortino ratios, attribution results, and similar metrics that are often calculated gross-of-fees to avoid the “noise” introduced by fee deductions.

The staff acknowledged that presenting some of these characteristics net-of-fees may be impractical or even misleading. As long as firms prominently present the portfolio’s total gross and net performance in compliance with the rule (i.e., for the prescribed 1-, 5-, and 10-year time periods), clearly label these characteristics as gross, and explain how they are calculated, the SEC indicated it would generally not recommend enforcement action.
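
As a simple illustration of that approach, the sketch below computes an annualized Sharpe ratio from hypothetical gross-of-fee monthly returns and labels it as gross; the figures and the monthly risk-free rate are placeholders.

```python
# A minimal sketch: a risk statistic (Sharpe ratio) computed from gross-of-fee
# returns and labeled as such, while total performance is still presented both
# gross and net elsewhere in the report. Figures are hypothetical.
import statistics

monthly_gross = [0.014, -0.006, 0.011, 0.009, -0.012, 0.017,
                 0.008, 0.005, -0.003, 0.012, 0.010, 0.004]
monthly_rf = 0.003  # hypothetical risk-free rate per month

excess = [r - monthly_rf for r in monthly_gross]
sharpe_monthly = statistics.mean(excess) / statistics.stdev(excess)
sharpe_annualized = sharpe_monthly * (12 ** 0.5)

print(f"Sharpe ratio (gross-of-fees, annualized): {sharpe_annualized:.2f}")
```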

Bringing it all Together

On paper, presenting net and gross performance should be a straightforward exercise.

In reality, layers of regulation, evolving expectations, and heightened scrutiny have made it feel far more complicated than it needs to be. But complexity doesn’t have to lead to confusion.

When firms are clear about:

  • Who they are communicating with,
  • What that audience expects,
  • What the performance is intended to represent, and
  • Why certain assumptions were chosen

…the decisions around what gets presented become far more manageable.

Net returns aren’t about finding a single “correct” number. They’re about telling an honest, well-documented story. And when that story is clear, investors don’t just understand the performance—they trust it.

[1] 2020 GIPS® Standards for Firms, Section 2: Input Data and Calculation Methodology (gross-of-fees returns and treatment of transaction costs, including bundled fees).

[2] See SEC Marketing Rule 206(4)-1(a), footnote 590, as well as the updated SEC FAQ from January 15, 2026. Available at: https://www.sec.gov/rules-regulations/staff-guidance/division-investment-management-frequently-asked-questions/marketing-compliance-frequently-asked-questions