Investment Performance Outlier Testing

Sean P. Gilligan, CFA, CPA, CIPM
Managing Partner
January 29, 2020

For any firm that aggregates portfolios of the same strategy into a composite, or otherwise groups portfolios by mandate, how do you know that each portfolio truly follows that strategy? The answer is outlier testing.

Why Utilize Composites?

The GIPS standards require firms managing separate accounts to construct composites, which aggregate all discretionary portfolios of the same strategy. However, even for firms that are not GIPS compliant, the use of composites is considered best practice when reporting investment performance to prospective clients. Composites offer a more complete picture than presenting performance of a model or “representative portfolio” – which usually leaves prospects wondering whether the information is truly representative or whether the portfolio presented was “cherry picked.”

When creating and maintaining composites, firms must ensure that portfolios are included in the correct composite for the right time period – the period for which you had full discretion to implement the composite strategy for that portfolio. This is achieved by following a clearly documented set of policies and procedures for composite inclusion and exclusion. However, what happens when changes are made to a portfolio and those changes are not communicated to the person maintaining the composite?

In an ideal world, information in your firm would flow perfectly so that the person maintaining your composites knows exactly what is happening with the firm’s clients. In reality, client requests commonly result in small or temporary changes to the portfolio (e.g., halt trading, raise cash) that are not formally documented in the client’s investment guidelines or investment policy statement.

Without formal documentation of these changes, information may not flow down to the manager of your composites. While these minor or temporary changes may not affect the client’s long-term objectives, they may cause the portfolio to deviate from the strategy, requiring (at least temporary) removal from its composite. When these restricted portfolios are left in the composite, they often become performance outliers and create “noise” in the composite results. This “noise” prevents the composite from providing a meaningful representation of the portfolio manager’s ability to implement the strategy. This will also interfere with your prospective clients’ ability to analyze and interpret your performance results.

Why test for performance outliers?

Testing for performance outliers prior to finalizing and publishing performance results can help your firm remove this “noise” and can prevent costly errors in performance presentations. Firms that lack adequate composite construction policies and controls to ensure the policies are consistently followed often end up with errors in their composite presentations. In fact, it is very likely that errors in your performance exist. It is rare for us at Longs Peak to conduct an outlier analysis where no issues are found. Outlier testing should be completed quarterly and, at a minimum, before any related verification or performance examination.

Many firms, especially those that are GIPS compliant, rely on their verifier to catch errors in their composites. We do not recommend this and suggest firms perform testing internally (or with the help of a performance consultant like Longs Peak) because:

  1. Verifiers only test a sample and will likely not catch all of your issues.
  2. Verification may happen months after the performance has been published. When errors are found, it may require redistribution of presentations with disclosures regarding prior performance errors.
  3. When verifiers find errors, they generally increase their sample size as well as their assessment of engagement risk. These two things lead to more time spent on the verification and a potential increase in your verification fee.

Even if not GIPS compliant, when firms use composites, regulators may test to ensure the composites are a meaningful representation of the strategy. In addition to improving accuracy, testing for performance outliers can help your firm’s composites meet the standards expected by regulators.

How can performance outliers be identified?

Testing for performance outliers involves reviewing the performance of portfolios within the same composite or strategy to test if they are performing similarly. This testing allows you to flag any portfolios that may be performing differently so you can evaluate if their inclusion in the composite is appropriate.

For example, if your firm has a Large Cap Growth composite, testing performance outliers would involve compiling the return data for all of your Large Cap Growth portfolios, identifying which portfolios performed materially different from their peers, researching why they performed differently, and then taking the appropriate action if an issue is discovered. This may sound like a daunting task, but it doesn’t have to be. Let us walk you through this in more detail.

Some firms simply look at the absolute difference between each portfolio’s monthly return and the monthly return of the composite. While this may be straightforward, relying only on the absolute difference to determine outliers does not take into consideration the size of the return and the normal distribution of portfolio returns in the composite. For example, if you set a threshold to look at all portfolios that deviate from the composite return by 50bps, the result for a composite with low dispersion and a total return of 2% would be very different than for a composite with higher dispersion and a total return of 20%.

In the outlier analysis Longs Peak conducts for clients, we use standard deviation in conjunction with a comparison of the absolute differences to identify the outlier portfolios that require review. Utilizing standard deviation allows us to identify portfolios that are truly outside the normal distribution of returns for each period. For example, reviewing all portfolios that are more than 3 standard deviations from the composite mean will provide the portfolios outside the normal distribution of returns for that period, regardless of the size of the return or the level of dispersion in that composite.
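To make the standard-deviation screen concrete, here is a minimal Python sketch of this type of test. The portfolio IDs, return values, and 3-standard-deviation threshold are illustrative assumptions, not Longs Peak’s actual methodology:

```python
import statistics

def flag_outliers(returns, threshold=3.0):
    """Flag portfolios whose return for the period is more than
    `threshold` sample standard deviations from the mean return of
    all portfolios in the composite.

    `returns` maps portfolio ID -> return for the period (decimal form).
    Returns a dict of portfolio ID -> z-score for flagged portfolios.
    """
    mean = statistics.mean(returns.values())
    stdev = statistics.stdev(returns.values())  # sample standard deviation
    return {
        pid: (r - mean) / stdev
        for pid, r in returns.items()
        if stdev and abs(r - mean) / stdev > threshold
    }

# Hypothetical data: 20 portfolios clustered near a 2% monthly return,
# plus one portfolio that deviates sharply from its peers
monthly = {f"P{i:02d}": 0.018 + 0.0004 * i for i in range(20)}
monthly["P20"] = -0.045

print(flag_outliers(monthly))  # only P20 is flagged
```

Note that with very small composites a 3-standard-deviation threshold can never trigger (the outlier itself inflates the sample standard deviation), which is one reason a lower threshold or an absolute-difference comparison is useful alongside this screen.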

What to consider when reviewing outlier performance

The severity of the outlier

The larger the outlier, the more likely it is that the portfolio has an issue that would require it to be removed from the composite. We typically start by looking at the most extreme outliers first. Generally, we look at portfolios with performance periods flagged at more than +/-3 standard deviations from the mean return for the period. By addressing these first (including removing them if it is determined they do not belong in the composite), we are able to re-run the outlier test to assess what outliers exist without these extreme cases disrupting the analysis.

Once these extreme outliers are addressed, we move on to review the portfolios that are +/-2 standard deviations and even +/-1.5 standard deviations, if needed. We keep reviewing accounts with returns closer and closer to the composite’s mean return until we are consistently confirming that the portfolios do in fact belong in the composite and errors are not being found.

Each firm will be different in how much they need to drill down to get to a point of comfort that no more errors exist. If your composite is managed strictly to a model, the outliers will be very clear and easy to identify. If each portfolio you manage is customized, more research is often needed to determine if the outlier performance is simply a result of the portfolio’s customization or if the portfolio was included in the wrong composite.

How often the portfolio is an outlier

Longs Peak’s performance outlier reports show a portfolio’s performance, the number of standard deviations it is from the mean each month, and the number of months the portfolio was an outlier throughout its history in that composite. Our reports also show whether there was a cash flow during that period or not. The following are examples of outlier frequencies we evaluate:

Infrequent: If you see that a portfolio is only an outlier for one month and that month had a large cash flow, then you will know that the portfolio is likely only an outlier for that period because of the cash flow and, often, no further research is required.

Frequent: If you can see that the portfolio is an outlier for most of the months under review, then you will know that there is likely an issue with this portfolio.

As of a specific date: If you can see that the portfolio was not an outlier historically, but became a frequent outlier from a certain month forward, this may indicate that a restriction was added or that the strategy changed as of that period. The portfolio may then need to be reclassified to the appropriate composite or flagged as non-discretionary.

The most common causes of outlier performance and how to address them

Common causes of outlier performance:

  • Data issues – When outliers are extreme, it is likely that there is an issue with the data. Examples include a pricing issue that caused a material jump in performance or a late dividend hitting a portfolio that is closing and had most of its assets already transferred out. These issues are often easily addressed, depending on the circumstance of each case.
  • Cash flows – If a portfolio is only an outlier for one month and during that month the portfolio experienced a large cash flow, this is likely the reason for the outlier performance. If the portfolio had high cash for a period of time around the cash flow and the market moved during that period, this portfolio likely would perform differently than its fully invested peers. Nothing needs to be done in this scenario since the outlier performance is explained and there is no indication that the portfolio is invested incorrectly or grouped with the wrong portfolios.
  • Legacy positions or other client restrictions – If your clients hold legacy positions that you are restricted from selling or have other similar restrictions, this will likely cause these portfolios to perform differently when compared to their unrestricted peers. Depending on your composite construction rules, unless immaterial, these portfolios likely need to be excluded from the composite. With these portfolios removed, other outliers may appear that were not as noticeable when the restricted portfolios were included. It is important to refer to your firm’s composite construction policies, which should outline clear parameters for when restricted portfolios should be included/excluded in composites.
  • Portfolio categorized incorrectly – A portfolio may appear as an outlier because it was placed in the wrong composite. This often happens if a portfolio’s composite changed and it was not removed from its prior composite. If this is the case, the portfolio must be removed (after the change) and added to the new composite based on the timing outlined in your firm’s composite construction policies.
  • Portfolio managed incorrectly – Performance outlier analysis may help identify a portfolio that is managed to the wrong strategy. For example, it is possible that the portfolio is grouped with the correct portfolios, but the wrong strategy was implemented in the portfolio. This is one of the most important errors that performance outlier testing can identify because it means that the client is actually not having their money managed to the strategy for which your firm was hired. In this case, the portfolio would need to be rebalanced to the correct strategy. Likely, a review of the history would need to be conducted as well to ensure the client was not disadvantaged by the error.
  • High dispersion between portfolio managers – Especially when more than one portfolio manager is implementing the same composite at your firm, material differences may exist in the way they each manage the strategy. Outlier performers may be due to differences in the portfolio managers’ discretionary management. If the composite is being sold as one cohesive product, it is important to identify where the portfolio managers deviate and determine if they can work more closely together to avoid high dispersion or if the strategy should actually be run as two different products.

When researching outlier performance, keep in mind that, on its own, a portfolio’s performance deviating from its peers is not a valid reason to remove the portfolio from its composite. You need to determine the root cause of the deviation and remove the portfolio from its composite only if the root cause was client-driven. If the deviation was caused by tactical, discretionary moves made by the portfolio manager, the portfolio must remain in the composite as its performance is still a representation of the portfolio manager’s implementation of the strategy.

Ready to implement performance outlier testing at your firm?

While it is best practice to create a flow of information that will allow portfolios to proactively be included/excluded in the correct composite at the appropriate time, testing for performance outliers acts as a back-up plan to catch anything that was missed.

If analyzing your composite data to identify performance outliers is not something you have the resources to do internally, Longs Peak is available to help. Longs Peak offers both consulting and reporting services that can assist your firm with outlier analysis. Outlier analysis should be conducted at least quarterly to help ensure your firm is managing its portfolios consistently and is reporting strategy or composite performance that is meaningful and accurate. Please contact us to discuss how we can help implement this practice for your firm.

Questions? 

If you have questions about investment performance, composite construction, or the GIPS standards, we would love to talk to you. Longs Peak’s professionals have extensive experience helping firms with all of their investment performance needs. Please feel free to email Sean Gilligan directly at sean@longspeakadvisory.com.


If you’ve been around the Global Investment Performance Standards (GIPS®) long enough, you know that governance is one of those topics everyone agrees is important, but far fewer firms can clearly explain what good governance with the GIPS standards actually looks like day to day.

Most firms don’t fail at GIPS compliance because they misunderstand a technical requirement. They struggle because ownership is unclear, decisions are informal, or key knowledge lives in one person’s head. When that person leaves (or when the firm grows), things start to break.

So, let’s simplify this.

Below is a practical, real-world view of what good governance looks like when complying with the GIPS standards—not in theory, not in a policy document that no one reads, but in how well-run firms actually operate.

Start with the Right Mindset: Governance Is About Sustainability

At its core, GIPS compliance exists to answer one question:

Can this firm consistently calculate, maintain, and present performance fairly and accurately—regardless of growth, staff changes, or market stress?

The GIPS standards are built on the principles of fair representation and full disclosure, but governance is what turns those principles into repeatable behavior. Good governance doesn’t mean more paperwork or compliance headaches. It means clear accountability, documented decisions, and controls that actually get used.

1. Clear Ownership (It’s Rarely Just One Person)

One of the most common governance risks we see is a “GIPS compliance department of one” where critical knowledge, decisions, and processes are concentrated with a single individual. While this can work in the short term, it creates challenges around continuity, oversight, and scalability as the firm grows or changes.

Good governance starts by clearly defining:

  • Who owns GIPS compliance overall
  • Who performs monthly/quarterly/annual tasks
  • Who reviews and approves key inputs/outputs
  • Who resolves judgment calls
  • Who ensures the program also complies with other relevant regulations

In practice, this often looks like:

  • A GIPS compliance committee or designated governance group
  • Representation from performance, compliance, operations, and senior management
  • Defined escalation paths for gray areas (e.g., discretion, composite changes, error corrections)

When a firm isn’t large enough to support a formal committee, outsourcing to a GIPS compliance consultant or a provider of managed services can be an effective alternative. These individuals can help you design policies, create procedures, and essentially manage governance for you.

But even if you are big enough, having an independent third party on your GIPS compliance committee can provide an objective, well-informed perspective formed by experience across many firms and a deep understanding of what works well in practice.

2. Policies and Procedures That Reflect Reality

Every GIPS compliant firm has GIPS standards policies and procedures (GIPS standards P&P). Well-governed firms actually use them.

Strong GIPS compliance governance means your GIPS standards P&P:

  • Include procedures your firm actually follows instead of only stating policies
  • Reflect how performance is really calculated
  • Clearly document firm-specific elections and judgments
  • Are updated when the business changes (for new products, systems, asset classes)


Think of your GIPS standards P&P as the firm’s operating manual for performance, not a static compliance artifact. If someone new joined your performance team tomorrow, they should be able to follow your policies and procedures to calculate performance and arrive at the same results. If not, governance needs work.

3. Formalized Review and Oversight

Good governance includes independent review, even if it’s internal.

In practice, this often means:

  • Secondary review of composite membership decisions
  • Review of significant cash flow thresholds and discretion determinations
  • Approval of new composites and composite definition changes
  • Oversight of error identification and correction


This is where governance protects firms from subtle but costly mistakes, especially those that show up during verification and increase the complexity and scope of those engagements. Ideally, these internal reviews catch issues before they become problems.

As a provider of managed services, Longs Peak helps firms identify performance outliers, accounts that are breaking composite rules, and other data anomalies. This review significantly reduces the risk of erroneous data ending up in your performance results and later being caught in verification. If you are not able to do this internally, we strongly recommend outsourcing this effort.

4. Governance Extends to Marketing and Distribution

One area that has become increasingly important is the intersection of GIPS compliance, the SEC marketing rule, and how you manage the distribution of marketing materials.

Well-governed firms:

  • Control who can distribute GIPS Reports and how they are distributed
  • Ensure Marketing understands what is and is not an advertisement that meets the requirements of the GIPS standards
  • Coordinate GIPS compliance requirements with broader regulatory rules, including the SEC marketing rule
  • Have a clear process for tracking distribution


This alignment helps firms avoid inconsistencies between factsheets, pitchbooks, and GIPS Reports—one of the fastest ways to lose credibility with prospects and regulators.

Some clients prefer not to mention GIPS compliance at all in their marketing (i.e., on their factsheets and pitchbooks) until a client is clearly interested in one of their strategies. Once someone meets the definition of a prospect (as outlined in the firm’s GIPS standards P&P), the requirement to deliver a GIPS Report is triggered, and these firms find the smaller list of prospects easier to maintain. For others, having everything in one document, including required GIPS compliance information and disclosures, is easier to manage than maintaining separate documents.

There is no “right” way to manage this, but in either case, having a clear process for tracking and reporting performance errors is key.

5. Documentation of Decisions (Not Just Results)

Here’s a subtle but critical point: Good governance for your GIPS compliance program documents decisions, not just outcomes.

Why was that composite redefined?
Why was this benchmark changed?
Why was this model fee selected?

Strong governance creates an audit trail that:

  • Supports sound reasoning (which aids in the verification process or even regulatory exams later on)
  • Reduces key person risk
  • Makes future reviews faster and less stressful


This is especially valuable when firms grow, merge, or experience turnover. Clear documentation allows others to step in seamlessly and continue critical functions without disruption. More importantly, it enables independent parties, such as a regulator or your verifier, to understand, assess, and validate how you are calculating and presenting performance that may not be immediately intuitive.

6. Governance Is Ongoing, Not a One-Time Project

The best-governed firms don’t “set and forget” their GIPS compliance program. They revisit governance when:

  • New strategies launch
  • Systems or custodians change
  • Regulations evolve
  • The firm’s structure changes

In other words, governance evolves with the business—because performance reporting doesn’t exist in a vacuum.

Even for firms that are not regularly launching new strategies, changing systems or structure, an annual review of your GIPS compliance program and governance framework is critical. This review helps confirm that practices have remained consistent, while also providing an opportunity to reflect on whether you are satisfied with your verifier, assess whether new regulations require updates, and reconsider how composites are managed or described.

The best time to do this is at year-end so that if you decide something should be changed, you can do that proactively for the upcoming year, rather than having to fix it retroactively.

What Good GIPS Compliance Governance Really Buys You

When GIPS compliance governance is working well, firms experience:

  • A structured, intentional process for validation of your performance results
  • A framework that supports consistency and transparency over time
  • Fewer surprises or last-minute scrambles during verification or regulatory review
  • Greater confidence from regulators and verifiers that you are following established policies and procedures
  • Lower operational and reputational risk


Most importantly, it creates trust internally and externally. Good GIPS compliance governance isn’t about being perfect. It’s about being intentional.

Clear ownership. Thoughtful documentation. Real oversight. Those are the firms that don’t just claim compliance, they live it.

Why “Net” Is Not a One-Size-Fits-All Answer

If you’ve worked in the investment industry, you’ve probably heard some version of this question:

“Should we show net or gross performance—or both?”

On the surface, the answer seems straightforward. The rules tell us what’s required. Compliance boxes get checked. End of story.

But in practice, presenting net and gross performance is rarely that simple.

How you calculate it, how you present it, and how you disclose it can materially change how investors interpret your results. This article goes beyond the rulebook to explore the practical considerations firms face when deciding how to present net and gross returns in a manner that is clear, helpful, and in compliance with requirements.

Let’s Start with the Basics (Briefly)

At a high level, for separate account strategies:

  • Gross performance reflects returns before investment management fees
  • Net performance reflects returns after investment management fees have been deducted

Both gross and net performance are typically net of transaction costs, but gross of administrative fees and expenses. When dealing with pooled funds, net performance is also reduced by administrative fees and expenses, but here we are focused on separate account strategies, typically marketed as composite performance.
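As a quick numeric illustration of the gross/net distinction, here is a minimal Python sketch. It assumes one common convention — deducting 1/12 of an annual model fee from each monthly gross return before geometric linking — and the 1% monthly returns and 60bps fee are invented for the example; firms may instead deduct fees geometrically or use actual billed fees, so treat this as illustrative only:

```python
def net_of_model_fee(gross_monthly, annual_fee):
    """Deduct 1/12 of the annual model fee from each monthly gross return.

    One common convention; check your own policies -- some firms deduct
    fees geometrically or use actual billed fees instead.
    """
    return [r - annual_fee / 12 for r in gross_monthly]

def linked_return(monthly):
    """Geometrically link monthly returns into a cumulative return."""
    total = 1.0
    for r in monthly:
        total *= 1 + r
    return total - 1

gross = [0.01] * 12  # hypothetical: 1% per month, gross of fees
net = net_of_model_fee(gross, annual_fee=0.006)  # 60bps model fee

print(round(linked_return(gross), 4))  # 0.1268
print(round(linked_return(net), 4))    # 0.1201
```

The gap between the two linked returns (roughly 67bps here, slightly more than the stated 60bps fee because of compounding) is the cost of management that the rest of this article discusses.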

Simple enough. But that definition alone doesn’t tell the full story—and it’s where many misunderstandings begin.

Why Net Performance Is the Investor’s Reality

From an investor’s perspective, net performance is what actually matters. It represents the return they keep after paying the manager for active management.

That’s why modern regulations and best practices increasingly emphasize net returns. Investors don’t experience gross returns. They experience net outcomes.

And let’s be honest: if an investor chooses an active manager instead of a low-cost index fund or ETF tracking the same benchmark, the expectation is that the active approach should deliver something extra—after fees. Otherwise, it becomes difficult to justify paying for that active management.

Why Gross Performance Still Has a Role

If net returns are what investors actually receive, why do firms still talk about gross performance at all?

Because gross performance tells a different, but complementary, story: what the strategy is capable of before fees, and what investors are paying for that capability.

The gap between gross and net returns represents the cost of active management. Put differently, it answers a question investors are implicitly asking:

How much return am I giving up in exchange for this manager’s expertise?

Viewed this way, gross returns help investors assess:

  • Whether the strategy is adding value before fees
  • How much of the performance is driven by skill: security selection, asset allocation or portfolio construction
  • Whether fees are the primary drag—or whether the strategy itself is struggling

When gross and net returns are shown together, they create transparency around both skill and cost. When shown without context, they can easily obscure the economic tradeoff.

Gross-of-fee returns also matter most when marketing to institutional investors, who have the power to negotiate the fee they will pay and know they will likely pay a lower fee than most of your clients have paid in the past. Their detailed analysis is more accurate when it starts with your gross-of-fee returns and adjusts for the fee they expect to negotiate, rather than relying on net-of-fee returns based on the fees charged historically.

The Real-World Gray Areas Firms Struggle With

How to Present Gross Returns

Gross returns are pretty straightforward. They are typically calculated before investment management or advisory fees and usually include transaction costs such as commissions and spreads.

For firms that comply with the GIPS® Standards, things can get more nuanced—particularly for bundled fee arrangements. In those cases, firms must make reasonable allocations to separate transaction costs from the bundled fee. But, if that separation cannot be done reliably, gross returns must be shown after removing the entire bundled fee. [1]

Once you move from gross to net returns, however, the conversation becomes less straightforward. We’ve had managers question, “why show net performance at all?” This is especially the case when fees vary across clients or historical fees no longer reflect what an investor would pay today. Others complain that the “benchmark isn’t net-of-fees,” making net-of-fee comparisons inherently imperfect. These concerns highlight why presenting net returns isn’t just a mechanical exercise. In the sections that follow, we’ll unpack these challenges and walk through how to present net-of-fee performance in a way that remains meaningful, transparent, and fit for its intended audience.

How to Present Net Returns

This is where judgment and documentation matter most.

Not all “net” returns are created equal. Even under the SEC Marketing Rule, there is no single mandated definition of net performance—only a requirement that net performance be presented. Under the GIPS Standards, net-of-fee returns must be reduced by investment management fees.

In practice, firms may deduct:

  • Advisory fees (asset-based investment management fees)
  • Performance-based fees
  • Custody fees
  • Transaction costs

Two net-return series can look comparable on the surface while reflecting very different assumptions underneath. This lack of transparency is one of the main reasons institutional investors often require managers to be GIPS compliant—it simplifies comparison by requiring consistent assumptions and presentation, along with additional disclosure when more fees are deducted from the calculation than required.

And context matters. A higher fee may be perfectly reasonable if it reflects broader services such as tax or financial planning, holistic portfolio construction, or access to specialized strategies. The problem isn’t the fee itself; it’s failing to use a fee scenario that is relevant to the user of the report.

Deciding Between Actual vs Model Fees

The next hurdle is deciding whether to use actual fees or a model fee when calculating net returns. Historically, firms most often relied on actual fees, viewing them as the best representation of what clients actually experienced. But that approach raises an important question: are those historical fees still relevant to what an investor would pay today? If the answer is no, a model fee may provide a more representative picture of current expected outcomes. Under the SEC marketing rule, there are cases where firms are required to use a model fee when the anticipated fee is higher than actual fees charged.

This consideration becomes even more important for strategies or composites that include accounts paying little or no fee at all. While the GIPS Standards and the SEC Marketing Rule are not perfectly aligned on this topic, they agree in principle—net performance should be meaningful, not misleading, and should reflect what an actual fee-paying investor should reasonably expect to pay. Thus, many firms opt to present model fee performance to avoid violating the marketing rule’s general prohibitions. [2]

Additional SEC guidance published on Jan 15, 2026 on the Use of Model Fees reinforced that the decision to use model vs actual fees is context-dependent. While the marketing rule allows net performance to be calculated using either actual or model fees, there are cases where the use of actual fees may be misleading. The SEC emphasized flexibility and that while both fee types are allowed, what’s appropriate depends on the facts and circumstances of the situation, including the clarity of disclosures and how fee assumptions are explained.

Which Model Fee Should Be Used?

Most firms offer multiple fee structures, typically based on account size, but sometimes also on investor type (institutional versus retail clients). That variability makes fee selection a key decision when presenting net performance.

If you plan to use a single performance document for broad or mass marketing, best practice—and what the SEC Marketing Rule effectively requires—is to calculate net returns using the highest anticipated fee that could reasonably apply to the intended audience. This helps ensure the presentation is not misleading by overstating what an investor might take home.
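As a sketch of this selection logic, assuming a hypothetical three-tier fee schedule (the tier names and rates below are invented for illustration, not a recommended schedule):

```python
# Hypothetical tiered annual fee schedule, keyed by account-size tier
fee_schedule = {
    "under_1m": 0.0100,    # 100bps
    "1m_to_10m": 0.0075,   # 75bps
    "over_10m": 0.0050,    # 50bps
}

def highest_anticipated_fee(schedule, applicable_tiers):
    """Return the highest annual fee among the tiers that could
    reasonably apply to the intended audience of the presentation."""
    return max(schedule[t] for t in applicable_tiers)

# Mass-marketed material: any tier could apply, so use the top fee
print(highest_anticipated_fee(fee_schedule, fee_schedule.keys()))  # 0.01

# Institutional-only version: only the larger account tiers apply
print(highest_anticipated_fee(fee_schedule, ["1m_to_10m", "over_10m"]))  # 0.0075
```

The second call reflects the tailored-presentation approach discussed below: narrowing the applicable tiers lets the model fee stay relevant to the audience without overstating take-home returns.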

A common pushback is: “But the highest fee isn’t relevant to this type of investor.” And that may be true. In those cases, firms have a few defensible options:

  • Create separate versions of the presentation tailored to different investor types, or
  • Present multiple fee tiers within the same document, clearly explaining what each tier represents

Either approach can work—but only if disclosures are explicit and easy to understand. When multiple fee structures are shown, clarity isn’t optional; it’s essential.

In practice, many firms maintain separate retail and institutional versions of factsheets or pitchbooks. That approach is perfectly reasonable, but it comes with operational risk. If this becomes standard practice, firms need strong internal controls to ensure the right presentation reaches the right audience. That means:

  • Clear internal policies
  • Consistent naming and version control
  • Training marketing and sales teams on when each version may be used

Getting this right typically requires marketing and compliance to work together, because choosing the right fee is only part of the equation. Ensuring each presentation is used appropriately is just as important to keeping net performance meaningful, compliant, and credible.

Which Statistics Can Be Shown Gross-of-Fees?

Since the introduction of the SEC Marketing Rule, there has been significant debate about whether all statistics must be presented net-of-fees—or whether certain metrics can still be shown gross-of-fees. Helpful clarity arrived in an SEC FAQ released on March 19, 2025, which confirmed that not all portfolio characteristics need to be presented net-of-fees. The examples cited included risk statistics such as the Sharpe and Sortino ratios, attribution results, and similar metrics that are often calculated gross-of-fees to avoid the “noise” introduced by fee deductions.

The staff acknowledged that presenting some of these characteristics net-of-fees may be impractical or even misleading. As long as firms prominently present the portfolio’s total gross and net performance in compliance with the rule (i.e., for the prescribed 1-, 5-, and 10-year periods), clearly label these characteristics as gross, and explain how they are calculated, the SEC indicated it would generally not recommend enforcement action.

Bringing it all Together

On paper, presenting net and gross performance should be a straightforward exercise.

In reality, layers of regulation, evolving expectations, and heightened scrutiny have made it feel far more complicated than it needs to be. But complexity doesn’t have to lead to confusion.

When firms are clear about:

  • Who they are communicating with,
  • What that audience expects,
  • What the performance is intended to represent, and
  • Why certain assumptions were chosen

…the decisions around what gets presented become far more manageable.

Net returns aren’t about finding a single “correct” number. They’re about telling an honest, well-documented story. And when that story is clear, investors don’t just understand the performance—they trust it.

[1] 2020 GIPS® Standards for Firms, Section 2: Input Data and Calculation Methodology (gross-of-fees returns and treatment of transaction costs, including bundled fees).

[2] See SEC Marketing Rule 206(4)-1(a) footnote 590 as well as the SEC updated FAQ from January 15, 2026. Available at: https://www.sec.gov/rules-regulations/staff-guidance/division-investment-management-frequently-asked-questions/marketing-compliance-frequently-asked-questions

In most investment firms, performance calculation is treated like a math problem: get the numbers right, double-check the formulas, and move on. And to be clear—that part matters. A lot.

But here’s the truth many firms eventually discover: perfectly calculated performance can still be poorly communicated.

And when that happens, clients don’t gain confidence. Consultants don’t “get” the strategy. Prospects walk away unconvinced. Not because the returns were wrong—but because the story was missing.

Calculation Is Technical. Communication Is Human.

Performance calculation is about precision. Performance communication is about understanding.

The two overlap, but they are not the same skill set.

You can calculate a composite’s time-weighted return flawlessly, in line with the Global Investment Performance Standards (GIPS®), using best-in-class methodologies. Yet if the only thing your audience walks away with is “we beat the benchmark,” you’ve left most of the value on the table.
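For context, the basic mechanics behind that calculation are worth keeping in view: a time-weighted return is built by computing sub-period returns between external cash flows and geometrically linking them. A minimal sketch with illustrative values, under the simplifying assumption that each flow occurs at the start of its sub-period:

```python
# Sketch: a basic time-weighted return (TWR), geometrically linking
# sub-period returns between external cash flows. Values are illustrative,
# and flows are assumed to occur at the start of each sub-period.

def subperiod_return(begin_value, end_value, flow=0.0):
    """Return for one sub-period, adjusting the starting base for the flow."""
    base = begin_value + flow
    return (end_value - base) / base

def time_weighted_return(valuations, flows):
    """Link sub-period returns; valuations has one more entry than flows."""
    twr = 1.0
    for i, flow in enumerate(flows):
        twr *= 1.0 + subperiod_return(valuations[i], valuations[i + 1], flow)
    return twr - 1.0

# Portfolio grows 100 -> 110, receives a 10 contribution, then 120 -> 126
twr = time_weighted_return([100.0, 110.0, 126.0], [0.0, 10.0])
```

The arithmetic is simple; the communication challenge is explaining to a client why a TWR can differ from the dollar growth they see on their statement, which is driven partly by the timing of their own cash flows.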

This gap shows up all the time:

  • A client sees strong long-term returns but fixates on one bad quarter.
  • A consultant compares two managers with similar returns and can’t tell what truly differentiates them.
  • A prospect asks, “But how did you generate these results?”—and the answer is a wall of statistics.

The math is necessary. It’s just not sufficient.

Returns Answer What. Clients Care About Why.

Returns tell us what happened. Clients want to know why it happened—and whether it’s likely to happen again.

That’s where communication comes in. Good performance communication connects returns to:

  • The investment philosophy
  • The decision-making process
  • The risks taken (and avoided)
  • The type of prospect the strategy is designed for

This is exactly why performance evaluation doesn’t stop at returns in the CFA Institute’s CIPM curriculum. Measurement, attribution, and appraisal are distinct steps for a reason—each adds context that raw performance alone cannot provide. Without that context, returns become just numbers on a page.

The Role of Standards: Necessary, Not Narrative

The GIPS Standards exist to ensure performance is fairly represented and fully disclosed. They do an excellent job of standardizing how performance is calculated and what must be presented. But GIPS compliance doesn’t automatically make performance meaningful to the reader.

A GIPS Report answers questions like:

  • What was the annual return of the composite?
  • What was the annual return of the composite’s benchmark?
  • How volatile was the strategy compared to the benchmark?

It does not answer:

  • Why did this strategy struggle in down markets?
  • What risks did the manager consciously take?
  • How should an allocator think about using this strategy in a broader portfolio?

That’s not a flaw in the standards; it’s a reminder that communication sits on top of compliance, not inside it.

Risk Statistics: Where Stories Start (or Die)

One of the most common communication missteps is overloading clients with risk statistics without explaining what they actually mean or how they can be used to assess the active decisions made in your investment process.

Sharpe ratios, capture ratios, alpha, beta—they’re powerful information. But without interpretation, they’re just numbers.

For example:

  • A downside capture ratio below 100% isn’t impressive on its own.
  • It becomes compelling when you explain how downside protection was intentionally achieved and what trade-offs were accepted in strong up-markets.
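To illustrate, one common way to compute downside capture is to compare compounded returns over only the months in which the benchmark was negative. A minimal sketch with invented monthly figures (methodologies vary; some providers annualize or use average returns, so the calculation basis should always be disclosed):

```python
# Sketch: a simple downside capture ratio, computed over the months in which
# the benchmark return was negative. Monthly figures are invented for
# illustration; real providers may annualize or average instead of compounding.

def compound(returns):
    """Geometrically compound a list of period returns."""
    total = 1.0
    for r in returns:
        total *= 1.0 + r
    return total - 1.0

def downside_capture(portfolio, benchmark):
    """Portfolio return in benchmark-down months / benchmark return in those months."""
    down = [(p, b) for p, b in zip(portfolio, benchmark) if b < 0]
    port_down = compound([p for p, _ in down])
    bench_down = compound([b for _, b in down])
    return port_down / bench_down

port  = [0.02, -0.01, 0.01, -0.02]
bench = [0.03, -0.02, 0.02, -0.03]
ratio = downside_capture(port, bench)  # below 1.0: lost less than the benchmark
```

The number alone (here roughly 0.60) says little; the story lies in which holdings or hedges produced it and what upside was given up to get it.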

This is where performance communication turns data into insight—connecting risk statistics back to portfolio construction and decision-making. Too often, managers select statistics because they look good or because they’ve seen them used elsewhere, rather than because they align with their investment process and demonstrate how their active decisions add value. The most effective communicators use risk statistics intentionally, in the context of what they are trying to deliver to the investor.

We often see firms change the statistics they show when results disappoint. Yet your most powerful story may come when your statistics show you’ve missed the mark. Explaining why and how you are correcting course demonstrates discipline, self-awareness, and control.

Know Your Audience Before You Tell the Story

Before you dive into risk statistics, every manager should be asking themselves about their audience. This is where performance communication becomes strategic. Who are you actually talking to? The right performance story depends entirely on your target audience.

Institutional Prospects

Institutional clients and consultants often expect:

  • Detailed risk statistics
  • Benchmark-relative analysis
  • Attribution and metrics that demonstrate consistency
  • Clear articulation of where the strategy fits in a portfolio

They want to understand process, discipline, and risk control. Performance data must be presented with precision and context, grounded in methodology, repeatability, and portfolio role. Often, GIPS compliance is a must. Speaking their language builds credibility and demonstrates that you respect the rigor of their decision-making process. It shows that you understand how they evaluate managers and that you are prepared to stand behind your process.

Retail or High-Net-Worth Individuals

Many individual investors don’t care about alpha or capture ratios in isolation. What they really want to know is:

  • Will this help me retire comfortably?
  • Can I afford that second home?
  • How confident should I feel during market downturns?

For this audience, the same performance data must be framed differently—around goals, outcomes, and peace of mind. Sharing how you track and report on these goals in your communication goes a long way in building trust. It signals that you are committed to their goals and will hold yourself accountable to them. It reassures them that you are not just managing money; you’re protecting the lifestyle they are building.

Keep in mind that cultural differences also shape expectations. For example, US-based investors tend to be primarily results-oriented, while investors in Japan often expect deeper transparency into the process and inputs, wanting to understand and validate how those results were achieved.

Same Numbers. Different Story.

The mistake many firms make is assuming one performance narrative works for everyone. It doesn’t. Effective communication adapts:

  • The statistics you emphasize
  • The language you use
  • The level of detail you provide
  • The context you wrap around the results

The goal isn’t to simplify the truth; it’s to translate it so that it resonates with the person on the other side of the table.

The Best Performance Reports Tell a Coherent Story

Strong performance communication does three things well:

  1. It sets expectations
    Before showing numbers, it reminds the reader what the strategy is designed to do—and just as importantly, what it’s not designed to do.
  2. It explains outcomes
    Attribution, risk metrics, and market context are used selectively to explain results, not overwhelm the reader.
  3. It reinforces discipline
    Good communication shows consistency between philosophy, process, and performance—especially during periods of underperformance.

This doesn’t mean dumbing anything down. It means respecting the audience enough to guide them through the data.

Calculation Builds Credibility. Communication Builds Confidence.

Performance calculation earns you a seat at the table.
Performance communication earns trust.

Firms that master both don’t just report results—they help clients understand them, evaluate them, and believe in them.

In an industry where numbers are everywhere, clarity is often the true differentiator.