Quality Control: How to check for errors in your investment performance

Sean P. Gilligan, CFA, CPA, CIPM
Managing Partner
May 5, 2021
15 min

Recent investment performance calculation mistakes at the Pennsylvania Public School Employees’ Retirement System (“PSERS”) have highlighted the importance of quality control reviews and raise questions about where risk exists, how those risks can be mitigated, and what role independent verifications should play in the quality control process.

What happened at PSERS?¹

An error in the return calculation for Pennsylvania’s $64 billion state public school employee retirement plan has had serious implications for its beneficiaries and those involved in the calculation mistake.

In 2010, the plan, which was already underfunded, entered into a risk-sharing agreement under which employees hired after 2011 would pay more into the plan if the average time-weighted return over a specified period fell below the actuarial value of assets (AVA) return of 6.36%.

In December 2020, the board announced that the plan had achieved a return of 6.38%, a mere 2 basis points above the minimum threshold. But in March the board changed its tune, announcing that the calculation was incorrect and the 100,000 or so employees hired since 2011 (and their employers) should have actually paid more into the plan.

What’s worse is PSERS also announced that the FBI is investigating the organization, although details of the probe have not yet been released.

According to PSERS, the consultant that had calculated the return came forward and admitted to the calculation error, but the board also said it is looking into a potential cover-up by its staff. From what we know, at least three independent consultants were involved in providing data used for the calculations, calculating the returns, and verifying the returns. So, with all these experts involved, how could this happen, and what can your firm do to avoid a similar situation?

Key issues to address in an investment performance quality control process

Firms should develop sound quality control processes to help identify errors before results are published. Often these processes either do not exist or are insufficient to identify issues. Following a robust quality control process that considers the key risks involved and then finds ways to mitigate these risks greatly increases the accuracy of presented investment performance.

Although we do not yet know the cause of the errors in the PSERS case, we can highlight the primary reasons errors occur in investment performance reporting. Most errors found in published performance results are caused by:

  • Key Issue #1 – Issues in the underlying data (e.g., incorrect or missing prices, unreconciled data, missing transactions, misclassified expenses, or failing to accrue interest on fixed income)
  • Key Issue #2 – Mistakes in calculations (e.g., manual calculations that fail to match the intended methodology)
  • Key Issue #3 – Errors in reporting (e.g., publishing numbers that do not match the calculated results)

A robust quality control process should specifically address all three of these areas.

Considerations when designing a robust quality control process

Key Issue #1 – Issues in the underlying data

As they say, garbage in, garbage out. It is important to ask and address questions confirming the validity of data before it is used to calculate performance. Specifically, consider how the data used in the calculations is gathered, prepared, and reconciled before completing the calculations. Is there any formal signoff from the operations team confirming that the data is ready for use? Has a review of the data been conducted by an operations manager prior to this confirmation being made?

While deadlines to get performance published can be tight, taking the time to ensure that the underlying data is final and ready to use before performance is calculated can prevent headaches later on.

The following is a list of issues to look for when testing data validity:

  • Outlier performance – Portfolios performing differently than their peers may indicate a data issue or that the portfolio is mislabeled (i.e., tagged to a different strategy than it is invested in).
  • Differences between ending and beginning market values – Generally, we expect a portfolio’s market value at the end of one month and the beginning of the next month to be equal (unless using a system where external cashflows are recorded between months and differences like this are expected). Flagging differences can help identify data issues.
  • Offsetting decrease/increase in market value – Market values that suddenly increase or decrease and then return to the original value may have an incorrect price or transaction that should be researched.
  • Gaps in performance – A portfolio whose performance suddenly stops and then restarts may have missing data.
  • 0% returns – The portfolio may have liquidated and may no longer be under the firm’s discretionary management.
  • Very low market values – The portfolio may have closed and is only holding a small residual balance, which should be excluded from the firm’s discretionary management.
  • Net-of-fee returns higher than gross-of-fee returns – Seeing net returns that are higher than gross returns could indicate a data issue unless there are fee reversals you are aware of (e.g., performance fee accruals where previously accrued fees are adjusted back down).
  • Gross-of-fee returns and net-of-fee returns are equal – If gross-of-fee and net-of-fee returns are always equal for a fee-paying portfolio, it is likely that the management fees are paid from an outside source (paid by check or out of a different portfolio). The returns labeled as net-of-fee in a case like this should be treated as gross-of-fee returns.
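
To make these checks part of a repeatable process, they can be automated. Below is a minimal sketch in Python using pandas; the column names (portfolio_id, period, begin_mv, end_mv, gross_return, net_return) and the $5,000 low-balance threshold are assumptions you would replace with your own conventions.

```python
# Minimal sketch of automated data-validity flags.
# Column names and the low-balance threshold are assumptions; adapt to your data.
import pandas as pd

def flag_data_issues(df: pd.DataFrame, min_mv: float = 5_000.0) -> pd.DataFrame:
    """Flag rows with common data issues. Returns are decimals (0.0125 = 1.25%)."""
    df = df.sort_values(["portfolio_id", "period"]).copy()

    # Ending MV of one period should equal beginning MV of the next
    # (unless your system books external cash flows between periods).
    prior_end = df.groupby("portfolio_id")["end_mv"].shift(1)
    df["mv_break"] = prior_end.notna() & (df["begin_mv"] != prior_end)

    # 0% returns may indicate a liquidated portfolio.
    df["zero_return"] = df["gross_return"] == 0.0

    # Very low market values may be residual balances to exclude.
    df["low_mv"] = df["end_mv"] < min_mv

    # Net above gross usually signals a data issue (unless fees were reversed).
    df["net_above_gross"] = df["net_return"] > df["gross_return"]

    flags = ["mv_break", "zero_return", "low_mv", "net_above_gross"]
    return df[df[flags].any(axis=1)]
```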

Key Issue #2 – Mistakes in calculations

Mistakes happen, but there are ways to reduce their frequency and impact. First, you’ll want to consider how manual your performance calculations are as well as the experience of the person completing the calculations.

Let’s face it: Excel is probably the most widely used tool in performance measurement, especially at smaller firms. While many firms find Excel to be a user-friendly tool for calculating performance statistics, it has its limitations. Studies have shown that up to 90% of spreadsheets contain errors, and spreadsheets with many formulas are even more likely to contain mistakes. Whether it’s a formula that wasn’t dragged down properly or a reference to the wrong cell, the fundamental problem is that users do not check their work or lack carefully outlined procedures for confirming accuracy.

Although this may seem obvious, having a second set of eyes on a spreadsheet can save you from the embarrassing headache of having to explain errors in performance calculations. It is even better if this review is a multi-layered process. Having someone review details as well as someone to do a high-level “gut-check” to make sure the calculations and results make sense can reduce this risk. Depending on the size of your firm, this may be easier to accomplish with a third-party consultant, where you serve as a final layer of review.

Having this final “gut-check” can help prevent avoidable errors prior to publication. We find that this final “gut-check” is best performed by someone who knows the strategy intimately rather than a performance or compliance analyst, as these individuals may be too focused on the calculation details to take a step back and consider whether the returns make sense for the strategy and are in line with expectations.

If you use software to calculate performance, you can significantly reduce the risk of manual error, but due diligence should still be performed from time to time to manually prove out the accuracy of the calculations completed in the program. This does not need to be done every time but should be conducted when introducing a new software system and any time changes are made to the program.
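
One way to prove out a system’s time-weighted returns is to re-derive a few of them independently by geometrically linking sub-period returns around external cash flows. The sketch below assumes cash flows occur at the start of each sub-period; your system’s convention may differ, so match it before comparing results.

```python
# Spot-check sketch: re-derive a time-weighted return (TWR) by geometrically
# linking sub-period returns. Assumes external cash flows occur at the start
# of each sub-period; match your system's convention before comparing.

def subperiod_return(begin_mv: float, end_mv: float, cash_flow: float = 0.0) -> float:
    """Sub-period return with the external flow added to the starting value."""
    return (end_mv - begin_mv - cash_flow) / (begin_mv + cash_flow)

def linked_twr(subperiods: list[tuple[float, float, float]]) -> float:
    """Geometrically link (begin_mv, end_mv, cash_flow) sub-periods into a TWR."""
    growth = 1.0
    for begin_mv, end_mv, cash_flow in subperiods:
        growth *= 1.0 + subperiod_return(begin_mv, end_mv, cash_flow)
    return growth - 1.0

# Example: a $10,000 contribution arrives at the start of the second sub-period.
periods = [
    (100_000.0, 102_000.0, 0.0),
    (102_000.0, 113_500.0, 10_000.0),
    (113_500.0, 112_000.0, 0.0),
]
print(f"Re-derived TWR: {linked_twr(periods):.4%}")  # compare to the system's figure
```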

Key Issue #3 – Errors in reporting

It may seem silly, but many performance reporting errors come from transposing strategy and benchmark returns in presentations or placing the return of one strategy in the factsheet of another. Therefore, it is important to consider how the final performance figures make it from the system or spreadsheet into the performance presentations. Are they typed? Copied and pasted? Or are the performance reports generated directly out of a system? It’s not enough to complete the calculations correctly; the final reports must also be accurate, so adding a step to review this is crucial.

A similar review process to the one described above can really make a difference, but ultimately, understanding the vulnerabilities of your performance reporting will help you design quality control procedures that address any exposure.
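
Where report production is automated enough to extract the final figures, this last-mile check can itself be scripted. The sketch below compares hypothetical published numbers against the system of record; a transposed strategy and benchmark return, as in this example, would be caught immediately.

```python
# Last-mile check sketch: confirm published figures match calculated results.
# Both dictionaries are hypothetical stand-ins for your system of record and
# for numbers extracted from the draft factsheet.

calculated = {"Strategy A": 0.0712, "Strategy A Benchmark": 0.0655}
published = {"Strategy A": 0.0655, "Strategy A Benchmark": 0.0712}  # transposed!

TOLERANCE = 0.0001  # allow for rounding in the published report

for name, calc in calculated.items():
    pub = published.get(name)
    if pub is None:
        print(f"MISSING: {name} not found in the report")
    elif abs(pub - calc) > TOLERANCE:
        print(f"MISMATCH: {name} published {pub:.2%} vs calculated {calc:.2%}")
```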

Calculations completed by external performance consultants

Whether performance is calculated internally or by a third-party performance consultant, the same key issues should be considered when designing the quality control process. Due diligence should be done on the performance consulting firm to evaluate the level of experience the firm has with calculating investment performance and what kind of quality control process they follow prior to providing results to your firm. This information will help you determine what reliance you can place on their procedures and what your firm should still check internally.

For example, outsourcing performance calculations to an individual or single-person firm likely necessitates a more in-depth review since this individual would not have the ability to have a second set of eyes on the results prior to providing them to your firm. However, even larger performance consulting firms with robust quality control processes may not have intimate knowledge of your strategies, meaning that, at a minimum, a final “gut-check” should be done by your firm prior to publication.

Reliance on independent performance verification firms to find errors

Many firms that hire performance verification firms rely on their verifier to be their quality control check; however, this may not be a good practice for a variety of reasons. If this is a common practice at your firm, you may want to check the scope of your engagement before relying too heavily on your verifier to find errors.

Verification is common for firms that claim compliance with the Global Investment Performance Standards (GIPS®). But even firms that claim compliance with the GIPS standards and receive a firm-wide verification are required to disclose that “…Verification does not provide assurance on the accuracy of any specific performance report.”

This is because verifiers are primarily focused on the existence and implementation of policies and procedures. While their review may help identify errors that exist in the sample selected for testing, it specifically does not certify the accuracy of presented results. The verification process is valuable and often turns up errors that need to be corrected, but regardless of the scope of your engagement, a robust internal quality control process is likely still warranted.

Firms that are not GIPS compliant may engage verification firms for various types of attestation or review engagements like strategy exams or other non-GIPS performance reviews. In these situations, the scope of the engagement may be customized to meet the needs (and budget) of the firm seeking verification. A clear understanding of exactly what is in-scope and specifically what the verifier is opining on when issuing their report is key.

When the engagement entails a detailed attestation (tracing input data back to independent sources, confirming that calculations are carried out consistently, and verifying that published results match the calculations), heavy reliance can be placed on the verifier as part of your quality control process.

Alternatively, when the scope merely consists of a high-level review confirming the appropriateness of the calculation methodology, a much more robust internal quality control process should be applied.

Knowing the scope of the engagement your firm has established with the verification firm is an important element in determining how much reliance can be placed on their review and findings, which can then be incorporated into the design of your own internal quality control procedures.

Key take-aways

Mistakes happen in investment performance reporting, but a robust quality control process can greatly mitigate this risk. Understanding the risks that exist, designing processes to test these risk areas, and understanding the role and engagement scope of all consultants involved are essential to designing quality control procedures that work for your firm – and hopefully ones that will help you avoid situations like what happened at PSERS.

If you are not sure where to begin, we have tools and services available to help. Longs Peak uses proprietary software to calculate and analyze performance. Our software helps flag possible data issues and outlier performers and also produces performance reports directly from our performance system.

In addition, our performance consultants are available to work with your team to help identify potential vulnerabilities in your performance reporting process and can help you develop better quality control procedures, where needed.

Questions?

If you would like to learn more about our quality control process or any of the services we offer (like data and outlier testing) to help improve the accuracy and reliability of investment performance, contact us or email Sean Gilligan directly at sean@longspeakadvisory.com.

1 For more information on PSERS, please see this article from the Philadelphia Inquirer.


Why “Net” Is Not a One-Size-Fits-All Answer

If you’ve worked in the investment industry, you’ve probably heard some version of this question:

“Should we show net or gross performance—or both?”

On the surface, the answer seems straightforward. The rules tell us what’s required. Compliance boxes get checked. End of story.

But in practice, presenting net and gross performance is rarely that simple.

How you calculate it, how you present it, and how you disclose it can materially change how investors interpret your results. This article goes beyond the rulebook to explore the practical considerations firms face when deciding how to present net and gross returns in a manner that is clear, helpful, and in compliance with requirements.

Let’s Start with the Basics (Briefly)

At a high level, for separate account strategies:

  • Gross performance reflects returns before investment management fees
  • Net performance reflects returns after investment management fees have been deducted

Both gross and net performance are typically net of transaction costs, but gross of administrative fees and expenses. When dealing with pooled funds, net performance is also reduced by administrative fees and expenses, but here we are focused on separate account strategies, typically marketed as composite performance.
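
A quick numeric illustration of these definitions, using hypothetical fee levels and a simple monthly proration (actual fee timing conventions vary by firm):

```python
# Hypothetical one-month illustration of gross vs. net (fees prorated monthly).
gross = 0.0100            # monthly return after transaction costs, before fees
mgmt_fee = 0.0075 / 12    # hypothetical 0.75% annual management fee
admin_exp = 0.0020 / 12   # hypothetical 0.20% annual fund admin expenses

net_separate_account = (1 + gross) * (1 - mgmt_fee) - 1          # mgmt fee only
net_pooled_fund = (1 + gross) * (1 - mgmt_fee - admin_exp) - 1   # plus admin expenses

print(f"Gross:                {gross:.3%}")
print(f"Separate-account net: {net_separate_account:.3%}")
print(f"Pooled-fund net:      {net_pooled_fund:.3%}")
```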

Simple enough. But that definition alone doesn’t tell the full story—and it’s where many misunderstandings begin.

Why Net Performance Is the Investor’s Reality

From an investor’s perspective, net performance is what actually matters. It represents the return they keep after paying the manager for active management.

That’s why modern regulations and best practices increasingly emphasize net returns. Investors don’t experience gross returns. They experience net outcomes.

And let’s be honest: if an investor chooses an active manager instead of a low-cost index fund or ETF tracking the same benchmark, the expectation is that the active approach should deliver something extra—after fees. Otherwise, it becomes difficult to justify paying for that active management.

Why Gross Performance Still Has a Role

If net returns are what investors actually receive, why do firms still talk about gross performance at all?

Because gross performance tells a different, but complementary, story: what the strategy is capable of before fees, and what investors are paying for that capability.

The gap between gross and net returns represents the cost of active management. Put differently, it answers a question investors are implicitly asking:

How much return am I giving up in exchange for this manager’s expertise?

Viewed this way, gross returns help investors assess:

  • Whether the strategy is adding value before fees
  • How much of the performance is driven by skill: security selection, asset allocation, or portfolio construction
  • Whether fees are the primary drag—or whether the strategy itself is struggling

When gross and net returns are shown together, they create transparency around both skill and cost. When shown without context, they can easily obscure the economic tradeoff.

Gross-of-fee returns are also most important when marketing to institutional investors that have the power to negotiate the fee they will pay and know they will likely pay a lower fee than most of your clients have paid in the past. Their detailed analysis is more accurately done by starting with your gross-of-fee returns and adjusting for the fee they expect to negotiate, rather than using the net-of-fee returns charged historically.

The Real-World Gray Areas Firms Struggle With

How to Present Gross Returns

Gross returns are pretty straightforward. They are typically calculated before investment management or advisory fees and usually include transaction costs such as commissions and spreads.

For firms that comply with the GIPS® Standards, things can get more nuanced—particularly for bundled fee arrangements. In those cases, firms must make reasonable allocations to separate transaction costs from the bundled fee. But, if that separation cannot be done reliably, gross returns must be shown after removing the entire bundled fee. [1]

Once you move from gross to net returns, however, the conversation becomes less straightforward. We’ve had managers ask, “Why show net performance at all?” This is especially the case when fees vary across clients or historical fees no longer reflect what an investor would pay today. Others complain that the “benchmark isn’t net-of-fees,” making net-of-fee comparisons inherently imperfect. These concerns highlight why presenting net returns isn’t just a mechanical exercise. In the sections that follow, we’ll unpack these challenges and walk through how to present net-of-fee performance in a way that remains meaningful, transparent, and fit for its intended audience.

How to Present Net Returns

This is where judgment and documentation matter most.

Not all “net” returns are created equal. Even under the SEC Marketing Rule, there is no single mandated definition of net performance—only a requirement that net performance be presented. Under the GIPS Standards, net-of-fee returns must be reduced by investment management fees.

In practice, firms may deduct:

  • Advisory fees (asset-based investment management fees)
  • Performance-based fees
  • Custody fees
  • Transaction costs

Two net-return series can look comparable on the surface while reflecting very different assumptions underneath. This lack of transparency is one of the main reasons institutional investors often require managers to be GIPS compliant—it simplifies comparison by requiring consistency in the assumptions used and how they are presented, along with additional disclosure when more fees are included in the calculation than required.

And context matters. A higher fee may be perfectly reasonable if it reflects broader services such as tax or financial planning, holistic portfolio construction, or access to specialized strategies. The problem isn’t the fee itself; it’s failing to use a fee scenario that is relevant to the user of the report.

Deciding Between Actual vs Model Fees

The next hurdle is deciding whether to use actual fees or a model fee when calculating net returns. Historically, firms most often relied on actual fees, viewing them as the best representation of what clients actually experienced. But that approach raises an important question: are those historical fees still relevant to what an investor would pay today? If the answer is no, a model fee may provide a more representative picture of current expected outcomes. Under the SEC Marketing Rule, there are cases where firms are required to use a model fee when the anticipated fee is higher than the actual fees charged.
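
As a sketch of what the model-fee approach looks like in practice, the snippet below re-derives net returns from gross returns using a hypothetical highest anticipated fee, deducted monthly. This is one common approach, not a prescribed formula; your own policy should document the fee level, the proration, and when model fees are used.

```python
# Sketch: model-fee net returns derived from monthly gross returns.
# The 1.00% fee and monthly proration are hypothetical; document your own policy.

def model_net(gross_returns: list[float], annual_model_fee: float) -> list[float]:
    monthly_fee = annual_model_fee / 12
    return [(1 + g) * (1 - monthly_fee) - 1 for g in gross_returns]

gross = [0.015, -0.008, 0.011, 0.004]  # example monthly gross returns
net_model = model_net(gross, annual_model_fee=0.0100)  # highest anticipated fee

print([f"{r:.3%}" for r in net_model])
```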

This consideration becomes even more important for strategies or composites that include accounts paying little or no fee at all. While the GIPS Standards and the SEC Marketing Rule are not perfectly aligned on this topic, they agree in principle—net performance should be meaningful, not misleading, and should reflect what an actual fee-paying investor should reasonably expect to pay. Thus, many firms opt to present model-fee performance to avoid violating the Marketing Rule’s general prohibitions. [2]

Additional SEC guidance on the use of model fees, published on January 15, 2026, reinforced that the decision to use model vs. actual fees is context-dependent. While the Marketing Rule allows net performance to be calculated using either actual or model fees, there are cases where the use of actual fees may be misleading. The SEC emphasized flexibility: while both fee types are allowed, what’s appropriate depends on the facts and circumstances, including the clarity of disclosures and how fee assumptions are explained.

Which Model Fee Should Be Used?

Most firms offer multiple fee structures, typically based on account size, but sometimes also on investor type (institutional versus retail clients). That variability makes fee selection a key decision when presenting net performance.

If you plan to use a single performance document for broad or mass marketing, best practice—and what the SEC Marketing Rule effectively requires—is to calculate net returns using the highest anticipated fee that could reasonably apply to the intended audience. This helps ensure the presentation is not misleading by overstating what an investor might take home.

A common pushback is: “But the highest fee isn’t relevant to this type of investor.” And that may be true. In those cases, firms have a few defensible options:

  • Create separate versions of the presentation tailored to different investor types, or
  • Present multiple fee tiers within the same document, clearly explaining what each tier represents

Either approach can work—but only if disclosures are explicit and easy to understand. When multiple fee structures are shown, clarity isn’t optional; it’s essential.

In practice, many firms maintain separate retail and institutional versions of factsheets or pitchbooks. That approach is perfectly reasonable, but it comes with operational risk. If this becomes standard practice, firms need strong internal controls to ensure the right presentation reaches the right audience. That means:

  • Clear internal policies
  • Consistent naming and version control
  • Training marketing and sales teams on when each version may be used

This often involves an overlap of both marketing and compliance to get it right because getting the fee right is only part of the equation. Making sure the presentation is used appropriately is just as important to ensuring net performance remains meaningful, compliant, and credible.

Which Statistics Can Be Shown Gross-of-Fees?

Since the introduction of the SEC Marketing Rule, there has been significant debate about whether all statistics must be presented net-of-fees—or whether certain metrics can still be shown gross-of-fees. Helpful clarity arrived in an SEC FAQ released on March 19, 2025, which confirmed that not all portfolio characteristics need to be presented net-of-fees. The examples cited included risk statistics such as the Sharpe and Sortino ratios, attribution results, and similar metrics that are often calculated gross-of-fees to avoid the “noise” introduced by fee deductions.

The staff acknowledged that presenting some of these characteristics net-of-fees may be impractical or even misleading. As long as firms prominently present the portfolio’s total gross and net performance in compliance with the rule (i.e., for the prescribed 1-, 5-, and 10-year time periods), clearly label these characteristics as gross, and explain how they are calculated, the SEC indicated it would generally not recommend enforcement action.
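
As an illustration of the labeling point, here is a sketch of a Sharpe ratio computed on gross returns and labeled as such; the monthly frequency, constant risk-free rate, and square-root-of-twelve annualization are all assumptions, and conventions vary.

```python
# Sketch: a Sharpe ratio computed gross-of-fees and labeled as such.
# Monthly returns, a constant risk-free rate, and sqrt(12) annualization
# are assumptions; conventions vary.
import math
import statistics

monthly_gross = [0.012, -0.004, 0.009, 0.015, -0.002, 0.007]
monthly_rf = 0.003  # hypothetical constant monthly risk-free rate

excess = [r - monthly_rf for r in monthly_gross]
sharpe = statistics.mean(excess) / statistics.stdev(excess) * math.sqrt(12)

# Label the basis prominently, consistent with the FAQ's conditions.
print(f"Sharpe ratio (gross-of-fees, annualized): {sharpe:.2f}")
```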

Bringing it all Together

On paper, presenting net and gross performance should be a straightforward exercise.

In reality, layers of regulation, evolving expectations, and heightened scrutiny have made it feel far more complicated than it needs to be. But complexity doesn’t have to lead to confusion.

When firms are clear about:

  • Who they are communicating with,
  • What that audience expects,
  • What the performance is intended to represent, and
  • Why certain assumptions were chosen

…the decisions around what gets presented become far more manageable.

Net returns aren’t about finding a single “correct” number. They’re about telling an honest, well-documented story. And when that story is clear, investors don’t just understand the performance—they trust it.

[1] 2020 GIPS® Standards for Firms, Section 2: Input Data and Calculation Methodology (gross-of-fees returns and treatment of transaction costs, including bundled fees).

[2] See SEC Marketing Rule 206(4)-1(a) footnote 590 as well as the SEC updated FAQ from January 15, 2026. Available at: https://www.sec.gov/rules-regulations/staff-guidance/division-investment-management-frequently-asked-questions/marketing-compliance-frequently-asked-questions

In most investment firms, performance calculation is treated like a math problem: get the numbers right, double-check the formulas, and move on. And to be clear—that part matters. A lot.

But here’s the truth many firms eventually discover: perfectly calculated performance can still be poorly communicated.

And when that happens, clients don’t gain confidence. Consultants don’t “get” the strategy. Prospects walk away unconvinced. Not because the returns were wrong—but because the story was missing.

Calculation Is Technical. Communication Is Human.

Performance calculation is about precision. Performance communication is about understanding.

The two overlap, but they are not the same skill set.

You can calculate a composite’s time-weighted return flawlessly, in line with the Global Investment Performance Standards (GIPS®), using best-in-class methodologies. Yet if the only thing your audience walks away with is “we beat the benchmark,” you’ve left most of the value on the table.

This gap shows up all the time:

  • A client sees strong long-term returns but fixates on one bad quarter.
  • A consultant compares two managers with similar returns and can’t tell what truly differentiates them.
  • A prospect asks, “But how did you generate these results?”—and the answer is a wall of statistics.

The math is necessary. It’s just not sufficient.

Returns Answer What. Clients Care About Why.

Returns tell us what happened. Clients want to know why it happened—and whether it’s likely to happen again.

That’s where communication comes in. Good performance communication connects returns to:

  • The investment philosophy
  • The decision-making process
  • The risks taken (and avoided)
  • The type of prospect the strategy is designed for

This is exactly why, in the CFA Institute’s CIPM curriculum, performance evaluation doesn’t stop at returns. Measurement, attribution, and appraisal are distinct steps for a reason—each adds context that raw performance alone cannot provide. Without that context, returns become just numbers on a page.

The Role of Standards: Necessary, Not Narrative

The GIPS Standards exist to ensure performance is fairly represented and fully disclosed. They do an excellent job of standardizing how performance is calculated and what must be presented. But GIPS compliance doesn’t automatically make performance meaningful to the reader.

A GIPS Report answers questions like:

  • What was the annual return of the composite?
  • What was the annual return of the composite’s benchmark?
  • How volatile was the strategy compared to the benchmark?

It does not answer:

  • Why did this strategy struggle in down markets?
  • What risks did the manager consciously take?
  • How should an allocator think about using this strategy in a broader portfolio?

That’s not a flaw in the standards, it’s a reminder that communication sits on top of compliance, not inside it.

Risk Statistics: Where Stories Start (or Die)

One of the most common communication missteps is overloading clients with risk statistics without explaining what they actually mean or how they can be used to assess the active decisions made in your investment process.

Sharpe ratios, capture ratios, alpha, beta—they’re powerful information. But without interpretation, they’re just numbers.

For example:

  • A downside capture ratio below 100% isn’t impressive on its own.
  • It becomes compelling when you explain how downside protection was intentionally implemented and what trade-offs were accepted in strong up-markets.

This is where performance communication turns data into insight—connecting risk statistics back to portfolio construction and decision-making. Too often, managers select statistics because they look good or because they’ve seen them used elsewhere, rather than because they align with their investment process and demonstrate how their active decisions add value. The most effective communicators use risk statistics intentionally, in the context of what they are trying to deliver to the investor.
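
For readers who want the mechanics behind the example above, here is a sketch of one common capture-ratio definition: the manager’s linked return over benchmark-up (or benchmark-down) months, divided by the benchmark’s linked return over the same months. Definitions vary, so confirm the formula against your own methodology before quoting the numbers.

```python
# Sketch: upside/downside capture ratios (one common definition; others exist).

def linked(returns: list[float]) -> float:
    """Geometrically link a series of periodic returns."""
    growth = 1.0
    for r in returns:
        growth *= 1 + r
    return growth - 1.0

def capture(mgr: list[float], bmk: list[float], downside: bool = False) -> float:
    """Manager's linked return over benchmark-up (or -down) months,
    divided by the benchmark's linked return over those same months."""
    pairs = [(m, b) for m, b in zip(mgr, bmk) if b != 0 and (b < 0) == downside]
    return linked([m for m, _ in pairs]) / linked([b for _, b in pairs])

mgr = [0.020, -0.010, 0.015, -0.030, 0.011]  # hypothetical monthly returns
bmk = [0.025, -0.015, 0.018, -0.040, 0.012]
print(f"Upside capture:   {capture(mgr, bmk):.0%}")
print(f"Downside capture: {capture(mgr, bmk, downside=True):.0%}")
```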

We often see firms change the statistics they show when results disappoint, but your most powerful story may come from periods when your statistics show you’ve missed the mark. Explaining why and how you are correcting course demonstrates discipline, self-awareness, and control.

Know Your Audience Before You Tell the Story

Before you dive into risk statistics, every manager should be asking themselves about their audience. This is where performance communication becomes strategic. Who are you actually talking to? The right performance story depends entirely on your target audience.

Institutional Prospects

Institutional clients and consultants often expect:

  • Detailed risk statistics
  • Benchmark-relative analysis
  • Attribution and metrics that demonstrate consistency
  • Clear articulation of where the strategy fits in a portfolio

They want to understand process, discipline, and risk control. Performance data must be presented with precision and context, grounded in methodology, repeatability, and portfolio role. Often, GIPS compliance is a must. Speaking their language builds credibility and demonstrates that you respect the rigor of their decision-making process. It shows that you understand how they evaluate managers and that you are prepared to stand behind your process.

Retail or High-Net-Worth Individuals

Many individual investors don’t care about alpha or capture ratios in isolation. What they really want to know is:

  • Will this help me retire comfortably?
  • Can I afford that second home?
  • How confident should I feel during market downturns?

For this audience, the same performance data must be framed differently—around goals, outcomes, and peace of mind. Sharing how you track and report on these goals in your communication goes a long way in building trust. It signals that you are committed to their goals and will hold yourself accountable to them. It reassures them that you are not just managing money; you’re protecting the lifestyle they are building.

Keep in mind that cultural differences also shape expectations. For example, US-based investors are primarily results-oriented, while investors in Japan often expect deeper transparency into the process and inputs, wanting to understand and validate how those results were achieved.

Same Numbers. Different Story.

The mistake many firms make is assuming one performance narrative works for everyone. It doesn’t. Effective communication adapts:

  • The statistics you emphasize
  • The language you use
  • The level of detail you provide
  • The context you wrap around the results

The goal isn’t to simplify the truth; it’s to translate it so that it resonates with the person on the other side of the table.

The Best Performance Reports Tell a Coherent Story

Strong performance communication does three things well:

  1. It sets expectations
     Before showing numbers, it reminds the reader what the strategy is designed to do—and just as importantly, what it’s not designed to do.
  2. It explains outcomes
     Attribution, risk metrics, and market context are used selectively to explain results, not overwhelm the reader.
  3. It reinforces discipline
     Good communication shows consistency between philosophy, process, and performance—especially during periods of underperformance.

This doesn’t mean dumbing anything down. It means respecting the audience enough to guide them through the data.

Calculation Builds Credibility. Communication Builds Confidence.

Performance calculation earns you a seat at the table.
Performance communication earns trust.

Firms that master both don’t just report results—they help clients understand them, evaluate them, and believe in them.

In an industry where numbers are everywhere, clarity is often the true differentiator.

Key Takeaways from the 29th Annual GIPS® Standards Conference in Phoenix

The 29th Annual Global Investment Performance Standards (GIPS®) Conference was held November 11–12, 2025, at the Sheraton Grand at Wild Horse Pass in Phoenix, Arizona—a beautiful desert resort and an ideal setting for two days of discussions on performance reporting, regulatory expectations, and practical implementation challenges. With no updates released to the GIPS standards this year, much of the content focused on application, interpretation, and the broader reporting and regulatory environment that surrounds the standards.

One of the few topics directly tied to GIPS compliance with a near-term impact relates to OCIO portfolios. Beginning with performance presentations that include periods through December 31, 2025, GIPS compliant firms with OCIO composites must present performance following a newly prescribed, standardized format. We published a high-level overview of these requirements previously.

The conference also covered related topics such as the SEC Marketing Rule, private fund reporting expectations, SEC exam trends, ethical challenges, and methodology consistency. Below are the themes and observations most relevant for firms today.

Are Changes Coming to the GIPS Standards in 2030?

Speakers emphasized that while no new GIPS standards updates were introduced this year, expectations for consistent, well-documented implementation continue to rise. Many attendee questions highlighted that challenges often stem more from inconsistent application or interpretation than from unclear requirements.

Several audience members also asked whether a “GIPS 2030” rewrite might be coming, similar to the major updates in 2010 and 2020. The CFA Institute and GIPS Technical Committee noted that:

  • No new version of the standards is currently in development,
  • A long-term review cycle is expected in the coming years, and
  • A future update is possible later this decade as the committee evaluates whether changes are warranted.

For now, the standards remain stable—giving firms a window to refine methodologies, tighten policies, and align practices across teams.

Performance Methodology Under the SEC Marketing Rule

The Marketing Rule featured prominently again this year, and presenters emphasized a familiar theme: firms must apply performance methodologies consistently when private fund results appear in advertising materials.

Importantly, these expectations do not come from prescriptive formulas within the rule. They stem from:

  1. The “fair and balanced” requirement,
  2. The Adopting Release, and
  3. SEC exam findings that view inconsistent methodology as potentially misleading.

Common issues raised included: presenting investment-level gross IRR alongside fund-level net IRR without explanation, treating subscription line financing differently in gross vs. net IRR, and inconsistently switching methodology across decks, funds, or periods.

To help firms avoid these pitfalls, speakers highlighted several expectations:

  • Clearly identify whether IRR is calculated at the investment level or fund level.
  • Use the same level of calculation for both gross and net IRR unless a clear, disclosed rationale exists.
  • Apply subscription line impacts consistently across both gross and net.
  • Label fund-level gross IRR clearly, if used (including gross returns is optional).
  • Ensure net IRR reflects all fees, expenses, and carried interest.
  • Disclose any intentional methodological differences clearly and prominently.
  • Document methodology choices in policies and apply them consistently across funds.

This remains one of the most frequently cited issues in SEC exam findings for private fund advisers. In short: the SEC does not mandate a specific methodology, but it does expect consistent, well-supported approaches that avoid misleading impressions.
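
To make the consistency point concrete, the sketch below computes gross and net IRR at the same (fund) level from the same dated cash flows, which is the symmetry examiners look for. The cash flows and fee/carry amounts are hypothetical, and the solver is a simple bisection rather than a production-grade root finder.

```python
# Sketch: fund-level gross and net IRR from the same dated cash flows.
# Negative amounts are contributions; the final positive amount is
# distributions plus ending NAV. All figures are hypothetical.
from datetime import date

def irr(flows: list[tuple[date, float]], lo: float = -0.99, hi: float = 5.0) -> float:
    """Annualized IRR via bisection on the dated-cash-flow NPV."""
    t0 = flows[0][0]

    def npv(rate: float) -> float:
        return sum(cf / (1 + rate) ** ((d - t0).days / 365.25) for d, cf in flows)

    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(lo) * npv(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

dates = [date(2021, 1, 15), date(2022, 6, 30), date(2024, 12, 31)]
gross_flows = list(zip(dates, [-10_000_000, -5_000_000, 21_500_000]))
net_flows = list(zip(dates, [-10_000_000, -5_000_000, 19_900_000]))  # after fees/carry

print(f"Fund-level gross IRR: {irr(gross_flows):.2%}")
print(f"Fund-level net IRR:   {irr(net_flows):.2%}")
```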

Evolving Expectations in Private Fund Client Reporting

Although no new regulatory requirements were announced, presenters made it clear that limited partners expect more transparency than ever before. The session included an overview of the updated ILPA reporting template along with additional information related to its implementation. Themes included:

  • Clearer disclosure of fees and expenses,
  • Standardized IRR and MOIC reporting,
  • More detail around subscription line usage,
  • Attribution and dispersion that are easy to interpret, and
  • Alignment with ILPA reporting practices.

These are not formal requirements, but it’s clear the industry is moving toward more standardized and transparent reporting.

Practical Insights from SEC Exams—Including How Firms Should Approach Deficiency Letters

A recurring theme across the SEC exam sessions was the need for stronger alignment between what firms say in their policies and what they do in practice. Trends included:

  • More detailed reviews of fee and expense calculations, especially for private funds,
  • Larger sample requests for Marketing Rule materials,
  • Increased emphasis on substantiation of all claims, and
  • Close comparison of written procedures to actual workflows.

A particularly helpful part of the discussion focused on how firms should approach responding to SEC deficiency letters—something many advisers encounter at some point.

Christopher Mulligan, Partner at Weil, Gotshal & Manges LLP, offered a framework that resonated with many attendees. He explained that while the deficiency letter is addressed to the firm by the exam staff, the exam staff is not the primary audience when drafting the response.

The correct priority order is:

1. The SEC Enforcement Division

Enforcement should be able to read your response and quickly understand that you fully grasp the issue, that you have corrected or are correcting it, and that nothing in the finding merits escalation.

Your first objective is to eliminate any concern that the issue rises to an enforcement matter.

2. Prospective Clients

Many allocators now request historical deficiency letters and responses during due diligence. The way the response is written—its tone, clarity, and thoroughness—can meaningfully influence how a firm is perceived.

A well-written response shows strong controls and a culture that takes compliance seriously.

3. The SEC Exam Staff

Although examiners issued the letter, they are the third audience. Their primary interest is acknowledgment and a clear explanation of the remediation steps.

Mulligan emphasized that firms often default to writing the response as if exam staff were the only audience. Reframing the response to keep the first two audiences in mind—enforcement and prospective clients—helps ensure the tone, clarity, and level of detail are appropriate and reduces both regulatory and reputational risk.

Final Thoughts

With no changes to the GIPS standards introduced this year, the 2025 conference in Phoenix served as a reminder that the real challenges involve consistency, documentation, and communication. OCIO providers in particular should be preparing for the upcoming effective date, and private fund managers continue to face rising expectations around transparent, well-supported performance reporting.

Across all sessions, a common theme emerged: clear methodology and strong internal processes are becoming just as important as the performance results themselves.

This is exactly where Longs Peak focuses its work. Our team specializes in helping firms document and implement practical, well-controlled investment performance frameworks—from IRR methodologies and composite construction to Marketing Rule compliance, fee and expense controls, and preparing for GIPS standards verification. We take the technical complexity and turn it into clear, operational processes that withstand both client due diligence and regulatory scrutiny.

If you’d like to discuss how we can help strengthen your performance reporting or compliance program, we’d be happy to talk. Contact us.