Key Takeaways from the 28th Annual GIPS® Conference

Jocelyn Gilligan, CFA, CIPM
Partner
October 2, 2024
15 min

The CFA Institute hosted its 28th Annual Global Investment Performance Standards (GIPS®) Conference on September 17-18 in San Diego, CA. As always, the opportunity to reconnect with industry peers and colleagues was a highlight. We are grateful to all the speakers and panelists who shared their insights. Here are some key takeaways we found valuable from this year’s event.

The SEC Marketing Rule

The SEC Marketing Rule continues to be a topic of discussion, especially as the industry navigates the nuances of the rule and its implications for investment performance advertising. During the panel discussion, two presenters clarified several points:

Model vs. Actual Fees

It seems there is rarely a case in which using actual fees will adequately satisfy the Marketing Rule. This is a major development, as at least 30% of the audience indicated they still use actual fees in their marketing.

According to the SEC Marketing Rule, net returns may be calculated using actual or model fees. To satisfy the general prohibitions, however, an adviser generally should apply a model fee that reflects either the highest fee charged historically or the highest potential fee that will be charged to the prospect receiving the advertisement (not a reasonable fee or an average). Footnotes 590 and 593 further clarify that there may be cases when using actual fees would specifically violate the Marketing Rule.

Footnote 590: “If the fee to be charged to the intended audience is anticipated to be higher than the actual fees charged, the adviser must use a model fee that reflects the anticipated fee to be charged in order not to violate the rule’s general prohibitions.”

and

Footnote 593: “…net performance that reflects a model fee that is not available to the intended audience is not permitted under the final rule’s second model fee provision.”

As a result, we recommend that anyone using actual fees in advertisements compare their net returns to the net returns that would have been achieved using the highest fee a prospect would pay as the model fee. If your actual net returns result in materially better performance than what the performance would be using the highest model fee, this is likely problematic. The rules do not define materiality, but the panelists did provide an example where the difference was only 25bp and they indicated that would likely be considered material.
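To illustrate the comparison the panelists described, here is a minimal Python sketch. All numbers are hypothetical, and it assumes fees accrue as a simple monthly prorate of the annual rate; confirm your firm's actual fee-accrual convention before relying on it.

```python
def apply_model_fee(gross_monthly, annual_fee):
    """Net down gross monthly returns by a prorated annual fee."""
    monthly_fee = annual_fee / 12
    return [r - monthly_fee for r in gross_monthly]

def linked(returns):
    """Geometrically link periodic returns into a cumulative return."""
    total = 1.0
    for r in returns:
        total *= 1 + r
    return total - 1

gross = [0.01] * 12                          # hypothetical 1% per month gross
net_actual = apply_model_fee(gross, 0.0050)  # actual fee paid: 50 bps
net_model = apply_model_fee(gross, 0.0100)   # highest fee a prospect would pay

gap = linked(net_actual) - linked(net_model)
print(f"Net of actual fee: {linked(net_actual):.2%}")
print(f"Net of highest model fee: {linked(net_model):.2%}")
print(f"Difference: {gap * 10000:.0f} bps")
```

Here the gap is roughly 55 bps for the year, well above the 25 bps difference the panelists suggested would likely be considered material.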

If you do not have tools for calculating model fees, don't worry. Reach out to one of our performance experts - we have tools that can simplify this for you.

Showing Multiple Net Returns in a Single Advertisement

Standardized marketing materials that show multiple net return results (including net of actual fees) may be presented in a single advertisement. This seems like a change in tone from what we heard last year, and it greatly simplifies what we previously thought was required. Since the adoption of the Marketing Rule, firms have struggled with how to standardize marketing materials, especially when they have different fee schedules and investor types.

Many firms now manage several versions of the same marketing document that show only the gross-of-fee returns and net-of-fee returns relevant to the specific audience receiving the advertisement. This can be logistically challenging to manage. Based on the discussion and case studies provided, it seems that firms are permitted to create a single document that shows various net-of-fee returns based on the fees charged to different investor types. The example provided looked something like this:

This shift in approach may be a huge relief for firms that are managing multiple investor types and are trying to track and update performance under various fee schedules. If electing to do this, it is important to ensure the fee proposed for the prospective investor is clear – especially when presenting a table like this to a retail investor. It is essential that your prospects can easily identify the net-of-fee return stream that is applicable for them.

Attribution & Contribution – Which is Performance?

Attribution is not considered performance, while contribution likely is. Because attribution is not considered performance, the use of a representative account is generally accepted. However, careful consideration should be applied in selecting an appropriate rep account, and documentation to support its selection should be maintained. While the performance-related requirements of the Marketing Rule may not apply, the overarching requirement for the advertisement to be "fair and balanced" still applies and must be considered when determining what account to use to represent the strategy.

A separate case study discussed how to handle situations when the rep account closes. Using the old rep account historically and linking its data to a new rep account is considered hypothetical, so if your rep account ceases to exist, it’s best to re-evaluate and select a different rep account to be used for the entire track record of the strategy.

Presenting Sector Contribution Returns Net-of-Fees

Extracted performance, such as contribution or sector-level returns, is treated as performance and must be presented net of fees. Some firms have mistakenly reduced each sector's return by a prorated portion of the percentage fee when computing net-of-fee results. The panelists emphasized that when netting down sector returns, firms must deduct the full percentage fee from each sector. If allocating the dollar amount of the fee, that amount would be prorated by the sector's weight in the portfolio; prorating a percentage, however, does not produce the same result and will overstate the sector-level net-of-fee returns.
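A small numeric sketch (hypothetical sector returns, weights, and a 1% model fee) shows why prorating the percentage fee overstates sector-level net returns:

```python
# Hypothetical sector gross returns and portfolio weights; 1% model fee.
sectors = {
    "Tech":   {"gross": 0.12, "weight": 0.40},
    "Health": {"gross": 0.08, "weight": 0.35},
    "Energy": {"gross": 0.05, "weight": 0.25},
}
fee = 0.01  # highest model fee, as a percentage of assets

# Correct: deduct the FULL percentage fee from each sector's return.
net_full = {s: d["gross"] - fee for s, d in sectors.items()}

# Incorrect: prorating the percentage fee by sector weight overstates
# every sector's net return (only a DOLLAR fee is prorated by weight).
net_prorated = {s: d["gross"] - fee * d["weight"] for s, d in sectors.items()}

# Weighted contributions roll up consistently only under the full deduction.
gross_portfolio = sum(d["weight"] * d["gross"] for d in sectors.values())
net_portfolio_full = sum(sectors[s]["weight"] * net_full[s] for s in sectors)
net_portfolio_prorated = sum(sectors[s]["weight"] * net_prorated[s] for s in sectors)

print(f"Gross portfolio:            {gross_portfolio:.4f}")
print(f"Net (full fee per sector):  {net_portfolio_full:.4f}")
print(f"Net (prorated pct fee):     {net_portfolio_prorated:.4f}")
```

Under the full deduction, the weighted sector contributions sum exactly to the portfolio's gross return minus the fee; under the prorated percentage, they sum to something higher, overstating net performance.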

The following example was provided to demonstrate how to apply model fees to sector returns and contribution in an advertisement:

Private Fund Gross & Net Returns

The calculation of gross and net returns for private funds must be consistent. For example, you cannot report a gross-of-fee return that excludes the impact of a subscription line of credit while reporting a net-of-fee return that includes it. Firms must disclose the effect of leverage, specifying the impact of subscription lines of credit rather than just stating that returns will be lower.

Per the marketing rule: gross- and net-of-fee returns must be calculated over the same time period, using the same type of return methodology. For example, it is not appropriate to calculate gross IRR using investment-level cash flows and net IRR using fund-level cash flows as that would be considered different methodologies.
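As a sketch of what a consistent methodology means, the following computes both gross and net IRR from the same fund-level cash flows. The flows are hypothetical, and the simple bisection solver assumes a single sign change in the cash flows.

```python
def irr(cashflows, lo=-0.99, hi=10.0):
    """Solve for the annual IRR of dated cash flows by bisection.

    cashflows: list of (years_from_inception, amount); negative amounts
    are contributions. Assumes NPV is decreasing in the rate (a single
    sign change in the flows).
    """
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in cashflows)
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid  # rate too low: NPV still positive
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical fund-level cash flows: one call at inception, one
# distribution after 3 years -- gross and net use the SAME flow dates.
gross_irr = irr([(0.0, -100.0), (3.0, 150.0)])  # before fees and carry
net_irr = irr([(0.0, -100.0), (3.0, 140.0)])    # after fees and carry

print(f"Gross IRR: {gross_irr:.2%}, Net IRR: {net_irr:.2%}")
```

The point is that both figures come from the same level of cash flows over the same period; mixing investment-level flows for gross with fund-level flows for net would not be comparable.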

Hypothetical Performance

Firms should be prepared to defend the classification of hypothetical or extracted performance. Hypothetical performance is defined as “performance that no specific account received.” Panelists made a point of noting that the return stream of a composite is not considered hypothetical, even though no specific account received the performance.

Along similar lines, a case study was presented where a firm wanted to show recommended funds to an existing client in a marketing presentation. The question was whether presenting a recommendation like this is considered hypothetical. Not surprisingly, the answer was “it depends on how the information was presented.” Presenting the information in a way that implied what the investor “could have received” would likely be hypothetical. Simply showing how these funds performed historically (so long as it complies with the marketing rule – showing prescribed time periods etc.) appeared acceptable.

AI in Investment Performance Reporting

The integration of AI into performance measurement and reporting continues to gain momentum. Of particular interest was how quickly our jobs may be changing and whether we need to be concerned about job security.

Jobs that focus on data gathering, preparation, and cleaning are expected to be replaced by AI in the near future. We'll likely see fewer new job postings for these entry-level roles, with a shift toward more value-added positions, such as data scientists. Panelists suggested that many roles within the performance measurement function, including auditing, will likely be augmented, with AI automating repetitive tasks (often performed by more junior professionals) and enhancing data analysis. Higher-level human oversight will still be essential for exercising judgment and interpreting information in the context of real-world scenarios, at least for now.

Panelists recommended encouraging performance teams to take basic courses in Python and SQL to prepare and empower them for a future with AI. AI platforms already exist that can perform detailed performance attribution and risk assessments in response to a simple question, much like one would ask ChatGPT. Performance measurement professionals will likely continue to be needed to develop these platforms, which will likely remain reliant on some human oversight for the foreseeable future.

Updates on the GIPS Standards

There were not many updates on the GIPS Standards at the conference. As of July 31, 2024, 1,785 organizations across 51 markets claim compliance with the GIPS standards. This includes 85 of the top 100 global firms and all of the top 25. The top five markets are the US, UK, Canada, Switzerland, and Japan, with Brazil emerging as a new market entrant in 2024.

The conference also provided updates on recent changes to the GIPS Standards. Key updates included:

  • The Guidance Statement for OCIO Strategies will be released by year-end, providing more clarity for firms managing OCIO portfolios. It appears that gross-of-fee and net-of-fee returns will need to be presented for OCIO composites.
  • The Guidance Statement for Firms Managing Only Broadly Distributed Pooled Funds (BDPFs) became effective on July 1, 2024. The new guidance offers increased flexibility for firms managing BDPFs, allowing them to avoid preparing GIPS Reports for prospective investors and instead focus on reporting for consultant databases or RFPs. While input data and return calculation requirements generally still apply, composite construction and report distribution are only required if the firm chooses to prepare GIPS Reports.
  • The GIPS Technical Committee is forming a working group to address after-tax reporting. For now, firms should refer to the USIPC After-Tax Performance Standards, which were issued in 2011. Additionally, as there is little consensus on how to calculate private fund returns, the committee plans to provide further guidance—though a timeline was not specified.

These takeaways underscore the evolving nature of the investment performance landscape. If you have any questions, please don’t hesitate to reach out to us. We would be happy to share additional insights from the conference as well as jump start your firm in complying with the GIPS Standards.

GIPS® is a registered trademark owned by CFA Institute. CFA Institute does not endorse or promote this organization, nor does it warrant the accuracy or quality of the content contained herein.

When you're responsible for overseeing the performance of an endowment or public pension fund, one of the most critical tools at your disposal is the benchmark. But not just any benchmark—a meaningful one, designed with intention and aligned with your Investment Policy Statement (IPS). Benchmarks aren’t just numbers to report alongside returns; they represent the performance your total fund should have delivered if your strategic targets were passively implemented.

And yet, many asset owners still find themselves working with benchmarks that don’t quite match their objectives—too generic, too simplified, or misaligned with how the total fund is structured. Let’s walk through how to build more effective benchmarks that reflect your IPS and support better performance oversight.

Start with the Policy: Your IPS Should Guide Benchmark Construction

Your IPS is more than a governance document—it is the road map that sets strategic asset allocation targets for the fund. Whether you're allocating 50% to public equity or 15% to private equity, each target signals an intentional risk/return decision. Your benchmark should be built to evaluate how well each segment of the total fund performed.

The key is to assign a benchmark to each asset class and sub-asset class listed in your IPS. This allows for layered performance analysis—at the individual sub-asset class level (such as large cap public equity), at the broader asset class level (like total public equity), and ultimately rolled up at the Total Fund level. When benchmarks reflect the same weights and structure as the strategic targets in your IPS, you can assess how tactical shifts in weights and active management within each segment are adding or detracting value.

Use Trusted Public Indexes for Liquid Assets

For traditional, liquid assets—like public equities and fixed income—benchmarking is straightforward. Widely recognized indexes like the S&P 500, MSCI ACWI, or Bloomberg U.S. Aggregate Bond Index are generally appropriate and provide a reasonable passive alternative against which to measure active strategies managed using a similar pool of investments as the index.

These benchmarks are also calculated using time-weighted returns (TWR), which strip out the impact of cash flows—ideal for evaluating manager skill. When each component of your total fund has a TWR-based benchmark, they can all be rolled up into a total fund benchmark with consistency and clarity.

Think Beyond the Index for Private Markets

Where benchmarking gets tricky is with illiquid asset classes like private equity, real estate, or private credit. These are private market investments without public market indexes, so you need a proxy that still supports a fair evaluation.

Some organizations use a peer group as the benchmark, but another approach is to use an annualized public market index plus a premium. For example, you might use the 7-year annualized return of the Russell 2000 (lagged by 3 months) plus a 3% premium to account for illiquidity and risk.

Using the 7-year average rather than the current period return removes the public market volatility for the period that may not be as relevant for the private market comparison. The 3-month lag is used if your private asset valuations are updated when received rather than posted back to the valuation date. The purpose of the 3% premium (or whatever you decide is appropriate) is to account for the excess return you expect to receive from private investments above public markets to make the liquidity risk worthwhile.
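A minimal sketch of this kind of hurdle, using the assumptions above (trailing 7-year annualized index return plus a 3% premium; the 3-month lag is handled by whichever return series you pass in, and all numbers here are illustrative):

```python
def private_benchmark(index_monthly_returns, premium=0.03, years=7):
    """Annualized trailing return of a public index plus an illiquidity premium.

    `index_monthly_returns` should already reflect any lag you apply
    (e.g., drop the most recent 3 months to match lagged valuations).
    """
    months = years * 12
    window = index_monthly_returns[-months:]
    growth = 1.0
    for r in window:
        growth *= 1 + r
    annualized = growth ** (1.0 / years) - 1
    return annualized + premium

# Hypothetical: a flat 0.8%/month index over the trailing 7 years.
hurdle = private_benchmark([0.008] * 84)
print(f"Private markets hurdle: {hurdle:.2%}")
```

The result is a transparent, reproducible hurdle; swap in the actual lagged index series and whatever premium your board deems appropriate.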

By building in this hurdle, you create a reasonable, transparent benchmark that enables your board to ask: Is our private markets portfolio delivering enough excess return to justify the added risk and reduced liquidity?

Roll It All Up: Aggregated Benchmarks for Total Fund Oversight

Once you have individual benchmarks for each segment of the total fund, the next step is to aggregate them—using the strategic asset allocation weights from your IPS—to form a custom blended total fund benchmark.

This approach provides several advantages:

  • You can evaluate performance at both the micro (asset class) and macro (total fund) level.
  • You gain insight into where active management is adding value—and where it isn’t.
  • You ensure alignment between your strategic policy decisions and how performance is being measured.

For example, if your IPS targets 50% to public equities split among large-, mid-, and small-cap stocks, you can create a blended equity benchmark that reflects those sub-asset class allocations, and then roll it up into your total fund benchmark. Rebalancing of the blends should match the rebalancing frequency of the total fund.
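The roll-up itself is just a policy-weighted blend. A sketch with hypothetical policy weights and one period of benchmark returns:

```python
# Strategic policy weights from a hypothetical IPS (must sum to 1.0).
policy = {"Large Cap": 0.30, "Mid Cap": 0.10, "Small Cap": 0.10,
          "Core Bonds": 0.30, "Private Equity": 0.20}

# One period's benchmark returns for each segment (illustrative numbers).
bench = {"Large Cap": 0.021, "Mid Cap": 0.018, "Small Cap": 0.015,
         "Core Bonds": 0.004, "Private Equity": 0.025}

def blended_return(weights, returns):
    """Policy-weighted blend of component benchmark returns for one period."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[k] * returns[k] for k in weights)

total_fund_benchmark = blended_return(policy, bench)
print(f"Total fund benchmark return: {total_fund_benchmark:.4%}")
```

Each period's blended return is then geometrically linked over time, re-weighting on the same schedule the total fund uses to rebalance.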

What If There's No Market Benchmark?

In some cases, especially for highly customized or opportunistic strategies like hedge funds, there simply may not be a meaningful market index to use as a benchmark. In these cases, it is important to consider what hurdle would indicate success for this segment of the total fund. Examples of what some asset owners use include:

  • CPI + Premium – a simple inflation-based hurdle
  • Absolute return targets – such as a flat 7% annually
  • Total Fund return for the asset class – not helpful for evaluating the performance of this segment, but still useful for aggregation to create the total fund benchmark

While these aren’t perfect, they still serve an important function: they allow performance to be rolled into a total fund benchmark, even if the asset class itself is difficult to benchmark directly.

The Bottom Line: Better Benchmarks, Better Oversight

For public pension boards and endowment committees, benchmarks are essential for effective fiduciary oversight. A well-designed benchmark framework:

  • Reflects your strategic intent
  • Provides fair, consistent measurement of manager performance
  • Supports clear communication with stakeholders

At Longs Peak Advisory Services, we’ve worked with asset owners around the globe to develop custom benchmarking frameworks that align with their policies and support meaningful performance evaluation. If you’re unsure whether your current benchmarks are doing your IPS justice, we’re here to help you refine them.

Want to dig deeper? Let’s talk about how to tailor a benchmark framework that’s right for your total fund—and your fiduciary responsibilities. Reach out to us today.

Valuation Timing for Illiquid Investments
Explore how firms & asset owners can balance accuracy & timeliness in performance reporting for illiquid investments.
June 23, 2025
15 min

For asset owners and investment firms managing private equity, real estate, or other illiquid assets, one of the most persistent challenges in performance reporting is determining the right approach to valuation timing. Accurate performance results are essential, but delays in receiving valuations can create friction with timely reporting goals. How can firms strike the right balance?

At Longs Peak Advisory Services, we’ve worked with hundreds of investment firms and asset owners globally to help them present meaningful, transparent performance results. When it comes to illiquid investments, the trade-offs and decisions surrounding valuation timing can have a significant impact—not just on performance accuracy, but also on how trustworthy and comparable the results appear to stakeholders.

Why Valuation Timing Matters

Illiquid investments are inherently different from their liquid counterparts. While publicly traded securities can be valued in real-time with market prices, private equity and real estate investments often report with a delay—sometimes months after quarter-end.

This delay creates a reporting dilemma: Should firms wait for final valuations to ensure accurate performance, or should they push ahead with estimates or lagged valuations to meet internal or external deadlines?

It’s a familiar struggle for investment teams and performance professionals. On one hand, accuracy supports sound decision-making and stakeholder trust. On the other, reporting delays can hinder communication with boards, consultants, and beneficiaries—particularly for asset owners like endowments and public pension plans that follow strict reporting cycles.

Common Approaches to Delayed Valuations

For strategies involving private equity, real estate, or other illiquid holdings, receiving valuations weeks—or even months—after quarter-end is the norm rather than the exception. To deal with this lag, investment organizations typically adopt one of two approaches to incorporate valuations into performance reporting: backdating valuations or lagging valuations. Each has benefits and drawbacks, and the choice between them often comes down to a trade-off between accuracy and timeliness.

1. Backdating Valuations

In the backdating approach, once a valuation is received—say, a March 31 valuation that arrives in mid-June—it is recorded as of March 31, the actual valuation date. This ensures that performance reports reflect economic activity during the appropriate time period, regardless of when the data became available.

Pros:
  • Accuracy: Provides the most accurate snapshot of asset values and portfolio performance for the period being reported.
  • Integrity: Maintains alignment between valuation dates and the underlying activity in the portfolio, which is particularly important for internal analysis or for investment committees wanting to evaluate manager decisions during specific market environments.
Cons:
  • Delayed Reporting: Final performance for the quarter may be delayed by 4–6 weeks or more, depending on how long it takes to receive valuations.
  • Stakeholder Frustration: Boards, consultants, and beneficiaries may grow frustrated if they cannot access updated reports in a timely manner, especially if performance data is tied to compensation decisions, audit deadlines, or public disclosures.

When It's Useful:
  • When transparency and accuracy are prioritized over speed—e.g., in annual audited performance reports or regulatory filings.
  • For internal purposes where precise attribution and alignment with economic events are critical, such as evaluating decision-making during periods of market volatility.

2. Lagged Valuations

With the lagged approach, firms recognize delayed valuations in the subsequent reporting period. Using the same example: if the March 31 valuation is received in June, it is instead recorded as of June 30. In this case, the performance effect of the Q1 activity is pushed into Q2’s reporting.

Pros:
  • Faster Reporting: Performance reports can be completed shortly after quarter-end, meeting board, stakeholder, and regulatory timelines.
  • Operational Efficiency: Teams aren’t held up by a few delayed valuations, allowing them to close the books and move on to other tasks.

Cons:
  • Reduced Accuracy: Performance reported for Q2 includes valuation changes that actually occurred in Q1, misaligning performance with the period in which it was earned.
  • Misinterpretation Risk: If users are unaware of the lag, they may misattribute results to the wrong quarter, leading to flawed conclusions about manager skill or market behavior.

When It's Useful:
  • When quarterly reporting deadlines must be met (e.g., trustee meetings, consultant updates).
  • In environments where consistency and speed are prioritized, and the lag can be adequately disclosed and understood by users.

Choosing the Right Approach (and Sticking with It)

Both approaches are acceptable from a compliance and reporting perspective. However, the key lies in consistency.

Once an organization adopts an approach—whether backdating or lagging—it should be applied across all periods, portfolios, and asset classes. Inconsistent application opens the door to performance manipulation (or the appearance of it), where results might look better simply because a valuation was timed differently.

This kind of inconsistency can erode trust with boards, auditors and other stakeholders. Worse, it could raise red flags in a regulatory review or third-party verification.

Disclose, Disclose, Disclose

Regardless of the method you use, full transparency in reporting is essential. If you’re lagging valuations by a quarter, clearly state that in your disclosures. If you change methodologies at any point—perhaps transitioning from lagged to backdated—explain when and why that change occurred.

Clear disclosures help users of your reports—whether board members, beneficiaries, auditors, or consultants—understand how performance was calculated. It allows them to assess the results in context and make informed decisions based on the data.

Aligning Benchmarks with Valuation Timing

One important detail that’s often overlooked: your benchmark data should follow the same valuation timing as your portfolio.

If your private equity or real estate portfolio is lagged by a quarter, but your benchmark is not, your performance comparison becomes flawed. The timing mismatch can mislead stakeholders into believing the strategy outperformed or underperformed, simply due to misaligned reporting periods.

To ensure a fair and meaningful comparison, always apply your valuation timing method consistently across both your portfolio and benchmark data.

Building Trust Through Transparency

Valuation timing is a technical, often behind-the-scenes issue—but it plays a crucial role in how your investment results are perceived. Boards and stakeholders rely on accurate, timely, and understandable performance reporting to make decisions that impact beneficiaries, employees, and communities.

By taking the time to document your valuation policy, apply it consistently, and disclose it clearly, you are reinforcing your organization’s commitment to integrity and transparency. And in a world where scrutiny of investment performance is only increasing, that commitment can be just as valuable as the numbers themselves.

Need help defining your valuation timing policy or aligning performance reporting practices with industry standards?

Longs Peak Advisory Services specializes in helping investment firms and asset owners simplify their performance processes, maintain compliance, and build trust through transparent reporting. Contact us to learn how we can support your team.

Key Takeaways from the 2025 PMAR Conference
This year’s PMAR Conference delivered timely and thought-provoking content for performance professionals across the industry. In this post, we’ve highlighted our top takeaways from the event—including a recap of the WiPM gathering.
May 29, 2025
15 min

The Performance Measurement, Attribution & Risk (PMAR) Conference is always a highlight for investment performance professionals—and this year’s event did not disappoint. With a packed agenda spanning everything from economic uncertainty and automation to evolving training needs and private market complexities, PMAR 2025 gave attendees plenty to think about.

Here are some of our key takeaways from this year’s event:

Women in Performance Measurement (WiPM)

Although not officially a part of PMAR, WiPM often schedules its annual in-person gathering during the same week to take advantage of the broader industry presence at the event. This year’s in-person gathering united female professionals from across the country for a full day of connection, learning, and mentorship. The agenda struck a thoughtful balance between professional development and personal connection, with standout sessions on AI and machine learning, resume building, and insights from the WiPM mentoring program. A consistent favorite among attendees is the interactive format—discussions are engaging, and the support among members is truly energizing. The day concluded with a cocktail reception and dinner, reinforcing the group’s strong sense of community and its ongoing commitment to advancing women in the performance measurement profession.

If you’re not yet a member and are interested in joining the community, find WiPM here on LinkedIn.

Uncertainty, Not Risk, is Driving Market Volatility

John Longo, Ph.D., of Rutgers Business School kicked off the conference with a deep dive into the global economy, and his message was clear: today’s markets are more uncertain than risky. Tariffs, political volatility, and unconventional strategies—like the idea of purchasing Greenland—are reshaping global trade and investment decisions. His suggestion? Investors may want to look beyond U.S. borders and consider assets like gold or emerging markets as a hedge.

Longo also highlighted the looming national debt problem and inflationary effects of protectionist policies. For performance professionals, the implication is clear: macro-level policy choices are creating noise that can obscure traditional risk metrics. Understanding the difference between risk and uncertainty is more important than ever.

The Future of Training: Customized, Continuous, and Collaborative

In the “Developing Staff for Success” session, Frances Barney, CFA (former head of investment performance and risk analysis for BNY Mellon) and our very own Jocelyn Gilligan, CFA, CIPM explored the evolving nature of training in our field. The key message: cookie-cutter training doesn't cut it anymore. With increasing regulatory complexity and rapidly advancing technology, firms must invest in flexible, personalized learning programs.

Whether it's improving communication skills, building tech proficiency, or embedding a culture of curiosity, the session emphasized that training must be more than a check-the-box activity. Ongoing mentorship, cross-training, and embracing neurodiversity in learning styles are all part of building high-performing, engaged teams.

AI is Here—But It Needs a Human Co-Pilot

Several sessions explored the growing role of AI and automation in performance and reporting. The consensus? AI holds immense promise, but without strong data governance and human oversight, it’s not a silver bullet. From hallucinations in generative models to the ethical challenges of data usage, AI introduces new risks even as it streamlines workflows.

Use cases presented ranged from anomaly detection and report generation to client communication enhancements and predictive exception handling. But again and again, speakers emphasized: AI should augment, not replace, human expertise.

Private Markets Require Purpose-Built Tools

Private equity, private credit, real estate, and hedge funds remain among the trickiest asset classes to measure. Whether debating IRR vs. TWR, handling data lags, or selecting appropriate benchmarks, this year's sessions highlighted just how much nuance is involved in getting private market reporting right.

One particularly compelling idea: using replicating portfolios of public assets to assess the risk and performance of illiquid investments. This approach offers more transparency and a better sense of underlying exposures, especially in the absence of timely valuations.

Shorting and Leverage Complicate Performance Attribution

Calculating performance in long/short portfolios isn’t straightforward—and using absolute values can create misleading results. A session on this topic broke down the mechanics of short selling and explained why contribution-based return attribution is essential for accurate reporting.

The key insight: portfolio-level returns can fall outside the range of individual asset returns, especially in leveraged portfolios. Understanding the directional nature of each position is crucial for both internal attribution and external communication.

The SEC is Watching—Are You Ready?

Compliance was another hot topic, especially in light of recent enforcement actions under the SEC Marketing Rule. From misuse of hypothetical performance to sloppy use of testimonials, the panelists shared hard-earned lessons and emphasized the importance of documentation. This panel was moderated by Longs Peak’s Matt Deatherage, CFA, CIPM and included Lance Dial of K&L Gates and Thayne Gould of Vigilant.

FAQs have helped clarify gray areas (especially around extracted performance and proximity of net vs. gross returns), but more guidance is expected—particularly on model fees and performance portability. If you're not already documenting every performance claim, now is the time to start.

“Phantom Alpha” Is Real—And Preventable

David Spaulding of TSG closed the conference with a deep dive into benchmark construction and the potential for “phantom alpha.” Even small differences in rebalancing frequency between portfolios and their benchmarks can create misleading outperformance. His recommendation? Either sync your rebalancing schedules or clearly disclose the differences.

This session served as a great reminder that even small implementation details can significantly impact reported performance—and that transparency is essential to maintaining trust.
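Phantom alpha is easy to reproduce. The sketch below links identical asset returns under the same 60/40 target, rebalancing the "portfolio" monthly and the "benchmark" quarterly (all return figures hypothetical):

```python
def linked_return(target_weights, period_returns, rebalance_every):
    """Grow a blend of assets, rebalancing back to target weights every
    `rebalance_every` periods; returns the cumulative return."""
    values = list(target_weights)
    for t, rets in enumerate(period_returns):
        values = [v * (1 + r) for v, r in zip(values, rets)]
        if (t + 1) % rebalance_every == 0:
            total = sum(values)
            values = [total * w for w in target_weights]
    return sum(values) - 1.0

# Same two assets, same 60/40 target -- only rebalancing frequency differs.
months = [(0.06, -0.01), (-0.02, 0.03)] * 3   # six months of asset returns
portfolio = linked_return([0.6, 0.4], months, rebalance_every=1)  # monthly
benchmark = linked_return([0.6, 0.4], months, rebalance_every=3)  # quarterly

phantom_alpha = portfolio - benchmark
print(f"Portfolio {portfolio:.4%} vs benchmark {benchmark:.4%} "
      f"-> phantom alpha {phantom_alpha * 10000:.0f} bps")
```

With nothing but a rebalancing mismatch, the "portfolio" shows roughly 17 bps of outperformance over six months despite holding exactly the same assets at the same target weights.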

Final Thoughts

From automation to attribution, PMAR 2025 showcased the depth and complexity of our field. If there’s one overarching takeaway, it’s that while tools and techniques continue to evolve, the core principles—transparency, accuracy, and accountability—remain as important as ever.

Did you attend PMAR this year? We’d love to hear your biggest takeaways. Reach out to us at hello@longspeakadvisory.com or drop us a note on LinkedIn!