Creating GIPS Compliant Presentations

Sean P. Gilligan, CFA, CPA, CIPM
Managing Partner
January 25, 2018
15 min

Firms that are GIPS compliant are required to provide all prospective clients with a GIPS compliant presentation. Typically, each composite has its own separate one-page sheet that includes all the statistics and disclosures required for that composite. This one-page sheet can be attached as an appendix to your firm’s pitchbooks and other marketing materials to properly represent your firm to the public as a GIPS compliant firm.

Not all compliant presentations are the same. Your firm’s required statistics and disclosures will depend on your firm’s strategies and policies. In this article, we discuss the required statistics and disclosures applicable to most GIPS compliant firms. In addition, we provide information on common issues firms face when creating compliant presentations and what you might be able to do to avoid them.

Required GIPS Statistics

Although additional statistics may be required, the following are the most common statistics that GIPS compliant firms are required to present in their compliant presentations:

  • Annual composite time-weighted returns (gross and/or net) – GIPS recommends the use of gross-of-fee returns; however, at least in the United States, it is most common to include both gross and net-of-fee returns. Net returns can be based on actual management fees or a model fee. As discussed in a previous post titled “Are fee-related administrative issues causing errors in your investment performance?” using a model fee instead of actual fees may be necessary when you have clients that pay fees from an outside source (e.g., by check or from another account your firm manages for them). A brief net-of-model-fee illustration follows this list.
  • Annual benchmark returns – GIPS requires the use of a benchmark unless you are able to disclose a reason why no meaningful benchmark is available. Even if your strategy is benchmark agnostic, most firms choose to include the most relevant benchmark available and then disclose any material differences between the benchmark and the strategy.
  • Number of portfolios in the composite as of each year-end – This is simply the number of portfolios that are included in the composite as of 31 December each year.
  • Total assets in the composite as of each year-end – This is simply the sum of the composite assets as of 31 December each year.
  • Total assets of the GIPS firm as of each year-end – This is the sum of all discretionary and non-discretionary portfolio assets that are included in the firm definition as of 31 December each year.
  • A measure of internal dispersion for each annual period – Internal dispersion is a measure used to give the user of the performance report an indication as to how tightly the strategy is managed. In other words, if you are reporting that the composite return was 10% for the most recent annual period, a low internal dispersion figure will tell the user that most portfolios in the composite returned approximately 10%. High dispersion would indicate that the portfolios in the composite had a more diverse set of returns (e.g., perhaps some returned 5% while others returned 15%). Typically, firms use standard deviation to present this, which can be calculated on either an equal-weighted or asset-weighted basis. An example calculation of both dispersion and the three-year standard deviation appears after this list.
  • Three-year annualized ex-post standard deviation of both the composite and the benchmark based on monthly returns – This is a measure of risk. The standard deviation of the composite’s monthly returns and the benchmark’s monthly returns provides the user of the performance report an idea of the level of risk taken compared to the benchmark. Ideally, you want higher annual returns and lower annualized standard deviation compared to the composite’s benchmark. That would indicate that you were able to outperform while taking less risk. For composites where a different measure of risk would be more meaningful than standard deviation, firms may present an additional risk measure with an explanation as to why that measure is more relevant, but the annualized standard deviation must still be included.
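
Since net returns often come up, here is a minimal sketch (in Python, with hypothetical fee and return figures) of one common model-fee convention: reducing each monthly gross return by one-twelfth of the model annual fee before linking. Your firm’s GIPS P&P may prescribe a different model fee or netting convention.

```python
# Hypothetical illustration of net-of-model-fee returns. The 1.00% model fee,
# the monthly application convention, and the return series are assumptions.

MODEL_ANNUAL_FEE = 0.01  # assumed model management fee of 1.00% per year

def net_of_model_fee(gross_monthly_returns, annual_fee=MODEL_ANNUAL_FEE):
    """Reduce each monthly gross return by 1/12 of the model annual fee."""
    monthly_fee = annual_fee / 12
    return [(1 + r) * (1 - monthly_fee) - 1 for r in gross_monthly_returns]

def link_monthly(returns):
    """Geometrically link monthly returns into a cumulative period return."""
    cumulative = 1.0
    for r in returns:
        cumulative *= 1 + r
    return cumulative - 1

gross = [0.012, -0.004, 0.021, 0.008, 0.015, -0.010,
         0.007, 0.019, -0.002, 0.011, 0.006, 0.014]  # hypothetical monthly returns
net = net_of_model_fee(gross)
print(f"Gross annual return: {link_monthly(gross):.2%}")
print(f"Net annual return:   {link_monthly(net):.2%}")
```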
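
To make the two standard deviation–based statistics above concrete, the following sketch (Python, hypothetical data) shows an equal-weighted internal dispersion calculation and a three-year annualized ex-post standard deviation built from 36 monthly composite returns. Firms using asset-weighted dispersion, or a sample rather than population standard deviation, would adjust accordingly.

```python
import math
import statistics

# --- Internal dispersion (equal-weighted standard deviation) ---
# Annual returns of the portfolios that were in the composite for the FULL
# year (hypothetical data). Portfolios added mid-year are excluded from the
# dispersion calculation but still counted in the year-end portfolio count.
full_year_portfolio_returns = [0.092, 0.105, 0.098, 0.111, 0.101]
internal_dispersion = statistics.pstdev(full_year_portfolio_returns)
print(f"Equal-weighted internal dispersion: {internal_dispersion:.2%}")

# --- Three-year annualized ex-post standard deviation ---
# Calculated from the trailing 36 monthly composite (or benchmark) returns
# and annualized by multiplying by the square root of 12.
monthly_returns = [0.013, -0.021, 0.016] * 12  # hypothetical 36 observations
annualized_std = statistics.pstdev(monthly_returns) * math.sqrt(12)
print(f"3-year annualized ex-post standard deviation: {annualized_std:.2%}")
```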

Other statistics may also be required if, for example, your firm manages non-fee-paying or bundled-fee accounts. Firms with these types of accounts must show the percentage of the composite they represent as of each year-end. Firms with private equity or real estate composites must also present different statistics, which can be found in the Real Estate and Private Equity provisions of the GIPS Standards.

Required Disclosures

When reviewing compliant presentations before distribution, many firms focus purely on the statistics presented to ensure material errors do not exist. This is often done without realizing that missing or incorrect disclosures can also be considered a material error. Thus, you’ll want to make sure your review process incorporates an evaluation of both.

The disclosures that must be included in a GIPS compliant presentation will differ by firm and by composite. Rather than listing all of them here, we have compiled a checklist of required GIPS disclosures which can be used as part of your firm’s marketing material review process. This checklist can be used to help you incorporate the proper disclosures for each compliant presentation prior to approving them for external use.

When reviewing the disclosures included in your firm’s GIPS compliant presentations, it is important to ensure:

  1. No required disclosures are missing.
  2. The disclosures are consistent with the policies documented in your GIPS Policies and Procedures document (“GIPS P&P”), including any recent changes to policies. For example, if a minimum asset level is changed for a composite, it is important to ensure that this change is consistently:
    1. documented in your firm’s GIPS P&P,
    2. implemented in the actual composite construction, and
    3. disclosed in the GIPS compliant presentation.
  3. Any disclosures (such as the claim of compliance) that are required to be written word-for-word as stated in the standards are not modified in any way.

Common Issues

Firms that do not have composite maintenance software or an external GIPS consultant to create their GIPS compliant presentations often create them manually. When creating and updating compliant presentations yourself, it is important to avoid these common mistakes:

  1. Don’t double count assets. For example, if the same portfolio is included in more than one composite, you will not be able to sum your composite assets to get to your total GIPS firm assets. Additionally, if you manage a fund and then some of the separate accounts you manage invest in that fund as part of their portfolio, you need to ensure you do not count those assets both as part of the fund and again as part of the separate accounts. It is also important to ensure that only actual accounts are included. Models and anything that is considered “advisory-only” should be excluded from your calculation. A simple aggregation sketch follows this list.
  2. Ensure that the number of portfolios reported is the total number of portfolios included in the composite as of 31 December of that year. Since internal dispersion is calculated based on only the portfolios that were in the composite for the full year, some firms make the mistake of reporting their number of portfolios as just the number of portfolios that were included for the full year. This is not correct as this statistic is intended to be the total number of portfolios in the composite as of each year-end.
  3. When partial-year performance is presented, it is important to:
    1. Clearly label the period for which performance is presented.
    2. Match the benchmark period to the period presented for the composite.
  4. Keep your presentations up-to-date. This means:
    1. Updating presentations with corrected statistics if corrections are made to the composite’s data. For example, firms may make updates to transactions for reconciliation purposes, such as backdating dividends. If this results in a change to composite-level statistics, then the compliant presentations must be updated accordingly. It is important to consistently follow your firm’s GIPS error correction policy. Typically, immaterial changes to the statistics are updated for future use even if the changes are not large enough to trigger redistribution of the presentation.
    2. Updating presentations with the most recent year’s statistics as soon as they become available. It is not necessary to wait for the verification to be complete before adding and presenting updated statistics. For example, if your annual GIPS verification for calendar year 2017 will not be complete until mid-2018, you do not need to wait until the verification is complete to present the 2017 statistics in your compliant presentation. You just cannot update the date your firm is verified through until the verification report is issued (i.e., you can present unverified statistics for the 2017 period, but the date range of your verification will still be disclosed as ending 31 December 2016). This lets the user of your compliant presentation have the latest statistics while letting them know that the verification for the latest period is pending.
  5. Ensure there are no typos if you are manually entering the statistics into a table. Typos can easily cause material errors that would trigger the need for redistribution of the presentation with disclosure of the error. Establishing a simple review process can help your firm avoid this headache.
  6. Make sure the information for each composite is entered into the correct compliant presentation (i.e., ensure you do not enter the statistics for Composite A into the presentation for Composite B). Seems obvious, but you’d be surprised how often this mistake is made. Again, a reliable review process can help your firm avoid these mistakes.
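
As referenced in the first item above, here is a simple aggregation sketch (Python, with entirely hypothetical accounts) of summing total firm assets while excluding advisory-only assets and avoiding double counting of separate-account positions invested in a firm-managed fund. Your own books of record and account structure will differ.

```python
# Hypothetical illustration only: account names, values, and fields are made up.
accounts = [
    {"name": "Fund A", "value": 50_000_000, "type": "fund"},
    {"name": "SMA 1", "value": 10_000_000, "type": "separate", "invested_in_fund_a": 2_000_000},
    {"name": "SMA 2", "value": 8_000_000, "type": "separate", "invested_in_fund_a": 0},
    {"name": "Model X", "value": 25_000_000, "type": "advisory_only"},  # excluded
]

def total_firm_assets(accounts):
    """Sum actual account assets, skipping advisory-only/model assets and
    netting out separate-account positions already counted inside the fund."""
    total = 0.0
    for account in accounts:
        if account["type"] == "advisory_only":
            continue  # models and advisory-only assets are not firm assets
        total += account["value"]
        total -= account.get("invested_in_fund_a", 0)  # avoid counting fund units twice
    return total

print(f"Total GIPS firm assets: {total_firm_assets(accounts):,.0f}")  # 66,000,000
```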

Want to Learn More?

If you have any questions about creating compliant presentations or any GIPS statistics or disclosures, we would love to help. Longs Peak’s professionals have extensive experience helping firms become GIPS compliant as well as helping them maintain compliance with the GIPS Standards on an ongoing basis. Contact us to learn how we can help.

Recommended Posts


When you're responsible for overseeing the performance of an endowment or public pension fund, one of the most critical tools at your disposal is the benchmark. But not just any benchmark—a meaningful one, designed with intention and aligned with your Investment Policy Statement (IPS). Benchmarks aren’t just numbers to report alongside returns; they represent the performance your total fund should have delivered if your strategic targets were passively implemented.

And yet, many asset owners still find themselves working with benchmarks that don’t quite match their objectives—too generic, too simplified, or misaligned with how the total fund is structured. Let’s walk through how to build more effective benchmarks that reflect your IPS and support better performance oversight.

Start with the Policy: Your IPS Should Guide Benchmark Construction

Your IPS is more than a governance document—it is the road map that sets strategic asset allocation targets for the fund. Whether you're allocating 50% to public equity or 15% to private equity, each target signals an intentional risk/return decision. Your benchmark should be built to evaluate how well each segment of the total fund performed.

The key is to assign a benchmark to each asset class and sub-asset class listed in your IPS. This allows for layered performance analysis—at the individual sub-asset class level (such as large cap public equity), at the broader asset class level (like total public equity), and ultimately rolled up at the Total Fund level. When benchmarks reflect the same weights and structure as the strategic targets in your IPS, you can assess how tactical shifts in weights and active management within each segment are adding or detracting value.

Use Trusted Public Indexes for Liquid Assets

For traditional, liquid assets—like public equities and fixed income—benchmarking is straightforward. Widely recognized indexes like the S&P 500, MSCI ACWI, or Bloomberg U.S. Aggregate Bond Index are generally appropriate and provide a reasonable passive alternative against which to measure active strategies that invest in a pool of securities similar to the index.

These benchmarks are also calculated using time-weighted returns (TWR), which strip out the impact of cash flows—ideal for evaluating manager skill. When each component of your total fund has a TWR-based benchmark, they can all be rolled up into a total fund benchmark with consistency and clarity.

Think Beyond the Index for Private Markets

Where benchmarking gets tricky is in illiquid asset classes like private equity, real estate, or private credit. These don’t have public market indexes since they are private market investments, so you need a proxy that still supports a fair evaluation.

Some organizations use a peer group as the benchmark, but another approach is to use an annualized public market index plus a premium. For example, you might use the 7-year annualized return of the Russell 2000 (lagged by 3 months) plus a 3% premium to account for illiquidity and risk.

Using the 7-year annualized return rather than the current-period return smooths out public market volatility that may not be relevant to the private market comparison. The 3-month lag is used if your private asset valuations are updated when received rather than posted back to the valuation date. The purpose of the 3% premium (or whatever level you decide is appropriate) is to account for the excess return you expect private investments to deliver above public markets to make the liquidity risk worthwhile.
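
As a rough illustration of this index-plus-premium approach, the sketch below (Python, hypothetical data) annualizes trailing seven years of monthly index returns ending at the lagged quarter-end and adds the premium. The index series, the 3-month lag, and the 3% premium are all assumptions you would tailor to your own policy.

```python
ILLIQUIDITY_PREMIUM = 0.03   # assumed 3% annual premium
LAG_MONTHS = 3               # assumed 3-month valuation lag
YEARS = 7

def annualized_return(monthly_returns, years=YEARS):
    """Geometrically link monthly returns and annualize over the stated horizon."""
    cumulative = 1.0
    for r in monthly_returns:
        cumulative *= 1 + r
    return cumulative ** (1 / years) - 1

def private_markets_hurdle(index_monthly_returns):
    """7-year annualized index return, lagged by 3 months, plus the premium.

    index_monthly_returns: monthly index returns ordered oldest to newest,
    running through the current quarter-end (hypothetical inputs).
    """
    window = index_monthly_returns[-(YEARS * 12 + LAG_MONTHS):-LAG_MONTHS]
    return annualized_return(window) + ILLIQUIDITY_PREMIUM

history = [0.007] * (YEARS * 12 + LAG_MONTHS)  # made-up flat 0.7% per month
print(f"Private markets benchmark hurdle: {private_markets_hurdle(history):.2%}")
```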

By building in this hurdle, you create a reasonable, transparent benchmark that enables your board to ask: Is our private markets portfolio delivering enough excess return to justify the added risk and reduced liquidity?

Roll It All Up: Aggregated Benchmarks for Total Fund Oversight

Once you have individual benchmarks for each segment of the total fund, the next step is to aggregate them—using the strategic asset allocation weights from your IPS—to form a custom blended total fund benchmark.

This approach provides several advantages:

  • You can evaluate performance at both the micro (asset class) and macro (total fund) level.
  • You gain insight into where active management is adding value—and where it isn’t.
  • You ensure alignment between your strategic policy decisions and how performance is being measured.

For example, if your IPS targets 50% to public equities split among large-, mid-, and small-cap stocks, you can create a blended equity benchmark that reflects those sub-asset class allocations, and then roll it up into your total fund benchmark. Rebalancing of the blended benchmark should match the rebalancing frequency of the total fund.
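
Mechanically, the blended benchmark for any single period is just the weighted average of the component benchmark returns, using the policy weights from your IPS. The sketch below (Python) uses hypothetical weights and single-period returns to show the roll-up; the asset class names and numbers are illustrative only.

```python
# Hypothetical IPS policy weights (must sum to 100%).
policy_weights = {
    "Large Cap Equity": 0.30,
    "Mid Cap Equity": 0.10,
    "Small Cap Equity": 0.10,
    "Core Fixed Income": 0.35,
    "Private Equity": 0.15,
}

# Single-period returns for each component benchmark (illustrative only).
benchmark_returns = {
    "Large Cap Equity": 0.042,
    "Mid Cap Equity": 0.038,
    "Small Cap Equity": 0.051,
    "Core Fixed Income": 0.012,
    "Private Equity": 0.047,  # e.g., a lagged index-plus-premium hurdle
}

def blended_return(weights, returns):
    """Weighted average of component benchmark returns for one period."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "policy weights must sum to 1"
    return sum(weights[segment] * returns[segment] for segment in weights)

print(f"Total fund benchmark return: {blended_return(policy_weights, benchmark_returns):.2%}")
```

Across periods, these single-period blended returns would be linked geometrically, with the weights reset on the same schedule the total fund uses for rebalancing.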

What If There's No Market Benchmark?

In some cases, especially for highly customized or opportunistic strategies like hedge funds, there simply may not be a meaningful market index to use as a benchmark. In these cases, it is important to consider what hurdle would indicate success for this segment of the total fund. Examples of what some asset owners use include:

  • CPI + Premium – a simple inflation-based hurdle
  • Absolute return targets – such as a flat 7% annually
  • Total Fund return for the asset class – not helpful for evaluating the performance of this segment, but still useful for aggregation to create the total fund benchmark

While these aren’t perfect, they still serve an important function: they allow performance to be rolled into a total fund benchmark, even if the asset class itself is difficult to benchmark directly.

The Bottom Line: Better Benchmarks, Better Oversight

For public pension boards and endowment committees, benchmarks are essential for effective fiduciary oversight. A well-designed benchmark framework:

  • Reflects your strategic intent
  • Provides fair, consistent measurement of manager performance
  • Supports clear communication with stakeholders

At Longs Peak Advisory Services, we’ve worked with asset owners around the globe to develop custom benchmarking frameworks that align with their policies and support meaningful performance evaluation. If you’re unsure whether your current benchmarks are doing your IPS justice, we’re here to help you refine them.

Want to dig deeper? Let’s talk about how to tailor a benchmark framework that’s right for your total fund—and your fiduciary responsibilities. Reach out to us today.

Valuation Timing for Illiquid Investments
Explore how firms & asset owners can balance accuracy & timeliness in performance reporting for illiquid investments.
June 23, 2025
15 min

For asset owners and investment firms managing private equity, real estate, or other illiquid assets, one of the most persistent challenges in performance reporting is determining the right approach to valuation timing. Accurate performance results are essential, but delays in receiving valuations can create friction with timely reporting goals. How can firms strike the right balance?

At Longs Peak Advisory Services, we’ve worked with hundreds of investment firms and asset owners globally to help them present meaningful, transparent performance results. When it comes to illiquid investments, the trade-offs and decisions surrounding valuation timing can have a significant impact—not just on performance accuracy, but also on how trustworthy and comparable the results appear to stakeholders.

Why Valuation Timing Matters

Illiquid investments are inherently different from their liquid counterparts. While publicly traded securities can be valued in real-time with market prices, private equity and real estate investments often report with a delay—sometimes months after quarter-end.

This delay creates a reporting dilemma: Should firms wait for final valuations to ensure accurate performance, or should they push ahead with estimates or lagged valuations to meet internal or external deadlines?

It’s a familiar struggle for investment teams and performance professionals. On one hand, accuracy supports sound decision-making and stakeholder trust. On the other, reporting delays can hinder communication with boards, consultants, and beneficiaries—particularly for asset owners like endowments and public pension plans that follow strict reporting cycles.

Common Approaches to Delayed Valuations

For strategies involving private equity, real estate, or other illiquid holdings, receiving valuations weeks—or even months—after quarter-end is the norm rather than the exception. To deal with this lag, investment organizations typically adopt one of two approaches to incorporate valuations into performance reporting: backdating valuations or lagging valuations. Each has benefits and drawbacks, and the choice between them often comes down to a trade-off between accuracy and timeliness.

1. Backdating Valuations

In the backdating approach, once a valuation is received—say, a March 31 valuation that arrives in mid-June—it is recorded as of March 31, the actual valuation date. This ensures that performance reports reflect economic activity during the appropriate time period, regardless of when the data became available.

Pros:
  • Accuracy: Provides the most accurate snapshot of asset values and portfolio performance for the period being reported.
  • Integrity: Maintains alignment between valuation dates and the underlying activity in the portfolio, which is particularly important for internal analysis or for investment committees wanting to evaluate manager decisions during specific market environments.
Cons:
  • Delayed Reporting: Final performance for the quarter may be delayed by 4–6 weeks or more, depending on how long it takes to receive valuations.
  • Stakeholder Frustration: Boards, consultants, and beneficiaries may grow frustrated if they cannot access updated reports in a timely manner, especially if performance data is tied to compensation decisions, audit deadlines, or public disclosures.

When It's Useful:
  • When transparency and accuracy are prioritized over speed—e.g., in annual audited performance reports or regulatory filings.
  • For internal purposes where precise attribution and alignment with economic events are critical, such as evaluating decision-making during periods of market volatility.

2. Lagged Valuations

With the lagged approach, firms recognize delayed valuations in the subsequent reporting period. Using the same example: if the March 31 valuation is received in June, it is instead recorded as of June 30. In this case, the performance effect of the Q1 activity is pushed into Q2’s reporting.

Pros:
  • Faster Reporting: Performance reports can be completed shortly after quarter-end, meeting board, stakeholder, and regulatory timelines.
  • Operational Efficiency: Teams aren’t held up by a few delayed valuations, allowing them to close the books and move on to other tasks.

Cons:
  • Reduced Accuracy: Performance reported for Q2 includes valuation changes that actually occurred in Q1, misaligning performance with the period in which it was earned.
  • Misinterpretation Risk: If users are unaware of the lag, they may misattribute results to the wrong quarter, leading to flawed conclusions about manager skill or market behavior.

When It's Useful:
  • When quarterly reporting deadlines must be met (e.g., trustee meetings, consultant updates).
  • In environments where consistency and speed are prioritized, and the lag can be adequately disclosed and understood by users.
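
To see how the two treatments play out in the numbers, the short sketch below (Python, hypothetical values) computes quarterly returns for a single private holding when the same June-received, March 31 valuation is either backdated to March 31 or recognized as of June 30. It assumes no cash flows during the period.

```python
# Hypothetical carrying values for one private holding (no contributions
# or distributions): 100 at Dec 31, 106 at Mar 31 (received in June),
# 104 at Jun 30 (received even later).
dec_31, mar_31, jun_30 = 100.0, 106.0, 104.0

# Backdating: the 106 valuation is recorded as of March 31.
backdated_q1 = mar_31 / dec_31 - 1   # Q1 reflects the Q1 gain
backdated_q2 = jun_30 / mar_31 - 1   # Q2 reflects only Q2 activity

# Lagging: the holding stays at 100 through Q1; the 106 valuation is
# recognized at June 30, so the Q1 gain shows up in Q2's reported return
# (and the June 30 valuation would roll into Q3).
lagged_q1 = dec_31 / dec_31 - 1
lagged_q2 = mar_31 / dec_31 - 1

print(f"Backdated: Q1 {backdated_q1:+.1%}, Q2 {backdated_q2:+.1%}")
print(f"Lagged:    Q1 {lagged_q1:+.1%}, Q2 {lagged_q2:+.1%}")
```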

Choosing the Right Approach (and Sticking with It)

Both approaches are acceptable from a compliance and reporting perspective. However, the key lies in consistency.

Once an organization adopts an approach—whether backdating or lagging—it should be applied across all periods, portfolios, and asset classes. Inconsistent application opens the door to performance manipulation (or the appearance of it), where results might look better simply because a valuation was timed differently.

This kind of inconsistency can erode trust with boards, auditors and other stakeholders. Worse, it could raise red flags in a regulatory review or third-party verification.

Disclose, Disclose, Disclose

Regardless of the method you use, full transparency in reporting is essential. If you’re lagging valuations by a quarter, clearly state that in your disclosures. If you change methodologies at any point—perhaps transitioning from lagged to backdated—explain when and why that change occurred.

Clear disclosures help users of your reports—whether board members, beneficiaries, auditors, or consultants—understand how performance was calculated. It allows them to assess the results in context and make informed decisions based on the data.

Aligning Benchmarks with Valuation Timing

One important detail that’s often overlooked: your benchmark data should follow the same valuation timing as your portfolio.

If your private equity or real estate portfolio is lagged by a quarter, but your benchmark is not, your performance comparison becomes flawed. The timing mismatch can mislead stakeholders into believing the strategy outperformed or underperformed, simply due to misaligned reporting periods.

To ensure a fair and meaningful comparison, always apply your valuation timing method consistently across both your portfolio and benchmark data.

Building Trust Through Transparency

Valuation timing is a technical, often behind-the-scenes issue—but it plays a crucial role in how your investment results are perceived. Boards and stakeholders rely on accurate, timely, and understandable performance reporting to make decisions that impact beneficiaries, employees, and communities.

By taking the time to document your valuation policy, apply it consistently, and disclose it clearly, you are reinforcing your organization’s commitment to integrity and transparency. And in a world where scrutiny of investment performance is only increasing, that commitment can be just as valuable as the numbers themselves.

Need help defining your valuation timing policy or aligning performance reporting practices with industry standards?

Longs Peak Advisory Services specializes in helping investment firms and asset owners simplify their performance processes, maintain compliance, and build trust through transparent reporting. Contact us to learn how we can support your team.

Key Takeaways from the 2025 PMAR Conference
This year’s PMAR Conference delivered timely and thought-provoking content for performance professionals across the industry. In this post, we’ve highlighted our top takeaways from the event—including a recap of the WiPM gathering.
May 29, 2025
15 min

The Performance Measurement, Attribution & Risk (PMAR) Conference is always a highlight for investment performance professionals—and this year’s event did not disappoint. With a packed agenda spanning everything from economic uncertainty and automation to evolving training needs and private market complexities, PMAR 2025 gave attendees plenty to think about.

Here are some of our key takeaways from this year’s event:

Women in Performance Measurement (WiPM)

Although not officially a part of PMAR, WiPM often schedules its annual in-person gathering during the same week to take advantage of the broader industry presence at the event. This year’s in-person gathering united female professionals from across the country for a full day of connection, learning, and mentorship. The agenda struck a thoughtful balance between professional development and personal connection, with standout sessions on AI and machine learning, resume building, and insights from the WiPM mentoring program. A consistent favorite among attendees is the interactive format—discussions are engaging, and the support among members is truly energizing. The day concluded with a cocktail reception and dinner, reinforcing the group’s strong sense of community and its ongoing commitment to advancing women in the performance measurement profession.

If you’re not yet a member and are interested in joining the community, find WiPM here on LinkedIn.

Uncertainty, Not Risk, is Driving Market Volatility

John Longo, Ph.D., of Rutgers Business School kicked off the conference with a deep dive into the global economy, and his message was clear: today’s markets are more uncertain than risky. Tariffs, political volatility, and unconventional strategies—like the idea of purchasing Greenland—are reshaping global trade and investment decisions. His suggestion? Investors may want to look beyond U.S. borders and consider assets like gold or emerging markets as a hedge.

Longo also highlighted the looming national debt problem and inflationary effects of protectionist policies. For performance professionals, the implication is clear: macro-level policy choices are creating noise that can obscure traditional risk metrics. Understanding the difference between risk and uncertainty is more important than ever.

The Future of Training: Customized, Continuous, and Collaborative

In the “Developing Staff for Success” session, Frances Barney, CFA (former head of investment performance and risk analysis for BNY Mellon) and our very own Jocelyn Gilligan, CFA, CIPM explored the evolving nature of training in our field. The key message: cookie-cutter training doesn't cut it anymore. With increasing regulatory complexity and rapidly advancing technology, firms must invest in flexible, personalized learning programs.

Whether it's improving communication skills, building tech proficiency, or embedding a culture of curiosity, the session emphasized that training must be more than a check-the-box activity. Ongoing mentorship, cross-training, and embracing neurodiversity in learning styles are all part of building high-performing, engaged teams.

AI is Here—But It Needs a Human Co-Pilot

Several sessions explored the growing role of AI and automation in performance and reporting. The consensus? AI holds immense promise, but without strong data governance and human oversight, it’s not a silver bullet. From hallucinations in generative models to the ethical challenges of data usage, AI introduces new risks even as it streamlines workflows.

Use cases presented ranged from anomaly detection and report generation to client communication enhancements and predictive exception handling. But again and again, speakers emphasized: AI should augment, not replace, human expertise.

Private Markets Require Purpose-Built Tools

Private equity, private credit, real estate, and hedge funds remain among the trickiest asset classes to measure. Whether debating IRR vs. TWR, handling data lags, or selecting appropriate benchmarks, this year's sessions highlighted just how much nuance is involved in getting private market reporting right.

One particularly compelling idea: using replicating portfolios of public assets to assess the risk and performance of illiquid investments. This approach offers more transparency and a better sense of underlying exposures, especially in the absence of timely valuations.

Shorting and Leverage Complicate Performance Attribution

Calculating performance in long/short portfolios isn’t straightforward—and using absolute values can create misleading results. A session on this topic broke down the mechanics of short selling and explained why contribution-based return attribution is essential for accurate reporting.

The key insight: portfolio-level returns can fall outside the range of individual asset returns, especially in leveraged portfolios. Understanding the directional nature of each position is crucial for both internal attribution and external communication.

The SEC is Watching—Are You Ready?

Compliance was another hot topic, especially in light of recent enforcement actions under the SEC Marketing Rule. From misuse of hypothetical performance to sloppy use of testimonials, the panelists shared hard-earned lessons and emphasized the importance of documentation. This panel was moderated by Longs Peak’s Matt Deatherage, CFA, CIPM and included Lance Dial of K&L Gates, along with Thayne Gould from Vigilant.

FAQs have helped clarify gray areas (especially around extracted performance and proximity of net vs. gross returns), but more guidance is expected—particularly on model fees and performance portability. If you're not already documenting every performance claim, now is the time to start.

“Phantom Alpha” Is Real—And Preventable

David Spaulding of TSG closed the conference with a deep dive into benchmark construction and the potential for “phantom alpha.” Even small differences in rebalancing frequency between portfolios and their benchmarks can create misleading outperformance. His recommendation? Either sync your rebalancing schedules or clearly disclose the differences.

This session served as a great reminder that even small implementation details can significantly impact reported performance—and that transparency is essential to maintaining trust.

Final Thoughts

From automation to attribution, PMAR 2025 showcased the depth and complexity of our field. If there’s one overarching takeaway, it’s that while tools and techniques continue to evolve, the core principles—transparency, accuracy, and accountability—remain as important as ever.

Did you attend PMAR this year? We’d love to hear your biggest takeaways. Reach out to us at hello@longspeakadvisory.com or drop us a note on LinkedIn!