Investment Performance Outlier Testing

Sean P. Gilligan, CFA, CPA, CIPM
Managing Partner
January 29, 2020

For any firm that aggregates portfolios of the same strategy into a composite, or otherwise groups portfolios by mandate, how do you know that each portfolio truly follows that strategy? The answer is outlier testing.

Why Utilize Composites?

The GIPS standards require firms managing separate accounts to construct composites, which aggregate all discretionary portfolios of the same strategy. However, even for firms that are not GIPS compliant, the use of composites is considered best practice when reporting investment performance to prospective clients. Composites offer a more complete picture than presenting performance of a model or “representative portfolio” – which usually leaves prospects wondering whether the information is truly representative or if the portfolio presented was “cherry picked.”

When creating and maintaining composites, firms must ensure that portfolios are included in the correct composite for the right time period – the period for which you had full discretion to implement the composite strategy for that portfolio. This is achieved by following a clearly documented set of policies and procedures for composite inclusion and exclusion. However, what happens when changes are made to a portfolio and those changes are not communicated to the person maintaining the composite?

In an ideal world, information in your firm would flow perfectly so that the person maintaining your composites knows exactly what is happening with the firm’s clients. In reality, client requests commonly result in small or temporary changes to the portfolio (e.g., halt trading, raise cash) that are not formally documented in the client’s investment guidelines or investment policy statement.

Without formal documentation of these changes, information may not flow down to the manager of your composites. While these minor or temporary changes may not affect the client’s long-term objectives, they may cause the portfolio to deviate from the strategy, requiring (at least temporary) removal from its composite. When these restricted portfolios are left in the composite, they often become performance outliers and create “noise” in the composite results. This “noise” prevents the composite from providing a meaningful representation of the portfolio manager’s ability to implement the strategy. This will also interfere with your prospective clients’ ability to analyze and interpret your performance results.

Why test for performance outliers?

Testing for performance outliers prior to finalizing and publishing performance results can help your firm remove this “noise” and can prevent costly errors in performance presentations. Firms that lack adequate composite construction policies and controls to ensure the policies are consistently followed often end up with errors in their composite presentations. In fact, it is very likely that errors in your performance exist. It is rare for us at Longs Peak to conduct an outlier analysis where no issues are found. Outlier testing should be completed quarterly and, at a minimum, before any related verification or performance examination.

Many firms, especially those that are GIPS compliant, rely on their verifier to catch errors in their composites. We do not recommend this and suggest firms perform testing internally (or with the help of a performance consultant like Longs Peak) because:

  1. Verifiers only test a sample and will likely not catch all of your issues.
  2. Verification may happen months after the performance has been published. When errors are found, it may require redistribution of presentations with disclosures regarding prior performance errors.
  3. When verifiers find errors, they generally increase their sample size as well as their assessment of engagement risk. These two things lead to more time spent on the verification and a potential increase in your verification fee.

Even if not GIPS compliant, when firms use composites, regulators may test to ensure the composites are a meaningful representation of the strategy. In addition to improving accuracy, testing for performance outliers can help your firm’s composites meet the standards expected by regulators.

How can performance outliers be identified?

Testing for performance outliers involves reviewing the performance of portfolios within the same composite or strategy to test if they are performing similarly. This testing allows you to flag any portfolios that may be performing differently so you can evaluate if their inclusion in the composite is appropriate.

For example, if your firm has a Large Cap Growth composite, testing performance outliers would involve compiling the return data for all of your Large Cap Growth portfolios, identifying which portfolios performed materially different from their peers, researching why they performed differently, and then taking the appropriate action if an issue is discovered. This may sound like a daunting task, but it doesn’t have to be. Let us walk you through this in more detail.

Some firms simply look at the absolute difference between each portfolio’s monthly return and the monthly return of the composite. While this may be straightforward, relying only on the absolute difference to determine outliers does not take into consideration the size of the return and the normal distribution of portfolio returns in the composite. For example, if you set a threshold to look at all portfolios that deviate from the composite return by 50bps, the result for a composite with low dispersion and a total return of 2% would be very different than for a composite with higher dispersion and a total return of 20%.

In the outlier analysis Longs Peak conducts for clients, we use standard deviation in conjunction with a comparison of the absolute differences to identify the outlier portfolios that require review. Utilizing standard deviation allows us to identify portfolios that are truly outside the normal distribution of returns for each period. For example, reviewing all portfolios that are more than 3 standard deviations from the composite mean will provide the portfolios outside the normal distribution of returns for that period, regardless of the size of the return or the level of dispersion in that composite.
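As a rough illustration of the standard-deviation test described above (the portfolio names and returns are made up, and this is a minimal sketch, not Longs Peak’s actual methodology):

```python
# Flag portfolios whose monthly return falls outside the normal
# distribution of returns for the composite, using z-scores rather
# than a fixed absolute-difference threshold.

def flag_outliers(returns, threshold=3.0):
    """Return {portfolio: z-score} for portfolios more than `threshold`
    standard deviations from the composite's equal-weighted mean."""
    values = list(returns.values())
    n = len(values)
    mean = sum(values) / n
    # Population standard deviation of the composite's returns
    variance = sum((r - mean) ** 2 for r in values) / n
    std = variance ** 0.5
    if std == 0:
        return {}
    return {
        name: round((r - mean) / std, 2)   # z-score for each flagged portfolio
        for name, r in returns.items()
        if abs(r - mean) > threshold * std
    }

# Fifteen portfolios clustered near 2% plus one at 8% (hypothetical data)
cluster = [0.018] * 2 + [0.019] * 3 + [0.020] * 5 + [0.021] * 3 + [0.022] * 2
monthly_returns = {f"Portfolio {i + 1:02d}": r for i, r in enumerate(cluster)}
monthly_returns["Portfolio 16"] = 0.080    # likely a restricted account

print(flag_outliers(monthly_returns))      # → {'Portfolio 16': 3.86}
```

Note that a z-score test needs a reasonable number of portfolios to be meaningful: with very few accounts, a single extreme return inflates the standard deviation so much that nothing can exceed three standard deviations.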

What to consider when reviewing outlier performance

The severity of the outlier

The larger the outlier, the more likely it is that the portfolio has an issue that would require it to be removed from the composite. We typically start by looking at the most extreme outliers first. Generally, we look at portfolios with performance periods flagged at +/-3 standard deviations from the mean return for the period. By addressing these first (including removing them if it is determined they do not belong in the composite), we are able to re-run the outlier test to assess what outliers exist without these extreme cases disrupting the analysis.

Once these extreme outliers are addressed, we move on to review the portfolios that are +/-2 standard deviations and even +/-1.5 standard deviations, if needed. We keep reviewing accounts with returns closer and closer to the composite’s mean return until we are consistently confirming that the portfolios do in fact belong in the composite and errors are not being found.

Each firm will be different in how much they need to drill down to get to a point of comfort that no more errors exist. If your composite is managed strictly to a model, the outliers will be very clear and easy to identify. If each portfolio you manage is customized, more research is often needed to determine if the outlier performance is simply a result of the portfolio’s customization or if the portfolio was included in the wrong composite.

How often the portfolio is an outlier

Longs Peak’s performance outlier reports show a portfolio’s performance, the number of standard deviations it is from the mean each month, and the number of months the portfolio was an outlier throughout its history in that composite. Our reports also show whether there was a cash flow during that period or not. The following are examples of outlier frequencies we evaluate:

Infrequent: If you see that a portfolio is only an outlier for one month and that month had a large cash flow, then you will know that the portfolio is likely only an outlier for that period because of the cash flow and, often, no further research is required.

Frequent: If you can see that the portfolio is an outlier for most of the months under review, then you will know that there is likely an issue with this portfolio.

As of a specific date: If you can see that the portfolio was not an outlier historically, but became a frequent outlier from a certain month forward, this may indicate that a restriction was added or that the strategy changed as of that period. The portfolio may then need to be reclassified to the appropriate composite or flagged as non-discretionary.
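A frequency tally along the lines described above can be sketched as follows; the months, portfolio names, and flag sets are hypothetical, and the 50% cut-off for “frequent” is an arbitrary illustration, not a standard:

```python
# Tally how often each portfolio was flagged as an outlier across its
# history in a composite. `history` maps month -> set of portfolios
# flagged that month (e.g., by a z-score test).

from collections import Counter

history = {
    "2024-01": {"Portfolio C"},
    "2024-02": set(),
    "2024-03": {"Portfolio C"},
    "2024-04": {"Portfolio C", "Portfolio F"},   # F had a large cash flow
    "2024-05": {"Portfolio C"},
    "2024-06": {"Portfolio C"},
}

counts = Counter(name for flagged in history.values() for name in flagged)

for name, months in counts.most_common():
    share = months / len(history)
    label = "frequent" if share >= 0.5 else "infrequent"
    print(f"{name}: outlier in {months}/{len(history)} months ({label})")
```

Here Portfolio C is flagged in five of six months (likely a real issue), while Portfolio F is flagged only in the month of its cash flow (likely explainable with no further action).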

The most common causes of outlier performance and how to address performance outliers

Common causes of outlier performance:

  • Data issues – When outliers are extreme, it is likely that there is an issue with the data. Examples include a pricing issue that caused a material jump in performance or a late dividend hitting a portfolio that is closing and had most of its assets already transferred out. These issues are often easily addressed, depending on the circumstance of each case.
  • Cash flows – If a portfolio is only an outlier for one month and during that month the portfolio experienced a large cash flow, this is likely the reason for the outlier performance. If the portfolio had high cash for a period of time around the cash flow and the market moved during that period, this portfolio likely would perform differently than its fully invested peers. Nothing needs to be done in this scenario since the outlier performance is explained and there is no indication that the portfolio is invested incorrectly or grouped with the wrong portfolios.
  • Legacy positions or other client restrictions – If your clients hold legacy positions that you are restricted from selling or have other similar restrictions, this will likely cause these portfolios to perform differently when compared to their unrestricted peers. Depending on your composite construction rules, unless immaterial, these portfolios likely need to be excluded from the composite. With these portfolios removed, other outliers may appear that were not as noticeable when the restricted portfolios were included. It is important to refer to your firm’s composite construction policies, which should outline clear parameters for when restricted portfolios should be included/excluded in composites.
  • Portfolio categorized incorrectly – A portfolio may appear as an outlier because it was placed in the wrong composite. This often happens if a portfolio’s composite changed and it was not removed from its prior composite. If this is the case, the portfolio must be removed (after the change) and added to the new composite based on the timing outlined in your firm’s composite construction policies.
  • Portfolio managed incorrectly – Performance outlier analysis may help identify a portfolio that is managed to the wrong strategy. For example, it is possible that the portfolio is grouped with the correct portfolios, but the wrong strategy was implemented in the portfolio. This is one of the most important errors that performance outlier testing can identify because it means that the client is actually not having their money managed to the strategy for which your firm was hired. In this case, the portfolio would need to be rebalanced to the correct strategy. Likely, a review of the history would need to be conducted as well to ensure the client was not disadvantaged by the error.
  • High dispersion between portfolio managers – Especially when more than one portfolio manager is implementing the same composite at your firm, material differences may exist in the way they each manage the strategy. Outlier performance may be due to differences in the portfolio managers’ discretionary management. If the composite is being sold as one cohesive product, it is important to identify where the portfolio managers deviate and determine if they can work more closely together to avoid high dispersion or if the strategy should actually be run as two different products.

When researching outlier performance, keep in mind that, on its own, a portfolio’s performance deviating from its peers is not a valid reason to remove the portfolio from its composite. You need to determine the root cause of the deviation and remove the portfolio from its composite only if the root cause was client-driven. If the deviation was caused by tactical, discretionary moves made by the portfolio manager, the portfolio must remain in the composite as its performance is still a representation of the portfolio manager’s implementation of the strategy.

Ready to implement performance outlier testing at your firm?

While it is best practice to create a flow of information that will allow portfolios to proactively be included/excluded in the correct composite at the appropriate time, testing for performance outliers acts as a back-up plan to catch anything that was missed.

If analyzing your composite data to identify performance outliers is not something you have the resources to do internally, Longs Peak is available to help. Longs Peak offers both consulting and reporting services that can assist your firm with outlier analysis. Outlier analysis should be conducted at least quarterly to help ensure your firm is managing portfolios consistently and reporting strategy or composite performance that is meaningful and accurate. Please contact us to discuss how we can help implement this practice for your firm.

Questions? 

If you have questions about investment performance, composite construction, or the GIPS standards, we would love to talk to you. Longs Peak’s professionals have extensive experience helping firms with all of their investment performance needs. Please feel free to email Sean Gilligan directly at sean@longspeakadvisory.com.


When you're responsible for overseeing the performance of an endowment or public pension fund, one of the most critical tools at your disposal is the benchmark. But not just any benchmark—a meaningful one, designed with intention and aligned with your Investment Policy Statement (IPS). Benchmarks aren’t just numbers to report alongside returns; they represent the performance your total fund should have delivered if your strategic targets were passively implemented.

And yet, many asset owners still find themselves working with benchmarks that don’t quite match their objectives—either too generic, too simplified, or misaligned with how the total fund is structured. Let’s walk through how to build more effective benchmarks that reflect your IPS and support better performance oversight.

Start with the Policy: Your IPS Should Guide Benchmark Construction

Your IPS is more than a governance document—it is the road map that sets strategic asset allocation targets for the fund. Whether you're allocating 50% to public equity or 15% to private equity, each target signals an intentional risk/return decision. Your benchmark should be built to evaluate how well each segment of the total fund performed.

The key is to assign a benchmark to each asset class and sub-asset class listed in your IPS. This allows for layered performance analysis—at the individual sub-asset class level (such as large cap public equity), at the broader asset class level (like total public equity), and ultimately rolled up at the Total Fund level. When benchmarks reflect the same weights and structure as the strategic targets in your IPS, you can assess how tactical shifts in weights and active management within each segment are adding or detracting value.

Use Trusted Public Indexes for Liquid Assets

For traditional, liquid assets—like public equities and fixed income—benchmarking is straightforward. Widely recognized indexes like the S&P 500, MSCI ACWI, or Bloomberg U.S. Aggregate Bond Index are generally appropriate and provide a reasonable passive alternative against which to measure active strategies managed using a similar pool of investments as the index.

These benchmarks are also calculated using time-weighted returns (TWR), which strip out the impact of cash flows—ideal for evaluating manager skill. When each component of your total fund has a TWR-based benchmark, they can all be rolled up into a total fund benchmark with consistency and clarity.

Think Beyond the Index for Private Markets

Where benchmarking gets tricky is in illiquid asset classes like private equity, real estate, or private credit. These don’t have public market indexes since they are private market investments, so you need a proxy that still supports a fair evaluation.

Some organizations use a peer group as the benchmark, but another approach is to use an annualized public market index plus a premium. For example, you might use the 7-year annualized return of the Russell 2000 (lagged by 3 months) plus a 3% premium to account for illiquidity and risk.

Using the 7-year average rather than the current period return removes the public market volatility for the period that may not be as relevant for the private market comparison. The 3-month lag is used if your private asset valuations are updated when received rather than posted back to the valuation date. The purpose of the 3% premium (or whatever you decide is appropriate) is to account for the excess return you expect to receive from private investments above public markets to make the liquidity risk worthwhile.
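As a sketch of this construction, assuming monthly index return data (the index returns below are made up; the 7-year window, 3-month lag, and 3% premium follow the example above):

```python
# Build a private-markets benchmark as a lagged, 7-year annualized
# public index return plus a fixed illiquidity premium.

def annualized_return(monthly_returns):
    """Geometrically link monthly returns and annualize the result."""
    growth = 1.0
    for r in monthly_returns:
        growth *= 1 + r
    years = len(monthly_returns) / 12
    return growth ** (1 / years) - 1

def private_benchmark(index_monthly, premium=0.03, lag_months=3, window_years=7):
    """Drop the most recent `lag_months` observations, annualize the
    trailing `window_years` of index returns, and add the premium."""
    usable = index_monthly[:-lag_months] if lag_months else index_monthly
    window = usable[-window_years * 12:]
    return annualized_return(window) + premium

# Ten years of flat 0.7% monthly index returns (hypothetical)
index = [0.007] * 120
print(round(private_benchmark(index), 4))   # → 0.1173
```

With a flat 0.7% monthly index, the 7-year annualized return is about 8.7%, so the benchmark hurdle is roughly 11.7% once the 3% premium is added.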

By building in this hurdle, you create a reasonable, transparent benchmark that enables your board to ask: Is our private markets portfolio delivering enough excess return to justify the added risk and reduced liquidity?

Roll It All Up: Aggregated Benchmarks for Total Fund Oversight

Once you have individual benchmarks for each segment of the total fund, the next step is to aggregate them—using the strategic asset allocation weights from your IPS—to form a custom blended total fund benchmark.

This approach provides several advantages:

  • You can evaluate performance at both the micro (asset class) and macro (total fund) level.
  • You gain insight into where active management is adding value—and where it isn’t.
  • You ensure alignment between your strategic policy decisions and how performance is being measured.

For example, if your IPS targets 50% to public equities split among large-, mid-, and small-cap stocks, you can create a blended equity benchmark that reflects those sub-asset class allocations, and then roll it up into your total fund benchmark. Rebalancing of the blends should match the rebalancing frequency of the total fund.
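The roll-up itself is a weighted sum of segment benchmark returns using IPS policy weights. The weights and returns below are hypothetical:

```python
# Roll segment benchmark returns up into a custom blended total fund
# benchmark using strategic policy weights from the IPS.

policy_weights = {   # hypothetical IPS strategic asset allocation
    "Large Cap Equity": 0.25, "Mid Cap Equity": 0.15, "Small Cap Equity": 0.10,
    "Fixed Income": 0.35, "Private Equity": 0.15,
}

segment_returns = {  # one period of benchmark returns for each segment
    "Large Cap Equity": 0.031, "Mid Cap Equity": 0.028, "Small Cap Equity": 0.040,
    "Fixed Income": 0.012, "Private Equity": 0.025,
}

# Policy weights must sum to 100% for the blend to be meaningful
assert abs(sum(policy_weights.values()) - 1.0) < 1e-9

blended = sum(policy_weights[k] * segment_returns[k] for k in policy_weights)
print(f"Total fund benchmark return: {blended:.4%}")
```

With these numbers the blended total fund benchmark returns 2.39% for the period; over multiple periods, the single-period blends would be geometrically linked, with the blend reset at each rebalancing date.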

What If There's No Market Benchmark?

In some cases, especially for highly customized or opportunistic strategies like hedge funds, there simply may not be a meaningful market index to use as a benchmark. In these cases, it is important to consider what hurdle would indicate success for this segment of the total fund. Examples of what some asset owners use include:

  • CPI + Premium – a simple inflation-based hurdle
  • Absolute return targets – such as a flat 7% annually
  • Total Fund return for the asset class – not helpful for evaluating the performance of this segment, but still useful for aggregation to create the total fund benchmark

While these aren’t perfect, they still serve an important function: they allow performance to be rolled into a total fund benchmark, even if the asset class itself is difficult to benchmark directly.

The Bottom Line: Better Benchmarks, Better Oversight

For public pension boards and endowment committees, benchmarks are essential for effective fiduciary oversight. A well-designed benchmark framework:

  • Reflects your strategic intent
  • Provides fair, consistent measurement of manager performance
  • Supports clear communication with stakeholders

At Longs Peak Advisory Services, we’ve worked with asset owners around the globe to develop custom benchmarking frameworks that align with their policies and support meaningful performance evaluation. If you’re unsure whether your current benchmarks are doing your IPS justice, we’re here to help you refine them.

Want to dig deeper? Let’s talk about how to tailor a benchmark framework that’s right for your total fund—and your fiduciary responsibilities. Reach out to us today.

Valuation Timing for Illiquid Investments
Explore how firms & asset owners can balance accuracy & timeliness in performance reporting for illiquid investments.
June 23, 2025

For asset owners and investment firms managing private equity, real estate, or other illiquid assets, one of the most persistent challenges in performance reporting is determining the right approach to valuation timing. Accurate performance results are essential, but delays in receiving valuations can create friction with timely reporting goals. How can firms strike the right balance?

At Longs Peak Advisory Services, we’ve worked with hundreds of investment firms and asset owners globally to help them present meaningful, transparent performance results. When it comes to illiquid investments, the trade-offs and decisions surrounding valuation timing can have a significant impact—not just on performance accuracy, but also on how trustworthy and comparable the results appear to stakeholders.

Why Valuation Timing Matters

Illiquid investments are inherently different from their liquid counterparts. While publicly traded securities can be valued in real-time with market prices, private equity and real estate investments often report with a delay—sometimes months after quarter-end.

This delay creates a reporting dilemma: Should firms wait for final valuations to ensure accurate performance, or should they push ahead with estimates or lagged valuations to meet internal or external deadlines?

It’s a familiar struggle for investment teams and performance professionals. On one hand, accuracy supports sound decision-making and stakeholder trust. On the other, reporting delays can hinder communication with boards, consultants, and beneficiaries—particularly for asset owners like endowments and public pension plans that follow strict reporting cycles.

Common Approaches to Delayed Valuations

For strategies involving private equity, real estate, or other illiquid holdings, receiving valuations weeks—or even months—after quarter-end is the norm rather than the exception. To deal with this lag, investment organizations typically adopt one of two approaches to incorporate valuations into performance reporting: backdating valuations or lagging valuations. Each has benefits and drawbacks, and the choice between them often comes down to a trade-off between accuracy and timeliness.

1. Backdating Valuations

In the backdating approach, once a valuation is received—say, a March 31 valuation that arrives in mid-June—it is recorded as of March 31, the actual valuation date. This ensures that performance reports reflect economic activity during the appropriate time period, regardless of when the data became available.

Pros:
  • Accuracy: Provides the most accurate snapshot of asset values and portfolio performance for the period being reported.
  • Integrity: Maintains alignment between valuation dates and the underlying activity in the portfolio, which is particularly important for internal analysis or for investment committees wanting to evaluate manager decisions during specific market environments.
Cons:
  • Delayed Reporting: Final performance for the quarter may be delayed by 4–6 weeks or more, depending on how long it takes to receive valuations.
  • Stakeholder Frustration: Boards, consultants, and beneficiaries may grow frustrated if they cannot access updated reports in a timely manner, especially if performance data is tied to compensation decisions, audit deadlines, or public disclosures.

When It's Useful:
  • When transparency and accuracy are prioritized over speed—e.g., in annual audited performance reports or regulatory filings.
  • For internal purposes where precise attribution and alignment with economic events are critical, such as evaluating decision-making during periods of market volatility.

2. Lagged Valuations

With the lagged approach, firms recognize delayed valuations in the subsequent reporting period. Using the same example: if the March 31 valuation is received in June, it is instead recorded as of June 30. In this case, the performance effect of the Q1 activity is pushed into Q2’s reporting.

Pros:
  • Faster Reporting: Performance reports can be completed shortly after quarter-end, meeting board, stakeholder, and regulatory timelines.
  • Operational Efficiency: Teams aren’t held up by a few delayed valuations, allowing them to close the books and move on to other tasks.

Cons:
  • Reduced Accuracy: Performance reported for Q2 includes valuation changes that actually occurred in Q1, misaligning performance with the period in which it was earned.
  • Misinterpretation Risk: If users are unaware of the lag, they may misattribute results to the wrong quarter, leading to flawed conclusions about manager skill or market behavior.

When It's Useful:
  • When quarterly reporting deadlines must be met (e.g., trustee meetings, consultant updates).
  • In environments where consistency and speed are prioritized, and the lag can be adequately disclosed and understood by users.
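A toy example (with made-up values and simple holding-period returns, ignoring cash flows) makes the difference between the two treatments concrete:

```python
# Contrast backdated vs lagged treatment of a March 31 valuation
# that is not received until mid-June.

beginning_value = 100.0    # portfolio value as of December 31
march_valuation = 108.0    # effective March 31, but received in mid-June

# Backdating: record the valuation as of March 31, so the gain lands in
# Q1, the period in which it was actually earned.
backdated_q1 = march_valuation / beginning_value - 1    # 8.0% in Q1

# Lagging: Q1 carries the last known (December 31) value forward, so Q1
# shows a 0% return and the same gain is reported one quarter late.
lagged_q1 = 0.0
lagged_q2 = march_valuation / beginning_value - 1       # 8.0%, but in Q2

print(f"Backdated: Q1 {backdated_q1:.1%} | Lagged: Q1 {lagged_q1:.1%}, Q2 {lagged_q2:.1%}")
```

The cumulative return is the same either way; what changes is which quarter the gain shows up in, which is exactly why consistency and disclosure matter.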

Choosing the Right Approach (and Sticking with It)

Both approaches are acceptable from a compliance and reporting perspective. However, the key lies in consistency.

Once an organization adopts an approach—whether backdating or lagging—it should be applied across all periods, portfolios, and asset classes. Inconsistent application opens the door to performance manipulation (or the appearance of it), where results might look better simply because a valuation was timed differently.

This kind of inconsistency can erode trust with boards, auditors and other stakeholders. Worse, it could raise red flags in a regulatory review or third-party verification.

Disclose, Disclose, Disclose

Regardless of the method you use, full transparency in reporting is essential. If you’re lagging valuations by a quarter, clearly state that in your disclosures. If you change methodologies at any point—perhaps transitioning from lagged to backdated—explain when and why that change occurred.

Clear disclosures help users of your reports—whether board members, beneficiaries, auditors, or consultants—understand how performance was calculated. It allows them to assess the results in context and make informed decisions based on the data.

Aligning Benchmarks with Valuation Timing

One important detail that’s often overlooked: your benchmark data should follow the same valuation timing as your portfolio.

If your private equity or real estate portfolio is lagged by a quarter, but your benchmark is not, your performance comparison becomes flawed. The timing mismatch can mislead stakeholders into believing the strategy outperformed or underperformed, simply due to misaligned reporting periods.

To ensure a fair and meaningful comparison, always apply your valuation timing method consistently across both your portfolio and benchmark data.

Building Trust Through Transparency

Valuation timing is a technical, often behind-the-scenes issue—but it plays a crucial role in how your investment results are perceived. Boards and stakeholders rely on accurate, timely, and understandable performance reporting to make decisions that impact beneficiaries, employees, and communities.

By taking the time to document your valuation policy, apply it consistently, and disclose it clearly, you are reinforcing your organization’s commitment to integrity and transparency. And in a world where scrutiny of investment performance is only increasing, that commitment can be just as valuable as the numbers themselves.

Need help defining your valuation timing policy or aligning performance reporting practices with industry standards?

Longs Peak Advisory Services specializes in helping investment firms and asset owners simplify their performance processes, maintain compliance, and build trust through transparent reporting. Contact us to learn how we can support your team.

Key Takeaways from the 2025 PMAR Conference
This year’s PMAR Conference delivered timely and thought-provoking content for performance professionals across the industry. In this post, we’ve highlighted our top takeaways from the event—including a recap of the WiPM gathering.
May 29, 2025

The Performance Measurement, Attribution & Risk (PMAR) Conference is always a highlight for investment performance professionals—and this year’s event did not disappoint. With a packed agenda spanning everything from economic uncertainty and automation to evolving training needs and private market complexities, PMAR 2025 gave attendees plenty to think about.

Here are some of our key takeaways from this year’s event:

Women in Performance Measurement (WiPM)

Although not officially a part of PMAR, WiPM often schedules its annual in-person gathering during the same week to take advantage of the broader industry presence at the event. This year’s in-person gathering united female professionals from across the country for a full day of connection, learning, and mentorship. The agenda struck a thoughtful balance between professional development and personal connection, with standout sessions on AI and machine learning, resume building, and insights from the WiPM mentoring program. A consistent favorite among attendees is the interactive format—discussions are engaging, and the support among members is truly energizing. The day concluded with a cocktail reception and dinner, reinforcing the group’s strong sense of community and its ongoing commitment to advancing women in the performance measurement profession.

If you’re not yet a member and are interested in joining the community, find WiPM here on LinkedIn.

Uncertainty, Not Risk, is Driving Market Volatility

John Longo, Ph.D., of Rutgers Business School, kicked off the conference with a deep dive into the global economy, and his message was clear: today’s markets are more uncertain than risky. Tariffs, political volatility, and unconventional strategies—like the idea of purchasing Greenland—are reshaping global trade and investment decisions. His suggestion? Investors may want to look beyond U.S. borders and consider assets like gold or emerging markets as a hedge.

Longo also highlighted the looming national debt problem and inflationary effects of protectionist policies. For performance professionals, the implication is clear: macro-level policy choices are creating noise that can obscure traditional risk metrics. Understanding the difference between risk and uncertainty is more important than ever.

The Future of Training: Customized, Continuous, and Collaborative

In the “Developing Staff for Success” session, Frances Barney, CFA (former head of investment performance and risk analysis for BNY Mellon) and our very own Jocelyn Gilligan, CFA, CIPM explored the evolving nature of training in our field. The key message: cookie-cutter training doesn't cut it anymore. With increasing regulatory complexity and rapidly advancing technology, firms must invest in flexible, personalized learning programs.

Whether it's improving communication skills, building tech proficiency, or embedding a culture of curiosity, the session emphasized that training must be more than a check-the-box activity. Ongoing mentorship, cross-training, and embracing neurodiversity in learning styles are all part of building high-performing, engaged teams.

AI is Here—But It Needs a Human Co-Pilot

Several sessions explored the growing role of AI and automation in performance and reporting. The consensus? AI holds immense promise, but without strong data governance and human oversight, it’s not a silver bullet. From hallucinations in generative models to the ethical challenges of data usage, AI introduces new risks even as it streamlines workflows.

Use cases presented ranged from anomaly detection and report generation to client communication enhancements and predictive exception handling. But again and again, speakers emphasized: AI should augment, not replace, human expertise.

Private Markets Require Purpose-Built Tools

Private equity, private credit, real estate, and hedge funds remain among the trickiest asset classes to measure. Whether debating IRR vs. TWR, handling data lags, or selecting appropriate benchmarks, this year's sessions highlighted just how much nuance is involved in getting private market reporting right.

One particularly compelling idea: using replicating portfolios of public assets to assess the risk and performance of illiquid investments. This approach offers more transparency and a better sense of underlying exposures, especially in the absence of timely valuations.

Shorting and Leverage Complicate Performance Attribution

Calculating performance in long/short portfolios isn’t straightforward—and using absolute values can create misleading results. A session on this topic broke down the mechanics of short selling and explained why contribution-based return attribution is essential for accurate reporting.

The key insight: portfolio-level returns can fall outside the range of individual asset returns, especially in leveraged portfolios. Understanding the directional nature of each position is crucial for both internal attribution and external communication.
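A hypothetical two-position example shows how this happens; the weights and returns below are made up:

```python
# Contribution-based attribution for a leveraged long/short portfolio:
# contribution = beginning weight x asset return, where weights can be
# negative (shorts) or exceed 100% (leverage).

positions = {                     # name: (beginning weight, period return)
    "Long book":  (1.5, 0.04),    # 150% long via leverage
    "Short book": (-0.5, 0.10),   # 50% short; the shorted assets rose 10%
}

contributions = {name: w * r for name, (w, r) in positions.items()}
portfolio_return = sum(contributions.values())

# The asset returns are +4% and +10%, yet the portfolio returned about
# +1% -- outside the range of either asset's return.
print(f"Portfolio return: {portfolio_return:.2%}")
```

The long book contributes +6.0% and the losing short contributes -5.0%, netting to roughly +1%, below the return of either asset, which is exactly the effect the session described.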

The SEC is Watching—Are You Ready?

Compliance was another hot topic, especially in light of recent enforcement actions under the SEC Marketing Rule. From misuse of hypothetical performance to sloppy use of testimonials, the panelists shared hard-earned lessons and emphasized the importance of documentation. This panel was moderated by Longs Peak’s Matt Deatherage, CFA, CIPM and included Lance Dial, of K&L Gates along with Thayne Gould from Vigilant.

FAQs have helped clarify gray areas (especially around extracted performance and proximity of net vs. gross returns), but more guidance is expected—particularly on model fees and performance portability. If you're not already documenting every performance claim, now is the time to start.

“Phantom Alpha” Is Real—And Preventable

David Spaulding of TSG closed the conference with a deep dive into benchmark construction and the potential for “phantom alpha.” Even small differences in rebalancing frequency between portfolios and their benchmarks can create misleading outperformance. His recommendation? Either sync your rebalancing schedules or clearly disclose the differences.

This session served as a great reminder that even small implementation details can significantly impact reported performance—and that transparency is essential to maintaining trust.
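A small sketch (with made-up monthly returns and a hypothetical 60/40 blend) shows how rebalancing frequency alone changes a blended benchmark’s return:

```python
# "Phantom alpha": the same 60/40 blend produces different returns
# depending on whether it is rebalanced monthly or simply bought and held.

stock_returns = [0.08, -0.05, 0.06]   # three months, hypothetical
bond_returns = [0.01, 0.02, 0.00]

def monthly_rebalanced(w_stock, stocks, bonds):
    """Reset to policy weights each month, then geometrically link."""
    growth = 1.0
    for rs, rb in zip(stocks, bonds):
        growth *= 1 + w_stock * rs + (1 - w_stock) * rb
    return growth - 1

def buy_and_hold(w_stock, stocks, bonds):
    """Set weights once at the start and let the sleeves drift."""
    stock_growth = bond_growth = 1.0
    for rs, rb in zip(stocks, bonds):
        stock_growth *= 1 + rs
        bond_growth *= 1 + rb
    return w_stock * stock_growth + (1 - w_stock) * bond_growth - 1

r_rebal = monthly_rebalanced(0.6, stock_returns, bond_returns)
r_hold = buy_and_hold(0.6, stock_returns, bond_returns)
print(f"Monthly rebalanced: {r_rebal:.4%}, buy-and-hold: {r_hold:.4%}")
```

Over just three months, the gap between the two is about 13 basis points. A portfolio measured against the wrong variant of its own blend would appear to add (or lose) that much from timing alone, not manager skill.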

Final Thoughts

From automation to attribution, PMAR 2025 showcased the depth and complexity of our field. If there’s one overarching takeaway, it’s that while tools and techniques continue to evolve, the core principles—transparency, accuracy, and accountability—remain as important as ever.

Did you attend PMAR this year? We’d love to hear your biggest takeaways. Reach out to us at hello@longspeakadvisory.com or drop us a note on LinkedIn!