Factor Covariance Matrix Forecasts#

In this tutorial we will create several covariance matrix forecasts of factor returns, as they follow from a standard factor model, and tie the numbers out against a pandas/numpy-based reimplementation. More specifically, we will:

  • Create a basic risk model.

  • Extract the factor returns.

  • Use the covariance matrix forecast report to compute the covariance matrix.

  • Use pandas to extract the factor volatility and correlation matrix time-series.

  • Replicate the volatility forecast.

  • Replicate the correlation forecast.

Throughout this notebook we work with a randomly generated dataset. The results should generalize to real data, but for legal reasons we do not show any real data on our public API. Bayesline clients can run this notebook on real data.

Imports & Setup#

For this tutorial notebook, you will need to import the following packages.

import pandas as pd
import polars as pl
import numpy as np

from bayesline.api.equity import (
    FactorCovarianceReportSettingsV2,
    ExposureSettings,
    CategoricalExposureGroupSettings,
    ContinuousExposureGroupSettings,
    FactorRiskModelSettings,
    ModelConstructionSettings,
    UniverseSettings,
)
from bayesline.apiclient import BayeslineApiClient

We will also need to have a Bayesline API client configured.

bln = BayeslineApiClient.new_client(
    endpoint="https://[ENDPOINT]",
    api_key="[API-KEY]",
)

Creating the covariance matrix forecasts#

Let’s first set up a basic risk model and use it to generate the forecasts. We choose to run with mostly default settings. The steps involved are:

  1. Creating the settings of the risk model.

  2. Loading the report model engine.

  3. Running the engine to generate the covariance report.

The first step is creating the risk model settings.

factorriskmodel_settings = FactorRiskModelSettings(
    universe=UniverseSettings(dataset="Bayesline-US-All-1y"),
    exposures=ExposureSettings(
        exposures=[
            ContinuousExposureGroupSettings(hierarchy="market"),
            CategoricalExposureGroupSettings(hierarchy="trbc"),
            ContinuousExposureGroupSettings(hierarchy="style"),
        ]
    ),
    modelconstruction=ModelConstructionSettings(
        estimation_universe=None,
        zero_sum_constraints={"trbc": "mcap_weighted"},
    ),
)

Next, we create the report engine from the report settings. We run with the defaults here.

report_settings = FactorCovarianceReportSettingsV2(
    factor_model_settings=factorriskmodel_settings
)
report_engine = bln.equity.reports.load(report_settings)

Let’s see what these settings really are by printing them out.

print(report_settings.factor_cov_settings.model_dump_json(indent=2))
{
  "halflife_vol": 60.0,
  "halflife_cor": 120.0,
  "halflife_vra": null,
  "nw_lags_vol": 0,
  "nw_lags_vol_halflife_override": null,
  "nw_lags_cor": 0,
  "shrink_cor_method": null,
  "shrink_cor_length": null,
  "combine_standardized": false
}

The settings that jointly determine the covariance matrix forecast are:

  1. halflife_vol The halflife of the factor volatility EWMA. The default is a 60-day halflife.

  2. halflife_vra The halflife of the cross-sectional factor volatility adjustment. The default is to not apply any adjustment.

  3. halflife_cor The halflife of the factor correlation EWMA. The default is a 120-day halflife.

  4. nw_lags_vol The overlap or Newey-West lags to include in the factor volatility forecast. The default is zero, meaning no autocorrelation correction is performed.

  5. nw_lags_cor The overlap or Newey-West lags to include in the factor correlation forecast. The default is zero, meaning no autocorrelation correction is performed.
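
To build intuition for the halflife parameters: an EWMA with halflife h weights an observation i days back by 0.5 ** (i / h), so after h days an observation carries half its original weight. A minimal sketch on synthetic returns (the 252-day annualization matches the convention used later in this notebook):

```python
import numpy as np
import pandas as pd

halflife = 60  # matches the default halflife_vol

# synthetic daily factor returns
rng = np.random.default_rng(0)
returns = pd.Series(rng.normal(0.0, 0.01, 250))

# pandas ewm(halflife=...) weights an observation i days back by lam**i
lam = 0.5 ** (1 / halflife)
weights = lam ** np.arange(len(returns))[::-1]
manual = float((returns**2 * weights).sum() / weights.sum())

# same number via the pandas EWMA
ewma_var = float((returns**2).ewm(halflife=halflife).mean().iloc[-1])
assert np.isclose(manual, ewma_var)

# annualized volatility forecast on the last day
vol = ewma_var**0.5 * 252**0.5
```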

Now we get the actual time-series of the covariance matrices.

# generate the report data
report = report_engine.calculate(start_date=None, end_date=None)

# get the covariance matrix (and skip the first date)
df_report = report.get_covariance().filter(pl.col("date") > pl.col("date").min())

# convert to pandas
df_vcov = df_report.to_pandas().set_index(["date", "factor"]).rename(columns=lambda c: c.split("^")[0])
df_vcov
Academic & Educational Services Basic Materials Consumer Cyclicals Consumer Non-Cyclicals Dividend Energy Financials Government Activity Growth Healthcare ... Institutions, Associations & Organizations Leverage Market Momentum Real Estate Size Technology Utilities Value Volatility
date factor
2025-04-01 Academic & Educational Services 0.025434 -0.001846 0.006746 0.002403 -0.003761 0.013763 0.002887 -0.013663 0.004263 -0.057754 ... 0.001808 0.000835 0.003058 0.004084 -0.004142 0.000644 0.008898 0.004983 -0.004099 0.005047
Basic Materials -0.001846 0.000134 -0.000490 -0.000174 0.000273 -0.000999 -0.000210 0.000992 -0.000309 0.004192 ... -0.000131 -0.000061 -0.000222 -0.000296 0.000301 -0.000047 -0.000646 -0.000362 0.000297 -0.000366
Consumer Cyclicals 0.006746 -0.000490 0.001789 0.000637 -0.000997 0.003650 0.000766 -0.003624 0.001131 -0.015318 ... 0.000479 0.000221 0.000811 0.001083 -0.001099 0.000171 0.002360 0.001322 -0.001087 0.001339
Consumer Non-Cyclicals 0.002403 -0.000174 0.000637 0.000227 -0.000355 0.001300 0.000273 -0.001291 0.000403 -0.005456 ... 0.000171 0.000079 0.000289 0.000386 -0.000391 0.000061 0.000841 0.000471 -0.000387 0.000477
Dividend -0.003761 0.000273 -0.000997 -0.000355 0.000556 -0.002035 -0.000427 0.002020 -0.000630 0.008540 ... -0.000267 -0.000123 -0.000452 -0.000604 0.000612 -0.000095 -0.001316 -0.000737 0.000606 -0.000746
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2026-03-31 Size -0.000240 -0.000153 0.000119 -0.001072 -0.000078 -0.000152 0.001089 -0.001486 0.000043 -0.000503 ... -0.000727 0.000080 0.001238 -0.000183 -0.000323 0.000650 -0.000243 -0.001206 0.000216 0.001249
Technology -0.001614 -0.004033 -0.002424 -0.003407 -0.000120 -0.002893 -0.000662 -0.001822 0.000206 -0.002584 ... -0.002089 -0.000592 0.001510 0.000687 -0.002019 -0.000243 0.003244 -0.001333 -0.000699 0.001884
Utilities -0.002122 0.003419 -0.002619 0.006028 0.000259 0.005275 -0.001843 0.010601 -0.000310 0.000726 ... 0.006940 -0.000910 -0.005571 0.003410 0.005160 -0.001206 -0.001333 0.016572 -0.001013 -0.003184
Value 0.000279 0.001366 0.001299 0.000277 0.000130 0.001123 -0.000056 0.000474 0.000067 0.000247 ... -0.000720 0.000383 0.000956 -0.000392 -0.000256 0.000216 -0.000699 -0.001013 0.001161 0.000255
Volatility -0.009717 -0.003011 -0.003352 -0.007472 0.000124 -0.006026 0.005280 -0.030797 0.000076 -0.007280 ... -0.019289 -0.001040 0.017239 0.003792 -0.002445 0.001249 0.001884 -0.003184 0.000255 0.020740

5271 rows × 21 columns

For downstream comparisons, we split these into factor volatilities and correlations.
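
The split relies on the standard decomposition Σ = D C D, where D is the diagonal matrix of factor volatilities and C is the correlation matrix. A minimal numpy sketch on a toy 2×2 covariance matrix (illustrative numbers):

```python
import numpy as np

# toy 2x2 covariance matrix
sigma = np.array([[0.040, 0.012],
                  [0.012, 0.090]])

# volatilities are the square roots of the diagonal
vols = np.sqrt(np.diag(sigma))        # [0.2, 0.3]

# dividing by the outer product of vols yields the correlation matrix
corr = sigma / np.outer(vols, vols)   # unit diagonal, off-diagonal 0.2

# sanity check: reassembling recovers the original covariance
assert np.allclose(np.diag(vols) @ corr @ np.diag(vols), sigma)
```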

# calculate actual factor volatilities from vcovs
df_vol = df_vcov.groupby(level="date").apply(
    lambda df: pd.Series(np.diag(df) ** 0.5, df.index.droplevel("date"))
)
df_vol.columns.name = None
df_vol.tail()
Academic & Educational Services Basic Materials Consumer Cyclicals Consumer Non-Cyclicals Dividend Energy Financials Government Activity Growth Healthcare ... Institutions, Associations & Organizations Leverage Market Momentum Real Estate Size Technology Utilities Value Volatility
date
2026-03-25 0.186803 0.135197 0.085725 0.103875 0.025571 0.169620 0.064445 0.546927 0.018382 0.111329 ... 0.176007 0.026514 0.138444 0.069647 0.088610 0.024487 0.056239 0.127778 0.034710 0.134647
2026-03-26 0.186054 0.135095 0.085208 0.103440 0.025459 0.174187 0.064205 0.544028 0.018778 0.113726 ... 0.175040 0.026642 0.139284 0.071588 0.088279 0.024388 0.056906 0.127632 0.034534 0.136749
2026-03-27 0.185076 0.136677 0.084701 0.106129 0.025305 0.176637 0.063979 0.541673 0.018925 0.114096 ... 0.178136 0.026479 0.140880 0.071862 0.087787 0.024499 0.056771 0.128743 0.034337 0.136551
2026-03-30 0.184248 0.137411 0.084451 0.105516 0.025218 0.176239 0.064102 0.539096 0.018884 0.114241 ... 0.177478 0.026338 0.141172 0.073828 0.087746 0.024738 0.057125 0.127997 0.034140 0.138475
2026-03-31 0.184671 0.137022 0.083946 0.107057 0.025234 0.183978 0.065007 0.537366 0.018777 0.113578 ... 0.178359 0.026605 0.144843 0.074641 0.087296 0.025502 0.056956 0.128733 0.034073 0.144013

5 rows × 21 columns

# calculate actual factor correlations from vcovs
df_cor = df_vcov.groupby(level="date").apply(
    lambda df: df.droplevel("date")
    / np.outer(np.diag(df) ** 0.5, np.diag(df) ** 0.5)
)
df_cor.tail()
Academic & Educational Services Basic Materials Consumer Cyclicals Consumer Non-Cyclicals Dividend Energy Financials Government Activity Growth Healthcare ... Institutions, Associations & Organizations Leverage Market Momentum Real Estate Size Technology Utilities Value Volatility
date factor
2026-03-31 Size -0.051064 -0.043888 0.055528 -0.392713 -0.120525 -0.032322 0.656639 -0.108453 0.090187 -0.173650 ... -0.159905 0.117890 0.335214 -0.096059 -0.144872 1.000000 -0.167283 -0.367235 0.249031 0.340013
Technology -0.153468 -0.516823 -0.506988 -0.558794 -0.083228 -0.276119 -0.178860 -0.059522 0.192184 -0.399519 ... -0.205639 -0.390468 0.183019 0.161592 -0.406044 -0.167283 1.000000 -0.181858 -0.360320 0.229686
Utilities -0.089278 0.193802 -0.242320 0.437409 0.079585 0.222716 -0.220199 0.153246 -0.128391 0.049687 ... 0.302273 -0.265790 -0.298783 0.354842 0.459180 -0.367235 -0.181858 1.000000 -0.230939 -0.171734
Value 0.044355 0.292505 0.454107 0.075867 0.151141 0.179119 -0.025266 0.025911 0.103999 0.063748 ... -0.118393 0.422054 0.193675 -0.154199 -0.086144 0.249031 -0.360320 -0.230939 1.000000 0.051918
Volatility -0.365376 -0.152597 -0.277303 -0.484649 0.034258 -0.227436 0.563958 -0.397952 0.028157 -0.445104 ... -0.750944 -0.271474 0.826436 0.352790 -0.194513 0.340013 0.229686 -0.171734 0.051918 1.000000

5 rows × 21 columns

Manually replicating the covariance forecasts#

We can also estimate the risk model and get the factor returns directly. From these returns we can construct the covariance forecasts ourselves. Bayesline returns dataframes in polars, but they can easily be converted to pandas dataframes. We also remove the factor group prefix (market, style, industry, etc.) for convenience.
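
The rename below simply strips the factor-group prefix from each column name; with hypothetical column names of the form group.factor:

```python
cols = ["market.Market", "style.Momentum", "trbc.Technology"]  # hypothetical names
stripped = [c.split(".")[1] for c in cols]
# → ['Market', 'Momentum', 'Technology']
```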

risk_model = bln.equity.riskmodels.load(factorriskmodel_settings).get_model()
df_factor_returns = risk_model.fret().tail(-1).to_pandas().set_index("date").rename(columns=lambda c: c.split(".")[1])
df_factor_returns.tail()
Market Dividend Growth Leverage Momentum Size Value Volatility Academic & Educational Services Basic Materials ... Consumer Non-Cyclicals Energy Financials Government Activity Healthcare Industrials Institutions, Associations & Organizations Real Estate Technology Utilities
date
2026-03-25 0.007150 0.000044 -0.002162 -0.001435 0.002522 0.000975 0.000589 0.005781 0.012054 0.010456 ... 0.003439 -0.007601 0.000979 -0.063025 0.007372 0.000312 -0.006510 -0.002908 -0.002001 -0.001520
2026-03-26 -0.012332 0.000855 -0.002477 0.002234 -0.010419 -0.000891 0.000904 -0.016057 0.006894 0.007969 ... 0.003670 0.025015 -0.002526 0.012508 0.014999 -0.002526 0.003506 0.003482 -0.006097 0.007255
2026-03-27 -0.014925 0.000130 0.001790 -0.000079 0.005758 -0.002032 0.000554 -0.007520 0.004359 0.014580 ... 0.015039 0.020015 -0.002635 0.018467 -0.008877 0.000675 0.021866 0.001636 -0.002802 0.012549
2026-03-30 -0.010278 0.001058 -0.000956 0.000582 -0.010674 0.002496 -0.000524 -0.015705 0.006007 0.011824 ... 0.001545 -0.008834 0.004623 0.015984 0.007900 -0.003383 0.007038 0.005313 -0.005094 0.001816
2026-03-31 0.020540 0.001667 -0.000316 -0.002717 0.007813 0.003867 -0.001769 0.024225 -0.013630 -0.006333 ... -0.012291 -0.032145 0.007378 -0.023353 0.001579 -0.000816 -0.015080 -0.002195 0.002577 -0.011257

5 rows × 21 columns

From these returns we can run standard pandas functions to get the EWMAs.

# calculate expected factor volatilities using the ewma
df_vol_tieout = (
    pd.DataFrame(df_factor_returns**2)
    .ewm(halflife=report_settings.factor_cov_settings.halflife_vol)
    .mean()
    .astype(np.float32)
    ** 0.5
    * 252**0.5
)
df_vol_tieout.tail()
Market Dividend Growth Leverage Momentum Size Value Volatility Academic & Educational Services Basic Materials ... Consumer Non-Cyclicals Energy Financials Government Activity Healthcare Industrials Institutions, Associations & Organizations Real Estate Technology Utilities
date
2026-03-25 0.138444 0.025571 0.018382 0.026514 0.069647 0.024487 0.034710 0.134647 0.186803 0.135197 ... 0.103875 0.169620 0.064445 0.546928 0.111329 0.068609 0.176007 0.088610 0.056239 0.127778
2026-03-26 0.139284 0.025459 0.018778 0.026642 0.071588 0.024388 0.034534 0.136749 0.186054 0.135095 ... 0.103440 0.174187 0.064205 0.544028 0.113726 0.068333 0.175040 0.088279 0.056906 0.127632
2026-03-27 0.140880 0.025305 0.018925 0.026479 0.071862 0.024499 0.034337 0.136551 0.185076 0.136677 ... 0.106129 0.176637 0.063979 0.541673 0.114096 0.067926 0.178136 0.087787 0.056771 0.128743
2026-03-30 0.141172 0.025218 0.018884 0.026338 0.073828 0.024738 0.034140 0.138475 0.184248 0.137411 ... 0.105516 0.176239 0.064102 0.539096 0.114241 0.067771 0.177478 0.087746 0.057125 0.127997
2026-03-31 0.144843 0.025234 0.018777 0.026605 0.074641 0.025502 0.034073 0.144013 0.184671 0.137022 ... 0.107057 0.183978 0.065007 0.537366 0.113578 0.067373 0.178359 0.087296 0.056956 0.128733

5 rows × 21 columns

pd.testing.assert_frame_equal(df_vol, df_vol_tieout, check_column_type=False, check_categorical=False, check_like=True)
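
Note the `* 252**0.5` factor in the tie-out above: daily volatilities are annualized under the usual 252-trading-day convention, scaling by the square root of time:

```python
# a 1% daily volatility corresponds to roughly 15.9% annualized
daily_vol = 0.01
annual_vol = daily_vol * 252**0.5  # ≈ 0.1587
```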

The correlations are a bit more involved. We need to create the outer products of the daily factor returns and then run an EWMA on each cell of the outer-product matrix.

# calculate the ewma on the outer product (vcov with mean zero)
df_factor_returns_outer = df_factor_returns.groupby("date").apply(
    lambda df: pd.DataFrame(np.outer(df, df), df.columns, df.columns)
)
df_factor_returns_outer.index.names = ["date", "factor"]
df_cor_tieout = (
    pd.DataFrame(df_factor_returns_outer)
    .unstack()
    .ewm(halflife=report_settings.factor_cov_settings.halflife_cor)
    .mean()
    .stack(future_stack=True)
    .reindex(df_factor_returns_outer.columns, axis=1)
    .groupby("date")
    .apply(
        lambda df: df.droplevel("date")
        / np.outer(np.diag(df) ** 0.5, np.diag(df) ** 0.5)
    )
    .astype(np.float32)
)

df_cor_tieout
Market Dividend Growth Leverage Momentum Size Value Volatility Academic & Educational Services Basic Materials ... Consumer Non-Cyclicals Energy Financials Government Activity Healthcare Industrials Institutions, Associations & Organizations Real Estate Technology Utilities
date factor
2025-04-01 Market 1.000000 -1.000000 1.000000 1.000000 1.000000 1.000000 -1.000000 1.000000 1.000000 -1.000000 ... 1.000000 1.000000 1.000000 -1.000000 -1.000000 1.000000 1.000000 -1.000000 1.000000 1.000000
Dividend -1.000000 1.000000 -1.000000 -1.000000 -1.000000 -1.000000 1.000000 -1.000000 -1.000000 1.000000 ... -1.000000 -1.000000 -1.000000 1.000000 1.000000 -1.000000 -1.000000 1.000000 -1.000000 -1.000000
Growth 1.000000 -1.000000 1.000000 1.000000 1.000000 1.000000 -1.000000 1.000000 1.000000 -1.000000 ... 1.000000 1.000000 1.000000 -1.000000 -1.000000 1.000000 1.000000 -1.000000 1.000000 1.000000
Leverage 1.000000 -1.000000 1.000000 1.000000 1.000000 1.000000 -1.000000 1.000000 1.000000 -1.000000 ... 1.000000 1.000000 1.000000 -1.000000 -1.000000 1.000000 1.000000 -1.000000 1.000000 1.000000
Momentum 1.000000 -1.000000 1.000000 1.000000 1.000000 1.000000 -1.000000 1.000000 1.000000 -1.000000 ... 1.000000 1.000000 1.000000 -1.000000 -1.000000 1.000000 1.000000 -1.000000 1.000000 1.000000
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2026-03-31 Industrials 0.170070 -0.044535 -0.001732 0.282298 0.075855 0.135843 0.312093 0.112918 0.055694 0.375272 ... 0.137219 0.076103 0.085461 -0.026741 -0.059674 1.000000 -0.077878 0.071112 -0.503669 0.036674
Institutions, Associations & Organizations -0.933890 -0.033733 0.117979 0.094535 0.157461 -0.159906 -0.118393 -0.750945 0.227706 0.111177 ... 0.461493 0.291784 -0.292655 0.420177 0.212377 -0.077878 1.000000 0.140786 -0.205639 0.302273
Real Estate -0.148931 0.074664 -0.101923 0.084998 -0.011548 -0.144872 -0.086143 -0.194514 0.034206 0.163428 ... 0.296618 0.086221 -0.015049 0.077926 0.153705 0.071112 0.140786 1.000000 -0.406044 0.459180
Technology 0.183019 -0.083228 0.192184 -0.390468 0.161592 -0.167283 -0.360320 0.229686 -0.153468 -0.516823 ... -0.558795 -0.276119 -0.178860 -0.059522 -0.399519 -0.503669 -0.205639 -0.406044 1.000000 -0.181858
Utilities -0.298784 0.079584 -0.128391 -0.265789 0.354843 -0.367235 -0.230939 -0.171734 -0.089277 0.193802 ... 0.437409 0.222716 -0.220199 0.153245 0.049687 0.036674 0.302273 0.459180 -0.181858 1.000000

5271 rows × 21 columns

pd.testing.assert_frame_equal(df_cor, df_cor_tieout, check_index_type=False, check_categorical=False, check_like=True, atol=1e-5)
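
As a final sanity check of the mechanics, the same outer-product EWMA can be run end-to-end on a tiny synthetic dataset; the result has a unit diagonal and is symmetric, as a correlation matrix must be. A minimal sketch (synthetic data, not Bayesline output):

```python
import numpy as np
import pandas as pd

# synthetic daily returns for three hypothetical factors
rng = np.random.default_rng(1)
rets = pd.DataFrame(rng.normal(0, 0.01, (250, 3)), columns=["A", "B", "C"])

halflife = 120
lam = 0.5 ** (1 / halflife)
w = lam ** np.arange(len(rets))[::-1]

# EWMA of the outer products r_t r_t' (zero-mean convention, as above)
vcov = np.einsum("t,ti,tj->ij", w, rets.values, rets.values) / w.sum()

# normalize to a correlation matrix
vols = np.sqrt(np.diag(vcov))
corr = vcov / np.outer(vols, vols)

assert np.allclose(np.diag(corr), 1.0)
assert np.allclose(corr, corr.T)
```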