Idiosyncratic volatility and correlation forecasting

Use this notebook to extract a volatility forecast report for the idiosyncratic returns. It also shows how to compute idiosyncratic correlations, which cannot yet be queried from the system directly.

import datetime as dt
from itertools import combinations_with_replacement

import polars as pl

from bayesline.api.equity import (
    CategoricalExposureGroupSettings,
    ContinuousExposureGroupSettings,
    ExposureSettings,
    FactorRiskModelSettings,
    IdiosyncraticReturnReportSettings,
    IdiosyncraticVolatilityReportSettings,
    ModelConstructionSettings,
    PortfolioHierarchySettings,
    ReportSettings,
    UniverseSettings,
)
from bayesline.apiclient import BayeslineApiClient
bln = BayeslineApiClient.new_client(
    endpoint="https://[ENDPOINT]",
    api_key="[API-KEY]",
)

We begin by specifying a standard factor model against which to compute the idiosyncratic returns.

factorriskmodel_settings = FactorRiskModelSettings(
    universe=UniverseSettings(dataset="Bayesline-US-All-1y"),
    exposures=ExposureSettings(
        exposures=[
            ContinuousExposureGroupSettings(hierarchy="market"),
            CategoricalExposureGroupSettings(hierarchy="trbc"),
            ContinuousExposureGroupSettings(hierarchy="style"),
        ]
    ),
    modelconstruction=ModelConstructionSettings(
        zero_sum_constraints={"trbc": "mcap_weighted"}
    ),
)

Idiosyncratic risk calculations need a portfolio. This ensures that the asset-level risk numbers are delivered in the right ID space. The portfolio can be entirely synthetic, and the weights do not matter; here we use a portfolio of six stocks. The date is the earliest date for which we want to compute output.

portfolios_loader = bln.equity.portfolios
uploader = portfolios_loader.uploader
demo_portfolio_dataset = uploader.create_or_replace_dataset("Demo-Portfolio")

df = pl.DataFrame({
    "portfolio_id": ["Test-Portfolio"]*6,
    "asset_id": [
        "02079K107", # Alphabet
        "2592345",  # Microsoft
        "67066G10", # NVIDIA
        "30303M102", # Meta
        "57636Q104", # Mastercard
        "92826C839", # Visa
    ],
    "asset_id_type": ["cusip9", "sedol7", "cusip8", "cusip9", "cusip9", "cusip9"],
    "date": [dt.date(2025, 1, 1)]*6,  # earliest date to calculate things for
    "value": [100]*6  # not important, we just need the unique identifiers
})

demo_portfolio_dataset.fast_commit(df, mode="append")
UploadCommitResult(version=1, committed_names=[])
demo_portfolio_dataset.get_data().collect()
shape: (6, 5)
date       | portfolio_id     | asset_id    | asset_id_type | value
date       | str              | str         | str           | f32
2025-01-01 | "Test-Portfolio" | "02079K107" | "cusip9"      | 100.0
2025-01-01 | "Test-Portfolio" | "2592345"   | "sedol7"      | 100.0
2025-01-01 | "Test-Portfolio" | "30303M102" | "cusip9"      | 100.0
2025-01-01 | "Test-Portfolio" | "57636Q104" | "cusip9"      | 100.0
2025-01-01 | "Test-Portfolio" | "67066G10"  | "cusip8"      | 100.0
2025-01-01 | "Test-Portfolio" | "92826C839" | "cusip9"      | 100.0

After uploading the portfolio, we wrap it in a hierarchy containing just this one portfolio, which is the form the report API expects.

ph_settings = PortfolioHierarchySettings.from_source(
    source="Demo-Portfolio",
    portfolio_ids=["Test-Portfolio"],
)
ph_loader = bln.equity.portfoliohierarchies
ph_api = ph_loader.load(ph_settings)

We can quickly take a look at the (drifted) portfolio constituents.

ph_api.get(None, None)
shape: (876, 6)
date       | portfolio_id     | input_asset_id | input_asset_id_type | value      | value_bench
date       | str              | str            | str                 | f32        | f32
2025-01-01 | "Test-Portfolio" | "30303M102"    | "cusip9"            | 100.0      | null
2025-01-01 | "Test-Portfolio" | "57636Q104"    | "cusip9"            | 100.0      | null
2025-01-01 | "Test-Portfolio" | "67066G10"     | "cusip8"            | 100.0      | null
2025-01-01 | "Test-Portfolio" | "92826C839"    | "cusip9"            | 100.0      | null
2025-01-02 | "Test-Portfolio" | "30303M102"    | "cusip9"            | 102.344971 | null
…          | …                | …              | …                   | …          | …
2025-08-06 | "Test-Portfolio" | "92826C839"    | "cusip9"            | 107.858368 | null
2025-08-07 | "Test-Portfolio" | "30303M102"    | "cusip9"            | 130.323837 | null
2025-08-07 | "Test-Portfolio" | "57636Q104"    | "cusip9"            | 107.042076 | null
2025-08-07 | "Test-Portfolio" | "67066G10"     | "cusip8"            | 134.632904 | null
2025-08-07 | "Test-Portfolio" | "92826C839"    | "cusip9"            | 105.429733 | null

Getting the idiosyncratic volatility forecasts

For the idiosyncratic volatility, we can query the output directly. Below we extract a dataframe with the square roots of the diagonal of the idiosyncratic risk matrix. We run with default settings here, but IdiosyncraticVolatilityReportSettings exposes many other options.

report_settings = ReportSettings(
    report=IdiosyncraticVolatilityReportSettings(
        # we can be flexible with the settings here, e.g. half-life
    ),
    risk_model=factorriskmodel_settings,
)
report_engine = bln.equity.portfolioreport.load(
    report_settings, hierarchy_ref_or_settings=ph_settings,
)
order = {"date": ["date"], "asset": ["input_asset_id"]}
report = report_engine.get_report(
    order, date_start=dt.date(2025, 1, 2), date_end=dt.date(2025, 1, 31)
)
idio_vol_df = (
    report.get_data([], expand=("date", "input_asset_id"), value_cols=report.metric_cols)
    .rename({"input_asset_id": "asset_id", "IdiosyncraticVolatility": "idio_vol"})
    .with_columns(pl.col("date").str.to_date())
)
idio_vol_df
shape: (80, 3)
date       | asset_id    | idio_vol
date       | str         | f32
2025-01-02 | "30303M102" | 0.207958
2025-01-02 | "57636Q104" | 0.133962
2025-01-02 | "67066G10"  | 0.319194
2025-01-02 | "92826C839" | 0.169348
2025-01-03 | "30303M102" | 0.20616
…          | …           | …
2025-01-30 | "92826C839" | 0.161184
2025-01-31 | "30303M102" | 0.231164
2025-01-31 | "57636Q104" | 0.165577
2025-01-31 | "67066G10"  | 0.44243
2025-01-31 | "92826C839" | 0.159845

Computing the idiosyncratic correlations

Sometimes it is necessary to allow for off-diagonal structure in the idiosyncratic risk matrix: factor models may not fully explain co-movement within small clusters of highly similar assets. We are working on integrating this, but for now these off-diagonal correlations must be computed manually from the idiosyncratic return time series as a post-processing step. In the code below we extract the idiosyncratic returns and then compute the correlation matrix for two groups within our portfolio of six assets.

First, we run a very similar report as above to extract the idiosyncratic returns time-series.

report_settings = ReportSettings(
    report=IdiosyncraticReturnReportSettings(),
    risk_model=factorriskmodel_settings,
)
report_engine = bln.equity.portfolioreport.load(
    report_settings, hierarchy_ref_or_settings=ph_settings,
)
order = {"date": ["date"], "asset": ["input_asset_id"]}
report = report_engine.get_report(
    order, date_start=dt.date(2025, 1, 2), date_end=dt.date(2025, 1, 31)
)
idio_ret_df = (
    report.get_data([], expand=("date", "input_asset_id"), value_cols=report.metric_cols)
    .rename({"input_asset_id": "asset_id", "IdiosyncraticReturn": "idio_ret"})
    .with_columns(pl.col("date").str.to_date(), pl.col("idio_ret").fill_nan(None))
)
idio_ret_df
shape: (80, 3)
date       | asset_id    | idio_ret
date       | str         | f32
2025-01-02 | "30303M102" | 0.022441
2025-01-02 | "57636Q104" | -0.007102
2025-01-02 | "67066G10"  | 0.022337
2025-01-02 | "92826C839" | -0.00343
2025-01-03 | "30303M102" | -0.005653
…          | …           | …
2025-01-30 | "92826C839" | 0.018964
2025-01-31 | "30303M102" | 0.003787
2025-01-31 | "57636Q104" | -0.018969
2025-01-31 | "67066G10"  | -0.035816
2025-01-31 | "92826C839" | -0.00405

Next, we define the groups of similar assets. These must be mutually exclusive, i.e. no asset can belong to more than one group; however, not every asset has to belong to a group.

In the example below, we put Alphabet, Microsoft, and Meta in one group, and Mastercard and Visa in another. NVIDIA is not part of any group.
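Before building the pair table, a quick sanity check in plain Python (not part of the API) can verify that the groups are mutually exclusive; this sketch uses the same groups as the next cell:

```python
from itertools import chain

groups = [
    ["02079K107", "2592345", "30303M102"],  # Alphabet, Microsoft, Meta
    ["57636Q104", "92826C839"],  # Mastercard, Visa
]

# the total membership count equals the distinct-id count
# if and only if no asset appears in more than one group
all_ids = list(chain.from_iterable(groups))
assert len(all_ids) == len(set(all_ids)), "groups must be mutually exclusive"
```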

groups = [
    ["02079K107", "2592345", "30303M102"],  # Alphabet, Microsoft, Meta
    ["57636Q104", "92826C839"],  # Mastercard, Visa
]

# create a dataframe with all combinations
df_offdiag = pl.DataFrame(
    [
        (left, right)
        for group in groups
        for left, right in combinations_with_replacement(group, 2)
    ], 
    schema=["asset_id", "asset_id_right"],
    orient="row",
)

# just for display; in realistic scenarios this is a very large dataframe
(
    df_offdiag.sort("asset_id", "asset_id_right")
    .with_columns(pl.lit(1))
    .pivot("asset_id_right", index="asset_id", maintain_order=True, sort_columns=True)
)
shape: (5, 6)
asset_id    | 02079K107 | 2592345 | 30303M102 | 57636Q104 | 92826C839
str         | i32       | i32     | i32       | i32       | i32
"02079K107" | 1         | 1       | 1         | null      | null
"2592345"   | null      | 1       | 1         | null      | null
"30303M102" | null      | null    | 1         | null      | null
"57636Q104" | null      | null    | null      | 1         | 1
"92826C839" | null      | null    | null      | null      | 1
# join the time series such that we have each combination that we need to compute
idio_ret_df_joined = (
    idio_ret_df.join(df_offdiag, on="asset_id")
    .join(idio_ret_df, left_on=("date", "asset_id_right"), right_on=("date", "asset_id"))
)
idio_ret_df_joined
shape: (80, 5)
date       | asset_id    | idio_ret  | asset_id_right | idio_ret_right
date       | str         | f32       | str            | f32
2025-01-02 | "30303M102" | 0.022441  | "30303M102"    | 0.022441
2025-01-02 | "57636Q104" | -0.007102 | "57636Q104"    | -0.007102
2025-01-02 | "57636Q104" | -0.007102 | "92826C839"    | -0.00343
2025-01-02 | "92826C839" | -0.00343  | "92826C839"    | -0.00343
2025-01-03 | "30303M102" | -0.005653 | "30303M102"    | -0.005653
…          | …           | …         | …              | …
2025-01-30 | "92826C839" | 0.018964  | "92826C839"    | 0.018964
2025-01-31 | "30303M102" | 0.003787  | "30303M102"    | 0.003787
2025-01-31 | "57636Q104" | -0.018969 | "57636Q104"    | -0.018969
2025-01-31 | "57636Q104" | -0.018969 | "92826C839"    | -0.00405
2025-01-31 | "92826C839" | -0.00405  | "92826C839"    | -0.00405

We compute the covariance matrix first and then standardize it into a correlation matrix. The covariance computation applies a rolling mean to correct for autocorrelation (overlapping observations), followed by an exponentially weighted moving average for smoothing. Dividing by the standard deviations then yields the correlations. Note that the overlap correction does not guarantee a proper correlation matrix, so individual estimates can slightly exceed 1.

# compute the covariance matrix by first using a rolling mean (for overlap),
# and then an exponential weighted moving average (for smoothing)
overlap_window = 5
half_life = 126

idio_vcov_df = (
    idio_ret_df_joined
    .with_columns(
        pl.col("idio_ret").rolling_mean(window_size=overlap_window, min_samples=1).over("asset_id"),
        pl.col("idio_ret_right").rolling_mean(window_size=overlap_window, min_samples=1).over("asset_id_right"),
    )
    .with_columns(
        (pl.col("idio_ret") * pl.col("idio_ret_right"))
        .ewm_mean(half_life=half_life)
        .over(("asset_id", "asset_id_right"))
        .alias("idio_vcov")
    )
)
idio_vcov_df
shape: (80, 6)
date       | asset_id    | idio_ret  | asset_id_right | idio_ret_right | idio_vcov
date       | str         | f32       | str            | f32            | f32
2025-01-02 | "30303M102" | 0.022441  | "30303M102"    | 0.022441       | 0.000504
2025-01-02 | "57636Q104" | -0.007102 | "57636Q104"    | -0.007102      | 0.00005
2025-01-02 | "57636Q104" | -0.007102 | "92826C839"    | -0.00343       | 0.000024
2025-01-02 | "92826C839" | -0.00343  | "92826C839"    | -0.00343       | 0.000012
2025-01-03 | "30303M102" | 0.008394  | "30303M102"    | 0.008394       | 0.000286
…          | …           | …         | …              | …              | …
2025-01-30 | "92826C839" | 0.006418  | "92826C839"    | 0.007981       | 0.000013
2025-01-31 | "30303M102" | 0.011433  | "30303M102"    | 0.011433       | 0.000119
2025-01-31 | "57636Q104" | 0.01015   | "57636Q104"    | 0.005979       | 0.000027
2025-01-31 | "57636Q104" | 0.005145  | "92826C839"    | 0.009913       | 0.000021
2025-01-31 | "92826C839" | 0.004509  | "92826C839"    | 0.007534       | 0.000014
# to translate the covariance matrix to a correlation matrix, 
# we need to select the variance of the idiosyncratic returns
idio_var_df = (
    idio_vcov_df.filter(pl.col("asset_id") == pl.col("asset_id_right"))
    .select("date", "asset_id", pl.col("idio_vcov").alias("idio_var"))
)
idio_var_df
shape: (60, 3)
date       | asset_id    | idio_var
date       | str         | f32
2025-01-02 | "30303M102" | 0.000504
2025-01-02 | "57636Q104" | 0.00005
2025-01-02 | "92826C839" | 0.000012
2025-01-03 | "30303M102" | 0.000286
2025-01-03 | "57636Q104" | 0.000056
…          | …           | …
2025-01-30 | "57636Q104" | 0.000026
2025-01-30 | "92826C839" | 0.000013
2025-01-31 | "30303M102" | 0.000119
2025-01-31 | "57636Q104" | 0.000027
2025-01-31 | "92826C839" | 0.000014
# by joining twice and normalizing, we get the correlation matrix
idio_corr_df = (
    idio_vcov_df.join(idio_var_df, on=("date", "asset_id"))
    .join(idio_var_df, left_on=("date", "asset_id_right"), right_on=("date", "asset_id"))
    .select(
        "date",
        "asset_id",
        "asset_id_right",
        (pl.col("idio_vcov") / (pl.col("idio_var") * pl.col("idio_var_right")).sqrt()).alias("idio_corr"))
)
idio_corr_df
shape: (80, 4)
date       | asset_id    | asset_id_right | idio_corr
date       | str         | str            | f32
2025-01-02 | "30303M102" | "30303M102"    | 1.0
2025-01-02 | "57636Q104" | "57636Q104"    | 1.0
2025-01-02 | "57636Q104" | "92826C839"    | 1.0
2025-01-02 | "92826C839" | "92826C839"    | 1.0
2025-01-03 | "30303M102" | "30303M102"    | 1.0
…          | …           | …              | …
2025-01-30 | "92826C839" | "92826C839"    | 1.0
2025-01-31 | "30303M102" | "30303M102"    | 1.0
2025-01-31 | "57636Q104" | "57636Q104"    | 1.0
2025-01-31 | "57636Q104" | "92826C839"    | 1.067919
2025-01-31 | "92826C839" | "92826C839"    | 1.0
# for small portfolios, the dataframe is small enough to pivot and display
(
    idio_corr_df.pivot("asset_id_right", index=("date", "asset_id"), maintain_order=True, sort_columns=True)
    .filter(pl.col("date") > pl.col("date").min())
)
shape: (57, 5)
date       | asset_id    | 30303M102 | 57636Q104 | 92826C839
date       | str         | f32       | f32       | f32
2025-01-03 | "30303M102" | 1.0       | null      | null
2025-01-03 | "57636Q104" | null      | 1.0       | 0.997729
2025-01-03 | "92826C839" | null      | null      | 1.0
2025-01-06 | "30303M102" | 1.0       | null      | null
2025-01-06 | "57636Q104" | null      | 1.0       | 1.00303
…          | …           | …         | …         | …
2025-01-30 | "57636Q104" | null      | 1.0       | 1.060426
2025-01-30 | "92826C839" | null      | null      | 1.0
2025-01-31 | "30303M102" | 1.0       | null      | null
2025-01-31 | "57636Q104" | null      | 1.0       | 1.067919
2025-01-31 | "92826C839" | null      | null      | 1.0