Factor Returns Computation Validation#

In this tutorial we are going to compute factor returns from a standard Bayesline factor model and compare the results to those we manually compute with statsmodels. The specific steps we will follow are:

  1. Create a basic risk model and extract factor returns.

  2. Manually compute factor returns with statsmodels.

  3. Compare results of extracted factor returns and statsmodels regression.

Throughout this notebook we work with a randomly generated dataset. The results should generalize to real data, but we do not show any real data on our public API. Bayesline clients can run this notebook on real data.

0. Imports & Setup#

For this tutorial notebook, you will need to import the following packages.

import numpy as np
import numpy.testing as tst
import pandas as pd
from statsmodels.api import WLS
from statsmodels.stats.weightstats import DescrStatsW

from bayesline.api.equity import (
    ExposureSettings,
    FactorRiskModelSettings,
    ModelConstructionSettings,
    UniverseSettings,
)
from bayesline.apiclient import BayeslineApiClient

We will also need to have a Bayesline API client configured.

bln = BayeslineApiClient.new_client(
    endpoint="https://[ENDPOINT]",
    api_key="[API-KEY]",
)

1. Create a basic risk model and extract factor returns#

Set up the risk model settings#

We will set up a risk model that uses InvIdioVar regression weights and otherwise default settings.

factorriskmodel_settings = FactorRiskModelSettings(
    universe=UniverseSettings(),
    exposures=ExposureSettings(),
    modelconstruction=ModelConstructionSettings(
        weights="InvIdioVar",
        return_clip_bounds=(None, None),
    ),
)

Let’s verify the risk model settings we configured above.

print(factorriskmodel_settings.model_dump_json(indent=2))
{
  "universe": [
    {
      "dataset": null,
      "id_type": "bayesid",
      "calendar": {
        "dataset": null,
        "filters": [
          [
            "XNYS"
          ]
        ]
      },
      "industry": {
        "hierarchy": "trbc",
        "include": "All",
        "exclude": []
      },
      "region": {
        "hierarchy": "continent",
        "include": "All",
        "exclude": []
      },
      "mcap": {
        "lower": 0.0,
        "upper": 1e20,
        "gdp_deflator_asof": null
      }
    }
  ],
  "exposures": [
    {
      "market": true,
      "styles": null,
      "standardize_styles": true,
      "industries": {
        "hierarchy": "trbc",
        "level": 1
      },
      "regions": {
        "hierarchy": "continent",
        "level": 2
      },
      "other": {}
    }
  ],
  "modelconstruction": [
    {
      "currency": "USD",
      "weights": "InvIdioVar",
      "alpha": 0.0,
      "alpha_overrides": {},
      "return_clip_bounds": [
        null,
        null
      ],
      "known_factors": {}
    }
  ]
}

Construct risk model with settings#

risk_model = bln.equity.riskmodels.load(factorriskmodel_settings).get()

Extract factor returns from risk model#

factor_returns = (
    risk_model.fret()
    .to_pandas()
    .set_index("date")
)
factor_returns.columns = pd.MultiIndex.from_tuples(
    factor_returns.columns.str.split(".").to_list(), names=["factor_group", "factor"]
)

Let’s take a peek at the factor returns that our risk model computed. We have several groups of factors (market, industry, region, and style), each containing one or more factors.

factor_returns.head()
factor_group market industry region style
factor Market Energy Basic Materials Industrials Consumer Cyclicals Consumer Non-Cyclicals Financials Healthcare Technology Utilities ... Government Activity Academic & Educational Services United States Size Value Growth Volatility Momentum Dividend Leverage
date
2024-07-11 0.003720 0.000109 0.010011 0.009490 0.008789 0.000428 -0.005419 0.005167 -0.003788 0.006332 ... 0.0 0.0 0.0 -0.002078 0.002625 -0.000767 -0.002144 -0.006058 0.000702 0.001400
2024-07-12 0.008739 -0.001840 -0.000279 -0.001702 0.002250 0.000048 0.001824 -0.003889 0.000235 -0.000113 ... 0.0 0.0 0.0 -0.000545 -0.001136 0.001093 0.002604 -0.000475 0.000280 0.001058
2024-07-15 0.002315 0.006814 -0.002595 0.006462 -0.002786 -0.006974 0.001122 -0.008616 0.002514 -0.019141 ... 0.0 0.0 0.0 0.001011 0.000874 -0.000873 0.002326 0.000934 0.000639 -0.000327
2024-07-16 0.014378 -0.016056 0.009234 0.012379 0.009606 0.000147 -0.004086 0.004208 -0.002130 -0.007843 ... 0.0 0.0 0.0 -0.000222 0.003202 -0.001790 0.003833 -0.001764 0.002052 -0.000404
2024-07-17 -0.010342 0.004459 0.005016 0.003758 0.002242 0.013622 -0.009670 0.011663 -0.003574 0.007700 ... 0.0 0.0 0.0 0.000318 0.001227 -0.001440 -0.010021 -0.007815 0.002638 0.000894

5 rows × 22 columns
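
As an optional visual sanity check, we can plot cumulative factor returns directly from this DataFrame. This is a minimal sketch assuming matplotlib is installed; the simple daily sum is only an approximation of compounded returns.

import matplotlib.pyplot as plt

# approximate cumulative returns of the style factors by summing daily returns
factor_returns["style"].cumsum().plot(figsize=(10, 4), title="Cumulative style factor returns")
plt.show()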

2. Manually compute factor returns with statsmodels#

Now, we will manually compute factor returns for numerical comparison to our results.

Write manual regression code#

This is our implementation of a basic factor returns regression with statsmodels, which we will use for comparison. It uses weighted, constrained linear regression to model factor returns. Specifically, it solves the following optimization problem to compute the factor returns \(f_t\):

(1)#\[\begin{align} \min_{f_t\in\mathbb{R}^k}{\sum_{i=1}^{n_t}{W_{i,t}\left(r_{i,t}-X_{i,t}^\top f_t\right)^2}}, \quad\mathrm{subject\ to} \quad\sum_{j\in\mathrm{ind}}{\left(\sum_{i=1}^{n_t}\mathrm{MCap}_{i,t}X_{i,j,t}\right)f_{j,t}}=0, \end{align}\]

where \(j\in\mathrm{ind}\) are all industry factors and \(W_{i,t}\) are the weights of each stock. The constraint effectively says that the (market-cap weighted) industry factor returns sum to zero.
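
In the implementation below, rather than calling a constrained solver, we fold the constraint into the design matrix. Writing \(c_{j,t}=\sum_{i=1}^{n_t}\mathrm{MCap}_{i,t}X_{i,j,t}\) for the constraint coefficients and solving the constraint for the first industry factor \(j_0\) gives

\[\begin{align} f_{j_0,t}=-\frac{1}{c_{j_0,t}}\sum_{j\in\mathrm{ind},\ j\neq j_0}{c_{j,t}f_{j,t}}. \end{align}\]

Substituting this into \(X_{i,t}^\top f_t\) replaces each remaining industry exposure \(X_{i,j,t}\) by \(X_{i,j,t}-(c_{j,t}/c_{j_0,t})X_{i,j_0,t}\) and drops the \(j_0\) column entirely, turning the constrained problem into an ordinary weighted least squares problem with one fewer industry factor. The eliminated factor return can always be recovered from the constraint afterwards.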

We use weighted linear regression; from an econometric perspective, the optimal regression weights are (proportional to) the inverses of the error variances, \(W_{i,t}=1/\mathrm{Var}(\varepsilon_{i,t})\). Since these variances are not known, we estimate them with the idiosyncratic variance, i.e. the estimated error variance from a 100-day rolling time-series regression of each stock's returns against the market factor, giving \(W_{i,t}=100/\sum_{\tau=1}^{100}{e_{i,t-\tau}^2}\) for the fitted regression,

(2)#\[\begin{align} r_{i,t-\tau}&=a_i + r_{t-\tau}^Mb_i+e_{i,t-\tau},&\tau&=1,\ldots,100, \end{align}\]

where the market return is simply the market-cap weighted average return of all assets in the estimation universe, \(r_{t}^M=\sum_{i\in\mathcal{I}_t^E}{\mathrm{MCap}_{i,t}r_{i,t}}/\sum_{i'\in\mathcal{I}_t^E}{\mathrm{MCap}_{i',t}}\).
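
The idiosyncratic volatilities used in this notebook come directly from the risk model (extracted below), but as a rough illustration of the scheme just described, the weights could be computed as follows. This is a minimal sketch, not the Bayesline implementation; idio_vol_weights, returns, and mcap are hypothetical names, with returns and mcap as date-by-asset DataFrames, and missing data handling is ignored.

def idio_vol_weights(
    returns: pd.DataFrame, mcap: pd.DataFrame, window: int = 100
) -> pd.DataFrame:
    # market return: mcap-weighted average return across assets on each date
    w = mcap.div(mcap.sum(axis=1), axis=0)
    r_m = (w * returns).sum(axis=1)

    weights = {}
    for asset in returns.columns:
        vals = []
        for t in range(window, len(returns)):
            y = returns[asset].iloc[t - window : t].to_numpy()
            x = r_m.iloc[t - window : t].to_numpy()
            b, a = np.polyfit(x, y, deg=1)  # slope and intercept of the market regression
            e = y - (a + b * x)  # residuals e_{i,t-tau}
            vals.append(window / np.sum(e**2))  # W_{i,t} = 100 / sum of squared residuals
        weights[asset] = pd.Series(vals, index=returns.index[window:])
    return pd.DataFrame(weights)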

def statsmodels_regression(
    df: pd.DataFrame,
    market_caps: pd.Series,
    industry_names: list[str],
    substyle_names: list[str],
) -> pd.Series:
    """Run the cross-sectional WLS factor regression for a single date."""
    df = df.dropna()
    df = df[df["estimation_universe"]]
    factor_names = ["Market", *industry_names, *substyle_names]
    weights = (1 / df["idio_vol"] ** 2).fillna(0.0)

    X = df[factor_names].copy().astype(np.float32)
    y = df["return"]

    # fold the constraint into the design matrix: adjust the remaining industry
    # exposures and drop the first industry column (see the derivation above)
    adj = market_caps.iloc[1:] / market_caps.iloc[0]
    X.loc[:, industry_names[1:]] -= adj.values * X.loc[:, industry_names[:1]].values
    X = X.drop(columns=industry_names[0])

    wls = WLS(
        endog=y,
        exog=X,
        weights=weights,
        missing="drop",
        hasconst=False,  # for r-squared calculation
    ).fit()
    # weighted variance of the residuals (idiosyncratic variance estimate)
    sigma2_eps = DescrStatsW(wls.resid, weights=weights, ddof=0).var

    # if the mcap is zero, then t_stats are 0.0 and p_values are 1.0
    zero_mcap = [i.replace("industry.", "") for i in market_caps.loc[market_caps == 0.0].index]
    wls.tvalues[zero_mcap] = 0.0
    wls.pvalues[zero_mcap] = 1.0

    return pd.concat(
        {
            "factor_returns": wls.params,
            "t_stats": wls.tvalues,
            "p_values": wls.pvalues,
            "r2": pd.Series([wls.rsquared], [None]),
            "sigma2": pd.Series(sigma2_eps, [None]),
        },
    )
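
As a quick smoke test on synthetic data (purely illustrative; the toy names and numbers below are not part of the tutorial dataset), the helper can be called like this:

rng = np.random.default_rng(0)
n = 500
toy_industries = ["Energy", "Technology"]
toy_styles = ["Size"]
ind = rng.integers(0, 2, n)
toy = pd.DataFrame(
    {
        "Market": np.ones(n),
        "Energy": ind.astype(float),
        "Technology": (1 - ind).astype(float),
        "Size": rng.normal(size=n),
        "return": rng.normal(scale=0.02, size=n),
        "idio_vol": rng.uniform(0.01, 0.02, size=n),
        "estimation_universe": True,
    }
)
toy_mcaps = pd.Series([1e12, 2e12], index=["industry.Energy", "industry.Technology"])
print(statsmodels_regression(toy, toy_mcaps, toy_industries, toy_styles))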

Extract the required regression input data to compute factor returns#

A. Exposures#

First, let’s get the exposures for this universe as a pandas DataFrame.

X = risk_model.exposures().to_pandas()
X.head()
date bayesid market.Market industry.Energy industry.Basic Materials industry.Industrials industry.Consumer Cyclicals industry.Consumer Non-Cyclicals industry.Financials industry.Healthcare ... industry.Government Activity industry.Academic & Educational Services region.United States style.Size style.Value style.Growth style.Volatility style.Momentum style.Dividend style.Leverage
0 2024-07-10 00163T10 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 1.0 0.976562 0.551758 0.934570 0.480957 -1.534180 0.541992 -0.395996
1 2024-07-10 00287Y10 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 ... 0.0 0.0 1.0 1.520508 -1.212891 1.333984 -0.263916 0.601562 1.267578 1.652344
2 2024-07-10 03042010 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 1.0 -0.527344 0.419189 0.322021 -0.217163 -1.499023 0.248413 0.577637
3 2024-07-10 03674B10 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 ... 0.0 0.0 1.0 0.998047 0.350830 -0.254883 -0.475830 -0.064880 -0.339844 -0.136108
4 2024-07-10 03761230 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 ... 0.0 0.0 1.0 1.012695 0.134033 -0.843262 0.619141 0.748535 0.072693 0.396484

5 rows × 24 columns

Next, we’ll clean up our exposures dataframe slightly and massage it into the desired shape and structure.

# rename the columns to the bare factor names used later in df
X.columns = [col.split(".")[-1] if "." in col else col for col in X.columns]

X["date"] = pd.to_datetime(X["date"])
exposures = X.set_index(["date", "bayesid"])
exposures.head()
Market Energy Basic Materials Industrials Consumer Cyclicals Consumer Non-Cyclicals Financials Healthcare Technology Utilities ... Government Activity Academic & Educational Services United States Size Value Growth Volatility Momentum Dividend Leverage
date bayesid
2024-07-10 00163T10 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 1.0 0.976562 0.551758 0.934570 0.480957 -1.534180 0.541992 -0.395996
00287Y10 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 ... 0.0 0.0 1.0 1.520508 -1.212891 1.333984 -0.263916 0.601562 1.267578 1.652344
03042010 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 ... 0.0 0.0 1.0 -0.527344 0.419189 0.322021 -0.217163 -1.499023 0.248413 0.577637
03674B10 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 ... 0.0 0.0 1.0 0.998047 0.350830 -0.254883 -0.475830 -0.064880 -0.339844 -0.136108
03761230 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 ... 0.0 0.0 1.0 1.012695 0.134033 -0.843262 0.619141 0.748535 0.072693 0.396484

5 rows × 22 columns
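
Since standardize_styles is enabled in the settings above, the style exposures should be roughly standardized cross-sectionally within each date. A quick illustrative check (depending on the standardization convention, the mean may be cap-weighted rather than exactly zero):

style_cols = ["Size", "Value", "Growth", "Volatility", "Momentum", "Dividend", "Leverage"]
print(exposures[style_cols].groupby(level="date").agg(["mean", "std"]).head())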

B. Asset Returns#

Next, we get the returns of our asset universe across dates. Note that some values are NaN since our estimation universe does not include all the securities in the larger universe across all dates.

returns = (
    risk_model.future_asset_returns()
    .to_pandas()
    .set_index("date")
    .stack(dropna=False)
    .rename("return")
    .to_frame()
)
returns.head()
return
date
2024-07-10 00163T10 0.033208
00287Y10 0.010527
03042010 0.030175
03674B10 -0.001440
03761230 0.015854
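
To see these coverage gaps, we can count the missing returns per date (an illustrative check):

# number of assets without a return on each date
print(returns["return"].isna().groupby(level="date").sum().head())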

C. Idiosyncratic Volatility#

Next, we extract the idiosyncratic volatilities; as in the helper above, the regression weights are their inverse squares.

idio_vol = (
    risk_model.weights()
    .to_pandas()
    .set_index("date")
    .stack()
    .rename("idio_vol")
    .to_frame()
)
idio_vol.head()
idio_vol
date
2024-07-10 00163T10 0.013234
00287Y10 0.012614
03042010 0.012296
03674B10 0.011921
03761230 0.014413

D. Estimation Universe#

Finally, we extract the estimation universe flags, which mark the assets that enter each day's regression.

estimation_universe = (
    risk_model.estimation_universe()
    .to_pandas()
    .set_index("date")
    .stack()
    .rename("estimation_universe")
    .astype('bool')
    .to_frame()
)
estimation_universe.head()
estimation_universe
date
2024-07-10 00163T10 True
00287Y10 True
03042010 True
03674B10 True
03761230 True

Join all the regression components#

df_all = pd.concat(
    [exposures, returns, idio_vol, estimation_universe],
    axis=1
)

key = pd.MultiIndex.from_product(
    [sorted(X["date"].unique()), sorted(X["bayesid"].unique())],
    names=["date", "bayesid"],
)
df_all = df_all.reindex(key)
df_all[["estimation_universe"]] = df_all[["estimation_universe"]].fillna(False)
df_all.head()
Market Energy Basic Materials Industrials Consumer Cyclicals Consumer Non-Cyclicals Financials Healthcare Technology Utilities ... Size Value Growth Volatility Momentum Dividend Leverage return idio_vol estimation_universe
2024-07-10 00163T10 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.976562 0.551758 0.934570 0.480957 -1.534180 0.541992 -0.395996 0.033208 0.013234 True
00287Y10 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 ... 1.520508 -1.212891 1.333984 -0.263916 0.601562 1.267578 1.652344 0.010527 0.012614 True
03042010 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 ... -0.527344 0.419189 0.322021 -0.217163 -1.499023 0.248413 0.577637 0.030175 0.012296 True
03674B10 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 ... 0.998047 0.350830 -0.254883 -0.475830 -0.064880 -0.339844 -0.136108 -0.001440 0.011921 True
03761230 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 ... 1.012695 0.134033 -0.843262 0.619141 0.748535 0.072693 0.396484 0.015854 0.014413 True

5 rows × 25 columns
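
Before applying the weekly reset below, it can be useful to see how much data is missing per column after the join (an illustrative check):

print(df_all.isna().sum().sort_values(ascending=False).head())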

Extract Aggregated Market Cap#

We also need market cap data. It comes already aggregated by industry, so it cannot be joined with the asset-level inputs above. Since we only need the market caps corresponding to our industry exposures, we filter out all other columns.

# get the market caps
market_caps = (
    risk_model.market_caps()
    .to_pandas()
    .set_index("date")
    .reindex(df_all.index.get_level_values(0).unique())
)
# get all industry names from the factor returns and rebuild the prefixed column names
industries = factor_returns.loc[:, "industry"].columns.to_list()
industries_long = [f"industry.{i}" for i in industries]

# filter market caps to industries
market_caps = market_caps[industries_long]

Apply the weekly reset and forward fill#

Since factor exposures tend to be stable over short periods of time, the model refreshes its inputs (exposures, weights, and the estimation universe) only once a week, on Wednesdays. To replicate this, we drop those inputs on all non-Wednesdays and forward fill from the most recent reset day; missing returns inside the estimation universe are filled with zero.

# slice on reset day (every wednesday) and fill forward
forward_fill_cols = [*exposures.columns, "estimation_universe", "idio_vol"]
is_reset_day = df_all.index.get_level_values(0).dayofweek == 2  # Wednesday
df_all.loc[~is_reset_day, forward_fill_cols] = np.nan
df_all[forward_fill_cols] = df_all[forward_fill_cols].groupby(level=1).ffill()
df_all["estimation_universe"] = df_all["estimation_universe"].fillna(False).astype(bool)

# fill missing returns within the estimation universe with zero
df_all.loc[df_all["estimation_universe"], "return"] = df_all.loc[
    df_all["estimation_universe"],
    "return",
].fillna(0.0)

df_all.head()
Market Energy Basic Materials Industrials Consumer Cyclicals Consumer Non-Cyclicals Financials Healthcare Technology Utilities ... Size Value Growth Volatility Momentum Dividend Leverage return idio_vol estimation_universe
2024-07-10 00163T10 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.976562 0.551758 0.934570 0.480957 -1.534180 0.541992 -0.395996 0.033208 0.013234 True
00287Y10 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 ... 1.520508 -1.212891 1.333984 -0.263916 0.601562 1.267578 1.652344 0.010527 0.012614 True
03042010 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 ... -0.527344 0.419189 0.322021 -0.217163 -1.499023 0.248413 0.577637 0.030175 0.012296 True
03674B10 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 ... 0.998047 0.350830 -0.254883 -0.475830 -0.064880 -0.339844 -0.136108 -0.001440 0.011921 True
03761230 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 ... 1.012695 0.134033 -0.843262 0.619141 0.748535 0.072693 0.396484 0.015854 0.014413 True

5 rows × 25 columns

market_caps = market_caps.ffill()
market_caps.index.name = "date"

market_caps.head()
industry.Energy industry.Basic Materials industry.Industrials industry.Consumer Cyclicals industry.Consumer Non-Cyclicals industry.Financials industry.Healthcare industry.Technology industry.Utilities industry.Real Estate industry.Institutions, Associations & Organizations industry.Government Activity industry.Academic & Educational Services
date
2024-07-10 1.904221e+12 8.785948e+11 3.498445e+12 5.831768e+12 3.587556e+12 9.009189e+12 5.333572e+12 2.377126e+13 9.625473e+11 7.884271e+11 0.0 0.0 0.0
2024-07-11 1.922298e+12 8.875750e+11 3.536727e+12 5.732125e+12 3.601812e+12 9.030046e+12 5.364956e+12 2.318909e+13 9.795558e+11 8.098559e+11 0.0 0.0 0.0
2024-07-12 1.927028e+12 8.950382e+11 3.556060e+12 5.780063e+12 3.627238e+12 9.061463e+12 5.398659e+12 2.332728e+13 9.862986e+11 8.156006e+11 0.0 0.0 0.0
2024-07-13 1.927028e+12 8.950382e+11 3.556060e+12 5.780063e+12 3.627238e+12 9.061463e+12 5.398659e+12 2.332728e+13 9.862986e+11 8.156006e+11 0.0 0.0 0.0
2024-07-14 1.927028e+12 8.950382e+11 3.556060e+12 5.780063e+12 3.627238e+12 9.061463e+12 5.398659e+12 2.332728e+13 9.862986e+11 8.156006e+11 0.0 0.0 0.0

Run statsmodels regression to compute factor returns#

Now, we will take all the components we created above and run our manual regression function on each trade day. Since the regression at date \(t\) uses next-day asset returns, its factor returns are realized one day later; we therefore shift the results forward by one day so they align with the dates of the risk model's factor returns.

# get trade days from the universe
trade_days = risk_model.universe().to_pandas().set_index("date").index

# get all style names from the factor returns
styles = factor_returns.loc[:, "style"].columns.to_list()

# perform the regression on each trade day; zipping is safe because market_caps
# was reindexed above to the same sorted dates that df_all is grouped by
manual_computation = {
    g: statsmodels_regression(df, mcap, industries, styles)
    for (g, df), (_, mcap) in zip(df_all.groupby(level=0), market_caps.iterrows())
    if g in trade_days
}

manual_computation = (
    pd.concat(manual_computation, axis=1, names=["date"])
    .T.reindex(trade_days)
    .shift(1)
    .tail(-1)
)
manual_computation.head()
factor_returns ... p_values r2 sigma2
Market Basic Materials Industrials Consumer Cyclicals Consumer Non-Cyclicals Financials Healthcare Technology Utilities Real Estate ... Academic & Educational Services Size Value Growth Volatility Momentum Dividend Leverage NaN NaN
date
2024-07-11 0.003720 0.010011 0.009490 0.008789 0.000428 -0.005419 0.005167 -0.003788 0.006332 0.012967 ... 1.0 0.000756 0.018510 0.518221 2.607067e-04 3.612547e-22 0.513767 0.161875 0.367868 0.000092
2024-07-12 0.008739 -0.000279 -0.001702 0.002250 0.000048 0.001824 -0.003889 0.000235 -0.000113 -0.005196 ... 1.0 0.111995 0.063170 0.092116 6.120009e-15 1.500278e-01 0.632274 0.053296 0.556770 0.000029
2024-07-15 0.002315 -0.002595 0.006462 -0.002786 -0.006974 0.001122 -0.008616 0.002514 -0.019141 0.005136 ... 1.0 0.026739 0.282576 0.310186 1.406784e-07 3.949190e-02 0.411712 0.651988 0.283848 0.000051
2024-07-16 0.014378 0.009234 0.012379 0.009606 0.000147 -0.004086 0.004208 -0.002130 -0.007843 -0.006177 ... 1.0 0.691972 0.001407 0.090299 2.531491e-12 2.066883e-03 0.032275 0.649578 0.554508 0.000077
2024-07-17 -0.010342 0.005016 0.003758 0.002242 0.013622 -0.009670 0.011663 -0.003574 0.007701 0.014041 ... 1.0 0.613145 0.274585 0.223212 4.001509e-48 7.265039e-29 0.014195 0.372200 0.582626 0.000097

5 rows × 62 columns

3. Compare results of extracted factor returns and statsmodels regression#

To perform a complete comparison, we check that all of the following quantities from our extracted risk model results match the statsmodels regression computation:

  • factor returns

  • t-stats

  • p-values

Compare factor returns#

# drop the first industry (eliminated by the constraint handling) and the
# region column, since the US is the only region present in the dataset
constrained_ind = "industry." + industries[0]
drop_cols = ["date", constrained_ind, "region.United States"]

factor_returns_array = risk_model.fret()
factor_returns_array = factor_returns_array.drop(drop_cols).to_numpy()

tst.assert_array_almost_equal(factor_returns_array, manual_computation["factor_returns"].values)
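
As one more optional consistency check, the constraint in equation (1) lets us recover the eliminated first industry's factor return from the remaining ones and compare it to the model's output. This sketch assumes the date alignment conventions used above: factor returns dated \(t\) were estimated with market caps from the previous trade day, hence the shift.

# market caps aligned to the factor return dates (previous trade day's values)
mc = market_caps.reindex(trade_days).shift(1).tail(-1)

# recover the eliminated industry's factor return from the constraint
implied = -(
    manual_computation["factor_returns"][industries[1:]].to_numpy()
    * mc[industries_long[1:]].to_numpy()
).sum(axis=1) / mc[industries_long[0]].to_numpy()

# looser tolerance, since errors in the other factors are amplified by the mcap ratios
tst.assert_array_almost_equal(implied, factor_returns[("industry", industries[0])].to_numpy(), decimal=4)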

Compare t-stats#

t_stats = risk_model.t_stats()
t_stats = t_stats.drop(drop_cols).to_numpy()

tst.assert_array_almost_equal(t_stats, manual_computation["t_stats"].values, decimal=3)

Compare p-values#

p_values = risk_model.p_values()
p_values = p_values.drop(drop_cols).to_numpy()

# compare the p-values in different ranges with different precisions (high for low p-values)
# this is due to numerical precision of the t-distribution CDF approximation
tst.assert_array_almost_equal(p_values.clip(None, 0.05), manual_computation["p_values"].clip(None, 0.05).values, decimal=4)
tst.assert_array_almost_equal(p_values.clip(0.05, 0.9), manual_computation["p_values"].clip(0.05, 0.9).values, decimal=3)
tst.assert_array_almost_equal(p_values.clip(0.9, None), manual_computation["p_values"].clip(0.9, None).values, decimal=2)