The DGP from section 6.4.1 in Zhou, Athey, and Wager (2023): there are \(d = 3\) actions \((a_0, a_1, a_2)\) whose conditional means depend on which of three regions the covariates \(X \sim U[0,1]^p\) fall in. Observed outcomes: \(Y \sim N(\mu_{a_i}(X_i), \sigma^2)\).
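
For intuition, a minimal sketch in R of a DGP with this shape, under stand-in assumptions: the region boundaries and conditional means below are hypothetical placeholders, not the ones from section 6.4.1 of the paper.

n <- 500; p <- 10; sigma2 <- 4
X <- matrix(runif(n * p), n, p)
# Hypothetical region split on the first covariate; the paper's actual
# region definitions differ.
region <- cut(X[, 1], breaks = c(0, 1/3, 2/3, 1), labels = FALSE)
# Hypothetical conditional means: one row per action, one column per region.
mu.matrix <- matrix(c( 1,  0, -1,
                       0,  1,  0,
                      -1,  0,  1), nrow = 3, byrow = TRUE)
action <- sample(0:2, n, replace = TRUE)
mu <- mu.matrix[cbind(action + 1, region)]
# Outcomes are the conditional means plus Gaussian noise with variance sigma2.
Y <- rnorm(n, mean = mu, sd = sqrt(sigma2))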

Usage

gen_data_mapl(n, p = 10, sigma2 = 4)

Arguments

n

Number of observations (rows of \(X\)).

p

Number of features (minimum 7). Default is 10.

sigma2

Noise variance. Default is 4.

Value

A list with the realized actions \(a_i\), regions \(r_i\), conditional means \(\mu_{a_i}(X_i)\), outcomes \(Y\), and covariates \(X\).
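
Examples

A minimal usage sketch. The list component names used below (action, region, mu, Y, X) are assumptions inferred from the Value description above, not confirmed by this page.

data <- gen_data_mapl(500)
# Covariate matrix: n x p, here 500 x 10 with the default p.
dim(data$X)
# Cross-tabulate the (assumed) realized actions against regions.
table(data$action, data$region)
# Outcomes scatter around the conditional means with variance sigma2 = 4.
plot(data$mu, data$Y)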

References

Zhou, Zhengyuan, Susan Athey, and Stefan Wager. "Offline multi-action policy learning: Generalization and optimization." Operations Research 71.1 (2023).