The standard errors of the model coefficients are the square roots of the diagonal entries of the covariance matrix. Consider the following:
$$X = \begin{bmatrix} 1 & X_{1,1} & \cdots & X_{1,p} \\ 1 & X_{2,1} & \cdots & X_{2,p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & X_{n,1} & \cdots & X_{n,p} \end{bmatrix}$$

where $X_{i,j}$ is the value of the $j$-th predictor for the $i$-th observation.
(NOTE: this assumes a model with an intercept.)
$$V = \begin{bmatrix} \hat{\pi}_1(1 - \hat{\pi}_1) & 0 & \cdots & 0 \\ 0 & \hat{\pi}_2(1 - \hat{\pi}_2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \hat{\pi}_n(1 - \hat{\pi}_n) \end{bmatrix}$$

where $\hat{\pi}_i$ is the predicted probability for the $i$-th observation.
The covariance matrix of the coefficient estimates can then be written as:
$$(X^T V X)^{-1}$$
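Combining this with the statement above, the standard error of the $j$-th coefficient estimate is the square root of the $j$-th diagonal entry of this matrix:

$$\widehat{\mathrm{SE}}(\hat{\beta}_j) = \sqrt{\left[(X^T V X)^{-1}\right]_{jj}}$$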
This can be implemented with the following code:
import numpy as np
from sklearn import linear_model
# Initiate logistic regression object.
# NOTE: sklearn applies an L2 penalty by default; the covariance formula above assumes
# an unpenalized fit, so turn the penalty off (use penalty='none' on sklearn < 1.2).
logit = linear_model.LogisticRegression(penalty=None)
# Fit model. Let X_train = matrix of predictors, y_train = vector of the response variable.
# NOTE: Do not include a column for the intercept when fitting the model.
resLogit = logit.fit(X_train, y_train)
# Calculate matrix of predicted class probabilities.
# Check resLogit.classes_ to make sure that sklearn ordered your classes as expected
predProbs = resLogit.predict_proba(X_train)
# Design matrix -- add column of 1's at the beginning of your X_train matrix
X_design = np.hstack([np.ones((X_train.shape[0], 1)), X_train])
# Initiate matrix of 0's, fill diagonal with each observation's predicted variance, pi_hat * (1 - pi_hat)
V = np.diagflat(np.prod(predProbs, axis=1))
# Covariance matrix
# The @-operator does matrix multiplication (Python 3.5+); it is equivalent to:
# covLogit = np.linalg.inv(np.dot(np.dot(X_design.T, V), X_design))
covLogit = np.linalg.inv(X_design.T @ V @ X_design)
print("Covariance matrix: ", covLogit)
# Standard errors
print("Standard errors: ", np.sqrt(np.diag(covLogit)))
# Wald statistic (coefficient / s.e.) ^ 2
logitParams = np.insert(resLogit.coef_, 0, resLogit.intercept_)
print("Wald statistics: ", (logitParams / np.sqrt(np.diag(covLogit))) ** 2)
All that being said, statsmodels will probably be a better package to use if you want access to a LOT of diagnostics "out-of-the-box".
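For instance, a minimal sketch of the statsmodels route, assuming the same X_train and y_train as above (note that statsmodels, unlike sklearn, does not add an intercept for you):

import statsmodels.api as sm

# statsmodels expects the intercept column to be added explicitly
smLogit = sm.Logit(y_train, sm.add_constant(X_train)).fit()
print(smLogit.bse)       # standard errors of the coefficients
print(smLogit.summary()) # coefficients, s.e., z-statistics, p-values, CIs

Both .bse and .summary() come straight from the fitted results object, with no hand-rolled linear algebra.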