Calculates y-hat \(\left( \mathbf{\hat{y}} \right)\), that is, the predicted value of \(\mathbf{y}\) given \(\mathbf{X}\) using $$ \mathbf{\hat{y}} = \mathbf{P} \mathbf{y} $$ where $$ \mathbf{P} = \mathbf{X} \left( \mathbf{X}^{T} \mathbf{X} \right)^{-1} \mathbf{X}^{T} . $$

.Py(y, P = NULL, X = NULL)

Arguments

y

Numeric vector of length n or n by 1 matrix. The vector \(\mathbf{y}\) is an \(n \times 1\) vector of observations on the regressand variable.

P

n by n numeric matrix. The \(n \times n\) projection matrix \(\left( \mathbf{P} \right)\).

X

n by k numeric matrix. The data matrix \(\mathbf{X}\) (also known as the design matrix, model matrix, or regressor matrix) is an \(n \times k\) matrix of \(n\) observations on \(k\) regressors, whose first column consists of 1s for each observation, corresponding to the intercept.

Value

Returns y-hat \(\left( \mathbf{\hat{y}} \right)\).

Details

If P = NULL, the projection matrix \(\mathbf{P}\) is computed using P() with X as its argument. If P is provided, X is not needed.
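The computation above can be sketched numerically. Since .Py() is an R function, the following is a hypothetical NumPy illustration of the same linear algebra (the data values are made up for the example), not the package's implementation:

```python
import numpy as np

# Example design matrix X: first column of ones (intercept) plus one regressor.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
# Example vector of observations on the regressand.
y = np.array([2.1, 3.9, 6.2, 7.8])

# Projection ("hat") matrix: P = X (X'X)^{-1} X'
P = X @ np.linalg.inv(X.T @ X) @ X.T

# Predicted values: y-hat = P y
y_hat = P @ y

# Equivalent to fitting OLS and predicting: y-hat = X beta-hat
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(y_hat, X @ beta_hat)
```

Note that \(\mathbf{P}\) is symmetric and idempotent (\(\mathbf{P}\mathbf{P} = \mathbf{P}\)), which is why applying it to \(\mathbf{y}\) projects \(\mathbf{y}\) onto the column space of \(\mathbf{X}\).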

References

Wikipedia: Linear Regression

Wikipedia: Ordinary Least Squares

See also

Other y-hat functions: .Xbetahat(), Py(), Xbetahat(), yhat()

Author

Ivan Jacob Agaloos Pesigan