What type of orthogonal polynomials does R use?
I was trying to match the orthogonal polynomials produced by the following R code:
X <- cbind(1, poly(x = x, degree = 9))
but in Python.
To do this I implemented my own method for generating orthogonal polynomials:
def get_hermite_poly(x, degree):
    # could also use scipy.special.hermite()
    N, = x.shape
    X = np.zeros((N, degree + 1))
    for n in range(N):
        for deg in range(degree + 1):
            # evaluate the deg-th Hermite polynomial at the n-th sample point
            X[n, deg] = hermite(n=deg, z=float(x[n]))
    return X
though it does not seem to match. Does anyone know what type of orthogonal polynomial poly uses? I searched the documentation, but it does not say.
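For reference, numpy can build a Hermite design matrix like the one above directly (a sketch using numpy.polynomial.hermite_e, the probabilists' Hermite polynomials; numpy.polynomial.hermite gives the physicists' convention):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

x = np.linspace(0, 1, 10)
# column j holds He_j(x): He_0 = 1, He_1 = x, He_2 = x^2 - 1, ...
X = hermevander(x, 3)
print(X.shape)  # (10, 4)
```

Note that these polynomials are orthogonal with respect to a Gaussian weight over the whole real line, not with respect to the sampled points, which is one reason the columns will not match poly's output.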
To give some context, I am trying to implement the following R code in Python ( https://stats.stackexchange.com/questions/313265/issue-with-convergence-with-sgd-with-function-approximation-using-polynomial-lin/315185#comment602020_315185 ):
set.seed(1234)
N <- 10
x <- seq(from = 0, to = 1, length = N)
mu <- sin(2 * pi * x * 4)
y <- mu
plot(x,y)
X <- cbind(1, poly(x = x, degree = 9))
# X <- sapply(0:9, function(i) x^i)
w <- rnorm(10)
learning_rate <- function(t) .1 / t^(.6)
n_samp <- 2
for (t in 1:100000) {
  mu_hat <- X %*% w
  idx <- sample(1:N, n_samp)
  X_batch <- X[idx, ]
  y_batch <- y[idx]
  score_vec <- t(X_batch) %*% (y_batch - X_batch %*% w)
  change <- score_vec * learning_rate(t)
  w <- w + change
}
plot(mu_hat, ylim = c(-1, 1))
lines(mu)
fit_exact <- predict(lm(y ~ X - 1))
lines(fit_exact, col = 'red')
abs(w - coef(lm(y ~ X - 1)))
because it seems to be the only basis that works with gradient descent for linear regression with polynomial features.
I feel that any orthogonal (or at least orthonormal) polynomial basis should work and give a Hessian with condition number 1, but I can't seem to make it work in Python. Related question: How does one use Hermite polynomials with Stochastic Gradient Descent (SGD)?
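The conditioning intuition can be checked directly: with orthonormal columns, the Gram matrix X^T X is the identity, so its condition number is 1, while raw monomial features are severely ill-conditioned (a quick sketch, orthonormalizing via QR):

```python
import numpy as np

x = np.linspace(0, 1, 10)
V = x[:, None] ** np.arange(10)   # raw monomial features x^0 .. x^9
Q, _ = np.linalg.qr(V)            # orthonormalized columns

print(np.linalg.cond(V.T @ V))    # enormous: gradient descent will struggle
print(np.linalg.cond(Q.T @ Q))    # ~1: each coordinate converges independently
```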
poly uses QR factorization, as described in some detail in this answer.

I think what you are really looking for is how to replicate the output of R's poly using Python.
Here I have written a function to do that based on R's implementation. I have also added some comments so that you can see what the equivalent statements in R look like:
import numpy as np

def poly(x, degree):
    xbar = np.mean(x)
    x = x - xbar
    # R: outer(x, 0L:degree, "^")
    X = x[:, None] ** np.arange(0, degree + 1)
    # R: qr(X)$qr
    q, r = np.linalg.qr(X)
    # R: r * (row(r) == col(r))
    z = np.diag(np.diagonal(r))
    # R: Z <- qr.qy(QR, z)
    Zq, Zr = np.linalg.qr(q)
    Z = np.matmul(Zq, z)
    # R: colSums(Z^2)
    norm1 = (Z ** 2).sum(0)
    # R: (colSums(x * Z^2)/norm2 + xbar)[1L:degree]
    alpha = ((x[:, None] * (Z ** 2)).sum(0) / norm1 + xbar)[0:degree]
    # R: c(1, norm2)
    norm2 = np.append(1, norm1)
    # R: Z/rep(sqrt(norm1), each = length(x))
    Z = Z / np.sqrt(norm1)
    # R: Z[, -1]
    Z = np.delete(Z, 0, axis=1)
    return [Z, alpha, norm2]
Checking that this works:
x = np.arange(10) + 1
degree = 9
poly(x, degree)
The first row of the returned matrix is
[-0.49543369, 0.52223297, -0.45342519, 0.33658092, -0.21483446,
0.11677484, -0.05269379, 0.01869894, -0.00453516],
compared to the same operation in R:
poly(1:10, 9)
# [1] -0.495433694 0.522232968 -0.453425193 0.336580916 -0.214834462
# [6] 0.116774842 -0.052693786 0.018698940 -0.004535159
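As a sanity check, the same first row can be reproduced with a single QR factorization of the centered Vandermonde matrix, and the resulting columns come out orthonormal (a sketch; the sign fix enforces a positive diagonal in the QR factor, which is the convention poly ends up with):

```python
import numpy as np

x = np.arange(10) + 1.0
xc = x - x.mean()
V = xc[:, None] ** np.arange(10)          # centered monomials xc^0 .. xc^9
Q, R = np.linalg.qr(V)
Z = (Q * np.sign(np.diagonal(R)))[:, 1:]  # fix signs, drop the constant column

print(np.allclose(Z.T @ Z, np.eye(9)))    # columns are orthonormal
print(Z[0, 0])                            # ~ -0.4954, as in poly(1:10, 9)
```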