
Migrating a logistic regression from R to rpy2

I am trying to run a logistic regression with rpy2. I managed to execute it, but I don't know how to extract the coefficients and p-values from the result. I don't want to just print the values to the screen; I want a function that returns them so they can be used independently.

import rpy2.robjects as ro

# Bind the R functions we need
read = ro.r['read.csv']
head = ro.r['head']
summary = ro.r['summary']

mydata = read("http://www.ats.ucla.edu/stat/data/binary.csv")
#cabecalho = head(mydata)
formula = 'admit ~ gre + gpa + rank'
mylogit = ro.r.glm(formula=ro.r(formula), data=mydata,
                   family=ro.r('binomial(link="logit")'))
#What NEXT?

I don't know how to get the p-values, but for everything else it should go something like this:

In [24]:
#what is stored in mylogit?
mylogit.names
Out[24]:
<StrVector - Python:0x10a01a0e0 / R:0x10353ab20>

['coef..., 'resi..., 'fitt..., ..., 'meth..., 'cont..., 'xlev...]
In [25]:
#looks like the first item is the coefficients
mylogit.names[0]
Out[25]:
'coefficients'
In [26]:
#OK, let's get the coefficients.
mylogit[0]
Out[26]:
<FloatVector - Python:0x10a01a5f0 / R:0x1028bcc80>
[-3.449548, 0.002294, 0.777014, -0.560031]
In [27]:
#careful: the indices printed by R start at 1, not 0. I don't see p-values here
print mylogit.names
 [1] "coefficients"      "residuals"         "fitted.values"    
 [4] "effects"           "R"                 "rank"             
 [7] "qr"                "family"            "linear.predictors"
[10] "deviance"          "aic"               "null.deviance"    
[13] "iter"              "weights"           "prior.weights"    
[16] "df.residual"       "df.null"           "y"                
[19] "converged"         "boundary"          "model"            
[22] "call"              "formula"           "terms"            
[25] "data"              "offset"            "control"          
[28] "method"            "contrasts"         "xlevels"   

Edit

P-values for each term:

In [55]:
#p values:
list(summary(mylogit)[-6])[-4:]
Out[55]:
[0.0023265825120094407,
 0.03564051883525258,
 0.017659683902155117,
 1.0581094283250368e-05]

And:

In [56]:
#coefficients 
list(summary(mylogit)[-6])[:4]
Out[56]:
[-3.449548397668471,
 0.0022939595044433334,
 0.7770135737198545,
 -0.5600313868499897]
In [57]:
#S.E.
list(summary(mylogit)[-6])[4:8]
Out[57]:
[1.1328460085495897,
 0.001091839095422917,
 0.327483878497867,
 0.12713698917130048]
In [58]:
#Z value
list(summary(mylogit)[-6])[8:12]
Out[58]:
[-3.0450285137032984,
 2.1010050968680347,
 2.3726773277632214,
 -4.4049445444662885]
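
The slicing above works because summary(mylogit)[-6] is the 4×4 coefficient matrix, and list() flattens it in R's column-major order: first the four coefficients, then the standard errors, then the z values, and finally the p-values. A minimal numpy sketch of that layout, using the values copied from the session above:

```python
import numpy as np

# Flat vector as returned by list(summary(mylogit)[-6]) in the session above
flat = [-3.449548397668471, 0.0022939595044433334, 0.7770135737198545,
        -0.5600313868499897,                        # coefficients
        1.1328460085495897, 0.001091839095422917, 0.327483878497867,
        0.12713698917130048,                        # standard errors
        -3.0450285137032984, 2.1010050968680347, 2.3726773277632214,
        -4.4049445444662885,                        # z values
        0.0023265825120094407, 0.03564051883525258, 0.017659683902155117,
        1.0581094283250368e-05]                     # p-values

# R matrices are column-major, so rebuild the 4x4 table with order='F'
table = np.array(flat).reshape(4, 4, order='F')

print(table[:, 0])   # coefficients
print(table[:, 3])   # p-values
```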

Or, more generally:

In [60]:

import numpy as np
In [62]:

COEF=np.array(summary(mylogit)[-6]) #it has a shape of (number_of_terms, 4)
In [63]:

COEF[:, -1] #p-value
Out[63]:
array([  2.32658251e-03,   3.56405188e-02,   1.76596839e-02,
         1.05810943e-05])
In [66]:

COEF[:, 0] #coefficients
Out[66]:
array([ -3.44954840e+00,   2.29395950e-03,   7.77013574e-01,
        -5.60031387e-01])
In [68]:

COEF[:, 1] #S.E.
Out[68]:
array([  1.13284601e+00,   1.09183910e-03,   3.27483878e-01,
         1.27136989e-01])
In [69]:

COEF[:, 2] #Z
Out[69]:
array([-3.04502851,  2.1010051 ,  2.37267733, -4.40494454])
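
Since the question asks for a function that makes these values usable independently, the columns can be zipped with the term names into a plain dict. A sketch; the hardcoded matrix repeats the values from the session above, and in real use it would come from np.array(summary(mylogit)[-6]):

```python
import numpy as np

def coef_table(terms, coef_matrix):
    """Map each term to its (coef, se, z, p) row of the summary matrix."""
    return {t: dict(zip(("coef", "se", "z", "p"), row))
            for t, row in zip(terms, np.asarray(coef_matrix))}

# Values copied from the session above;
# normally: COEF = np.array(summary(mylogit)[-6])
COEF = np.array([
    [-3.449548397668471,    1.1328460085495897,  -3.0450285137032984, 0.0023265825120094407],
    [ 0.0022939595044433334, 0.001091839095422917, 2.1010050968680347, 0.03564051883525258],
    [ 0.7770135737198545,   0.327483878497867,    2.3726773277632214, 0.017659683902155117],
    [-0.5600313868499897,   0.12713698917130048, -4.4049445444662885, 1.0581094283250368e-05],
])

stats = coef_table(["(Intercept)", "gre", "gpa", "rank"], COEF)
print(stats["gpa"]["p"])   # 0.0176...
```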

If you know where coefficients sits in the summary vector, you can also use summary(mylogit).rx2('coefficients') (or rx).

This is not an answer to what you asked, but if your question is really the more general "how do I move a logistic regression to Python", why not use statsmodels?

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("http://www.ats.ucla.edu/stat/data/binary.csv")
model = smf.glm('admit ~ gre + gpa + rank', df, family=sm.families.Binomial()).fit()
print(model.summary())

This prints:

                 Generalized Linear Model Regression Results                  
==============================================================================
Dep. Variable:                  admit   No. Observations:                  400
Model:                            GLM   Df Residuals:                      396
Model Family:                Binomial   Df Model:                            3
Link Function:                  logit   Scale:                             1.0
Method:                          IRLS   Log-Likelihood:                -229.72
Date:                Sat, 29 Mar 2014   Deviance:                       459.44
Time:                        11:56:19   Pearson chi2:                     399.
No. Iterations:                     5                                         
==============================================================================
                 coef    std err          t      P>|t|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
Intercept     -3.4495      1.133     -3.045      0.002        -5.670    -1.229
gre            0.0023      0.001      2.101      0.036         0.000     0.004
gpa            0.7770      0.327      2.373      0.018         0.135     1.419
rank          -0.5600      0.127     -4.405      0.000        -0.809    -0.311
==============================================================================

There are still some statistical procedures that only have a good implementation in R, but for simple things like linear models, statsmodels is probably easier to work with than rpy2, because all of the introspection, built-in documentation, tab completion (in IPython), and so on work directly on the statsmodels objects.
