

R code: How to correct "Error in parse(text = x, keep.source = FALSE)" output in psych package using own dataset

This is a problem related to my last question about the omegaSem() function in the psych package (that one is already solved: I was missing a variable assignment, which caused an 'object not found' error):

One of the omegaSem function arguments is an object not found

I was trying to use that function to find McDonald's hierarchical omega, following the guide by Dr. William Revelle:

http://personality-project.org/r/psych/HowTo/omega.pdf

So now, with the variable error corrected, I'm getting a different error, one that does not occur when I use the same function with the example dataset (Thurstone) provided in the tutorial that comes with the psych package. That is, I can run the function successfully on the Thurstone data (with no other action, I get the expected result), but it fails on my own data.

I searched through other posted questions, but what they do is not even similar to what I'm trying to do. I have been using R for almost two weeks, so even with basic coding knowledge I can't yet see how to adapt the solutions for that error message (which seems to be frequent) to my procedure. The related questions have no answers so far.

Additionally, I decided to look at more documentation for the package, and while testing other functions I was able to use omegaSem() with another example dataset, but only after I had run the Schmid-Leiman (schmid) transformation. In other words, calling omegaSem() before the schmid transformation produced the same error message with this second example dataset, but calling it after the transformation did not.

This makes sense given the actual procedure behind omegaSem(), but I assume those steps should be carried out completely and automatically by the omegaSem() function itself, as explained in the guide and as I have understood it so far:

  1. omegaSem() applies factor analysis
  2. omegaSem() rotates the factors obliquely
  3. omegaSem() transforms the data with Schmid-Leiman (schmid)

-------necessary steps to print output-------------------

  1. omegaSem() prints McDonald's hierarchical omega
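As I understand it, the steps above correspond roughly to the following lower-level psych calls (a sketch using the Thurstone example correlation matrix; omegaSem() is supposed to do all of this internally, and the n.obs value of 500 is the one used later in this question):

```r
library(psych)
library(GPArotation)

# Steps 1-2: exploratory factor analysis with an oblique rotation
f3 <- fa(Thurstone, nfactors = 3, rotate = "oblimin", n.obs = 500)

# Step 3: Schmid-Leiman transformation to a general + group factor solution
sl <- schmid(Thurstone, nfactors = 3, rotate = "oblimin")
sl$sl   # loadings on g and the group factors, as in the omega() output
```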

So here more questions appear:

- Why does the omegaSem() function work with the Thurstone dataset without any other action, but only work for the second example dataset after performing the schmid transformation?
- Why don't I get the same output with other datasets when applying the omegaSem() function directly?
- How is this related to the error message that R shows when I try to apply the function directly to my data?

This is the code I'm using now (the successful omegaSem() run done after the schmid transformation is not included):

> library(psych)
> library(ctv, lavaan)
> library(GPArotation)
> my.data <- read.file()
Data from the .csv file D:\Users\Admon\Documents\prueba_export_1563806208742.csv has been loaded.
> describe(my.data)
           vars   n mean   sd median trimmed  mad min max range  skew kurtosis
AUT_10_04     1 195 4.11 0.90      4    4.23 1.48   1   5     4 -0.92     0.33
AUN_07_01     2 195 3.79 1.14      4    3.90 1.48   1   5     4 -0.59    -0.71
AUN_07_02     3 195 3.58 1.08      4    3.65 1.48   1   5     4 -0.39    -0.56
AUN_09_01     4 195 4.15 0.80      4    4.23 1.48   1   5     4 -0.76     0.51
AUN_10_01     5 195 4.25 0.79      4    4.34 1.48   1   5     4 -0.91     0.74
AUT_11_01     6 195 4.43 0.77      5    4.56 0.00   1   5     4 -1.69     3.77
AUT_17_01     7 195 4.46 0.67      5    4.55 0.00   1   5     4 -1.34     2.96
AUT_20_03     8 195 4.44 0.65      5    4.53 0.00   2   5     3 -0.84     0.12
CRE_05_02     9 195 2.47 1.01      2    2.43 1.48   1   5     4  0.35    -0.46
CRE_07_04    10 195 2.42 1.08      2    2.34 1.48   1   5     4  0.51    -0.43
CRE_10_01    11 195 4.41 0.68      5    4.51 0.00   2   5     3 -0.79    -0.12
CRE_16_02    12 195 2.75 1.23      3    2.69 1.48   1   5     4  0.29    -0.96
EFEC_03_07   13 195 4.35 0.69      4    4.45 1.48   1   5     4 -0.95     1.59
EFEC_05      14 195 4.53 0.59      5    4.60 0.00   3   5     2 -0.82    -0.34
EFEC_09_02   15 195 2.19 0.91      2    2.11 1.48   1   5     4  0.57    -0.03
EFEC_16_03   16 195 4.21 0.77      4    4.29 1.48   2   5     3 -0.71    -0.04
EVA_02_01    17 195 4.47 0.61      5    4.54 0.00   3   5     2 -0.70    -0.50
EVA_07_01    18 195 4.38 0.60      4    4.43 1.48   3   5     2 -0.40    -0.70
EVA_12_02    19 195 2.64 1.22      2    2.59 1.48   1   5     4  0.30    -1.00
EVA_15_06    20 195 4.19 0.74      4    4.26 1.48   2   5     3 -0.55    -0.29
FLX_04_01    21 195 4.32 0.69      4    4.41 1.48   2   5     3 -0.71     0.05
FLX_04_05    22 195 4.23 0.74      4    4.32 0.00   1   5     4 -0.99     1.69
FLX_08_02    23 195 2.87 1.19      3    2.86 1.48   1   5     4  0.07    -1.05
FLX_10_03    24 195 4.30 0.71      4    4.39 1.48   2   5     3 -0.84     0.66
IDO_01_06    25 195 3.10 1.26      3    3.13 1.48   1   5     4 -0.19    -1.08
IDO_05_02    26 195 2.89 1.26      3    2.87 1.48   1   5     4 -0.03    -1.16
IDO_09_03    27 195 3.87 0.97      4    3.99 1.48   1   5     4 -0.84     0.47
IDO_17_01    28 195 3.94 0.88      4    4.02 0.00   1   5     4 -0.93     1.23
IE_01_03     29 195 4.01 0.88      4    4.10 1.48   1   5     4 -0.91     0.94
IE_10_03     30 195 4.15 1.00      4    4.34 1.48   1   5     4 -1.31     1.28
IE_13_03     31 195 4.16 0.91      4    4.30 1.48   1   5     4 -1.26     1.74
IE_15_01     32 195 4.26 0.85      4    4.39 1.48   1   5     4 -1.16     1.08
LC_07_03     33 195 4.25 0.72      4    4.34 0.00   1   5     4 -1.07     2.64
LC_08_02     34 195 3.25 1.22      4    3.31 1.48   1   5     4 -0.41    -0.90
LC_11_03     35 195 3.50 1.14      4    3.56 1.48   1   5     4 -0.38    -0.68
LC_11_05     36 195 4.42 0.69      5    4.52 0.00   1   5     4 -1.14     1.97
ME_02_03     37 195 4.11 0.92      4    4.25 1.48   1   5     4 -1.18     1.29
ME_07_06     38 195 3.19 1.28      3    3.24 1.48   1   5     4 -0.28    -1.03
ME_09_01     39 195 4.24 0.77      4    4.34 1.48   1   5     4 -1.12     2.19
ME_09_06     40 195 3.23 1.33      4    3.29 1.48   1   5     4 -0.31    -1.14
NEG_01_03    41 195 4.18 0.76      4    4.27 0.00   1   5     4 -1.28     3.33
NEG_05_04    42 195 4.27 0.69      4    4.35 0.00   1   5     4 -0.87     1.75
NEG_07_03    43 195 4.32 0.73      4    4.43 1.48   1   5     4 -1.05     1.55
NEG_08_01    44 195 3.95 0.88      4    4.02 1.48   1   5     4 -0.67     0.29
OP_03_05     45 195 4.32 0.66      4    4.39 0.00   1   5     4 -0.99     2.54
OP_12_01     46 195 4.16 0.80      4    4.25 1.48   1   5     4 -1.02     1.57
OP_14_01     47 195 4.27 0.78      4    4.38 1.48   1   5     4 -1.15     1.67
OP_14_02     48 195 4.36 0.68      4    4.44 1.48   1   5     4 -1.07     2.35
ORL_01_03    49 195 4.36 0.77      4    4.49 1.48   1   5     4 -1.31     2.08
ORL_03_01    50 195 4.41 0.69      4    4.50 1.48   1   5     4 -1.28     2.77
ORL_03_05    51 195 4.36 0.74      4    4.48 1.48   2   5     3 -1.13     1.28
ORL_10_05    52 195 4.40 0.68      4    4.48 1.48   1   5     4 -1.18     2.57
PER_08_02    53 195 3.23 1.29      4    3.29 1.48   1   5     4 -0.26    -1.17
PER_16_01    54 195 4.29 0.70      4    4.38 1.48   2   5     3 -0.74     0.27
PER_19_06    55 195 3.19 1.25      3    3.24 1.48   1   5     4 -0.20    -1.06
PER_22_06    56 195 4.21 0.73      4    4.29 0.00   1   5     4 -0.89     1.46
PLA_01_03    57 195 4.23 0.68      4    4.31 0.00   2   5     3 -0.81     1.18
PLA_05_01    58 195 4.06 0.77      4    4.13 0.00   1   5     4 -0.89     1.29
PLA_07_02    59 195 2.94 1.19      3    2.94 1.48   1   5     4  0.00    -1.02
PLA_10_01    60 195 4.03 0.76      4    4.08 0.00   1   5     4 -0.68     0.87
PLA_12_02    61 195 2.67 1.11      2    2.62 1.48   1   5     4  0.41    -0.61
PLA_18_01    62 195 4.01 0.85      4    4.09 1.48   1   5     4 -0.82     0.78
PR_06_02     63 195 3.02 1.27      3    3.02 1.48   1   5     4 -0.01    -1.13
PR_15_03     64 195 3.55 1.07      4    3.62 1.48   1   5     4 -0.46    -0.22
PR_25_01     65 195 2.36 1.04      2    2.27 1.48   1   5     4  0.73     0.06
PR_25_06     66 195 2.95 1.17      3    2.94 1.48   1   5     4  0.04    -0.86
REL_09_05    67 195 3.81 0.95      4    3.89 1.48   1   5     4 -0.51    -0.31
REL_14_03    68 195 3.99 0.88      4    4.08 1.48   1   5     4 -0.75     0.39
REL_14_06    69 195 2.93 1.26      3    2.92 1.48   1   5     4  0.06    -1.11
REL_16_04    70 195 3.16 1.27      3    3.20 1.48   1   5     4 -0.13    -1.11
RS_02_03     71 195 4.14 0.75      4    4.22 0.00   1   5     4 -0.82     1.14
RS_07_05     72 195 4.29 0.67      4    4.38 0.00   2   5     3 -0.72     0.59
RS_08_05     73 195 4.04 0.88      4    4.13 1.48   1   5     4 -0.97     1.26
RS_13_03     74 195 4.19 0.69      4    4.25 0.00   2   5     3 -0.46    -0.17
TF_03_01     75 195 4.01 0.82      4    4.06 1.48   1   5     4 -0.63     0.32
TF_04_01     76 195 4.09 0.76      4    4.15 0.00   1   5     4 -0.70     0.76
TF_10_03     77 195 4.11 0.85      4    4.21 1.48   1   5     4 -0.96     0.99
TF_12_01     78 195 4.11 0.85      4    4.21 1.48   1   5     4 -1.10     1.66
TRE_09_05    79 195 4.29 0.79      4    4.39 1.48   1   5     4 -1.12     1.74
TRE_09_06    80 195 4.33 0.69      4    4.42 1.48   1   5     4 -1.10     2.36
TRE_26_04    81 195 2.97 1.20      3    2.96 1.48   1   5     4  0.08    -1.01
TRE_26_05    82 195 3.99 0.84      4    4.03 1.48   1   5     4 -0.41    -0.37

Up to this point, I have loaded the libraries, imported my own dataset, and run some simple descriptive statistics.


> r9 <- my.data
> omega(r9)
Omega 
Call: omega(m = r9)
Alpha:                 0.95 
G.6:                   0.98 
Omega Hierarchical:    0.85 
Omega H asymptotic:    0.89 
Omega Total            0.96 

Schmid Leiman Factor loadings greater than  0.2 
                g   F1*   F2*   F3*   h2   u2   p2
AUT_10_04    0.43              0.30 0.27 0.73 0.68
AUN_07_01                           0.05 0.95 0.53
AUN_07_02                           0.06 0.94 0.26
AUN_09_01    0.38              0.30 0.24 0.76 0.59
AUN_10_01    0.35              0.55 0.44 0.56 0.29
AUT_11_01    0.42              0.30 0.27 0.73 0.66
AUT_17_01    0.32              0.40 0.28 0.72 0.37
AUT_20_03    0.41              0.25 0.24 0.76 0.73
CRE_05_02-   0.24       -0.53       0.34 0.66 0.17
CRE_07_04-   0.37       -0.51       0.39 0.61 0.35
CRE_10_01    0.46              0.48 0.46 0.54 0.47
CRE_16_02-              -0.70       0.48 0.52 0.01
EFEC_03_07   0.46              0.31 0.31 0.69 0.68
EFEC_05      0.43              0.32 0.29 0.71 0.64
EFEC_09_02-  0.29       -0.46       0.29 0.71 0.28
EFEC_16_03   0.49              0.26 0.31 0.69 0.77
EVA_02_01    0.55              0.21 0.36 0.64 0.85
EVA_07_01    0.57                   0.37 0.63 0.89
EVA_12_02-              -0.61       0.39 0.61 0.06
EVA_15_06    0.50              0.37 0.39 0.61 0.65
FLX_04_01    0.57              0.30 0.42 0.58 0.78
FLX_04_05    0.52              0.26 0.34 0.66 0.80
FLX_08_02-              -0.78       0.60 0.40 0.00
FLX_10_03    0.39              0.29 0.24 0.76 0.63
IDO_01_06-              -0.80       0.64 0.36 0.00
IDO_05_02-              -0.78       0.62 0.38 0.00
IDO_09_03    0.41              0.49 0.42 0.58 0.40
IDO_17_01    0.51              0.51 0.54 0.46 0.49
IE_01_03     0.44              0.60 0.56 0.44 0.35
IE_10_03     0.41              0.53 0.44 0.56 0.37
IE_13_03     0.39              0.48 0.38 0.62 0.40
IE_15_01     0.39              0.40 0.31 0.69 0.49
LC_07_03     0.50                   0.27 0.73 0.91
LC_08_02                 0.83       0.69 0.31 0.00
LC_11_03     0.25                   0.10 0.90 0.60
LC_11_05     0.45        0.24       0.27 0.73 0.75
ME_02_03     0.55                   0.31 0.69 0.99
ME_07_06                 0.85       0.75 0.25 0.02
ME_09_01     0.64                   0.45 0.55 0.93
ME_09_06                 0.81       0.69 0.31 0.02
NEG_01_03    0.58              0.20 0.38 0.62 0.88
NEG_05_04    0.70                   0.50 0.50 0.98
NEG_07_03    0.64                   0.43 0.57 0.96
NEG_08_01    0.43              0.25 0.25 0.75 0.74
OP_03_05     0.62                   0.40 0.60 0.98
OP_12_01     0.67                   0.46 0.54 0.98
OP_14_01     0.60                   0.38 0.62 0.95
OP_14_02     0.66                   0.47 0.53 0.93
ORL_01_03    0.67                   0.47 0.53 0.96
ORL_03_01    0.66                   0.48 0.52 0.91
ORL_03_05    0.64                   0.46 0.54 0.90
ORL_10_05    0.66                   0.49 0.51 0.89
PER_08_02    0.21        0.84       0.75 0.25 0.06
PER_16_01    0.68              0.21 0.50 0.50 0.91
PER_19_06    0.20        0.73       0.58 0.42 0.07
PER_22_06    0.53                   0.30 0.70 0.94
PLA_01_03    0.57                   0.36 0.64 0.89
PLA_05_01    0.61                   0.42 0.58 0.89
PLA_07_02                0.75       0.61 0.39 0.04
PLA_10_01    0.56                   0.36 0.64 0.88
PLA_12_02                0.61       0.37 0.63 0.00
PLA_18_01    0.63                   0.47 0.53 0.85
PR_06_02                 0.77       0.62 0.38 0.03
PR_15_03     0.31       -0.39  0.24 0.31 0.69 0.31
PR_25_01-               -0.56       0.32 0.68 0.00
PR_25_06                 0.74       0.55 0.45 0.01
REL_09_05    0.41       -0.23  0.38 0.37 0.63 0.45
REL_14_03    0.41       -0.21  0.29 0.30 0.70 0.56
REL_14_06                0.66  0.21 0.48 0.52 0.04
REL_16_04                0.78       0.63 0.37 0.03
RS_02_03     0.57                   0.36 0.64 0.90
RS_07_05     0.68                   0.47 0.53 0.99
RS_08_05     0.44                   0.20 0.80 0.95
RS_13_03     0.67                   0.46 0.54 0.97
TF_03_01     0.66                   0.44 0.56 0.98
TF_04_01     0.74                   0.56 0.44 0.98
TF_10_03     0.70                   0.50 0.50 0.98
TF_12_01     0.61                   0.40 0.60 0.92
TRE_09_05    0.70              0.23 0.55 0.45 0.89
TRE_09_06    0.62                   0.41 0.59 0.93
TRE_26_04-              -0.68       0.47 0.53 0.00
TRE_26_05    0.55       -0.21       0.34 0.66 0.88

With eigenvalues of:
    g   F1*   F2*   F3* 
18.06  0.04 11.47  4.32 

general/max  1.57   max/min =   267.1
mean percent general =  0.58    with sd =  0.36 and cv of  0.63 
Explained Common Variance of the general factor =  0.53 

The degrees of freedom are 3078  and the fit is  34.62 
The number of observations was  195  with Chi Square =  5671.12  with prob <  2.8e-157
The root mean square of the residuals is  0.06 
The df corrected root mean square of the residuals is  0.06
RMSEA index =  0.078  and the 10 % confidence intervals are  0.063 NA
BIC =  -10559.18

Compare this with the adequacy of just a general factor and no group factors
The degrees of freedom for just the general factor are 3239  and the fit is  51.52 
The number of observations was  195  with Chi Square =  8509.84  with prob <  0
The root mean square of the residuals is  0.16 
The df corrected root mean square of the residuals is  0.16 

RMSEA index =  0.104  and the 10 % confidence intervals are  0.089 NA
BIC =  -8569.4 

Measures of factor score adequacy             
                                                 g   F1*  F2*  F3*
Correlation of scores with factors            0.98  0.07 0.98 0.91
Multiple R square of scores with factors      0.95  0.00 0.97 0.83
Minimum correlation of factor score estimates 0.91 -0.99 0.94 0.66

 Total, General and Subset omega for each subset
                                                 g F1*  F2*  F3*
Omega total for total scores and subscales    0.96  NA 0.83 0.95
Omega general for total scores and subscales  0.85  NA 0.82 0.76
Omega group for total scores and subscales    0.09  NA 0.01 0.19

Up to here, I have applied the basic (non-hierarchical) omega() function to my own dataset:

> omegaSem(r9,n.obs=198)
Error in parse(text = x, keep.source = FALSE) : 
  <text>:2:0: unexpected end of input
1: ~ 

The above is the error message that appears when I try to use the omegaSem() function directly on my own dataset.

Next, I present the expected output of omegaSem() applied directly to the Thurstone dataset. It's similar to the output of the basic omega() function, but with some differences:


> r9 <- Thurstone
> omegaSem(r9,n.obs=500)

Call: omegaSem(m = r9, n.obs = 500)
Omega 
Call: omega(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
    digits = digits, title = title, sl = sl, labels = labels, 
    plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option)
Alpha:                 0.89 
G.6:                   0.91 
Omega Hierarchical:    0.74 
Omega H asymptotic:    0.79 
Omega Total            0.93 

Schmid Leiman Factor loadings greater than  0.2 
                     g   F1*   F2*   F3*   h2   u2   p2
Sentences         0.71  0.56             0.82 0.18 0.61
Vocabulary        0.73  0.55             0.84 0.16 0.63
Sent.Completion   0.68  0.52             0.74 0.26 0.63
First.Letters     0.65        0.56       0.73 0.27 0.57
Four.Letter.Words 0.62        0.49       0.63 0.37 0.61
Suffixes          0.56        0.41       0.50 0.50 0.63
Letter.Series     0.59              0.62 0.73 0.27 0.48
Pedigrees         0.58  0.24        0.34 0.51 0.49 0.66
Letter.Group      0.54              0.46 0.52 0.48 0.56

With eigenvalues of:
   g  F1*  F2*  F3* 
3.58 0.96 0.74 0.72 

general/max  3.73   max/min =   1.34
mean percent general =  0.6    with sd =  0.05 and cv of  0.09 
Explained Common Variance of the general factor =  0.6 

The degrees of freedom are 12  and the fit is  0.01 
The number of observations was  500  with Chi Square =  7.12  with prob <  0.85
The root mean square of the residuals is  0.01 
The df corrected root mean square of the residuals is  0.01
RMSEA index =  0  and the 10 % confidence intervals are  0 0.026
BIC =  -67.45

Compare this with the adequacy of just a general factor and no group factors
The degrees of freedom for just the general factor are 27  and the fit is  1.48 
The number of observations was  500  with Chi Square =  730.93  with prob <  1.3e-136
The root mean square of the residuals is  0.14 
The df corrected root mean square of the residuals is  0.16 

RMSEA index =  0.23  and the 10 % confidence intervals are  0.214 0.243
BIC =  563.14 

Measures of factor score adequacy             
                                                 g  F1*  F2*  F3*
Correlation of scores with factors            0.86 0.73 0.72 0.75
Multiple R square of scores with factors      0.74 0.54 0.51 0.57
Minimum correlation of factor score estimates 0.49 0.07 0.03 0.13

 Total, General and Subset omega for each subset
                                                 g  F1*  F2*  F3*
Omega total for total scores and subscales    0.93 0.92 0.83 0.79
Omega general for total scores and subscales  0.74 0.58 0.50 0.47
Omega group for total scores and subscales    0.16 0.34 0.32 0.32

 The following analyses were done using the  lavaan  package 

 Omega Hierarchical from a confirmatory model using sem =  0.79
 Omega Total  from a confirmatory model using sem =  0.93 
With loadings of 
                     g  F1*  F2*  F3*   h2   u2   p2
Sentences         0.77 0.49           0.83 0.17 0.71
Vocabulary        0.79 0.45           0.83 0.17 0.75
Sent.Completion   0.75 0.40           0.73 0.27 0.77
First.Letters     0.61      0.61      0.75 0.25 0.50
Four.Letter.Words 0.60      0.51      0.61 0.39 0.59
Suffixes          0.57      0.39      0.48 0.52 0.68
Letter.Series     0.57           0.73 0.85 0.15 0.38
Pedigrees         0.66           0.25 0.50 0.50 0.87
Letter.Group      0.53           0.41 0.45 0.55 0.62

With eigenvalues of:
   g  F1*  F2*  F3* 
3.87 0.60 0.79 0.76 

The degrees of freedom of the confimatory model are  18  and the fit is  57.11391  with p =  5.936744e-06
general/max  4.92   max/min =   1.3
mean percent general =  0.65    with sd =  0.15 and cv of  0.23 
Explained Common Variance of the general factor =  0.64 

Measures of factor score adequacy             
                                                 g   F1*  F2*  F3*
Correlation of scores with factors            0.90  0.68 0.80 0.85
Multiple R square of scores with factors      0.81  0.46 0.64 0.73
Minimum correlation of factor score estimates 0.62 -0.08 0.27 0.45

 Total, General and Subset omega for each subset
                                                 g  F1*  F2*  F3*
Omega total for total scores and subscales    0.93 0.92 0.82 0.80
Omega general for total scores and subscales  0.79 0.69 0.48 0.50
Omega group for total scores and subscales    0.14 0.23 0.35 0.31

To get the standard sem fit statistics, ask for summary on the fitted object>

I expect to get the same kind of output by applying the function directly. I want to confirm whether it is mandatory to run the schmid transformation before omegaSem(); I assume it is not, because according to the guide it is not supposed to work like that. Maybe this can be solved by correcting the error message:

> r9 <- my.data
> omegaSem(r9,n.obs=198)
Error in parse(text = x, keep.source = FALSE) : 
  <text>:2:0: unexpected end of input
1: ~ 
   ^

I hope I've been clear enough. Feel free to ask for any other information you might need.

Thank you so much for any guidance toward the answer to this issue. I highly appreciate any help.

Regards,

Danilo

--------------------------EDIT--------------------------------------------------

I asked for help on Slack and someone there helped me. We found that when we created a fake data file by just replacing some of the values in the original file (for example, replacing a 4 with a 5) and saving it again, the function works! The problem is that when I use the original dataset I get the same error message.

Well, I was cross-posting this question on several sites; I forgot to mention that. It seems there is a bug in the package. Below I'm quoting an answer I received by mail:

"This error probably comes from calling factor("~"), and psych::omegaSem(data) will do that if all the columns in data are very highly correlated with one another. In that case omega(data, nfactors=n) will not be able to find n factors in the data, but it returns "~" in place of the factors that it could not find. E.g.,

> psych::omega(my.data)$model$lavaan
[1] g =~ +AUT_10_04+AUN_07_01+AUN_07_02+AUN_09_01+AUN_10_01+AUT_11_01+AUT_17_01+AUT_20_03+CRE_05_02+CRE_07_04+CRE_10_01+CRE_16_02+EFEC_03_07+EFEC_05+EFEC_09_02+EFEC_16_03+EVA_02_01+EVA_07_01+EVA_12_02+EVA_15_06+FLX_04_01+FLX_04_05+FLX_08_02+FLX_10_03+IDO_01_06+IDO_05_02+IDO_09_03+IDO_17_01+IE_01_03+IE_10_03+IE_13_03+IE_15_01+LC_07_03+LC_08_02+LC_11_03+LC_11_05+ME_02_03+ME_07_06+ME_09_01+ME_09_06+NEG_01_03+NEG_05_04+NEG_07_03+NEG_08_01+OP_03_05+OP_12_01+OP_14_01+OP_14_02+ORL_01_03+ORL_03_01+ORL_03_05+ORL_10_05+PER_08_02+PER_16_01+PER_19_06+PER_22_06+PLA_01_03+PLA_05_01+PLA_07_02+PLA_10_01+PLA_12_02+PLA_18_01+PR_06_02+PR_15_03+PR_25_01+PR_25_06+REL_09_05+REL_14_03+REL_14_06+REL_16_04+RS_02_03+RS_07_05+RS_08_05+RS_13_03+TF_03_01+TF_04_01+TF_10_03+TF_12_01+TRE_09_05+TRE_09_06+TRE_26_04+TRE_26_05
[2] F1=~

Element #2 of that output, the empty formula " F1=~ ", triggers the bug in omegaSem.

omegaSem needs to ignore such entries in omega's output. psych's author should be able to fix this."
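Based on that diagnosis, you can check whether your data triggers the bug by inspecting the lavaan model strings that omega() builds before omegaSem() tries to parse them (a sketch, not a verified fix; the nfactors value in the commented workaround is a guess you would have to tune for your data):

```r
library(psych)

r9 <- my.data                    # your own data frame, as loaded above
om <- omega(r9)                  # runs fine, as shown earlier
mod <- om$model$lavaan           # the model strings omegaSem() will parse
empty <- grepl("=~\\s*$", mod)   # formulas with nothing after "=~"
mod[empty]                       # the offending empty factor(s), e.g. "F1=~"

# Possible workaround (untested): request fewer group factors so every
# factor keeps at least one loading, then retry:
# omegaSem(r9, nfactors = 2, n.obs = 195)   # 195 rows per describe() above
```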

