Estimate parameters. MLE, IV, or a user-defined estimator.
Usage
# S3 method for class 'lvm'
estimate(
x,
data = parent.frame(),
estimator = NULL,
control = list(),
missing = FALSE,
weights,
weightsname,
data2,
id,
fix,
index = !quick,
graph = FALSE,
messages = lava.options()$messages,
quick = FALSE,
method,
param,
cluster,
p,
...
)
Arguments
- x
lvm-object
- data
data.frame
- estimator
String defining the estimator (see details below)
- control
control/optimization parameters (see details below)
- missing
Logical variable indicating how to treat missing data. Setting to FALSE leads to a complete-case analysis. Otherwise, likelihood-based inference is obtained by integrating out the missing data under the assumption that data are missing at random (MAR).
- weights
Optional weights to be used by the chosen estimator.
- weightsname
Weight names (variable names of the model) in case weights was given as a vector of column names of data
- data2
Optional additional dataset used by the chosen estimator.
- id
Vector (or name of column in data) that identifies correlated groups of observations in the data, leading to variance estimates based on a sandwich estimator
- fix
Logical variable indicating whether parameter restrictions should be imposed automatically (e.g., intercepts of latent variables set to 0 and at least one regression parameter of each measurement model fixed to ensure identifiability)
- index
For internal use only
- graph
For internal use only
- messages
Control how much information should be printed during estimation (0: none)
- quick
If TRUE, only the parameter estimates are calculated; all additional information such as standard errors is skipped
- method
Optimization method
- param
Set parametrization (see help(lava.options))
- cluster
Obsolete. Alias for 'id'.
- p
Evaluate the model at the parameter vector 'p' (no optimization)
- ...
Additional arguments to be passed to lower-level functions
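As an illustration of the id argument, a clustered analysis might look like the following sketch (the model, sample size, and cluster column are hypothetical choices for illustration, not part of the documented API):

```r
library(lava)
m <- lvm(y ~ x)                  # a simple structural model
d <- sim(m, 200)                 # simulate 200 observations
d$clust <- rep(1:50, each = 4)   # hypothetical cluster indicator (50 groups of 4)
## Sandwich (robust) standard errors accounting for within-cluster correlation:
e <- estimate(m, d, id = d$clust)
```

With id supplied, point estimates are unchanged but the variance estimates are based on a sandwich estimator over the correlated groups.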
Details
A list of parameters controlling the estimation and optimization procedures
is parsed via the control argument. By default Maximum Likelihood is
used assuming multivariate normal distributed measurement errors. A list
with one or more of the following elements is expected:
- start:
Starting values. The order of the parameters can be shown by calling coef (with mean=TRUE) on the lvm-object or with plot(..., labels=TRUE). Note that this requires a check that it is actually the model being estimated, as estimate might add additional restrictions to the model, e.g. through the fix and exo.fix arguments. The lvm-object of a fitted model can be extracted with the Model function.
- starterfun:
Starter-function with syntax function(lvm, S, mu). Three built-in functions are available: startvalues, startvalues0, startvalues1, ...
- estimator:
String defining which estimator to use (defaults to "gaussian")
- meanstructure:
Logical variable indicating whether to fit the model with a mean structure.
- method:
String pointing to an alternative optimizer (e.g. optim to use simulated annealing).
- control:
Parameters passed to the optimizer (default stats::nlminb).
Tolerance of optimization constraints on lower limit of variance parameters.
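A minimal sketch of passing such a list via the control argument (the model, data, and specific control values are illustrative assumptions, not defaults):

```r
library(lava)
m <- lvm(y ~ x)
d <- sim(m, 100)
## Switch optimizer and tighten the variance lower-bound tolerance:
e <- estimate(m, d,
              control = list(method = "nlminb",  # optimizer (hypothetical choice)
                             tol = 1e-9))        # variance constraint tolerance
```

Elements not listed in the control list fall back to their defaults, so only the settings being changed need to be supplied.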
Examples
dd <- read.table(header=TRUE,
text="x1 x2 x3
0.0 -0.5 -2.5
-0.5 -2.0 0.0
1.0 1.5 1.0
0.0 0.5 0.0
-2.5 -1.5 -1.0")
e <- estimate(lvm(c(x1,x2,x3)~u),dd)
## Simulation example
m <- lvm(list(y~v1+v2+v3+v4,c(v1,v2,v3,v4)~x))
covariance(m) <- v1~v2+v3+v4
dd <- sim(m,10000) ## Simulate 10000 observations from model
e <- estimate(m, dd) ## Estimate parameters
e
#> Estimate Std. Error Z-value P-value
#> Regressions:
#> y~v1 1.00149 0.01785 56.11702 <1e-12
#> y~v2 0.99393 0.01085 91.60386 <1e-12
#> y~v3 1.00639 0.01103 91.22640 <1e-12
#> y~v4 0.99367 0.01099 90.42082 <1e-12
#> v1~x 0.99195 0.01008 98.39443 <1e-12
#> v2~x 1.00054 0.01011 98.96869 <1e-12
#> v3~x 0.98866 0.01020 96.95567 <1e-12
#> v4~x 1.00539 0.00986 101.96561 <1e-12
#> Intercepts:
#> y 0.00585 0.01001 0.58393 0.5593
#> v1 -0.00897 0.01001 -0.89547 0.3705
#> v2 0.00585 0.01004 0.58266 0.5601
#> v3 -0.01329 0.01013 -1.31181 0.1896
#> v4 -0.01010 0.00979 -1.03097 0.3026
#> Residual Variances:
#> y 1.00198 0.01417 70.71068
#> v1 1.00276 0.01121 89.48353
#> v1~~v2 0.49738 0.00864 57.55835 <1e-12
#> v1~~v3 0.52060 0.00894 58.26409 <1e-12
#> v1~~v4 0.48319 0.00841 57.48108 <1e-12
#> v2 1.00840 0.01426 70.71068
#> v3 1.02590 0.01451 70.71068
#> v4 0.95924 0.01357 70.71068
## Using just sufficient statistics
n <- nrow(dd)
e0 <- estimate(m,data=list(S=cov(dd)*(n-1)/n,mu=colMeans(dd),n=n))
rm(dd)
## Multiple group analysis
m <- lvm()
regression(m) <- c(y1,y2,y3)~u
regression(m) <- u~x
d1 <- sim(m,100,p=c("u,u"=1,"u~x"=1))
d2 <- sim(m,100,p=c("u,u"=2,"u~x"=-1))
mm <- baptize(m)
regression(mm,u~x) <- NA
covariance(mm,~u) <- NA
intercept(mm,~u) <- NA
ee <- estimate(list(mm,mm),list(d1,d2))
## Missing data
d0 <- makemissing(d1,cols=1:2)
e0 <- estimate(m,d0,missing=TRUE)
e0
#> Estimate Std. Error Z value Pr(>|z|)
#> Regressions:
#> y1~u 1.11173 0.07587 14.65369 <1e-12
#> y2~u 1.00679 0.07355 13.68871 <1e-12
#> y3~u 1.03929 0.06293 16.51400 <1e-12
#> u~x 1.14033 0.12487 9.13202 <1e-12
#> Intercepts:
#> y1 0.00405 0.10782 0.03752 0.9701
#> y2 0.10148 0.10931 0.92832 0.3532
#> y3 0.07911 0.09085 0.87082 0.3839
#> u -0.03681 0.10653 -0.34549 0.7297
#> Residual Variances:
#> y1 0.97632 0.15066 6.48007
#> y2 0.96712 0.15198 6.36330
#> y3 0.82342 0.11646 7.07021
#> u 1.13361 0.16033 7.07044
