Estimate parameters by maximum likelihood (MLE), instrumental variables (IV), or a user-defined estimator.

# S3 method for lvm
estimate(
  x,
  data = parent.frame(),
  estimator = NULL,
  control = list(),
  missing = FALSE,
  weights,
  weightsname,
  data2,
  id,
  fix,
  index = !quick,
  graph = FALSE,
  messages = lava.options()$messages,
  quick = FALSE,
  method,
  param,
  cluster,
  p,
  ...
)

Arguments

x

lvm-object

data

data.frame

estimator

String defining the estimator (see details below)

control

control/optimization parameters (see details below)

missing

Logical variable indicating how to treat missing data. Setting to FALSE leads to complete-case analysis. Otherwise likelihood-based inference is obtained by integrating out the missing data under the assumption that data are missing at random (MAR).

weights

Optional weights to be used by the chosen estimator.

weightsname

Weight names (variable names of the model) in case weights was given as a vector of column names of data.

data2

Optional additional dataset used by the chosen estimator.

id

Vector (or name of a column in data) identifying correlated groups of observations, leading to variance estimates based on a sandwich estimator.

fix

Logical variable indicating whether parameter restrictions should automatically be imposed (e.g. intercepts of latent variables set to 0 and at least one regression parameter of each measurement model fixed to ensure identifiability).

index

For internal use only

graph

For internal use only

messages

Control how much information should be printed during estimation (0: none)

quick

If TRUE, only the parameter estimates are calculated; all additional information, such as standard errors, is skipped.

method

Optimization method

param

Set the parametrization (see help(lava.options)).

cluster

Obsolete. Alias for 'id'.

p

Evaluate model in parameter 'p' (no optimization)

...

Additional arguments to be passed to lower-level functions

Value

An lvmfit-object.

Details

A list of parameters controlling the estimation and optimization procedures is passed via the control argument. By default maximum likelihood estimation is used, assuming multivariate normally distributed measurement errors. A list with one or more of the following elements is expected:

start:

Starting values. The order of the parameters can be shown by calling coef (with mean=TRUE) on the lvm-object or with plot(..., labels=TRUE). Note that this requires a check that it is actually the model being estimated, as estimate might add additional restrictions to the model, e.g. through the fix and exo.fix arguments. The lvm-object of a fitted model can be extracted with the Model-function.

starterfun:

Starter function with signature function(lvm, S, mu). Three built-in functions are available: startvalues, startvalues0, startvalues1, ...

estimator:

String defining which estimator to use (defaults to "gaussian").

meanstructure:

Logical variable indicating whether to fit model with meanstructure.

method:

String pointing to an alternative optimizer (e.g. optim to use simulated annealing).

control:

Parameters passed to the optimizer (default stats::nlminb).

tol:

Tolerance of optimization constraints on lower limit of variance parameters.
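
For example, starting values and optimizer settings can both be supplied through the control argument. The following is a minimal sketch; the model, data, and settings are illustrative only, not part of the package examples:

```r
library(lava)
m <- lvm(y ~ x)
set.seed(1)
d <- sim(m, 100)
## The parameter order expected by 'start' can be inspected with coef(m, mean=TRUE)
start0 <- rep(0, length(coef(m, mean=TRUE)))
## The inner 'control' list is passed on to the optimizer (default stats::nlminb)
e <- estimate(m, d, control=list(start=start0, control=list(trace=0)))
```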

See also

estimate.default, score, information

Author

Klaus K. Holst

Examples

dd <- read.table(header=TRUE, text="
x1 x2 x3
0.0 -0.5 -2.5
-0.5 -2.0 0.0
1.0 1.5 1.0
0.0 0.5 0.0
-2.5 -1.5 -1.0")
e <- estimate(lvm(c(x1,x2,x3)~u), dd)

## Simulation example
m <- lvm(list(y~v1+v2+v3+v4, c(v1,v2,v3,v4)~x))
covariance(m) <- v1~v2+v3+v4
dd <- sim(m, 10000) ## Simulate 10000 observations from model
e <- estimate(m, dd) ## Estimate parameters
e
#>                     Estimate Std. Error   Z-value   P-value
#> Regressions:
#>    y~v1              1.00471    0.01771  56.73304    <1e-12
#>    y~v2              0.99905    0.01082  92.37083    <1e-12
#>    y~v3              1.00161    0.01102  90.85102    <1e-12
#>    y~v4              0.98634    0.01095  90.05146    <1e-12
#>    v1~x              1.00227    0.01007  99.52555    <1e-12
#>    v2~x              1.00698    0.01011  99.61970    <1e-12
#>    v3~x              0.99571    0.01006  98.96857    <1e-12
#>    v4~x              1.01511    0.00984 103.11320    <1e-12
#> Intercepts:
#>    y                 0.00169    0.01001   0.16869     0.866
#>    v1               -0.00924    0.01003  -0.92104     0.357
#>    v2                0.01013    0.01007   1.00576    0.3145
#>    v3               -0.01418    0.01002  -1.41474    0.1571
#>    v4               -0.01685    0.00980  -1.71844   0.08572
#> Residual Variances:
#>    y                 1.00063    0.01415  70.71068
#>    v1                1.00585    0.01127  89.24163
#>    v1~~v2            0.50032    0.00871  57.45759    <1e-12
#>    v1~~v3            0.51051    0.00881  57.93676    <1e-12
#>    v1~~v4            0.48535    0.00846  57.38060    <1e-12
#>    v2                1.01341    0.01433  70.71068
#>    v3                1.00394    0.01420  70.71068
#>    v4                0.96123    0.01359  70.71068
## Using just sufficient statistics
n <- nrow(dd)
e0 <- estimate(m, data=list(S=cov(dd)*(n-1)/n, mu=colMeans(dd), n=n))
rm(dd)

## Multiple group analysis
m <- lvm()
regression(m) <- c(y1,y2,y3)~u
regression(m) <- u~x
d1 <- sim(m, 100, p=c("u,u"=1, "u~x"=1))
d2 <- sim(m, 100, p=c("u,u"=2, "u~x"=-1))

mm <- baptize(m)
regression(mm, u~x) <- NA
covariance(mm, ~u) <- NA
intercept(mm, ~u) <- NA
ee <- estimate(list(mm,mm), list(d1,d2))

## Missing data
d0 <- makemissing(d1, cols=1:2)
e0 <- estimate(m, d0, missing=TRUE)
e0
#>                     Estimate Std. Error  Z value  Pr(>|z|)
#> Regressions:
#>    y1~u              0.96989    0.08121 11.94260    <1e-12
#>    y2~u              1.05692    0.08958 11.79820    <1e-12
#>    y3~u              0.91979    0.06465 14.22685    <1e-12
#>    u~x               1.05608    0.09560 11.04693    <1e-12
#> Intercepts:
#>    y1                0.01209    0.11427  0.10579    0.9158
#>    y2               -0.01144    0.12522 -0.09140    0.9272
#>    y3               -0.14581    0.09463 -1.54088    0.1233
#>    u                 0.05207    0.09881  0.52698    0.5982
#> Residual Variances:
#>    y1                1.05371    0.16559  6.36334
#>    y2                1.24226    0.19643  6.32402
#>    y3                0.89270    0.12626  7.07025
#>    u                 0.96189    0.13605  7.07030
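
The p argument makes it possible to evaluate the model at a fixed parameter vector without optimization. A minimal sketch, using an illustrative model and simulated data rather than the objects from the examples above:

```r
library(lava)
m <- lvm(y ~ x)
set.seed(1)
d <- sim(m, 50)
fit <- estimate(m, d)
## Re-evaluate the fitted model at the estimated parameters (no optimization)
fit0 <- estimate(m, d, p=coef(fit))
```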