Section 8.4. Example. Let {(Rᵢ, Xᵢ) : i = 1, ..., n} be an i.i.d. sample of n random vectors (R, X). Here R is a response indicator and X is a covariate. We assume that logit P[R = 1 | X] = α + βX and that X is normally distributed with mean µ and variance σ². So our probability model has four parameters, θ = (α, β, µ, σ²). Let θ₀ = (α₀, β₀, µ₀, σ₀²) denote the true value of θ.
a. For a given realization of the data, write out the likelihood function of θ.

$$L(\theta; x, r) = \prod_{i=1}^{n} \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x_i-\mu)^2}{2\sigma^2}\right) \cdot \frac{\exp(r_i(\alpha+\beta x_i))}{1+\exp(\alpha+\beta x_i)}$$
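As a numerical check, the log of this likelihood can be evaluated directly. The sketch below is our own illustration (the function name `log_likelihood` is not from the text); it sums the normal log-density term for each xᵢ and the Bernoulli (logit) term for each rᵢ.

```python
import numpy as np

def log_likelihood(theta, x, r):
    """Log-likelihood of theta = (alpha, beta, mu, sigma2) given data arrays (x, r).

    Illustrative helper, not part of the text: computes the log of L(theta; x, r)
    above, dropping no terms.
    """
    alpha, beta, mu, sigma2 = theta
    eta = alpha + beta * x
    # Normal density term for X_i
    normal_part = -0.5 * np.log(2 * np.pi * sigma2) - (x - mu) ** 2 / (2 * sigma2)
    # Bernoulli term for R_i | X_i under the logit model
    logit_part = r * eta - np.log1p(np.exp(eta))
    return np.sum(normal_part + logit_part)
```

For instance, with a single observation x = 0, r = 1 and θ = (0, 0, 0, 1), the value is −½log(2π) − log 2.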
b. Find the maximum likelihood estimator of θ, θ̂ₙ (if there exists a closed-form solution, then present it; if not, indicate how a solution can be found).

To find the MLE, solve the score equations. The log-likelihood for an individual is

$$l(\theta; x, r) \propto -\log(\sigma) - \frac{1}{2\sigma^2}(x-\mu)^2 + r(\alpha+\beta x) - \log(1+\exp(\alpha+\beta x))$$

The score vector for an individual is

$$\psi(x, r; \theta) = \begin{bmatrix} \dfrac{\partial l(\theta;x,r)}{\partial\alpha} \\[4pt] \dfrac{\partial l(\theta;x,r)}{\partial\beta} \\[4pt] \dfrac{\partial l(\theta;x,r)}{\partial\mu} \\[4pt] \dfrac{\partial l(\theta;x,r)}{\partial\sigma^2} \end{bmatrix} = \begin{bmatrix} r - \dfrac{\exp(\alpha+\beta x)}{1+\exp(\alpha+\beta x)} \\[4pt] x\left(r - \dfrac{\exp(\alpha+\beta x)}{1+\exp(\alpha+\beta x)}\right) \\[4pt] \dfrac{x-\mu}{\sigma^2} \\[4pt] -\dfrac{1}{2\sigma^2} + \dfrac{(x-\mu)^2}{2\sigma^4} \end{bmatrix}$$
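The score vector ψ, summed over the sample, is easy to evaluate numerically. The sketch below (function name and layout are ours, for illustration) mirrors the four components above:

```python
import numpy as np

def score(theta, x, r):
    """Score vector psi(x, r; theta), summed over the sample.

    Illustrative sketch of the four partial derivatives above; not from the text.
    """
    alpha, beta, mu, sigma2 = theta
    p = 1.0 / (1.0 + np.exp(-(alpha + beta * x)))  # P[R = 1 | X = x]
    return np.array([
        np.sum(r - p),                                              # d l / d alpha
        np.sum(x * (r - p)),                                        # d l / d beta
        np.sum((x - mu) / sigma2),                                  # d l / d mu
        np.sum(-0.5 / sigma2 + (x - mu) ** 2 / (2 * sigma2 ** 2)),  # d l / d sigma^2
    ])
```

At the MLE all four components vanish simultaneously, which is what the score equations below express.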
The score equations for the full sample are:

$$\sum_{i=1}^{n}\left(r_i - \frac{\exp(\alpha+\beta x_i)}{1+\exp(\alpha+\beta x_i)}\right) = 0 \quad (1)$$

$$\sum_{i=1}^{n} x_i\left(r_i - \frac{\exp(\alpha+\beta x_i)}{1+\exp(\alpha+\beta x_i)}\right) = 0 \quad (2)$$

$$\sum_{i=1}^{n} \frac{x_i - \mu}{\sigma^2} = 0 \quad (3)$$

$$\sum_{i=1}^{n}\left(-\frac{1}{2\sigma^2} + \frac{(x_i-\mu)^2}{2\sigma^4}\right) = 0 \quad (4)$$

Note that Equations (3) and (4) can be solved explicitly to get solutions for µ̂ and σ̂², i.e.,

$$\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i \quad \text{and} \quad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \hat{\mu})^2$$
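The closed-form solutions for µ̂ and σ̂² are just the sample mean and the (1/n-denominator) sample variance. A minimal sketch with made-up data:

```python
import numpy as np

# Hypothetical covariate values, for illustration only
x = np.array([1.0, 2.0, 3.0, 6.0])

mu_hat = x.mean()                        # solves equation (3)
sigma2_hat = ((x - mu_hat) ** 2).mean()  # solves equation (4); note the 1/n, not 1/(n-1)
```

Note that the MLE of σ² uses the 1/n divisor, so it differs from the usual unbiased sample variance by a factor of (n−1)/n.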
The solutions for α̂ and β̂ are obtained by solving Equations (1) and (2). This does not yield simple closed-form solutions. Therefore, we use the Newton-Raphson algorithm. This entails computing the observed information matrix. Some of these computations will be needed later, so let's compute the entire matrix of second partial derivatives. Let

$$p_i(\alpha,\beta) = \frac{\exp(\alpha+\beta x_i)}{1+\exp(\alpha+\beta x_i)} \quad \text{and} \quad q_i(\alpha,\beta) = 1 - p_i(\alpha,\beta).$$

Now,

$$nJ_n(\theta) = \begin{bmatrix} J_{n1}(\alpha,\beta) & 0 \\ 0 & J_{n2}(\mu,\sigma^2) \end{bmatrix}$$

where

$$J_{n1}(\alpha,\beta) = \begin{bmatrix} \sum_{i=1}^{n} p_i(\alpha,\beta)q_i(\alpha,\beta) & \sum_{i=1}^{n} x_i p_i(\alpha,\beta)q_i(\alpha,\beta) \\ \sum_{i=1}^{n} x_i p_i(\alpha,\beta)q_i(\alpha,\beta) & \sum_{i=1}^{n} x_i^2 p_i(\alpha,\beta)q_i(\alpha,\beta) \end{bmatrix}$$
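The Newton-Raphson iteration for (α, β) can be sketched as follows. This is our own minimal illustration (function name and starting value are assumptions; safeguards such as step halving are omitted): each step solves the linear system J_{n1} · step = score and updates the parameters.

```python
import numpy as np

def newton_raphson_logit(x, r, tol=1e-10, max_iter=50):
    """Newton-Raphson for (alpha, beta) in the logit model.

    Illustrative sketch: uses the score equations (1)-(2) and the
    information block J_n1 above; starts from (0, 0).
    """
    theta = np.zeros(2)                            # (alpha, beta) = (0, 0)
    X = np.column_stack([np.ones_like(x), x])      # design matrix rows [1, x_i]
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ theta)))     # p_i(alpha, beta)
        score = X.T @ (r - p)                      # equations (1) and (2)
        w = p * (1 - p)                            # p_i * q_i
        info = X.T @ (X * w[:, None])              # J_n1(alpha, beta)
        step = np.linalg.solve(info, score)        # Newton step
        theta = theta + step
        if np.max(np.abs(step)) < tol:
            break
    return theta
```

At convergence the score in equations (1) and (2) is (numerically) zero at the returned (α̂, β̂).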