[Lign251] questions:log-likelihood
Roger Levy
rlevy at ucsd.edu
Thu Dec 4 08:55:20 PST 2008
Wen-Hsuan Chan wrote:
> Hi Roger,
>
> I am still confused about the joint log-likelihood of b-hat and y.
> Suppose I got b-hat from model 7.1.1 ,
>
> x <- b.hat
> y <- F1
> log.lik.x <- sum(dnorm(x, sd=11657.7)) + sum(dnorm(y, mean=722, sd=1221.6))
>
> I think that the first part, P(b|sigma.b), can be written in the form
> b ~ N(0, sigma.b), so
> dnorm(x, sd=sigma.b), right?
>
> For the second part, P(y|theta, b), with y = mu + b + error, could I
> regard it as y ~ N(mu, sigma.y)? But this form seems to contain nothing
> about b. Intuitively I do not think it is the right way, but I cannot
> figure it out.
Hi Wen,
You're pretty close. The first part is right. For the second part, the
catch is that you need to make a means vector of length equal to
length(y), since not all of the y should have their probability
calculated using the same mean. One way to do this would be to do
something along the lines of
dnorm(y, mean=mu + x[aa$speaker], sd=sigma.y)
if aa$speaker is a vector whose values are the speaker number for each
observation (which it is, if I'm recalling correctly at this moment).
This would use the appropriate mean for each observation.
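For concreteness, a tiny hypothetical sketch of that indexing trick (the effect values and speaker assignments below are invented; only the names mu and x follow the thread, and speaker stands in for aa$speaker):

```r
# Indexing a speaker-level vector to build a per-observation mean vector.
x <- c(-50, 20, 30)          # hypothetical b-hat values, one per speaker
speaker <- c(1, 1, 2, 3, 3)  # hypothetical speaker index per observation
mu <- 722
means <- mu + x[speaker]     # c(672, 672, 742, 752, 752)
```

Because x[speaker] repeats each speaker's effect once per observation, means has the same length as the data, which is exactly what dnorm() needs.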
One more tip: you would want prod() instead of sum(), and you need to
take a log. However, the overall probability will get so small that you
could have numerical underflow errors, so it's better to take the log of
the output from dnorm() and then use sum(). That is:
log.lik.x <- sum(log(dnorm(x, sd=sigma.b))) +
             sum(log(dnorm(y, mean=mu + x[aa$speaker], sd=sigma.y)))
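(As a self-contained sketch of that computation: the data below are simulated stand-ins for b-hat and F1; only sigma.b, sigma.y, and mu use numbers from the thread. Note that dnorm() also has a log=TRUE argument, which computes the log density directly and is the standard way to sidestep underflow:)

```r
# Joint log-likelihood of the speaker effects and the observations.
# Data are simulated; only sigma.b, sigma.y, and mu come from the thread.
set.seed(1)
sigma.b <- 11657.7
sigma.y <- 1221.6
mu <- 722
x <- rnorm(3, sd = sigma.b)            # stand-in for the fitted b-hat
speaker <- c(1, 1, 2, 3, 3)            # stand-in for aa$speaker
y <- rnorm(5, mean = mu + x[speaker], sd = sigma.y)  # stand-in for F1
# log = TRUE returns log densities directly, avoiding underflow
log.lik.x <- sum(dnorm(x, sd = sigma.b, log = TRUE)) +
  sum(dnorm(y, mean = mu + x[speaker], sd = sigma.y, log = TRUE))
```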
Make sense?
Roger
--
Roger Levy Email: rlevy at ucsd.edu
Assistant Professor Phone: 858-534-7219
Department of Linguistics Fax: 858-534-4789
UC San Diego Web: http://ling.ucsd.edu/~rlevy