Tuesday, 28 April 2020
Abstract
This work extends geostatistical data by adding a time component, yielding space-time geostatistical data, and applies it to a case study of air pollution measured by the concentrations of NO2 and SO2 at 12 fixed monitoring locations in Indonesia from 2012 to 2015, with measurements taken twice a year. Because the concentrations of these gases exceeded their thresholds repeatedly over the years, the air-pollution data are well suited to the Spatial Nonhomogeneous Poisson Process (SNHPP) model. Utilizing prior information, we take a Bayesian approach to inference for the proposed model via the Markov Chain Monte Carlo (MCMC) method. To generate samples from the conditional posterior densities, we use a Gibbs sampling algorithm with Metropolis-Hastings steps, which achieves good convergence in this case. The Deviance Information Criterion (DIC), widely used in Bayesian modeling, is applied to select a well-fitting SNHPP model. The analysis identifies, for each concentration threshold, the districts in Indonesia that most often exceed it. In addition, the implementation shows that the SNHPP model performs reasonably well on space-time geostatistical data.
Keywords: Spatial Nonhomogeneous Poisson Process (SNHPP), Space-time Geostatistical Data, MCMC, Gibbs Sampling Algorithm with Metropolis-Hastings Steps, Air Pollution
Tuesday, 29 December 2015
A Geostatistical Poisson Process R Code (A Manual R Code Sample)
#=========================================================#
# UJI AUTOKORELASI SPASIAL (spatial autocorrelation test)
#=========================================================#
> library(sp)
> library(ape)
> library(geosphere)
> library(pracma)
> library(mvtnorm)
> library(Runuran)
> library(coda)

> getwd()
[1] "C:/Users/ALLAH/Documents"
> ozone <- read.table("tij 110pb baru mei 2.csv", header=TRUE, sep=";")
> rata  <- read.table("rata2 baru mei.csv", header=TRUE, sep=";")
> p     <- read.table("length of i each j.csv", header=TRUE, sep=";")

> n  <- length(ozone[-(1:5),]) * length(ozone[,1])
> n1 <- length(ozone[,1])
> n2 <- length(ozone[-(1:5),])
> n3 <- 365
> n4 <- sum(p)

> datas <- ozone[,-(1:5)]
> data  <- list()
> for (f in 1:n1) data[f] <- list(datas[f, 1:p[f,]])

> tj <- NULL
> for (f in 1:n1) tj[f] <- c(datas[f, p[f,]])
> rata2 <- rata$RATA.RATA
> mean.data.wilayah <- rata2
> koordinat <- ozone[,2:3]

> ozone.dists <- as.matrix(dist(cbind(ozone$Long, ozone$Lat)))
> ozone.dists.inv <- 1/ozone.dists
> diag(ozone.dists.inv) <- 0
> ozone.dists.inv[1:5, 1:5]
         1        2        3        4        5
1 0.000000 3.130749 2.868183 5.325525 2.632469
2 3.130749 0.000000 5.847130 3.557764 2.726813
3 2.868183 5.847130 0.000000 4.693390 4.740872
4 5.325525 3.557764 4.693390 0.000000 5.201668
5 2.632469 2.726813 4.740872 5.201668 0.000000
> ozone.dists.bin <- (ozone.dists > 0 & ozone.dists <= .2)
> Moran.I(rata2, ozone.dists.bin)
$observed
[1] 0.1730037

$expected
[1] -0.04761905

$sd
[1] 0.06633233

$p.value
[1] 0.0008809498
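The `Moran.I` call above (from the `ape` package) tests whether the site means are spatially autocorrelated. As a cross-check of what the statistic computes, here is a minimal Python sketch of Moran's I with a binary weight matrix; the toy values and neighbour matrix below are made up for illustration, not taken from the data above:

```python
import numpy as np

def morans_i(x, w):
    """Moran's I: (n / sum(w)) * (z' W z) / (z' z), with z = x - mean(x)."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    n = len(x)
    z = x - x.mean()
    num = (w * np.outer(z, z)).sum()   # cross-products weighted by adjacency
    den = (z ** 2).sum()
    return (n / w.sum()) * num / den

# toy example: two spatial clusters with a block-diagonal neighbour matrix
x = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]
w = np.zeros((6, 6))
w[:3, :3] = 1
w[3:, 3:] = 1
np.fill_diagonal(w, 0)
print(round(morans_i(x, w), 4))  # strongly positive: similar values are neighbours
```

A value near the expectation -1/(n-1) indicates no spatial autocorrelation; the observed 0.173 against -0.048 above is why the p-value is small.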
#=========================================================#
# PROPOSAL FUNCTION
#=========================================================#
> proposalfunction <- function(par){
+   # hyperparameters
+   a.phi=12; b.phi=1
+   a.sigma2=0.1*0.005; b.sigma2=0.005
+   a.beta=a.alpha=0.001
+   b.beta=b.alpha=0.001
+
+   # truncated gamma proposals, drawn by rejection
+   repeat { beta=rgamma(1,a.beta,b.beta);       if (beta>0.000038 & beta<0.000123) break }
+   repeat { alpha=rgamma(1,a.alpha,b.alpha);    if (alpha>1.636 & alpha<1.865) break }
+   repeat { phi=rgamma(1,a.phi,b.phi);          if (phi>6.528 & phi<20.274) break }
+   repeat { sigma2=rgamma(1,a.sigma2,b.sigma2); if (sigma2>0.06 & sigma2<0.432) break }
+
+   # truncated normal proposals for the trend coefficients psi
+   m=0; v=1000
+   psi0=matrix(urnorm(1,m,v,lb=10.35,ub=65.97))
+   psi1=matrix(urnorm(1,m,v,lb=-5.156,ub=-0.765))
+   psi2=matrix(urnorm(1,m,v,lb=-2.198,ub=-0.324))
+   psi=rbind(psi0,psi1,psi2)
+
+   # spatial random effect W ~ MVN(X psi, sigma2 * exp(-phi * d))
+   m1=rep(1,n1)
+   X=cbind(m1,ozone$Lat,ozone$Long)
+   mw=X%*%psi
+   vw=sigma2*exp(-phi*ozone.dists)
+   W=t(rmvnorm(1,mw,vw))
+
+   lamdha=abs(exp(W-beta*(tj^alpha)))
+   log.lamdha=abs(W-beta*(tj^alpha))
+   N=NULL
+   for (i in 1:n1) N[i]=rpois(1,log.lamdha[i,])
+
+   return(list(beta=beta,alpha=alpha,psi=psi,W=W,phi=phi,sigma2=sigma2,N=N))
+ }

> prop.beta=proposalfunction(2)
> prop.beta
$beta
[1] 8.449539e-05

$alpha
[1] 1.859809

$psi
           [,1]
[1,] 45.2908167
[2,] -1.5303263
[3,] -0.9955006

$W
          [,1]
 [1,] 113.5690
 [2,] 113.5118
 [3,] 114.4001
 [4,] 113.6071
 [5,] 114.1108
 [6,] 114.5744
 [7,] 114.1289
 [8,] 114.1230
 [9,] 114.2300
[10,] 114.1394
[11,] 114.1397
[12,] 114.6193
[13,] 114.2646
[14,] 114.9174
[15,] 114.5528
[16,] 113.5857
[17,] 114.2936
[18,] 114.7057
[19,] 113.6373
[20,] 114.6100
[21,] 114.0182
[22,] 113.4766

$phi
[1] 7.430909

$sigma2
[1] 0.1104207

$N
 [1]  97 127 111  99 109 132  98 118 114 133 101 120 113 118 106  86 106 116 115
[20] 127 104  94
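The spatial random effect `W` above is drawn from a multivariate normal with mean surface `X %*% psi` and exponential covariance `sigma2 * exp(-phi * d)`. A minimal Python sketch of that same construction, with made-up coordinates and parameter values rather than the ones estimated above:

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical site coordinates (lon, lat) and illustrative parameters
coords = rng.uniform(0, 1, size=(5, 2))
sigma2, phi = 0.2, 10.0           # sill and spatial decay rate
psi = np.array([1.0, 0.5, -0.3])  # intercept + coefficients on lat, lon

# pairwise distances and exponential covariance: cov(i,j) = sigma2 * exp(-phi * d_ij)
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
vw = sigma2 * np.exp(-phi * d)

# mean surface mw = X psi, with design matrix X = [1, lat, lon]
X = np.column_stack([np.ones(5), coords[:, 1], coords[:, 0]])
mw = X @ psi

W = rng.multivariate_normal(mw, vw)  # one draw of the spatial effect
print(W.shape)  # (5,)
```

The exponential covariance guarantees a valid (positive-definite) covariance matrix, with correlation decaying smoothly as inter-site distance grows.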
#=========================================================#
# LOG LIKELIHOOD FUNCTION
#=========================================================#
> log.likelihood <- function(par)
+ {
+   beta=prop.beta$beta; alpha=prop.beta$alpha; psi=prop.beta$psi
+   W=prop.beta$W; phi=prop.beta$phi; sigma2=prop.beta$sigma2
+
+   b1=log((beta*alpha)^n1)
+   b2o=NULL
+   for (f in 1:n1) b2o[f]=-beta*sum(data[[f]]^alpha)
+   b2=sum(b2o)
+   b3=sum(n2*W)
+   b4=sum(exp(W)*(1-exp(-beta*(n3^alpha))))
+   b5=b2+b3-b4
+   b6o=NULL
+   for (f in 1:n1) b6o[f]=log(sum(dot(data[[f]]^(alpha-1),data[[f]]^(alpha-1))))
+   b6=sum(b6o)
+   log.likelihood1=b1*b5*b6
+   return(log.likelihood1)
+ }

#=========================================================#
# PRIOR DISTRIBUTION
#=========================================================#
> prior <- function(par)
+ {
+   beta=prop.beta$beta; alpha=prop.beta$alpha; psi=prop.beta$psi
+   W=prop.beta$W; phi=prop.beta$phi; sigma2=prop.beta$sigma2
+   N=prop.beta$N
+
+   a.phi=12; b.phi=1
+   a.sigma2=0.1*0.005; b.sigma2=0.005
+   a=a.beta=a.alpha=0.001
+   b=b.beta=b.alpha=0.001
+
+   beta.prior=dgamma(beta,a,b,log=TRUE)
+   alpha.prior=dgamma(alpha,a,b,log=TRUE)
+   sigma2.prior=dgamma(sigma2,a.sigma2,b.sigma2,log=TRUE)
+   phi.prior=dgamma(phi,a.phi,b.phi,log=TRUE)
+
+   m=rep(0,3); v=diag(1000,3)
+   psi.prior=dmvnorm(t(psi),m,v,log=TRUE)
+
+   m1=rep(1,n1)
+   X=cbind(m1,ozone$Lat,ozone$Long)
+   mw=round(X%*%psi,3)
+   vw=sigma2*exp(-phi*ozone.dists)
+   W.prior=dmvnorm(t(W),mw,vw,log=TRUE)
+
+   tj=mean.data.wilayah
+   log.lamdha=abs(W-beta*(tj^alpha))
+   N.prior=dpois(N,log.lamdha,log=TRUE)
+
+   # joint log-prior: log densities add, so sum (not prod) the N.prior terms
+   d.prior=beta.prior+alpha.prior+sum(psi.prior)+W.prior+phi.prior+sigma2.prior+sum(N.prior)
+   return(list(beta.prior=beta.prior,alpha.prior=alpha.prior,psi.prior=psi.prior,
+               W.prior=W.prior,phi.prior=phi.prior,sigma2.prior=sigma2.prior,
+               N.prior=N.prior,prior=d.prior))
+ }

> prior=prior(prop.beta)
> beta.prior=prior$beta.prior
> alpha.prior=prior$alpha.prior
> psi.prior=prior$psi.prior
> W.prior=prior$W.prior
> phi.prior=prior$phi.prior
> sigma2.prior=prior$sigma2.prior
> N.prior=prior$N.prior
#=========================================================#
# ALPHA PARAMETER
#=========================================================#
> alpha.post <- function(par){
+   beta=prop.beta$beta; alpha=prop.beta$alpha
+   W=prop.beta$W; N=prop.beta$N
+   a.alpha=0.001; b.alpha=0.001
+
+   f1a=alpha^(a.alpha-1)
+   f1b=exp(-alpha*b.alpha)
+   f1=f1a*f1b
+   f2o=NULL
+   for (f in 1:n1) f2o[f]=sum(dot(data[[f]]^alpha,data[[f]]^alpha))
+   f2=sum(f2o)
+   f3o=NULL
+   for (f in 1:n1) f3o[f]=-beta*sum(data[[f]]^alpha)
+   f3=sum(f3o)
+   f4=-beta*sum(N*(tj^alpha))
+   f5=alpha^n4
+   alpha.post=f5*f1*f2*exp(f3+f4)
+ }

> run_metropolis_MCMC_alpha <- function(target, startvalue, iterations, burnIn, thin){
+   chain=list(chain.alpha=array(dim=c(iterations+1,1)))
+   chain$chain.alpha[1,]=startvalue
+   for (i in 1:iterations){
+     prop.alpha=proposalfunction(chain$chain.alpha[i])
+     probab.alpha=exp(target(prop.alpha)-target(chain$chain.alpha[i]))
+     if (runif(1) < probab.alpha){
+       chain$chain.alpha[i+1]=prop.alpha$alpha
+     } else {
+       chain$chain.alpha[i+1]=chain$chain.alpha[i]
+     }
+   }
+   x=0:(((iterations-burnIn)/thin)-1)
+   m=burnIn+(x*thin)+1
+   return(mcmc(matrix(chain$chain.alpha[m,]),start=burnIn+1,end=iterations,thin=thin))
+ }

> startvalue.beta=0.055
> startvalue.alpha=0.421
> startvalue.psi=c(0.4,0.3229,0.2702)
> startvalue.W=rep(17,n1)
> startvalue.phi=6.361
> startvalue.sigma2=0.183
> iterations=120
> burnIn=20
> thin=10

#=========================================================#

> hasil.alpha1=run_metropolis_MCMC_alpha(alpha.post,startvalue.alpha,iterations,burnIn,thin)
> hasil.alpha2=run_metropolis_MCMC_alpha(alpha.post,startvalue.alpha,iterations,burnIn,thin)
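Each `run_metropolis_MCMC_*` function above implements the same accept/reject step: propose a new value, accept it with probability given by the target ratio, otherwise keep the current value, then drop burn-in and thin the chain. A minimal Python sketch of that loop for a generic one-dimensional target (a standard normal here, purely for illustration):

```python
import math
import random

random.seed(1)

def log_target(x):
    # log-density of a standard normal target (up to an additive constant)
    return -0.5 * x * x

def metropolis(log_target, start, iterations, burn_in, thin):
    chain = [start]
    for _ in range(iterations):
        prop = chain[-1] + random.gauss(0.0, 1.0)  # symmetric random-walk proposal
        # accept with probability min(1, target(prop)/target(current))
        if math.log(random.random()) < log_target(prop) - log_target(chain[-1]):
            chain.append(prop)
        else:
            chain.append(chain[-1])
    return chain[burn_in::thin]  # drop burn-in, keep every thin-th draw

samples = metropolis(log_target, 0.0, 5000, 500, 10)
print(len(samples))
```

With a symmetric proposal the Hastings correction cancels, so only the target ratio appears in the acceptance probability; working on the log scale avoids overflow.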
#=========================================================#
# JOINT POSTERIOR DISTRIBUTION
#=========================================================#
> posterior <- function(par){
+   return(log.likelihood(par)+prior(par))
+ }

#=========================================================#
# MARGINAL POSTERIOR DISTRIBUTION
#=========================================================#
# BETA PARAMETER
#=========================================================#
> beta.post <- function(par){
+   beta=prop.beta$beta; alpha=prop.beta$alpha; N=prop.beta$N
+   a.beta=0.001; b.beta=0.001
+
+   c1=beta^(n1+a.beta-1)
+   c2o=NULL
+   for (f in 1:n1) c2o[f]=-beta*sum(data[[f]]^alpha)
+   c2=sum(c2o)
+   c3=-beta*b.beta
+   c4=-beta*sum(N*(tj^alpha))
+   beta.post=c1*exp(c2+c3+c4)
+ }

> run_metropolis_MCMC_beta <- function(target, startvalue, iterations, burnIn, thin){
+   chain=list(chain.beta=array(dim=c(iterations+1,1)))
+   chain$chain.beta[1,]=startvalue
+   for (i in 1:iterations){
+     prop.beta=proposalfunction(chain$chain.beta[i])
+     probab.beta=exp(target(prop.beta)-target(chain$chain.beta[i]))
+     if (runif(1) < probab.beta){
+       chain$chain.beta[i+1]=prop.beta$beta
+     } else {
+       chain$chain.beta[i+1]=chain$chain.beta[i]
+     }
+   }
+   x=0:(((iterations-burnIn)/thin)-1)
+   m=burnIn+(x*thin)+1
+   return(mcmc(matrix(chain$chain.beta[m,]),start=burnIn+1,end=iterations,thin=thin))
+ }

> hasil.beta1=run_metropolis_MCMC_beta(beta.post,startvalue.beta,iterations,burnIn,thin)
> hasil.beta2=run_metropolis_MCMC_beta(beta.post,startvalue.beta,iterations,burnIn,thin)
#=========================================================#
# PHI PARAMETER
#=========================================================#
> phi.post <- function(par){
+   psi=prop.beta$psi; W=prop.beta$W
+   phi=prop.beta$phi; sigma2=prop.beta$sigma2
+
+   m1=rep(1,n1)
+   X=cbind(m1,ozone$Lat,ozone$Long)
+   mw=round(X%*%psi,3)
+   vw=sigma2*exp(-phi*ozone.dists)
+
+   a.phi=12; b.phi=1
+
+   j2=det(vw)^(-0.5)
+   j3=phi^(a.phi-1)
+   g4=(-0.5*t(W-mw)%*%solve(vw)%*%(W-mw))-(b.phi*phi)
+   j4=exp(g4)
+   phi.post=j2*j4*j3
+ }

#=========================================================#
> run_metropolis_MCMC_phi <- function(target, startvalue, iterations, burnIn, thin){
+   chain=list(chain.phi=array(dim=c(iterations+1,1)))
+   chain$chain.phi[1,]=startvalue
+   for (i in 1:iterations){
+     prop.phi=proposalfunction(chain$chain.phi[i])
+     probab.phi=exp(target(prop.phi)-target(chain$chain.phi[i]))
+     if (runif(1) < probab.phi){
+       chain$chain.phi[i+1]=prop.phi$phi
+     } else {
+       chain$chain.phi[i+1]=chain$chain.phi[i]
+     }
+   }
+   x=0:(((iterations-burnIn)/thin)-1)
+   m=burnIn+(x*thin)+1
+   return(mcmc(matrix(chain$chain.phi[m,]),start=burnIn+1,end=iterations,thin=thin))
+ }

> hasil.phi1=run_metropolis_MCMC_phi(phi.post,startvalue.phi,iterations,burnIn,thin)
> hasil.phi2=run_metropolis_MCMC_phi(phi.post,startvalue.phi,iterations,burnIn,thin)
#=========================================================#
# PSI PARAMETER
#=========================================================#
> psi.post <- function(par){
+   psi=prop.beta$psi; W=prop.beta$W
+   phi=prop.beta$phi; sigma2=prop.beta$sigma2
+   psi.prior=prior$psi.prior
+
+   m1=rep(1,n1)
+   X=cbind(m1,ozone$Lat,ozone$Long)
+   mw=round(X%*%psi,3)
+   vw=sigma2*exp(-phi*ozone.dists)
+
+   h2=-psi.prior
+   g4=-0.5*t(W-mw)%*%solve(vw)%*%(W-mw)
+   psi.post=exp(h2+g4)
+ }

#=========================================================#
> run_metropolis_MCMC_psi <- function(target, startvalue, iterations, burnIn, thin){
+   na.psi=3
+   chain.psi=vector("list",iterations+1)
+   chain.psi[[1]]=matrix(startvalue,na.psi)
+   for (i in 1:iterations){
+     prop.psi=proposalfunction(chain.psi[[i]])
+     probab.psi=exp(target(prop.psi)-target(chain.psi[[i]]))
+     if (runif(1) < probab.psi){
+       chain.psi[[i+1]]=prop.psi$psi
+     } else {
+       chain.psi[[i+1]]=chain.psi[[i]]
+     }
+   }
+   x=0:(((iterations-burnIn)/thin)-1)
+   m=burnIn+(x*thin)+1
+   chainpsi=matrix(unlist(chain.psi),nrow=iterations+1,byrow=TRUE)
+   return(mcmc(chainpsi[m,],start=burnIn+1,end=iterations,thin=thin))
+ }

> hasil.psi1=run_metropolis_MCMC_psi(psi.post,startvalue.psi,iterations,burnIn,thin)
> hasil.psi2=run_metropolis_MCMC_psi(psi.post,startvalue.psi,iterations,burnIn,thin)
#=========================================================#
# SIGMA2 PARAMETER
#=========================================================#
> sigma2.post <- function(par){
+   psi=prop.beta$psi; W=prop.beta$W
+   phi=prop.beta$phi; sigma2=prop.beta$sigma2
+
+   m1=rep(1,n1)
+   X=cbind(m1,ozone$Lat,ozone$Long)
+   mw=round(X%*%psi,3)
+   vw=sigma2*exp(-phi*ozone.dists)
+
+   a.sigma2=0.1*0.005; b.sigma2=0.005
+
+   k2=sigma2^(a.sigma2-1)
+   k3=det(vw)^(-0.5)
+   g4=(-0.5*t(W-mw)%*%solve(vw)%*%(W-mw))-(b.sigma2*sigma2)
+   k4=exp(g4)
+   sigma2.post=k2*k3*k4
+   return(sigma2.post)
+ }

#=========================================================#
> run_metropolis_MCMC_sigma2 <- function(target, startvalue, iterations, burnIn, thin){
+   chain=list(chain.sigma2=array(dim=c(iterations+1,1)))
+   chain$chain.sigma2[1,]=startvalue
+   for (i in 1:iterations){
+     prop.sigma2=proposalfunction(chain$chain.sigma2[i])
+     probab.sigma2=exp(target(prop.sigma2)-target(chain$chain.sigma2[i]))
+     if (runif(1) < probab.sigma2){
+       chain$chain.sigma2[i+1]=prop.sigma2$sigma2
+     } else {
+       chain$chain.sigma2[i+1]=chain$chain.sigma2[i]
+     }
+   }
+   x=0:(((iterations-burnIn)/thin)-1)
+   m=burnIn+(x*thin)+1
+   return(mcmc(matrix(chain$chain.sigma2[m,]),start=burnIn+1,end=iterations,thin=thin))
+ }

> hasil.sigma21=run_metropolis_MCMC_sigma2(sigma2.post,startvalue.sigma2,iterations,burnIn,thin)
> hasil.sigma22=run_metropolis_MCMC_sigma2(sigma2.post,startvalue.sigma2,iterations,burnIn,thin)
#=========================================================#
# W PARAMETER
#=========================================================#
> W.post <- function(par){
+   psi=prop.beta$psi; W=prop.beta$W; phi=prop.beta$phi
+   sigma2=prop.beta$sigma2; N=prop.beta$N
+
+   m1=rep(1,n1)
+   X=cbind(m1,ozone$Lat,ozone$Long)
+   mw=round(X%*%psi,3)
+   vw=sigma2*exp(-phi*ozone.dists)
+
+   g1=sum(n2*W)
+   g2=sum(N.prior*W)
+   g3=sum(exp(W))
+   g4=-0.5*t(W-mw)%*%solve(vw)%*%(W-mw)
+   W.post=exp(g1+g2-g3-g4)
+ }

#=========================================================#
> run_metropolis_MCMC_W <- function(target, startvalue, iterations, burnIn, thin){
+   chain.W=vector("list",iterations+1)
+   chain.W[[1]]=matrix(startvalue,n1)
+   for (i in 1:iterations){
+     prop.W=proposalfunction(chain.W[[i]])
+     probab.W=exp(target(prop.W)-target(chain.W[[i]]))
+     if (runif(1) < probab.W){
+       chain.W[[i+1]]=prop.W$W
+     } else {
+       chain.W[[i+1]]=chain.W[[i]]
+     }
+   }
+   x=0:(((iterations-burnIn)/thin)-1)
+   m=burnIn+(x*thin)+1
+   chainW=matrix(unlist(chain.W),nrow=iterations+1,byrow=TRUE)
+   return(mcmc(chainW[m,],start=burnIn+1,end=iterations,thin=thin))
+ }

> hasil.W1=run_metropolis_MCMC_W(W.post,startvalue.W,iterations,burnIn,thin)
> hasil.W2=run_metropolis_MCMC_W(W.post,startvalue.W,iterations,burnIn,thin)

> obj.beta1<-hasil.beta1;      obj.beta2<-hasil.beta2
> obj.alpha1<-hasil.alpha1;    obj.alpha2<-hasil.alpha2
> obj.phi1<-hasil.phi1;        obj.phi2<-hasil.phi2
> obj.sigma2.1<-hasil.sigma21; obj.sigma2.2<-hasil.sigma22
> obj.psi1<-hasil.psi1;        obj.psi2<-hasil.psi2

> obj.beta<-mcmc.list(obj.beta1,obj.beta2)
> obj.alpha<-mcmc.list(obj.alpha1,obj.alpha2)
> obj.phi<-mcmc.list(obj.phi1,obj.phi2)
> obj.sigma2<-mcmc.list(obj.sigma2.1,obj.sigma2.2)
> obj.psi<-mcmc.list(obj.psi1,obj.psi2)

> gelman.diag(obj.beta)
Potential scale reduction factors:

     Point est. Upper C.I.
[1,]       1.01       1.07

> gelman.diag(obj.alpha)
Potential scale reduction factors:

     Point est. Upper C.I.
[1,]       1.17       1.72

> gelman.diag(obj.phi)
Potential scale reduction factors:

     Point est. Upper C.I.
[1,]       1.25       2.03

> gelman.diag(obj.sigma2)
Potential scale reduction factors:

     Point est. Upper C.I.
[1,]      0.918      0.925

> gelman.diag(obj.psi)
Potential scale reduction factors:

     Point est. Upper C.I.
[1,]      0.928      0.971
[2,]      0.972      1.130
[3,]      0.935      0.964

Multivariate psrf
0.987

> summary(hasil.beta1)
Iterations = 21:111
Thinning interval = 10
Number of chains = 1
Sample size per chain = 10

1. Empirical mean and standard deviation for each variable,
   plus standard error of the mean:

          Mean        SD  Naive SE Time-series SE
     6.819e-05 1.693e-05 5.353e-06      5.353e-06

2. Quantiles for each variable:

     2.5%       25%       50%       75%     97.5%
4.443e-05 5.887e-05 6.752e-05 7.478e-05 9.714e-05

> summary(hasil.beta2)

Iterations = 21:111
Thinning interval = 10
Number of chains = 1
Sample size per chain = 10

1. Empirical mean and standard deviation for each variable,
   plus standard error of the mean:

          Mean        SD  Naive SE Time-series SE
     6.871e-05 2.942e-05 9.304e-06      9.304e-06

2. Quantiles for each variable:

     2.5%       25%       50%       75%     97.5%
4.089e-05 4.544e-05 5.606e-05 9.073e-05 1.177e-04

> summary(hasil.alpha1)

Iterations = 21:111
Thinning interval = 10
Number of chains = 1
Sample size per chain = 10

1. Empirical mean and standard deviation for each variable,
   plus standard error of the mean:

        Mean      SD Naive SE Time-series SE
     1.75380 0.07869  0.02488        0.02488

2. Quantiles for each variable:

 2.5%   25%   50%   75% 97.5%
1.648 1.685 1.755 1.815 1.856

> summary(hasil.alpha2)

Iterations = 21:111
Thinning interval = 10
Number of chains = 1
Sample size per chain = 10

1. Empirical mean and standard deviation for each variable,
   plus standard error of the mean:

       Mean     SD Naive SE Time-series SE
     1.7591 0.0721   0.0228         0.0228

2. Quantiles for each variable:

 2.5%   25%   50%   75% 97.5%
1.654 1.698 1.780 1.814 1.854

> summary(hasil.phi1)

Iterations = 21:111
Thinning interval = 10
Number of chains = 1
Sample size per chain = 10

1. Empirical mean and standard deviation for each variable,
   plus standard error of the mean:

        Mean     SD Naive SE Time-series SE
     12.7246 3.0773   0.9731         0.9731

2. Quantiles for each variable:

 2.5%    25%    50%    75%  97.5%
8.993 10.358 13.047 13.890 18.018

> summary(hasil.phi2)

Iterations = 21:111
Thinning interval = 10
Number of chains = 1
Sample size per chain = 10

1. Empirical mean and standard deviation for each variable,
   plus standard error of the mean:

        Mean     SD Naive SE Time-series SE
     10.1635 2.0377   0.6444         0.6444

2. Quantiles for each variable:

 2.5%   25%    50%    75%  97.5%
6.926 8.877 10.374 11.326 12.983

> summary(hasil.sigma21)

Iterations = 21:111
Thinning interval = 10
Number of chains = 1
Sample size per chain = 10

1. Empirical mean and standard deviation for each variable,
   plus standard error of the mean:

        Mean      SD Naive SE Time-series SE
     0.22619 0.12347  0.03904        0.03904

2. Quantiles for each variable:

  2.5%    25%    50%    75%  97.5%
0.0671 0.1036 0.2470 0.3254 0.3906

> summary(hasil.sigma22)

Iterations = 21:111
Thinning interval = 10
Number of chains = 1
Sample size per chain = 10

1. Empirical mean and standard deviation for each variable,
   plus standard error of the mean:

       Mean     SD Naive SE Time-series SE
     0.1995 0.1075   0.0340         0.0340

2. Quantiles for each variable:

   2.5%     25%     50%     75%   97.5%
0.07402 0.11909 0.15245 0.29801 0.34810
> summary(hasil.psi1)

Iterations = 21:111
Thinning interval = 10
Number of chains = 1
Sample size per chain = 10

1. Empirical mean and standard deviation for each variable,
   plus standard error of the mean:

       Mean      SD Naive SE Time-series SE
[1,] 36.293 16.7734   5.3042         5.3042
[2,] -3.607  1.3263   0.4194         0.4194
[3,] -1.274  0.5864   0.1854         0.2101

2. Quantiles for each variable:

       2.5%    25%    50%     75%   97.5%
var1 14.339 27.161 30.984 46.8584 63.7397
var2 -5.102 -4.330 -3.995 -3.2466 -1.2284
var3 -1.921 -1.782 -1.469 -0.7025 -0.4585
> summary(hasil.W1)

Iterations = 21:111
Thinning interval = 10
Number of chains = 1
Sample size per chain = 10

1. Empirical mean and standard deviation for each variable,
   plus standard error of the mean:

       Mean    SD Naive SE Time-series SE
 [1,] 110.2 44.91    14.20          14.20
 [2,] 111.3 44.93    14.21          14.21
 [3,] 111.5 45.08    14.26          14.26
 [4,] 110.9 45.01    14.23          14.23
 [5,] 111.5 45.02    14.24          14.24
 [6,] 111.9 45.13    14.27          14.27
 [7,] 111.7 45.02    14.24          14.24
 [8,] 111.3 44.83    14.18          14.18
 [9,] 111.3 45.03    14.24          14.24
[10,] 111.4 44.92    14.21          14.21
[11,] 111.3 44.88    14.19          14.19
[12,] 111.9 44.98    14.22          14.22
[13,] 111.8 45.29    14.32          14.32
[14,] 111.8 44.98    14.22          14.22
[15,] 111.9 45.23    14.30          14.30
[16,] 111.1 44.85    14.18          14.18
[17,] 111.6 44.91    14.20          14.20
[18,] 111.8 45.13    14.27          14.27
[19,] 111.0 44.90    14.20          14.20
[20,] 112.0 44.86    14.19          14.19
[21,] 111.4 45.08    14.26          14.26
[22,] 111.0 45.13    14.27          14.27

2. Quantiles for each variable:

       2.5%   25%   50%   75% 97.5%
var1  32.14 92.05 121.2 140.4 161.6
var2  33.14 93.91 122.0 141.6 162.6
var3  33.01 94.09 122.1 141.7 163.3
var4  32.45 93.32 122.0 141.0 162.3
var5  32.92 94.68 122.5 142.2 162.4
var6  33.48 94.64 122.5 142.5 163.7
var7  33.69 93.95 122.7 142.3 163.2
var8  33.27 93.93 121.9 141.5 162.5
var9  32.94 94.24 121.9 141.3 163.2
var10 33.19 94.03 122.0 141.5 162.8
var11 33.06 93.93 122.0 141.5 162.2
var12 33.93 94.57 122.3 142.3 163.7
var13 32.92 94.62 122.6 142.3 163.5
var14 33.76 94.44 122.4 142.3 163.3
var15 33.44 94.55 122.5 142.5 164.0
var16 33.18 93.57 121.7 141.5 162.4
var17 33.35 94.69 122.2 141.7 162.9
var18 33.47 94.22 122.4 142.4 163.7
var19 32.68 93.75 122.1 141.2 162.3
var20 34.35 94.56 122.6 142.9 163.4
var21 32.93 94.07 122.0 141.6 163.3
var22 32.31 93.42 121.8 141.1 162.8
> summary(hasil.W2)

Iterations = 21:111
Thinning interval = 10
Number of chains = 1
Sample size per chain = 10

1. Empirical mean and standard deviation for each variable,
   plus standard error of the mean:

       Mean    SD Naive SE Time-series SE
 [1,] 130.8 69.01    21.82          21.82
 [2,] 131.6 68.73    21.74          21.74
 [3,] 131.9 68.65    21.71          21.71
 [4,] 131.2 68.69    21.72          21.72
 [5,] 131.8 68.57    21.68          21.68
 [6,] 132.0 68.64    21.70          21.70
 [7,] 132.1 68.98    21.81          21.81
 [8,] 131.8 68.74    21.74          21.74
 [9,] 132.0 68.59    21.69          21.69
[10,] 131.7 68.72    21.73          21.73
[11,] 131.8 68.57    21.68          21.68
[12,] 132.2 68.84    21.77          21.77
[13,] 131.9 68.77    21.75          21.75
[14,] 132.0 68.76    21.74          21.74
[15,] 132.3 68.73    21.74          21.74
[16,] 131.6 68.86    21.78          21.78
[17,] 132.0 68.72    21.73          21.73
[18,] 131.9 68.66    21.71          21.71
[19,] 131.7 68.87    21.78          21.78
[20,] 132.5 68.87    21.78          21.78
[21,] 131.9 68.77    21.75          21.75
[22,] 131.4 68.97    21.81          21.81

2. Quantiles for each variable:

       2.5%   25%   50%   75% 97.5%
var1  51.58 78.52 114.5 193.1 231.8
var2  53.21 79.51 115.0 194.1 231.8
var3  53.28 80.00 115.4 194.6 231.7
var4  52.56 79.17 114.8 193.5 231.3
var5  53.00 79.84 115.6 193.5 232.1
var6  53.54 80.07 115.4 194.0 232.3
var7  53.01 80.05 115.5 195.0 232.5
var8  53.01 80.00 114.9 194.5 232.0
var9  53.25 80.10 115.4 194.5 232.0
var10 53.10 79.92 114.8 194.3 232.0
var11 53.53 80.15 114.9 194.2 231.9
var12 53.35 80.10 115.7 195.0 232.4
var13 53.48 80.10 115.1 194.3 232.4
var14 53.44 80.16 115.2 194.8 232.2
var15 53.80 80.07 115.8 194.8 232.5
var16 52.98 79.60 114.7 194.2 232.1
var17 53.31 79.71 115.6 194.4 232.0
var18 53.30 80.11 115.3 194.0 232.1
var19 53.00 79.51 115.0 194.7 232.0
var20 53.48 80.17 116.3 195.8 232.2
var21 53.11 79.91 115.5 194.9 231.9
var22 52.22 79.07 115.0 193.9 232.1
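The `gelman.diag` calls earlier report the potential scale reduction factor (PSRF), which compares within-chain and between-chain variance across the two parallel runs; values near 1 suggest convergence. A minimal Python sketch of the basic (uncorrected) statistic for m chains of length n — note that `coda` applies additional finite-sample corrections, so its numbers will differ slightly:

```python
import numpy as np

def psrf(chains):
    """Basic Gelman-Rubin statistic for a list of equal-length 1-D chains."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()  # within-chain variance
    B = n * chain_means.var(ddof=1)        # between-chain variance
    var_hat = (n - 1) / n * W + B / n      # pooled estimate of posterior variance
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
# two chains sampling the same distribution: PSRF should be close to 1
same = [rng.normal(0, 1, 1000), rng.normal(0, 1, 1000)]
# two chains stuck in different regions: PSRF well above 1
apart = [rng.normal(0, 1, 1000), rng.normal(5, 1, 1000)]
print(round(psrf(same), 2), round(psrf(apart), 2))
```

By this logic the beta chains above (PSRF 1.01) look well mixed, while phi (1.25, upper C.I. 2.03) would normally call for longer runs than the 120 iterations used here.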
> log.likelihood(1)
[1] 4.777355e+55
> log.likelihood=log.likelihood(1)
> likelihood=exp(log.likelihood)
> D.bar<- -2*mean(log.likelihood)
> D.bar
[1] -9.55471e+55

> mean.beta1=7.241e-05
> mean.beta2=7.253e-05
> mean.alpha1=1.7475165
> mean.alpha2=1.7473720
> mean.phi1=12.03708
> mean.phi2=12.05122
> mean.sigma2.1=0.189456
> mean.sigma2.2=0.187189

> x=0:(((iterations-burnIn)/thin)-1)
> m=burnIn+(x*thin)+1

> beta1=hasil.beta1[length(m),]
> beta2=hasil.beta2[length(m),]
> alpha1=hasil.alpha1[length(m),]
> alpha2=hasil.alpha2[length(m),]
> phi1=hasil.phi1[length(m),]
> phi2=hasil.phi2[length(m),]
> sigma2.1=hasil.sigma21[length(m),]
> sigma2.2=hasil.sigma22[length(m),]
> psi1=hasil.psi1[length(m),]
> psi2=hasil.psi2[length(m),]

> D.hat.beta1=-2*beta1
> D.hat.beta1
[1] -8.565151e-05
> D.hat.beta2=-2*beta2
> D.hat.beta2
[1] -0.0002152555
> D.hat.alpha1=-2*alpha1
> D.hat.alpha1
[1] -3.491078
> D.hat.alpha2=-2*alpha2
> D.hat.alpha2
[1] -3.370302
> D.hat.phi1=-2*phi1
> D.hat.phi1
[1] -21.29575
> D.hat.phi2=-2*phi2
> D.hat.phi2
[1] -22.8782
> D.hat.sigma2.1=-2*sigma2.1
> D.hat.sigma2.1
[1] -0.6151423
> D.hat.sigma2.2=-2*sigma2.2
> D.hat.sigma2.2
[1] -0.7036207

> pD.beta1 <- D.bar - D.hat.beta1
> pD.beta2 <- D.bar - D.hat.beta2
> pD.alpha1 <- D.bar - D.hat.alpha1
> pD.alpha2 <- D.bar - D.hat.alpha2
> pD.phi1 <- D.bar - D.hat.phi1
> pD.phi2 <- D.bar - D.hat.phi2
> pD.sigma2.1 <- D.bar - D.hat.sigma2.1
> pD.sigma2.2 <- D.bar - D.hat.sigma2.2
> pD.beta1
[1] -9.55471e+55
> pD.beta2
[1] -9.55471e+55
> pD.alpha1
[1] -9.55471e+55
> pD.alpha2
[1] -9.55471e+55
> pD.phi1
[1] -9.55471e+55
> pD.phi2
[1] -9.55471e+55
> pD.sigma2.1
[1] -9.55471e+55
> pD.sigma2.2
[1] -9.55471e+55

> DIC.beta1<-pD.beta1+D.bar
> DIC.beta2<-pD.beta2+D.bar
> DIC.alpha1<-pD.alpha1+D.bar
> DIC.alpha2<-pD.alpha2+D.bar
> DIC.phi1<-pD.phi1+D.bar
> DIC.phi2<-pD.phi2+D.bar
> DIC.sigma2.1<-pD.sigma2.1+D.bar
> DIC.sigma2.2<-pD.sigma2.2+D.bar

> DIC<-rbind(DIC.beta1=DIC.beta1,DIC.beta2=DIC.beta2,DIC.alpha1=DIC.alpha1,
+            DIC.alpha2=DIC.alpha2,DIC.phi1=DIC.phi1,DIC.phi2=DIC.phi2,
+            DIC.sigma2.1=DIC.sigma2.1,DIC.sigma2.2=DIC.sigma2.2)
> DIC
                      [,1]
DIC.beta1    -1.910942e+56
DIC.beta2    -1.910942e+56
DIC.alpha1   -1.910942e+56
DIC.alpha2   -1.910942e+56
DIC.phi1     -1.910942e+56
DIC.phi2     -1.910942e+56
DIC.sigma2.1 -1.910942e+56
DIC.sigma2.2 -1.910942e+56
> DIC.beta1-DIC.beta2
[1] 0
> DIC.alpha1-DIC.alpha2
[1] 0
> DIC.phi1-DIC.phi2
[1] 0
> DIC.sigma2.1-DIC.sigma2.2
[1] 0

> beta=beta1
> alpha=alpha1
> sigma2=sigma2.2
> phi=phi2

> W=hasil.W1[length(m),]
> p1=t(exp(W))
> p2o=NULL
> for (f in 1:n1) p2o[f]=beta*alpha*data[[f]]^(alpha-1)
There were 22 warnings (use warnings() to see them)
> p2=p2o
> p3o=NULL
> for (f in 1:n1) p3o[f]=exp(-beta*(data[[f]]^alpha))
There were 22 warnings (use warnings() to see them)
> p3=p3o

> rata2.rank  <- read.table("jumlah baru mei dirank.csv", header=TRUE, sep=";")
> rata2.rank1 <- read.table("rata2 baru mei dirank.csv", header=TRUE, sep=";")

> mean.snhpp=p1*(1-p3)
> log.mean.snhpp=log(mean.snhpp)
> log.mean.snhpp
        [,1]     [,2]     [,3]     [,4]     [,5]     [,6]     [,7]     [,8]
[1,] 160.856 161.8093 162.8429 161.7383 164.3655 163.0207 162.4406 161.8066
        [,9]    [,10]    [,11]    [,12]    [,13]    [,14]    [,15]    [,16]
[1,] 162.8325 163.7615 161.6297 162.9942 162.7554 162.5355 163.4167 161.5743
       [,17]    [,18]    [,19]    [,20]    [,21]    [,22]
[1,] 162.4133 163.0269 161.7859 163.0163 162.5874 162.5492

> SNHPP1=p1*p2*p3
> SNHPP1
             [,1]         [,2]         [,3]         [,4]         [,5]
[1,] 5.455587e+68 1.480252e+69 3.515418e+69 1.318314e+69 4.126419e+69
             [,6]         [,7]         [,8]         [,9]        [,10]
[1,] 4.971097e+69 2.783076e+69 1.352266e+69 3.478919e+69 3.872945e+69
            [,11]        [,12]        [,13]        [,14]        [,15]
[1,] 1.044928e+69 4.840973e+69  3.81267e+69 3.059999e+69 7.386176e+69
            [,16]        [,17]        [,18]        [,19]        [,20]
[1,]  1.17026e+69 2.287816e+69 5.001907e+69 1.324565e+69 4.180939e+69
            [,21]        [,22]
[1,] 3.222895e+69 2.620884e+69
> SNHPP=log(SNHPP1)
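The quantities above follow the Weibull-type SNHPP structure: an intensity of the form lambda(t) = exp(W) * beta * alpha * t^(alpha-1) * exp(-beta * t^alpha), whose mean function over [0, t] is m(t) = exp(W) * (1 - exp(-beta * t^alpha)), the expression behind `mean.snhpp`. A minimal Python sketch verifying that the closed-form mean matches numerical integration of the intensity, using illustrative parameter values rather than the estimates above:

```python
import math

def intensity(t, W, beta, alpha):
    # lambda(t) = exp(W) * beta * alpha * t^(alpha-1) * exp(-beta * t^alpha)
    return math.exp(W) * beta * alpha * t ** (alpha - 1) * math.exp(-beta * t ** alpha)

def mean_function(t, W, beta, alpha):
    # m(t) = integral_0^t lambda(u) du = exp(W) * (1 - exp(-beta * t^alpha))
    return math.exp(W) * (1.0 - math.exp(-beta * t ** alpha))

W, beta, alpha, t = 2.0, 8e-5, 1.75, 365.0  # illustrative values, one year horizon

# numerical check by the midpoint rule
steps = 200000
h = t / steps
quad = sum(intensity((i + 0.5) * h, W, beta, alpha) for i in range(steps)) * h
print(round(mean_function(t, W, beta, alpha), 4), round(quad, 4))
```

The expected number of threshold exceedances at a site over [0, t] is exactly this mean function, which is why `mean.snhpp` drives the site rankings below.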
> cbind(t(SNHPP),sort(t(SNHPP)),order(t(SNHPP)),rata2.rank)
   t(SNHPP) sort(t(SNHPP)) order(t(SNHPP)) LOCR WILAYAHR JUMLAHR
1  158.2724       158.2724               1  CHO        5      12
2  159.2706       158.9223              11  ACO        1      17
3  160.1355       159.0356              16  MON        4      23
4  159.1547       159.1547               4  XAL       22      23
5  160.2958       159.1595              19  CES        3      25
6  160.4820       159.1802               8  TAX       18      28
7  159.9019       159.2706               2  TLA       19      32
8  159.1802       159.7060              17  LAG       10      36
9  160.1251       159.8419              22  SAG       14      44
10 160.2324       159.9019               7  MER       11      45
11 158.9223       159.9968              14  TAH       17      47
12 160.4555       160.0487              21  FAC        8      48
13 160.2167       160.1251               9  UIZ       21      51
14 159.9968       160.1355               3  AZC        2      52
15 160.8780       160.2167              13  CUA        7      58
16 159.0356       160.2324              10  IZT        9      63
17 159.7060       160.2958               5  TAC       16      65
18 160.4882       160.3089              20  PLA       13      68
19 159.1595       160.4555              12  TPN       20      72
20 160.3089       160.4820               6  PED       12      79
21 160.0487       160.4882              18  COY        6      85
22 159.8419       160.8780              15  SUR       15      97
   LOC WILAYAH JUMLAH
1  ACO       1     17
2  AZC       2     52
3  CES       3     25
4  MON       4     23
5  CHO       5     12
6  COY       6     85
7  CUA       7     58
8  FAC       8     48
9  IZT       9     63
10 LAG      10     36
11 MER      11     45
12 PED      12     79
13 PLA      13     68
14 SAG      14     44
15 SUR      15     97
16 TAC      16     65
17 TAH      17     47
18 TAX      18     28
19 TLA      19     32
20 TPN      20     72
21 UIZ      21     51
22 XAL      22     23
> cbind(NHPPS=order(t(SNHPP)),JUMLAH=rata2.rank$WILAYAHR)
      NHPPS JUMLAH
 [1,]     1      5
 [2,]    11      1
 [3,]    16      4
 [4,]     4     22
 [5,]    19      3
 [6,]     8     18
 [7,]     2     19
 [8,]    17     10
 [9,]    22     14
[10,]     7     11
[11,]    14     17
[12,]    21      8
[13,]     9     21
[14,]     3      2
[15,]    13      7
[16,]    10      9
[17,]     5     16
[18,]    20     13
[19,]    12     20
[20,]     6     12
[21,]    18      6
[22,]    15     15