Two weeks before publishing this post, I started reading *Statistical Rethinking: A Bayesian Course with Examples in R and Stan* by Richard McElreath. Richard is an evolutionary anthropologist at the Max Planck Institute. He wrote this textbook for PhD students who will use Bayesian statistics in their research projects. Compared with textbooks written by statisticians and data scientists, Richard's book explains and demonstrates statistical methods with examples instead of equations. His intention is to help readers who are not statisticians, but who use statistics routinely, realize one fact: we rely on statistical models as representations of our answers, rather than answering questions with raw data or naked truth. Many non-statisticians are used to finding and learning which methods or applications will handle their data, but few are interested in the models underneath the methods and applications they use. The trouble, and the danger, is that they think their job is done once the program prints tables and figures, even though those outputs come from a statistical model that may be unable to answer their question. This situation arises because many non-statisticians consider themselves end users of statistical models. Like any user of packaged software, they have no time to understand how the tools in their hands were designed and built by statisticians.

Richard introduces the story of the golem to draw non-statisticians' attention to the trouble they have made and will make. A statistical model, like a golem, has power beyond human ability to finish work that humans cannot do, for example tracing a passenger's route through trillions of camera frames. Its power can be misused, or go out of control, if we do not understand the roots of its actions. A user of statistical methods, statistician or not, has to keep an engineer's awareness when dealing with data. Today everyone has many easier ways than a decade ago to keep that awareness. One advantage is that the learning curve for becoming a part-time hacker keeps getting smoother. A growing number of R packages are opening windows for anyone who wants to look into, and modify, the statistical models they use.

Starting with this post, every post listed in the category `Rethinking` is one of my summaries of, and responses to, *Statistical Rethinking: A Bayesian Course with Examples in R and Stan*. First I have to check my toolkit for creating and manipulating statistical models: the R core and its packages. Over the years I have come to rely on the following packages in my data processing. I list them here for anyone who starts using R after reading this post.

```
install.packages(c("rpart", "chron", "Hmisc", "Design", "Matrix", "stringr",
                   "lme4", "coda", "e1071", "zipfR", "ape", "languageR",
                   "multcomp", "contrast", "shiny", "ggplot2", "dplyr"))
```

Some of these packages I learned about while taking the Coursera Data Science specialization. These days I use `dplyr` to process raw data, and I am learning how to draw the figures I need with `ggplot2`. As of this post, I have updated to R version 3.4.3 (2017-11-30). Using the code from Heuristic Andrew, here are my installed packages.

```
# list installed packages with their versions, keeping only the
# Package, Version, and Priority columns
ip <- as.data.frame(installed.packages()[, c(1, 3:4)])
rownames(ip) <- NULL
# packages with a non-NA Priority are base or recommended; drop them
ip <- ip[is.na(ip$Priority), 1:2, drop = FALSE]
print(ip, row.names = FALSE)
```

```
## Package Version
## acepack 1.4.1
## ape 5.0
## assertthat 0.2.0
## backports 1.1.2
## base64enc 0.1-3
## beginr 0.1.0
## BH 1.65.0-1
## bindr 0.1
## bindrcpp 0.2
## binom 1.1-1
## bitops 1.0-6
## blogdown 0.4
## bookdown 0.5
## bookdownplus 1.3.2
## car 2.1-6
## caTools 1.17.1
## checkmate 1.8.5
## chron 2.3-52
## cli 1.0.0
## coda 0.19-1
## colorspace 1.3-2
## contrast 0.21
## cranlogs 2.1.0
## crayon 1.3.4
## curl 3.1
## data.table 1.10.4-3
## devtools 1.13.4
## dichromat 2.0-0
## digest 0.6.13
## dplyr 0.7.4
## e1071 1.6-8
## evaluate 0.10.1
## Formula 1.2-2
## geepack 1.2-1
## ggplot2 2.2.1
## git2r 0.21.0
## glue 1.2.0
## gridExtra 2.3
## gtable 0.2.0
## highr 0.6
## Hmisc 4.1-1
## htmlTable 1.11.1
## htmltools 0.3.6
## htmlwidgets 0.9
## httpuv 1.3.5
## httr 1.3.1
## iterators 1.0.9
## jsonlite 1.5
## knitr 1.18
## labeling 0.3
## languageR 1.4.1
## later 0.6
## latticeExtra 0.6-28
## lazyeval 0.2.1
## lme4 1.1-15
## magrittr 1.5
## markdown 0.8
## MatrixModels 0.4-1
## memoise 1.1.0
## mime 0.5
## miniUI 0.1.1
## minqa 1.2.4
## multcomp 1.4-8
## munsell 0.4.3
## mvtnorm 1.0-6
## nloptr 1.0.4
## openssl 0.9.9
## pbkrtest 0.4-7
## pBrackets 1.0
## pillar 1.0.1
## pkgconfig 2.0.1
## plogr 0.1-1
## plotrix 3.7
## plyr 1.8.4
## polspline 1.1.12
## quantreg 5.34
## R6 2.2.2
## RColorBrewer 1.1-2
## Rcpp 0.12.14
## RcppEigen 0.3.3.3.1
## reshape2 1.4.3
## rlang 0.1.6
## RLRsim 3.1-3
## rmarkdown 1.8.6
## rms 5.1-2
## rprojroot 1.3-2
## rstudioapi 0.7
## rticles 0.4.1
## sandwich 2.4-0
## scales 0.5.0
## servr 0.8
## shiny 1.0.5
## simr 1.0.3
## sourcetools 0.1.6
## SparseM 1.77
## stringi 1.1.6
## stringr 1.2.0
## TH.data 1.0-8
## tibble 1.4.1
## tinytex 0.2.8
## utf8 1.1.3
## viridis 0.4.1
## viridisLite 0.2.0
## whisker 0.3-2
## withr 2.1.1
## xtable 1.8-2
## yaml 2.1.16
## zipfR 0.6-10
## zoo 1.8-1
```

```
print(paste0("There are ", dim(ip)[1], " packages installed in my laptop."))
```

```
## [1] "There are 109 packages installed in my laptop."
```
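Since I mentioned `dplyr` for processing raw data and `ggplot2` for figures, here is a minimal sketch of how the two fit together. The data frame and its column names are invented purely for illustration:

```r
library(dplyr)
library(ggplot2)

# a made-up data set: reaction times (ms) under two conditions
d <- data.frame(
  condition = rep(c("A", "B"), each = 5),
  rt        = c(310, 295, 330, 305, 320, 280, 270, 290, 285, 275)
)

# dplyr: summarise the raw data by condition
d_sum <- d %>%
  group_by(condition) %>%
  summarise(mean_rt = mean(rt))
print(d_sum)

# ggplot2: raw points with the group means overlaid in red
p <- ggplot(d, aes(condition, rt)) +
  geom_point() +
  geom_point(data = d_sum, aes(y = mean_rt), colour = "red", size = 3)
# print(p)  # draws the figure in an interactive session
```

The nice part of this workflow is that the summary table feeds straight into the plot as a second layer, so the figure and the numbers can never drift apart.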

Richard’s book inspired me to help people control their golems, their statistical models, through the process of coding. In his book, the prose and the code are kept separate, so readers unfamiliar with coding may find the prose hard to follow on its own. Literate programming might be the best way to implement the spirit of `Rethinking`. In the coming posts I am going to accumulate Bayesian statistics code and take notes on his writing and that of others.
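As a sketch of what literate programming looks like in R, an R Markdown file interleaves prose with executable chunks, so every number in the text is computed when the document is built (the values below are made up):

````markdown
The mean reaction time reported below is computed when the document
is knitted, not copied in by hand.

```{r}
rt <- c(310, 295, 330)
mean(rt)
```
````

If a golem misbehaves, the code that commanded it is right there next to the claim it supports.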