
The message title explains the problem.
```
sessionInfo()

R version 3.2.3 (2015-12-10)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)

locale:
[1] LC_COLLATE=Portuguese_Brazil.1252  LC_CTYPE=Portuguese_Brazil.1252
[3] LC_MONETARY=Portuguese_Brazil.1252 LC_NUMERIC=C
[5] LC_TIME=Portuguese_Brazil.1252

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] nnet_7.3-11         caret_6.0-64        ggplot2_2.0.0.9001  lattice_0.20-33
[5] mlbench_2.1-1       RevoUtilsMath_3.2.3

loaded via a namespace (and not attached):
 [1] Rcpp_0.12.3         compiler_3.2.3      nloptr_1.0.4        plyr_1.8.3
 [5] iterators_1.0.8     tools_3.2.3         digest_0.6.9        lme4_1.1-10
 [9] nlme_3.1-124        gtable_0.1.2        mgcv_1.8-10         Matrix_1.2-3
[13] foreach_1.4.3       yaml_2.1.13         parallel_3.2.3      SparseM_1.7
[17] stringr_1.0.0.9000  knitr_1.12          MatrixModels_0.4-2  stats4_3.2.3
[21] grid_3.2.3          rmarkdown_0.9.4     minqa_1.2.4         reshape2_1.4.1.9000
[25] car_2.1-1           magrittr_1.5        scales_0.3.0        codetools_0.2-14
[29] htmltools_0.3       MASS_7.3-45         splines_3.2.3       rsconnect_0.4.1.4
[33] pbkrtest_0.4-5      colorspace_1.2-6    quantreg_5.19       stringi_1.0-1
[37] munsell_0.4.2
```
Code:

---
title: "Neural Network example"
author: "Leonard Mendonça de Assis"
date: "January 21, 2016"
output:
  pdf_document:
    highlight: pygments
    number_sections: yes
  html_document:
    highlight: pygments
    number_sections: yes
  word_document:
    highlight: pygments
---

# Session start

```{r}
require(MASS)
require(neuralnet)
require(plyr)
require(boot)
require(knitr)

data <- Boston
kable(head(data))
kable(
  apply(data, 2, function(x) sum(is.na(x))),
  caption = 'NAs per variable'
)
```

# Adjusting a linear model

```{r}
set.seed(500)
index <- sample(1:nrow(data), round(0.75 * nrow(data)))
train <- data[index, ]
test <- data[-index, ]

lm.fit <- glm(medv ~ ., data = train)
summary(lm.fit)

pr.lm <- predict(lm.fit, test)
MSE.lm <- sum((pr.lm - test$medv)^2) / nrow(test)
```

# Adjusting a neural net

As a first step, we address data preprocessing. It is good practice to normalize the data before training a neural network. I cannot emphasize enough how important this step is: depending on the dataset, skipping normalization may lead to useless results or to a very difficult training process (most of the time the algorithm will not converge within the maximum number of iterations allowed). There are different methods to scale the data (z-score normalization, min-max scaling, etc.). I chose the min-max method and scaled the data into the interval [0, 1]; scaling to [0, 1] or [-1, 1] usually tends to give better results.

We therefore scale and split the data before moving on:

```{r}
maxs <- apply(data, 2, max)
mins <- apply(data, 2, min)
scaled <- as.data.frame(scale(data, center = mins, scale = maxs - mins))
train_ <- scaled[index, ]
test_ <- scaled[-index, ]
```

Note that scale() returns a matrix, which needs to be coerced into a data.frame.

The model formula is spelled out with paste() because neuralnet() does not accept the medv ~ . shorthand:

```{r}
n <- names(train_)
f <- as.formula(paste("medv ~", paste(n[!n %in% "medv"], collapse = " + ")))
nn <- neuralnet(f, data = train_, hidden = c(5, 3), linear.output = TRUE)
```

And the neural network looks like this:

```{r}
plot(nn)
```

```{r}
sessionInfo()
```
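One step seems to be missing from the snippet: MSE.lm is computed in the linear model section but never compared against the net. Below is a minimal sketch of that comparison; the names pr.nn, test.r, and MSE.nn are my own, and it assumes medv is the last of Boston's 14 columns, so columns 1:13 are the covariates. Note that compute() returns predictions on the [0, 1] scale, which must be mapped back to the original medv range first.

```{r}
# Sketch: predict on the scaled test set (columns 1:13 are the covariates)
pr.nn <- compute(nn, test_[, 1:13])

# Undo the min-max scaling so predictions are on the original medv scale
pr.nn_ <- pr.nn$net.result * (max(data$medv) - min(data$medv)) + min(data$medv)
test.r <- test_$medv * (max(data$medv) - min(data$medv)) + min(data$medv)

# Test MSE of the net, directly comparable with MSE.lm from the linear model section
MSE.nn <- sum((test.r - pr.nn_)^2) / nrow(test_)
print(paste(MSE.lm, MSE.nn))
```

A smaller MSE.nn than MSE.lm would suggest the net outperforms the linear model, though a single 75/25 split is a weak basis for that conclusion.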
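Relatedly, boot and plyr are loaded at the top but never used in the snippet, which suggests a cross-validation section was cut off. For the linear model, a cross-validated MSE estimate can be obtained with boot::cv.glm; a sketch, with the fold count K = 10 and the seed being my own choices:

```{r}
library(boot)
set.seed(200)

# Fit on the full data set; cv.glm handles the fold splitting internally
lm.fit.full <- glm(medv ~ ., data = data)

# delta[1] is the raw 10-fold cross-validation estimate of the prediction error (MSE)
cv.glm(data, lm.fit.full, K = 10)$delta[1]
```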