r - Workflow for statistical analysis and report writing

ID : 20423


Tags : r, statistics, data-visualization


Answer (99 votes)

I generally break my projects into 4 pieces:

  1. load.R
  2. clean.R
  3. func.R
  4. do.R

load.R: Takes care of loading in all the data required. Typically this is a short file, reading in data from files, URLs and/or ODBC. Depending on the project at this point I'll either write out the workspace using save() or just keep things in memory for the next step.
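Concretely, a load.R is often only a few lines. The sketch below is illustrative: it writes a throwaway CSV to tempdir() to stand in for a real file, URL, or ODBC source.

```r
# load.R -- illustrative sketch; in a real project the path or URL
# below would point at your actual data source
csv_path <- file.path(tempdir(), "sales.csv")
write.csv(data.frame(id = 1:5, amount = c(10, NA, 30, 40, 50)),
          csv_path, row.names = FALSE)           # stand-in data

raw <- read.csv(csv_path)                        # local file ...
# raw <- read.csv("http://example.com/data.csv") # ... or a URL

# checkpoint the workspace so later steps can start from here
save(raw, file = file.path(tempdir(), "raw.RData"))
```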

clean.R: This is where all the ugly stuff lives - taking care of missing values, merging data frames, handling outliers.
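As a sketch of what clean.R tends to do, the following uses the built-in airquality data in place of whatever load.R produced; the capping rule and the month lookup table are made up for illustration.

```r
# clean.R -- sketch using built-in airquality as stand-in input
clean <- airquality

# missing values: drop rows with no Ozone reading
clean <- clean[!is.na(clean$Ozone), ]

# outliers: cap Ozone at its 99th percentile (illustrative rule)
cap <- quantile(clean$Ozone, 0.99)
clean$Ozone <- pmin(clean$Ozone, cap)

# merging: join a hypothetical lookup table keyed on Month
month_names <- data.frame(Month = 5:9,
                          MonthName = c("May", "Jun", "Jul", "Aug", "Sep"))
clean <- merge(clean, month_names, by = "Month")
```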

func.R: Contains all of the functions needed to perform the actual analysis. source()'ing this file should have no side effects other than loading up the function definitions. This means that you can modify this file and reload it without having to go back and repeat steps 1 and 2, which can take a long time to run for large data sets.
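A func.R in this style contains nothing but definitions; the two helpers below are hypothetical examples, so sourcing the file leaves the workspace otherwise untouched.

```r
# func.R -- function definitions only; source()'ing this has no
# side effects beyond making these functions available

# hypothetical helper: mean of one column grouped by another
group_means <- function(df, value = "Ozone", by = "Month") {
  aggregate(df[[value]], list(df[[by]]), mean, na.rm = TRUE)
}

# hypothetical helper: quick line plot
plot_series <- function(x, main = "series") {
  plot(x, type = "l", main = main)
}
```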

do.R: Calls the functions defined in func.R to perform the analysis and produce charts and tables.
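do.R then reduces to sourcing the earlier files and calling the functions. In this self-contained sketch, the source() calls are shown as comments, and a stand-in helper plus the built-in airquality data take their place.

```r
# do.R -- orchestration sketch; in a real project you would start with:
#   source("load.R"); source("clean.R"); source("func.R")
group_means <- function(df) {                    # stand-in for func.R
  aggregate(df$Ozone, list(Month = df$Month), mean, na.rm = TRUE)
}

results <- group_means(airquality)               # the "analysis"

# write an artefact that a chart or report step can pick up
out <- file.path(tempdir(), "ozone_by_month.csv")
write.csv(results, out, row.names = FALSE)
```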

The main motivation for this setup is working with large data, where you don't want to reload the data each time you change a subsequent step. Also, keeping my code compartmentalized like this means I can come back to a long-forgotten project, quickly read load.R to work out what data I need to update, and then look at do.R to work out what analysis was performed.

Answer (86 votes)

If you'd like to see some examples, I have a few small (and not so small) data cleaning and analysis projects available online. In most, you'll find a script to download the data, one to clean it up, and a few to do exploration and analysis:

Recently I have started numbering the scripts, so it's completely obvious in which order they should be run. (If I'm feeling really fancy, I'll sometimes make it so that the exploration script calls the cleaning script, which in turn calls the download script, each doing the minimal work necessary - usually by checking for the presence of output files with file.exists. Most of the time, though, this seems like overkill.)
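The "minimal work necessary" chain can be sketched like this, with each stage regenerating its output file only when it is missing; the file names and the cleaning rule are placeholders.

```r
# each step checks file.exists and only reruns upstream work if needed
raw_file   <- file.path(tempdir(), "0-raw.csv")
clean_file <- file.path(tempdir(), "1-clean.csv")

download_data <- function() {
  # stands in for a real download.file() call
  write.csv(data.frame(x = c(1, 2, 100)), raw_file, row.names = FALSE)
}

clean_data <- function() {
  if (!file.exists(raw_file)) download_data()    # call upstream step
  d <- read.csv(raw_file)
  d <- d[d$x < 100, ]                            # toy cleaning rule
  write.csv(d, clean_file, row.names = FALSE)
}

if (!file.exists(clean_file)) clean_data()       # exploration entry point
```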

I use git (a source code management system) for all my projects, so it's easy to collaborate with others, see what is changing, and roll back to previous versions.

If I do a formal report, I usually keep R and LaTeX separate, but I always make sure that I can source my R code to produce all the code and output that I need for the report. For the sorts of reports that I do, I find this easier and cleaner than working with LaTeX.
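One way to keep R and LaTeX separate while staying reproducible is a script whose only job is to regenerate every artefact the .tex file includes. The sketch below (file names and the airquality data are stand-ins) writes a PDF figure for \includegraphics and a table fragment for \input.

```r
# figures.R -- rerunning this regenerates everything the report includes
out_dir <- tempdir()                 # a real project might use "figures/"

# a figure the .tex file pulls in with \includegraphics
pdf(file.path(out_dir, "ozone-hist.pdf"), width = 6, height = 4)
hist(airquality$Ozone, main = "Ozone")
dev.off()

# a small LaTeX table fragment the .tex file pulls in with \input
m <- aggregate(Ozone ~ Month, airquality, mean)
rows <- sprintf("%d & %.1f \\\\", m$Month, m$Ozone)
writeLines(c("\\begin{tabular}{rr}",
             "Month & mean Ozone \\\\",
             rows,
             "\\end{tabular}"),
           file.path(out_dir, "ozone-table.tex"))
```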

Answer (74 votes)

I agree with the other responders: Sweave is excellent for report writing with R. And rebuilding the report with updated results is as simple as re-calling the Sweave function. It's completely self-contained, including all the analysis, data, etc. And you can version control the whole file.

I use the StatET plugin for Eclipse for developing the reports, and Sweave is integrated (Eclipse recognizes LaTeX formatting, etc.). On Windows, it's easy to use MiKTeX.

I would also add that you can create beautiful reports with Beamer. Creating a normal report is just as simple. I've included an example below that pulls data from Yahoo! and creates a chart and a table (using quantmod). You can build this report like so:

Sweave(file = "test.Rnw") 

Here's the Beamer document itself:

\documentclass[compress]{beamer}
\usepackage{Sweave}
\usetheme{PaloAlto}

\begin{document}

\title{test report}
\author{john doe}
\date{September 3, 2009}
\maketitle

\begin{frame}[fragile]\frametitle{Page 1: chart}

<<echo=FALSE,fig=TRUE,height=4, width=7>>=
library(quantmod)
getSymbols("PFE", from="2009-06-01")
chartSeries(PFE)
@

\end{frame}

\begin{frame}[fragile]\frametitle{Page 2: table}

<<echo=FALSE,results=tex>>=
library(xtable)
xtable(PFE[1:10,1:4], caption = "PFE")
@

\end{frame}

\end{document}
Answer (68 votes)

I just wanted to add, in case anyone missed it, that there's a great post on the learnr blog about creating repetitive reports with Jeffrey Horner's brew package. Matt and Kevin both mentioned brew above. I haven't actually used it much myself.

The entries follow a nice workflow, so it's well worth a read:

  1. Prepare the data.
  2. Prepare the report template.
  3. Produce the report.

Actually producing the report once the first two steps are complete is very simple:

library(tools)
library(brew)
brew("population.brew", "population.tex")
texi2dvi("population.tex", pdf = TRUE)
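For context, a brew template is ordinary LaTeX with R spliced in through brew's <%= %> tags. This hypothetical fragment of a population.brew assumes a data frame named population exists in the session that calls brew():

```latex
\documentclass{article}
\begin{document}

% <%= ... %> is replaced by the value of the R expression inside it
The data set contains <%= nrow(population) %> rows.

\end{document}
```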
Answer (50 votes)

For creating custom reports, I've found it useful to incorporate many of the existing tips suggested here.

Generating reports: A good strategy for generating reports involves the combination of Sweave, make, and R.
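A hedged sketch of that combination: a Makefile (the file names report.Rnw and analysis.R are placeholders) that rebuilds the PDF only when the Sweave source or the analysis code changes.

```make
# rebuild report.pdf whenever the Sweave source or analysis code changes
report.pdf: report.tex
	pdflatex report.tex

report.tex: report.Rnw analysis.R
	R CMD Sweave report.Rnw
```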

Editor: Good editors for preparing Sweave documents include:

  • StatET and Eclipse
  • Emacs and ESS
  • Vim and Vim-R
  • RStudio

Code organisation: In terms of code organisation, I find two strategies useful:
