Testing code in RMarkdown documents with knitr

Over the last few months, Literate Programming has proved to be a huge help to me in documenting my exploratory code in R. Writing Rmarkdown documents and building them with knitr not only gives me a greater opportunity to clarify my code in plain English, it also lets me record why I did something in the first place.

While this has been really useful, it has come at the expense of writing careful, well unit-tested code. For instance, last week I discovered that a relatively simple function I had written to take the average values from multiple data frames was completely wrong.

As such, I wanted to find a way to continue writing Rmarkdown while also testing my code directly with a common unit testing framework like testthat.

Here is one solution to that problem: if we isolate our key functions from our Rmarkdown document and place them in a separate R file, we can test them with testthat and still include them in our Rmarkdown document using knitr’s read_chunk function.

Prime Numbers

As an example, let’s create a document which shows our function for finding out whether a number is prime or not:
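A minimal sketch of what such a document might look like (the chunk names is-prime-function and test-is-prime are taken from the surrounding discussion; the YAML header and the body of is.prime are my own assumptions):

````markdown
---
title: "Prime Numbers"
output: html_document
---

```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```

```{r is-prime-function}
# Return TRUE if n is a prime number, FALSE otherwise
is.prime <- function(n) {
  if (n < 2) return(FALSE)
  if (n == 2) return(TRUE)
  if (n %% 2 == 0) return(FALSE)
  if (n < 9) return(TRUE)
  all(n %% seq(3, floor(sqrt(n)), by = 2) != 0)
}
```

```{r test-is-prime}
is.prime(1000)
```
````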

This Rmarkdown document lists the function and tests whether 1,000 is prime. After rendering the HTML document, I am relieved to see that is.prime does indeed yield FALSE for is.prime(1000). If I wanted to introduce more tests, I could simply add more lines to the test-is-prime chunk and, if they passed, comment that chunk out of the file.

This isn’t ideal for a number of reasons, one being that I’m not using a testing framework that would let me automatically check whether I had broken my code.

I solved this by moving the is-prime-function chunk into a separate R file called is_prime.R:
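is_prime.R might then look something like this, with the chunk name reused as the '## ----' label (again, the function body is the sketch from above):

```r
## ---- is-prime-function
# Return TRUE if n is a prime number, FALSE otherwise
is.prime <- function(n) {
  if (n < 2) return(FALSE)
  if (n == 2) return(TRUE)
  if (n %% 2 == 0) return(FALSE)
  if (n < 9) return(TRUE)
  all(n %% seq(3, floor(sqrt(n)), by = 2) != 0)
}
```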

Knitr interprets '## ----' at the start of a line as a label for a chunk of code that can be reused later. The chunk of code associated with this label runs until the end of the file or until another label is encountered.

To include this code, as before, I just have to change a few small things:
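Something along these lines, where the path to is_prime.R is an assumption:

````markdown
---
title: "Prime Numbers"
output: html_document
---

```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
knitr::read_chunk("is_prime.R")
```

```{r is-prime-function}
```

```{r test-is-prime}
is.prime(1000)
```
````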

You will note the extra line in the setup chunk, which calls knitr’s read_chunk function with the path to the newly created R file. To include the function is.prime in the document again, an empty chunk with the same name as the label in is_prime.R has to be created. When I use knitr to render the document, knitr will inject the is.prime function into this external code chunk and is.prime(1000) will execute successfully.

Now, testing is.prime with testthat is relatively easy by just creating test/test_is_prime.R and writing a few test cases:
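For example (the individual expectations here are illustrative assumptions; the file lives in the test/ directory mentioned above):

```r
library(testthat)

# Pull in the function under test; adjust this relative path to wherever
# is_prime.R actually lives
source("../is_prime.R")

test_that("is.prime accepts primes", {
  expect_true(is.prime(2))
  expect_true(is.prime(13))
})

test_that("is.prime rejects non-primes", {
  expect_false(is.prime(1))
  expect_false(is.prime(1000))
})
```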

And to run our tests in RStudio, we just have to type this into the console:
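Assuming the working directory is the project root, something like:

```r
library(testthat)
test_dir("test")
```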

It’s fairly simple, clean and sensible. An added bonus is that I can now inject is.prime into any other Rmarkdown document by following the same method.

The accompanying source code for this blog can be found at https://github.com/hiraethus/How-to-unit-test-RMarkdown.

Bioconductor Tip: Use affycoretools to get Gene Symbols for your ExpressionSet

For whatever reason, following on from my despair with normalizing gene expression data earlier in the week, my most recent challenge has been to take a Bioconductor ExpressionSet of gene expression data measured on an Affymetrix GeneChip® Human Transcriptome Array 2.0 and, instead of labeling each row with its probe ID, map each row to its corresponding gene symbol.

I have seen a lot of code samples that suggest variations on a theme of using the biomaRt package or querying a SQL database of annotation data directly: with the former I gave up trying; with the latter, I ran away to hide, having only recently interacted with a SQL database through Java’s JPA abstraction layer.

It turns out to be very easy to do this using the affycoretools package by James W. MacDonald, which contains ‘various wrapper functions that have been written to streamline the more common analyses that a core Biostatistician might see.’

As you can see below, you can very easily extract a vector of gene symbols for your probe IDs and assign them as the rownames of your gene expression data.frame.
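A sketch of one way to do this; annotateEset and the hta20transcriptcluster.db annotation package are my choices here rather than anything prescribed above, and eset stands in for your normalized ExpressionSet:

```r
# A sketch, assuming `eset` is an ExpressionSet from HTA 2.0 arrays and that
# the hta20transcriptcluster.db annotation package is installed
library(affycoretools)
library(hta20transcriptcluster.db)

# Attach probe-to-gene annotation (PROBEID, ENTREZID, SYMBOL, GENENAME) to eset
eset <- annotateEset(eset, hta20transcriptcluster.db)

# Build an expression data.frame and label its rows with gene symbols
expr.df <- as.data.frame(exprs(eset))
symbols <- as.character(fData(eset)$SYMBOL)

# Fall back to the probe ID where no symbol was found, and de-duplicate,
# because rownames must be unique and non-missing
symbols[is.na(symbols)] <- featureNames(eset)[is.na(symbols)]
rownames(expr.df) <- make.unique(symbols)
```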

I hope this will save you the trouble of finding this gem of a package.

Be pragmatic about your choice of laptop in Bioinformatics

Recently I have been familiarising myself with analysing microarray data in R. Statistics and Analysis for Microarrays Using R and Bioconductor by Sorin Draghici is proving to be indispensable in guiding me through retrieving microarray data from the Gene Expression Omnibus (GEO), performing typical quality control on samples and normalizing expression data over multiple samples.

As an example, I wanted to examine the gene expression profiles from GSE76250, a comprehensive set of 165 Triple-Negative Breast Cancer samples. In order to perform the quality control on this dataset as detailed by the book, I needed to download the Affymetrix .CEL files and then load them into R as an AffyBatch object:
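A sketch of the general pattern with the affy package; the directory name is an assumption, and HTA 2.0 arrays may in practice need the oligo package (read.celfiles) instead:

```r
# Assumes the GSE76250 .CEL files have been downloaded and unpacked into a
# local directory called "GSE76250_RAW" (an invented name for illustration)
library(affy)

# Read every .CEL file in the directory into a single AffyBatch object
raw.data <- ReadAffy(celfile.path = "GSE76250_RAW")
```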

The raw.data AffyBatch object representing these data takes over 4 gigabytes of memory once loaded into R. Performing normalization on it with rma(raw.data) (Robust Multi-Array Average) then creates an ExpressionSet that effectively doubles that footprint.

This is where I come a bit unstuck. My laptop is a 13-inch Asus Zenbook UX303A which comes with (what I thought to be) a whopping 11 gigabytes of RAM. This meant that, after loading and transforming all the data, I had effectively maxed out my RAM. The obvious answer would be to upgrade my RAM but, due to the small form factor of the laptop, I only have one accessible RAM slot, so my options are limited.

So, I have concluded that I have three other options to resolve this issue.

  1. Firstly, I could buy a machine that has significantly more memory capacity at the expense of portability. Ideally, I don’t want to do this because it is the most expensive approach to tackling the problem.
  2. Another option would be to rent a Virtual Private Server (VPS) with lots of RAM and install RStudio Server on it. I’m quite tempted by this idea but I don’t like the thought of my programming environment being exposed to the internet. Having said this, the data I am analysing is not sensitive and any code I write could be safely committed to a private Bitbucket or Github repository.
  3. Or, I could invest the time in approaching the above problem in a less naive way! This would mean reading the documentation for the various R and Bioconductor packages to uncover a less memory-hungry method or, alternatively, scoping my data tactically so that, for instance, the AffyBatch object is garbage collected once I no longer need it, thereby freeing up memory (see the sketch after this list).
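As a rough sketch of that last idea, reusing the raw.data and eset names from above:

```r
eset <- rma(raw.data)   # normalized ExpressionSet, roughly doubling memory use

rm(raw.data)            # drop the only reference to the ~4 GB AffyBatch
gc()                    # prompt R to reclaim and release the freed memory
```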

In any case, I have learned to be reluctant to follow the final path unless it is absolutely necessary. I don’t particularly want to risk obfuscating my code with extra layers of indirection and, at the same time, leave myself open to more mistakes by making the code more convoluted.

The moral of the story is not to buy a laptop for its form factor if your plan is to do some real work on it. Go for the clunkier option that has more than one RAM slot.

Either that or I could Download More Ram.

R XML Package

I’ve spent a number of years programming in Java so, during my MSc in
Bioinformatics, it took me a while to become acquainted with the nuances and
the idioms of writing code in R. These have been discussed extensively elsewhere, nowhere better than in John Cook’s lecture R: The Good, The Bad and The Ugly. While at first I was frustrated with the language, I am starting to become fond of it, not least because of the increasingly rich tooling (such as RStudio) and the packaging system. Although unrelated to the field of Bioinformatics, I have started to write some sample R code for pleasure, partly because of the brevity of the code that I can write. I have been working
towards creating a Shiny web app that can visualise exercise data that is
stored in an XML format that is validated against an XML schema. You can see
the code at http://github.com/hiraethus/workout.tracker. For this I have been
using the XML package available from CRAN (kindly authored and maintained by Duncan Temple Lang), which contains a really useful function, xmlToDataFrame, that takes an XML document with a fairly flat structure of repeated elements and creates a data frame from them. As an example, consider a document of the following shape:
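Here the foobar, Foo, Bar and Baz element names and the values are invented, chosen to match the table that follows:

```xml
<foobars>
  <foobar><Foo>12</Foo><Bar>2.1</Bar><Baz>First</Baz></foobar>
  <foobar><Foo>16</Foo><Bar>1.1</Bar><Baz>Not first</Baz></foobar>
  <foobar><Foo>20</Foo><Bar>3.3</Bar><Baz>Last</Baz></foobar>
</foobars>
```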

This would be rendered as a data.frame of the form:

Foo  Bar  Baz
12   2.1  First
16   1.1  Not first
20   3.3  Last

Each of these columns will be interpreted as strings of characters. The colClasses argument of the xmlToDataFrame function allows the classes to be specified as a vector, for instance c("integer", "numeric", "character").
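For instance, assuming the document above has been saved as example.xml (an invented file name):

```r
library(XML)

doc <- xmlParse("example.xml")

# Coerce Foo to integer, Bar to numeric and Baz to character while building
# the data.frame
df <- xmlToDataFrame(doc, colClasses = c("integer", "numeric", "character"))
```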

This is great! Unfortunately, each of the Foo, Bar and Baz elements must be present in at least one of the foobar elements. If we assume that this XML document could optionally have a foobaz element of logical (Boolean) type and we specify our colClasses vector as c("integer", "numeric", "character", "logical"), then, if no foobaz element is present in the document, xmlToDataFrame will fail.

The only solution I have come up with to overcome this is to use xmlToDataFrame without the colClasses argument and then replace each column with a copy coerced to the specified type after it has been read in from the XML document. I currently do this in the workout.tracker project.
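A sketch of that workaround, reusing the hypothetical example.xml and column names from above:

```r
library(XML)

df <- xmlToDataFrame(xmlParse("example.xml"))

# The class each (possibly optional) column should end up with
desired.classes <- c(Foo = "integer", Bar = "numeric",
                     Baz = "character", foobaz = "logical")

# Coerce only the columns that are actually present, using the matching
# as.* function (as.integer, as.numeric, ...)
for (col in intersect(names(desired.classes), names(df))) {
  df[[col]] <- do.call(paste0("as.", desired.classes[[col]]),
                       list(as.character(df[[col]])))
}
```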

I am more than happy with the time savings the XML package has provided me in converting my XML document into a data.frame in R. My solution to providing types for the columns of my data frame, while probably very inefficient, is ample for the few hundred entries I will have (or not, depending on how well I keep to my fitness regime).

In the future I will reimplement this application in the Gosu programming language to show how we can use its type loader system to statically generate objects directly from the XML using an XSD.