Book Review: Risk and Reward by Stephen Catlin

2016 was one of my most productive years for reading books. While working at Admiral Insurance, I commuted by train from my small home town to Cardiff in south Wales, taking around 80 minutes each way. A large portion of that commute was spent riding the tired Pacer trains operated by Arriva Trains Wales into the heart of the Welsh capital. Among the jostling of rat-race-tired bodies, the thrashing of the engines and the thrum of the incessant heating system, which ran through summer sun and winter frost, I found time to read more books than I have managed in any other year of my life.

Risk and Reward by Stephen Catlin

In 2017, this habit changed markedly with the advent of a short-lived venture into the world of academia. No longer spurred on by the dullness of the daily commute, my reading regimen fell off a cliff. It was only in August of last year, when I began to read this book, that I rekindled my enthusiasm for reading. Make no mistake, it has taken me five months to complete this modest book of 308 pages, and I lost interest in it on more than one occasion.

In the meantime I have changed my career's direction once more, leaving academia and returning to software development, and to the insurance industry, with renewed vigour. In part I give thanks for this to Admiral Insurance, whose really friendly atmosphere impressed on me a great appreciation of the industry. The other thing that prompted this return was Stephen Catlin's book, Risk and Reward: An Inside View of the Property/Casualty Insurance Business.

As a disclaimer, let me explain that I did not have a great background in insurance prior to this book. My perspective is that of a software developer with, at present, a very limited understanding of the industry.

Stephen Catlin provides his unique perspective on the insurance industry as a leader who worked his way up from underwriter, to CEO of his world-reaching company, to conducting the significant merger that resulted in XL Catlin.

The strength of this book lies in the way it first covers basic concepts in the insurance industry, with a particular emphasis on Lloyd's of London and its unique syndicate system. While Lloyd's is no longer the largest insurance market in the world, it plays a significant role in the history of insurance. Catlin expands on the shortcomings and inadequacies felt in the industry since the 1970s, and on how things have changed as a result of pressures brought on by inadequate handling of reinsurance and poor sources of money backing syndicates.

Catlin then relays the history of his insurance company and goes into detail about his thought process as his company developed, moving their headquarters to Bermuda as well as expanding into other markets and ultimately joining the XL Group to become XL Catlin.

In the final section of the book, Catlin wrestles with his experience and discusses what he learned about being a good leader. Here are a couple of gems that I’ve gleaned from the book:

You don’t have to fight about everything you disagree with. Fight when it matters, and when you do fight, make certain you can win.

In board meetings:

Once I stopped talking so much, my level of influence among fellow directors rose incredibly.

All in all, as someone outside the insurance industry, I found this book a really interesting way to gain a perspective on how things have been and continue to change. From Catlin's perspective, the industry has matured significantly over the course of his career; insurance serves a real good in society by spreading the cost of risk. Going forward, raising sufficient, good capital and providing a better customer experience through better software are two issues that need greater attention.

Unsubscribe from all your YouTube channels with one weird trick

Here’s a short one for you. I have wanted to clear my list of subscribed channels on YouTube for a long time. Unfortunately, it seems that in recent years there’s been no automated way of doing this. If, like me, you are subscribed to a lot of channels on YouTube, this means you need to click on each channel you are subscribed to and select the ‘Unsubscribe’ button.

In my case, it was quicker to write a short snippet of JavaScript that manipulates the DOM in order to achieve this. I’m sharing this here to save someone from repeating my efforts, as long as YouTube doesn’t change the layout of their subscriptions page any time soon.

To unsubscribe from all your YouTube channels, open Chrome and visit your subscriptions page.

Open the console by pressing F12, then copy and paste the following code:
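The snippet I used is no longer to hand, so here is a sketch of the idea. The selectors (`ytd-channel-renderer`, the subscribe toggle, the `#confirm-button` in the dialog) are assumptions based on the page layout at the time of writing; adjust them if YouTube has since changed its markup.

```javascript
// Unsubscribe helper for YouTube's subscriptions page.
// NOTE: all DOM selectors below are assumptions about YouTube's
// markup and will need checking against the live page.
async function unsubscribeAll(doc, delayMs = 500) {
  const delay = ms => new Promise(resolve => setTimeout(resolve, ms));
  // Grab every subscribed-channel row up front.
  const channels = Array.from(doc.querySelectorAll('ytd-channel-renderer'));
  for (const channel of channels) {
    // Click the "Subscribed" toggle to open the unsubscribe menu.
    channel.querySelector('paper-button')?.click();
    await delay(delayMs);
    // Confirm in the dialog that pops up.
    doc.querySelector('#confirm-button')?.click();
    await delay(delayMs);
  }
  return channels.length;
}

// In the browser console, run:
// unsubscribeAll(document).then(n => console.log(`Done: ${n} channels`));
```

The deliberate delay between clicks gives the page time to render each confirmation dialog before the next click fires.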


Calculate average ranking in R

Here is a short post describing how to calculate the average rank from a set of multiple ranks listed in different data.frames in R. This is a fairly straightforward procedure; however, it took me more time than I anticipated to get it working.

To begin with, let's create a set of data.frames and randomly assign rank values from 1 to 5 to the letters A through E.
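A minimal sketch of this setup might look like the following; the use of `set.seed`, `lapply` and `sample` is my own choice of construction, and `ranks` is a hypothetical variable name:

```r
# Create five data.frames, each randomly assigning ranks 1 to 5
# to the letters A through E.
set.seed(1)  # for a reproducible example
ranks <- lapply(1:5, function(i) {
  data.frame(letter = LETTERS[1:5],
             rank   = sample(1:5))
})
```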

For A, B, C, D and E, we can quite easily calculate the average ranks. Using the sapply command, we can create a matrix of all the rankings for each data frame, with a column for each of the five sets of rankings and a row for each of A through E:
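Assuming the five rankings are stored in a list of data.frames called `ranks` (a hypothetical name), the matrix can be built like this:

```r
# One column per set of rankings, one row per letter A through E.
# match() aligns each data.frame's rows to the fixed A-E order.
rank_matrix <- sapply(ranks, function(df) {
  df$rank[match(LETTERS[1:5], df$letter)]
})
rownames(rank_matrix) <- LETTERS[1:5]
```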

Next, we calculate the mean for each of A, B, C, D and E using the built-in R function rowMeans:
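Assuming the combined rankings sit in a matrix called `rank_matrix` (a hypothetical name), this step is a one-liner:

```r
# Average rank for each letter across all five sets of rankings.
average_ranks <- rowMeans(rank_matrix)
```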

Finally, we use the order function to get the final rank values and convert the vector back into a data.frame of the same format as the original rank data.frames:
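Assuming the averages are in a vector called `average_ranks` (a hypothetical name), the last step might look like this; `order(order(x))` converts a vector of values into their ranks when there are no ties:

```r
# Turn the averages back into ranks 1 to 5 and rebuild a
# data.frame matching the shape of the original rank data.frames.
final_ranks <- data.frame(letter = LETTERS[1:5],
                          rank   = order(order(average_ranks)))
```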

I hope this was of use to someone, even if that person happens to be a forgetful future me. I'm more than certain that there is an R function or package that will perform this for you, but it was nonetheless at best interesting and at least fun to implement. If anyone has an alternative, more elegant solution, I would really appreciate hearing from you.
Happy hacking!

Be pragmatic about your choice of laptop in Bioinformatics

Recently I have been familiarising myself with analysing microarray data in R. Statistics and Analysis for Microarrays Using R and Bioconductor by Sorin Draghici is proving indispensable in guiding me through retrieving microarray data from the Gene Expression Omnibus (GEO), performing typical quality control on samples and normalizing expression data over multiple samples.

As an example, I wanted to examine the gene expression profiles from GSE76250 which is a comprehensive set of 165 Triple-Negative Breast Cancer samples. In order to perform the quality control on this dataset as detailed by the book, I needed to download the Affymetrix .CEL files and then load them into R as an AffyBatch object:
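As a sketch, and assuming the .CEL files for GSE76250 have already been downloaded and extracted into a local directory (the directory name below is hypothetical), the loading step looks roughly like this:

```r
library(affy)  # Bioconductor package for Affymetrix arrays

# Read all .CEL files from the given directory into a single
# AffyBatch object containing the raw probe-level intensities.
raw.data <- ReadAffy(celfile.path = "GSE76250_RAW")
```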

The AffyBatch object representing these data takes over 4 gigabytes of memory when loaded into R. When you then normalize the data using rma() (Robust Multi-Array Average), this creates an ExpressionSet that effectively doubles that figure.
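The normalization step itself is short; assuming the AffyBatch is stored in a variable called `raw.data` (a hypothetical name), it is roughly:

```r
# rma() background-corrects, normalizes and summarizes the probe
# data, producing an ExpressionSet alongside the AffyBatch that
# is still held in memory, hence the doubled footprint.
eset <- rma(raw.data)
expr <- exprs(eset)  # the normalized log2 expression matrix
```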

This is where I came a bit unstuck. My laptop is an Asus Zenbook 13-inch UX303A, which comes with (what I thought to be) a whopping 11 gigabytes of RAM. This meant that after loading and transforming all the data, I had effectively maxed out my memory. The obvious answer would be to upgrade my RAM; unfortunately, due to the small form factor of my laptop, I have only one accessible RAM slot, so my options are limited.

So, I have concluded that I have three options to resolve this issue.

  1. Firstly, I could buy a machine that has significantly more memory capacity at the expense of portability. Ideally, I don’t want to do this because it is the most expensive approach to tackling the problem.
  2. Another option would be to rent a Virtual Private Server (VPS) with lots of RAM and install RStudio Server on it. I’m quite tempted by this idea, but I don’t like my programming environment being exposed to the internet. Having said that, the data I am analysing are not sensitive, and any code I write could safely be committed to a private Bitbucket or GitHub repository.
  3. Or, I could invest the time in approaching the above problem in a less naive way! This would mean reading the documentation for the various R and Bioconductor packages to uncover a less memory-hungry method, or it could mean scoping my data tactically so that, for instance, the AffyBatch object is garbage collected, freeing up memory once I no longer need it.

In any case, I have learned to be reluctant to follow the final path unless it is absolutely necessary. I don’t particularly want to risk obfuscating my code by adding extra layers of indirection while, at the same time, leaving myself open to more mistakes by making my code more convoluted.

The moral of the story is not to buy a laptop for its form factor if your plan is to do some real work on it. Go for the clunkier option that has more than one RAM slot.

Either that or I could Download More Ram.