Whether working on genomic datasets or conducting a simulation study, research projects often require identical or nearly identical jobs to be replicated across vast numbers of datasets.
The front-end of RevBayes is an interpreted language, which provides users with an agile and powerful interface for scripting.
Let’s set up a little scenario to make it easy on the imagination.
Suppose your task is to assess the sensitivity of posterior tree probabilities for one simple and one complex model—say, a Jukes-Cantor model, where all transition rates are equal, and the Felsenstein 81 model, where base frequencies are free parameters to be estimated.
You’ve stored multiple sequence alignments for all the genes of interest in the folder genes in NEXUS format.
You want to use RevBayes to estimate the posterior density for each gene, once assuming the rate matrix is fnJC and a second time assuming it is fnF81.
Instead of repeatedly modifying and running your RevBayes script, rb_gene_model.Rev, you can automate the job in bash using the echo command and pipes (|).
To follow along or download the scripts, issue these commands in the shell.
First, we’ll create a RevBayes script called rb_gene_model.Rev that expects three pre-defined variables: data_file gives the name of the gene alignment, job_id gives the analysis an identity that matches the filename, and type_Q gives the type of rate matrix to use.
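A minimal sketch of what rb_gene_model.Rev might look like; readDiscreteCharacterData, fnJC, fnF81, and dnDirichlet are RevBayes built-ins, while the overall layout is an assumption:

```
# rb_gene_model.Rev (sketch): assumes data_file, job_id, and type_Q
# have been defined before this script is sourced
data <- readDiscreteCharacterData(data_file)

if (type_Q == "JC") {
    Q <- fnJC(4)                     # equal rates, equal base frequencies
} else {
    pi ~ dnDirichlet(v(1,1,1,1))     # free base frequencies to estimate
    Q := fnF81(pi)
}

# ... tree prior, clock, and MCMC setup would follow, with output
# files named using job_id
```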
Next we’ll create a bash script called rb_gene_job.sh.
When called with arguments, RevBayes treats them as files to be sourced, which means we can pipe (|) the stdout of echo into RevBayes as if it were a source file.
Perfect for scripting!
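A sketch of rb_gene_job.sh built on this trick. The argument layout and hard-coded values are assumptions, and cat stands in for the RevBayes binary (typically rb) so the pipe mechanics can be run without RevBayes installed:

```shell
#!/bin/sh
# rb_gene_job.sh (sketch): build a one-line Rev "header" that defines the
# three variables, then source the analysis script through a pipe.
data_file="genes/gene1.nex"
job_id="gene1_JC"
type_Q="JC"

header="data_file = \"$data_file\"; job_id = \"$job_id\"; type_Q = \"$type_Q\""

# In the real script the pipe target would be the RevBayes binary:
#   echo "$header; source(\"rb_gene_model.Rev\")" | rb
echo "$header; source(\"rb_gene_model.Rev\")" | cat
```

In the real job script, the hard-coded values would come from command-line arguments ($1, $2, $3).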
Third, we’ll create rb_gene_batch.sh to repeatedly call rb_gene_job.sh with different combinations of arguments.
Note that debug=0 by default, which causes RevBayes to run in the background (&) and to keep running after the terminal is closed (nohup).
These are useful features when running jobs on clusters.
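A sketch of what rb_gene_batch.sh might look like; the script and variable names are assumptions, and a throwaway temp directory with dummy alignments stands in for genes/ so the loop logic can be run anywhere:

```shell
#!/bin/sh
# rb_gene_batch.sh (sketch): call rb_gene_job.sh once per combination
# of gene alignment and rate-matrix type.
workdir=$(mktemp -d)
mkdir "$workdir/genes"
touch "$workdir/genes/gene1.nex" "$workdir/genes/gene2.nex"

debug=0     # debug=1 would run each job in the foreground instead
jobs=""
for data_file in "$workdir"/genes/*.nex; do
    gene=$(basename "$data_file" .nex)
    for type_Q in JC F81; do
        job_id="${gene}_${type_Q}"
        jobs="$jobs $job_id"
        # The real script would launch each analysis detached:
        #   nohup sh rb_gene_job.sh "$data_file" "$job_id" "$type_Q" &
    done
done
echo "jobs:$jobs"
rm -r "$workdir"
```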
Everything is in place, and we can run the script.
…and, in time, the results will appear in the output directory.
You bet heads on the coin flip, and lose again.
Your host wonders aloud what the chances of losing five games in a row might be.
Most would reply that the odds of heads versus tails are fifty-fifty, which comes to (1/2)^5 = 1/32, or about 3%, for five trials, assuming a fair coin.
But, then again, the hour is late, the carnival is closed, and your host with the pointed mustache pours you more wine.
You begin to question whether it is a fair coin.
This is to ask how the data are distributed, and under what parameters.
A coin flip may be represented as a random variable, X, which can either be heads, X = 1, or tails, X = 0.
The Bernoulli distribution says a random variable takes the value 1 with probability p.
That is, p is a parameter that controls how fair the coin is.
Mathematically, X ~ Bernoulli(0.5) states that the value of X is distributed by a Bernoulli distribution, where heads and tails each occur with probability 0.5.
The naive and trusting model is programmed in RevBayes like so
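A minimal sketch, using RevBayes's built-in Bernoulli distribution dnBernoulli:

```
p <- 0.5             # constant node: the coin is assumed fair
x ~ dnBernoulli(p)   # stochastic node: the outcome of the flip
```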
Alternatively, a model anticipating a con game seeks to represent that the fairness of the coin is unknown but biased towards flipping tails (i.e., a probability of heads below 0.5 is expected).
Under this model, the flip is still Bernoulli-distributed, but the probability of heads might take on any possible amount of unfairness within its bounds.
This is described mathematically with three statements: a uniform prior on the unfairness q, a deterministic function giving the probability of heads p in terms of q, and a Bernoulli distribution for the flip given p.
The corresponding RevBayes code for this model is
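A sketch consistent with the variable names discussed below; the exact deterministic function linking p to q is an assumption here:

```
lower <- 0.0                  # constant nodes: bounds on the unfairness
upper <- 0.5
q ~ dnUniform(lower, upper)   # stochastic node: amount of unfairness
p := 0.5 - q                  # deterministic node (assumed form)
h ~ dnBernoulli(p)            # the flip, with probability p of heads
```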
In the example for the second model, we see that the probability the coin is heads, h, depends on the value of p.
p itself depends on the value of q, which, in turn, is drawn from a uniform distribution with bounds imposed by lower and upper.
Probabilistic graphical models
A pattern is emerging: model complexity grows with the number of parameters and the dependencies among them.
When a model has very few variables, like with our coin-flipping example, these parameter interactions are easy to summarize.
The dependencies between all random variables in the model can be written explicitly in terms of equations.
For large and complex models, like phylogenetic models, the incredible number of parameter interactions can be overwhelming.
Any model with a strong and separable dependency structure may be equivalently represented as a probabilistic graphical model, and visualized as a directed acyclic graph.
RevBayes fundamentally treats models as probabilistic graphs, a perspective that offers several advantages.
First, it offers a powerful visualization tool, useful both for summarizing and reducing model complexity, and for teaching.
The dependency structure lends itself to various computational efficiencies, including marginalization techniques, Markov chain Monte Carlo, and generative simulation.
Graphs are also modular, in that they can be decomposed into subgraphs while retaining their internal topologies.
This improves code design and reusability, both when specifying models in the Rev language and when developing the back-end architecture powering RevBayes.
These sentiments are summarized in Hoehna et al. (2014) and will be reinforced throughout this series of posts.
A brief overview of specifying graphical models in RevBayes is given below.
Constant nodes: solid squares
A model variable that is known is a constant node.
Constant nodes are treated as known variables and are not estimated during inference.
Creating a constant node is done using the <- assignment operator.
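For example, the fixed bounds from the coin-flipping model:

```
lower <- 0.0    # constant node: fixed, not estimated
upper <- 0.5    # constant node
```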
Stochastic nodes: solid circles
Probabilistic inference is primarily interested in learning the values of the unknown parameters that summarize the data.
Each of these parameters is a stochastic node.
Stochastic nodes are created using the ~ operator, where the right-hand side gives the distribution (and parameters) that measure the probability of the random variable on the left-hand side.
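For example, a uniformly distributed random variable:

```
q ~ dnUniform(0.0, 0.5)   # stochastic node with a Uniform(0, 0.5) prior
```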
RevBayes works with generative models, which allows for an exact match between the simulation model and the inference model.
This greatly simplifies workflows involving simulation analyses, such as model validation and statistical power analyses.
Deterministic nodes: dotted circles
A variable whose value is exactly determined by other model variables is a deterministic node.
Deterministic nodes have a value equal to a function of any number and combination of constant, deterministic, and stochastic nodes, which is defined using the := operator.
This affords great flexibility in parameterizing the model.
Deterministic nodes are immediately updated when any of their parental nodes’ values are changed.
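For example (the doubling function is chosen arbitrarily for illustration):

```
q ~ dnUniform(0.0, 0.5)
p := 2 * q      # deterministic node: recomputed whenever q changes
```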
The probability of a stochastic node changes whenever its distribution’s parameters change.
Observed (stochastic) nodes: shaded circles
During inference, some random variables are observed to have particular values.
For example, an individual coin (the variable) might show heads or tails (the values) after being flipped.
Once the coin is observed to be, say, tails (X = 0), the question of interest becomes: what is the likelihood that the coin flip resulted in tails under its distribution?
That is, the unobserved random variable might take any number of values, but the observed random variable is “clamped” to the actual value of the trial.
To clamp a stochastic node to an observed value:
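For example, observing the coin flip from earlier to be tails:

```
h ~ dnBernoulli(0.5)
h.clamp(0)      # h is now an observed node with value 0 (tails)
```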
In effect, calling clamp sets the node’s value and marks it for inclusion in the model likelihood (rather than the prior).
Future posts will discuss how this distinction affects certain parts of Bayesian analyses, such as computing Bayes factors.
Plates (replicated subgraphs)
Models often contain repetitive structures, which may be represented compactly.
Graphical models use plates.
In RevBayes, for loops are the simplest way to represent them.
They are also useful for succinctly imposing a Markov property between model variables.
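For example, replicated flips sharing one parameter, and a simple Markov chain (the normal-distribution chain is illustrative only):

```
# a plate: five replicate flips sharing the parameter p
p <- 0.5
for (i in 1:5) {
    x[i] ~ dnBernoulli(p)
}

# a Markov property: each variable depends only on the previous one
y[1] ~ dnNormal(0, 1)
for (i in 2:5) {
    y[i] ~ dnNormal(y[i-1], 1)
}
```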
The structure (or str) function is useful for learning how the nodes relate to one another. For example, the node q is the parent of p and the child of lower and upper.
Assigning a new value to p will cause q to lose p as a child.
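For instance:

```
str(q)       # reports q's parents (lower, upper) and children (p)
p <- 0.3     # overwriting p with a constant node detaches it from q
str(q)       # p no longer appears among q's children
```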
Of course, phylogenetic models are more complex than coin flipping experiments.
The basic building blocks for composing richer models, however, are essentially the same.
That’s the beauty of graphical models: simple relationships can generate scalable complexity.
In the long run, the goal is not only to build models that formalize our understanding of nature, but to learn about the behavior of the processes that generate biodiversity.
Next time we’ll look into RevBayes data structures.
This is the first of a series of posts to outline how to infer a molecular phylogeny in RevBayes.
My goal is to demonstrate the flexibility and power of RevBayes using simple, modifiable code snippets.
These posts will additionally serve as a foundation to explore advanced techniques and models in the future – which is the fun part!
Topic-specific tutorials are also available online.
But what is RevBayes?
RevBayes is an open source software package for Bayesian phylogenetic inference specified through probabilistic graphical models and an interactive language.
This design allows researchers to estimate species relationships using a modeling interface that is simple, flexible, and efficient.
The result is that phylogenetic model space becomes more compact (without shrinking), so it may be explored with ease by empirical and theoretical biologists, who may have great ideas but do not have the time or skills to translate them into computer code.
Let’s concretely examine what this means for phylogenetic models.
The most widely adopted phylogenetic models rely on four submodel components (or modules):
the diversification process, the molecular substitution process, the branch-rate variation model (or the relaxed clock model), and site-rate variation models.
Each of these modules comes in a variety of flavors.
For example, there are a vast number of substitution processes depending on what evolutionary features you wish to model – e.g. Do transitions and transversions occur at equal rates? Do all bases occur at equal frequencies? Should the process be time-reversible?
Each of these models is fully described by an instantaneous rate matrix; they differ only in how the rate matrix elements are parameterized: are all rates fixed to be equal, all different, or somewhere in between?
Ideally, a researcher should be able to compose her phylogenetic model not only from canonical modules described in the literature, but also to apply new types within any given class of modules that she imagines.
The following posts will explore how these modules interact and how they may be customized in RevBayes.
As mentioned earlier, RevBayes specifies models through a programming language.
Learning a new language begins with exposure, so a natural place to start is with a boilerplate phylogenetic model of molecular substitution.
For now, just give the code a glance and keep it in mind for the future.
If you are well-versed in phylogenetic models, the structure of some variables should be familiar.
The anatomy of the code will be covered in detail throughout the upcoming series of posts.
By the end, you’ll be comfortable reading and modifying the code, tailoring it to your dataset and interests.
Below is a tentative outline, which will be updated with post links as they’re completed.