augur filter
Filter and subsample a sequence set.
usage: augur filter [-h] --metadata FILE [--sequences SEQUENCES]
[--sequence-index SEQUENCE_INDEX]
[--metadata-chunk-size METADATA_CHUNK_SIZE]
[--metadata-id-columns METADATA_ID_COLUMNS [METADATA_ID_COLUMNS ...]]
[--metadata-delimiters METADATA_DELIMITERS [METADATA_DELIMITERS ...]]
[--query QUERY]
[--query-columns QUERY_COLUMNS [QUERY_COLUMNS ...]]
[--min-date MIN_DATE] [--max-date MAX_DATE]
[--exclude-ambiguous-dates-by {any,day,month,year}]
[--exclude EXCLUDE [EXCLUDE ...]]
[--exclude-where EXCLUDE_WHERE [EXCLUDE_WHERE ...]]
[--exclude-all] [--include INCLUDE [INCLUDE ...]]
[--include-where INCLUDE_WHERE [INCLUDE_WHERE ...]]
[--min-length MIN_LENGTH] [--max-length MAX_LENGTH]
[--non-nucleotide] [--group-by GROUP_BY [GROUP_BY ...]]
[--sequences-per-group SEQUENCES_PER_GROUP | --subsample-max-sequences SUBSAMPLE_MAX_SEQUENCES]
[--probabilistic-sampling | --no-probabilistic-sampling]
[--priority PRIORITY] [--subsample-seed SUBSAMPLE_SEED]
[--output OUTPUT] [--output-metadata OUTPUT_METADATA]
[--output-strains OUTPUT_STRAINS]
[--output-log OUTPUT_LOG]
[--empty-output-reporting {error,warn,silent}]
inputs
metadata and sequences to be filtered
- --metadata
sequence metadata
- --sequences, -s
sequences in FASTA or VCF format
- --sequence-index
sequence composition report generated by augur index. If not provided, an index will be created on the fly.
- --metadata-chunk-size
maximum number of metadata records to read into memory at a time. Increasing this number can speed up filtering at the cost of more memory used.
Default: 100000
- --metadata-id-columns
names of possible metadata columns containing identifier information, ordered by priority. Only one ID column will be inferred.
Default: ('strain', 'name')
- --metadata-delimiters
delimiters to accept when reading a metadata file. Only one delimiter will be inferred.
Default: (',', '\t')
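The input options above can be combined when a metadata file deviates from the defaults. A minimal sketch, assuming a hypothetical semicolon-delimited file whose identifier column is named accession:

```shell
# Hypothetical example: metadata delimited by ";" with IDs in an
# "accession" column. Delimiters and ID columns are tried in the order
# given; only one of each is inferred.
augur filter \
  --metadata data/metadata.csv \
  --metadata-delimiters ";" "," \
  --metadata-id-columns accession strain \
  --output-strains strains.txt
```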
metadata filters
filters to apply to metadata
- --query
Filter samples by attribute.
Uses Pandas DataFrame querying, see https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#indexing-query for syntax. (e.g., --query "country == 'Colombia'" or --query "(country == 'USA' & (division == 'Washington'))")
- --query-columns
Use alongside --query to specify columns and data types in the format 'column:type', where type is one of (float,str,int,bool). Automatic type inference will be attempted on all unspecified columns used in the query. Example: region:str coverage:float.
- --min-date
minimal cutoff for date, the cutoff date is inclusive; may be specified as: 1. an Augur-style numeric date with the year as the integer part (e.g. 2020.42) or 2. a date in ISO 8601 date format (i.e. YYYY-MM-DD) (e.g. '2020-06-04') or 3. a backwards-looking relative date in ISO 8601 duration format with optional P prefix (e.g. '1W', 'P1W')
- --max-date
maximal cutoff for date, the cutoff date is inclusive; may be specified as: 1. an Augur-style numeric date with the year as the integer part (e.g. 2020.42) or 2. a date in ISO 8601 date format (i.e. YYYY-MM-DD) (e.g. '2020-06-04') or 3. a backwards-looking relative date in ISO 8601 duration format with optional P prefix (e.g. '1W', 'P1W')
- --exclude-ambiguous-dates-by
Possible choices: any, day, month, year
Exclude ambiguous dates by day (e.g., 2020-09-XX), month (e.g., 2020-XX-XX), year (e.g., 200X-10-01), or any date fields. An ambiguous year makes the corresponding month and day ambiguous, too, even if those fields have unambiguous values (e.g., "201X-10-01"). Similarly, an ambiguous month makes the corresponding day ambiguous (e.g., "2010-XX-01").
- --exclude
file(s) with list of strains to exclude
- --exclude-where
Exclude samples matching these conditions. Ex: "host=rat" or "host!=rat". Multiple values are processed as OR (matching any of those specified will be excluded), not AND
- --exclude-all
exclude all strains by default. Use this with the include arguments to select a specific subset of strains.
Default: False
- --include
file(s) with list of strains to include regardless of priorities, subsampling, or absence of an entry in --sequences.
- --include-where
Include samples with these values. Ex: "host=rat". Multiple values are processed as OR (having any of those specified will be included), not AND. This rule is applied last and ensures any strains matching these rules will be included regardless of priorities, subsampling, or absence of an entry in --sequences.
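Several metadata filters can be stacked in a single call. A sketch combining a relative date cutoff, ambiguous-date exclusion, and attribute filters; the column names (host, country) and file names are illustrative:

```shell
# Keep USA samples from the last six months whose year is fully
# resolved, dropping laboratory-derived hosts.
augur filter \
  --metadata data/metadata.tsv \
  --min-date 6M \
  --exclude-ambiguous-dates-by year \
  --exclude-where "host=laboratory derived" \
  --query "country == 'USA'" \
  --output-strains recent_usa_strains.txt
```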
sequence filters
filters to apply to sequence data
- --min-length
minimal length of the sequences, only counting standard nucleotide characters A, C, G, or T (case-insensitive)
- --max-length
maximum length of the sequences, only counting standard nucleotide characters A, C, G, or T (case-insensitive)
- --non-nucleotide
exclude sequences that contain illegal characters
Default: False
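The sequence filters are typically combined with a precomputed sequence index so length checks do not require re-reading the FASTA. A sketch with an illustrative length threshold and hypothetical file names:

```shell
# Drop sequences shorter than 27000 valid bases (A, C, G, T,
# case-insensitive) or containing illegal characters.
augur filter \
  --sequences data/sequences.fasta \
  --sequence-index data/sequence_index.tsv \
  --metadata data/metadata.tsv \
  --min-length 27000 \
  --non-nucleotide \
  --output-sequences filtered_sequences.fasta
```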
subsampling
options to subsample filtered data
- --group-by
categories with respect to subsample. Notes: (1) Grouping by [‘month’, ‘week’, ‘year’] is only supported when there is a ‘date’ column in the metadata. (2) ‘week’ uses the ISO week numbering system, where a week starts on a Monday and ends on a Sunday. (3) ‘month’ and ‘week’ grouping cannot be used together. (4) Custom columns [‘month’, ‘week’, ‘year’] in the metadata are ignored for grouping. Please rename them if you want to use their values for grouping.
- --sequences-per-group
subsample to no more than this number of sequences per category
- --subsample-max-sequences
subsample to no more than this number of sequences; can be used without the --group-by argument
- --probabilistic-sampling
Allow probabilistic sampling during subsampling. This is useful when there are more groups than requested sequences. This option only applies when –subsample-max-sequences is provided.
Default: True
- --no-probabilistic-sampling
Disable probabilistic sampling during subsampling (the inverse of --probabilistic-sampling).
Default: True
- --priority
tab-delimited file with list of priority scores for strains (e.g., "<strain>\t<priority>") and no header.
When scores are provided, Augur converts scores to floating point values, sorts strains within each subsampling group from highest to lowest priority, and selects the top N strains per group where N is the calculated or requested number of strains per group. Higher numbers indicate higher priority. Since priorities represent relative values between strains, these values can be arbitrary.
- --subsample-seed
random number generator seed to allow reproducible subsampling (with same input data).
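Putting the subsampling options together, a sketch using a hypothetical priorities.tsv (tab-delimited, no header, higher scores sampled first) and a fixed seed for reproducibility:

```shell
# priorities.tsv (hypothetical), tab-delimited, no header:
#   BRA/2016/FC_DQ75D1	8.3
#   COL/FLR_00034/2015	2.1
augur filter \
  --metadata data/metadata.tsv \
  --group-by country year month \
  --subsample-max-sequences 500 \
  --priority priorities.tsv \
  --subsample-seed 42 \
  --output-strains subsampled_strains.txt
```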
outputs
options related to outputs; at least one of the possible representations of filtered data (--output, --output-metadata, --output-strains) is required
- --output, --output-sequences, -o
filtered sequences in FASTA format
- --output-metadata
metadata for strains that passed filters
- --output-strains
list of strains that passed filters (no header)
- --output-log
tab-delimited file with one row for each filtered strain and the reason it was filtered. Keyword arguments used for a given filter are reported in JSON format in a kwargs column.
- --empty-output-reporting
Possible choices: error, warn, silent
how empty outputs should be reported when no strains pass filtering and/or subsampling
Default: error
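In automated workflows it can help to record why each strain was dropped and to downgrade the empty-output error to a warning. A sketch with illustrative file names:

```shell
# Log one row per filtered strain (with the responsible filter and
# its kwargs) and warn, rather than error, when nothing passes.
augur filter \
  --metadata data/metadata.tsv \
  --min-date 2020 \
  --output-strains passed_strains.txt \
  --output-log filter_log.tsv \
  --empty-output-reporting warn
```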
Guides
Below are some examples of using augur filter to sample data.
Filtering
The filter command allows you to select various subsets of your input data for different types of analysis. A simple example use of this command would be:
augur filter \
--sequences data/sequences.fasta \
--metadata data/metadata.tsv \
--min-date 2012 \
--output-sequences filtered_sequences.fasta \
--output-metadata filtered_metadata.tsv
This command will select all sequences with collection date in 2012 or later.
The filter command has a large number of options that allow flexible filtering for many common situations.
One such use-case is the exclusion of sequences that are known to be outliers (e.g. because of sequencing errors, cell-culture adaptation, …).
These can be specified in a separate text file (e.g. exclude.txt):
BRA/2016/FC_DQ75D1
COL/FLR_00034/2015
...
To drop such strains, you can pass the filename to --exclude:
augur filter \
--sequences data/sequences.fasta \
--metadata data/metadata.tsv \
--min-date 2012 \
--exclude exclude.txt \
--output-sequences filtered_sequences.fasta \
--output-metadata filtered_metadata.tsv
Subsampling within augur filter
Another common filtering operation is subsetting of data to achieve a more even spatio-temporal distribution or to cut down the dataset size to more manageable numbers. The filter command allows you to select a specific number of sequences from specific groups, for example one sequence per month from each country:
augur filter \
--sequences data/sequences.fasta \
--metadata data/metadata.tsv \
--min-date 2012 \
--exclude exclude.txt \
--group-by country year month \
--sequences-per-group 1 \
--output-sequences subsampled_sequences.fasta \
--output-metadata subsampled_metadata.tsv
Subsampling using multiple augur filter commands
There are some subsampling strategies in which a single call to augur filter does not suffice. One such strategy is "tiered subsampling". In this strategy, mutually exclusive sets of filters, each representing a "tier", are sampled with different subsampling rules. This is commonly used to create geographic tiers. Consider this subsampling scheme:
Sample 100 sequences from Washington state and 50 sequences from the rest of the United States.
This cannot be done in a single call to augur filter. Instead, it can be decomposed into multiple schemes, each handled by a single call to augur filter. Additionally, there is an extra step to combine the intermediate samples.
1. Sample 100 sequences from Washington state.
2. Sample 50 sequences from the rest of the United States.
3. Combine the samples.
Calling augur filter multiple times
A basic approach is to run the augur filter commands directly. This works well for ad-hoc analyses.
# 1. Sample 100 sequences from Washington state
augur filter \
--sequences sequences.fasta \
--metadata metadata.tsv \
--query "state == 'WA'" \
--subsample-max-sequences 100 \
--output-strains sample_strains_state.txt
# 2. Sample 50 sequences from the rest of the United States
augur filter \
--sequences sequences.fasta \
--metadata metadata.tsv \
--query "state != 'WA' & country == 'USA'" \
--subsample-max-sequences 50 \
--output-strains sample_strains_country.txt
# 3. Combine using augur filter
augur filter \
--sequences sequences.fasta \
--metadata metadata.tsv \
--exclude-all \
--include sample_strains_state.txt \
sample_strains_country.txt \
--output-sequences subsampled_sequences.fasta \
--output-metadata subsampled_metadata.tsv
Each intermediate sample is represented by a strain list file obtained from --output-strains. The final step uses augur filter with --exclude-all and --include to sample the data based on the intermediate strain list files. If the same strain appears in both files, augur filter will only write it once in each of the final outputs.
Generalizing subsampling in a workflow
The approach above can be cumbersome with more intermediate samples. To generalize this process and allow for more flexibility, a workflow management system can be used. The following examples use Snakemake.
Add a section in the config file.
subsampling:
  state: --query "state == 'WA'" --subsample-max-sequences 100
  country: --query "state != 'WA' & country == 'USA'" --subsample-max-sequences 50
Add two rules in a Snakefile. If you are building a standard Nextstrain workflow, the output files should be used as input to sequence alignment. See Parts of a whole to learn more about the placement of this step within a workflow.
# 1. Sample 100 sequences from Washington state
# 2. Sample 50 sequences from the rest of the United States
rule intermediate_sample:
    input:
        metadata = "data/metadata.tsv",
    output:
        strains = "results/sample_strains_{sample_name}.txt",
    params:
        augur_filter_args = lambda wildcards: config.get("subsampling", {}).get(wildcards.sample_name, "")
    shell:
        """
        augur filter \
            --metadata {input.metadata} \
            {params.augur_filter_args} \
            --output-strains {output.strains}
        """

# 3. Combine using augur filter
rule combine_intermediate_samples:
    input:
        sequences = "data/sequences.fasta",
        metadata = "data/metadata.tsv",
        intermediate_sample_strains = expand("results/sample_strains_{sample_name}.txt", sample_name=list(config.get("subsampling", {}).keys()))
    output:
        sequences = "results/subsampled_sequences.fasta",
        metadata = "results/subsampled_metadata.tsv",
    shell:
        """
        augur filter \
            --sequences {input.sequences} \
            --metadata {input.metadata} \
            --exclude-all \
            --include {input.intermediate_sample_strains} \
            --output-sequences {output.sequences} \
            --output-metadata {output.metadata}
        """
Run Snakemake targeting the second rule.
snakemake combine_intermediate_samples
Explanation:
- The configuration section consists of one entry per intermediate sample in the format sample_name: <augur filter arguments>.
- The first rule is run once per intermediate sample using wildcards and an input function. The output of each run is the sampled strain list.
- The second rule uses expand() to define input as all the intermediate sampled strain lists, which are passed directly to --include as done in the previous example.
It is easy to add or remove intermediate samples. The configuration above can be updated to add another tier in between state and country:
subsampling:
  state: --query "state == 'WA'" --subsample-max-sequences 100
  neighboring_states: --query "state in {'CA', 'ID', 'OR', 'NV'}" --subsample-max-sequences 75
  country: --query "country == 'USA' & state not in {'WA', 'CA', 'ID', 'OR', 'NV'}" --subsample-max-sequences 50