Splicing Analysis Pipeline

A Nextflow pipeline for identifying and quantifying splicing events from short-read sequencing data

Table of Contents

  1. Dependencies
  2. File Format
  3. Usage

Dependencies

  • nextflow
  • bwa
  • hisat2
  • samtools
  • bamtools
  • R packages
    • Rsamtools
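
Before running the pipeline, it may help to confirm that the tools listed above are actually available. The snippet below is a minimal sketch, assuming the tools are on PATH and that Rscript can see your R library; adapt it to however your site provides these programs (e.g. environment modules).

#!/bin/bash
# Rough availability check for the pipeline dependencies (not part of the pipeline).
for tool in nextflow bwa hisat2 samtools bamtools Rscript; do
    if command -v "$tool" > /dev/null 2>&1; then
        echo "found: $tool ($(command -v "$tool"))"
    else
        echo "missing: $tool" >&2
    fi
done

# Confirm the Rsamtools package can be loaded.
Rscript -e 'if (!requireNamespace("Rsamtools", quietly = TRUE)) stop("Rsamtools is not installed")'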

File Format

Structure of input directories

[image: example of the input directory structure]
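
As a rough illustration only (an assumption, not taken from the image above), the directory named in the sample sheet below would hold each replicate's paired FASTQ files, with the reference FASTA either in the same place or given as a path:

/path/of/directory/
├── s1rep1_r1.fastq.gz
├── s1rep1_r2.fastq.gz
├── s1rep2_r1.fastq.gz
├── s1rep2_r2.fastq.gz
├── s1rep3_r1.fastq.gz
├── s1rep3_r2.fastq.gz
└── ref.fa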

Sample Sheet (CSV)

sample  replicate  directory            read1               read2               reference
s1      rep1       /path/of/directory/  s1rep1_r1.fastq.gz  s1rep1_r2.fastq.gz  ref.fa
s1      rep2       /path/of/directory/  s1rep2_r1.fastq.gz  s1rep2_r2.fastq.gz  ref.fa
s1      rep3       /path/of/directory/  s1rep3_r1.fastq.gz  s1rep3_r2.fastq.gz  ref.fa

Note

  1. The sample sheet must be a CSV file and its header must match the example above (a quick sanity check is sketched below).
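
The check below is a minimal sketch of that requirement, assuming an unquoted CSV with the columns in the order shown above; the sample sheet path is a placeholder.

#!/bin/bash
# Rough sample sheet check (not part of the pipeline):
# verify the header and that every listed FASTQ file exists.
SAMPLESHEET=inputs/samplesheet.csv   # placeholder path, adjust as needed

expected_header="sample,replicate,directory,read1,read2,reference"
actual_header=$(head -n 1 "$SAMPLESHEET")
if [ "$actual_header" != "$expected_header" ]; then
    echo "unexpected header: $actual_header" >&2
    exit 1
fi

# Walk the remaining rows and confirm each read file is present in its directory.
tail -n +2 "$SAMPLESHEET" | while IFS=, read -r sample replicate directory read1 read2 reference; do
    for fq in "$read1" "$read2"; do
        if [ ! -f "$directory/$fq" ]; then
            echo "missing FASTQ for $sample $replicate: $directory/$fq" >&2
        fi
    done
done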

Usage

Run job

Submit the bash script below to the LSF scheduler.

#!/bin/bash
#BSUB -o %J.o
#BSUB -e %J.e
#BSUB -R "select[mem>1000] rusage[mem=1000]"
#BSUB -M 1000
#BSUB -q normal

# modules
module load HGI/common/nextflow/23.10.0
module load HGI/softpack/users/fs18/nf_splicing

#--------------#
# user specify #
#--------------#
# LSF group
export LSB_DEFAULT_USERGROUP=hgi

# Paths
export INPUTSAMPLE=$PWD/inputs/samplesheet.csv
export OUTPUTRES=$PWD/outputs

#-----------#
# pipelines #
#-----------#
nextflow run -resume nf_splicing/main.nf --sample_sheet $INPUTSAMPLE \
                                         --outdir $OUTPUTRES
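
To submit the job, pipe the script to bsub so that the #BSUB directives are picked up; the script name below is just a placeholder for whatever you saved it as.

bsub < run_nf_splicing.sh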

Usage options
