diff --git a/docs/hello_nextflow/01_orientation.md b/docs/hello_nextflow/01_orientation.md
new file mode 100644
index 000000000..071cee23e
--- /dev/null
+++ b/docs/hello_nextflow/01_orientation.md
@@ -0,0 +1,70 @@
+# Orientation
+
+## Tour of Gitpod
+
+If you haven't yet, open the training environment by clicking the [![Nextflow Training GitPod](https://img.shields.io/badge/Gitpod-%20Open%20in%20Gitpod-908a85?logo=gitpod)](https://gitpod.io/#https://github.com/nextflow-io/training) badge, which launches a virtual machine with everything already set up for you.
+
+In the Gitpod window, you'll see a terminal. Type the following command to switch to the folder of this training material:
+
+```bash
+cd /workspace/gitpod/hello-nextflow
+```
+
+Take a few minutes to familiarize yourself with the Gitpod environment, especially the file explorer, the editor pane, and the terminal.
+
+## Pipeline data and scripts
+
+We provide all the test data, code, and accessory files needed to work through this training module. To view a full list, run the following command in the Gitpod terminal:
+
+```bash
+tree /workspace/gitpod/hello-nextflow
+```
+
+You should see the following output:
+
+```bash
+hello-nextflow
+├── data
+│   ├── bam
+│   │   ├── reads_father.bam
+│   │   ├── reads_mother.bam
+│   │   └── reads_son.bam
+│   ├── intervals.list
+│   ├── ref.tar.gz
+│   ├── sample_bams.txt
+│   └── samplesheet.csv
+├── scripts
+│   ├── hello-gatk-1.nf
+│   ├── hello-gatk-2.nf
+│   ├── hello-gatk-3.nf
+│   ├── hello-gatk-4.nf
+│   ├── hello-gatk-5.nf
+│   ├── hello-gatk-6.nf
+│   ├── hello-world-1.nf
+│   ├── hello-world-2.nf
+│   ├── hello-world-3.nf
+│   ├── hello-world-4.nf
+│   ├── hello-world-5.nf
+│   ├── hello-world-6.nf
+│   ├── hello-world-7.nf
+│   └── hello-world-8.nf
+├── greetings.txt
+├── hello-gatk.nf
+├── hello-world.nf
+└── nextflow.config
+```
+
+### Description of contents
+
+**The `data` directory** contains the input data we'll use in Part 2: Hello GATK, which uses an example from genomics to demonstrate how to build a simple analysis pipeline. The data is described in detail in that section of the training.
+
+**The `scripts` directory** contains the completed workflow scripts that result from each step of the tutorial and are intended to be used as a reference to check your work. The name and number in the filename correspond to the step of the relevant tutorial. For example, the file `hello-world-4.nf` is the expected result of completing steps 1 through 4 of Part 1: Hello World.
+
+**The file `greetings.txt`** is a plain text file used to provide inputs in Part 1: Hello World.
+
+**The file `hello-gatk.nf`** is a stub that serves as a starting point for Part 2: Hello GATK. In its initial state, it is NOT a functional workflow script.
+
+**The file `hello-world.nf`** is a simple but fully functional workflow script that serves as a starting point for Part 1: Hello World.
+
+**The file `nextflow.config`** is a configuration file that sets minimal environment properties.
diff --git a/docs/hello_nextflow/02_hello_world.md b/docs/hello_nextflow/02_hello_world.md
new file mode 100644
index 000000000..184b7fce0
--- /dev/null
+++ b/docs/hello_nextflow/02_hello_world.md
@@ -0,0 +1,789 @@
+# Part 1: Hello World
+
+A "Hello, World!" example is a minimalist example that is meant to demonstrate the basic syntax and structure of a programming language or software framework. The example typically consists of printing the phrase "Hello, World!" to the output device, such as the console or terminal, or writing it to a file.
+
+---
+
+## 0. Warmup: Run Hello World directly
+
+Let's demonstrate this with a simple command that we run directly in the terminal, to show what it does before we wrap it in Nextflow.
+
+#### 1. Make the terminal say hello
+
+```
+echo 'Hello World!'
+```
+
+#### 2. Now make it write the text output to a file
+
+```
+echo 'Hello World!' > output.txt
+```
+
+#### 3. Verify that the output file is there using the `ls` command
+
+```
+ls
+```
+
+#### 4. Show the file contents
+
+```
+cat output.txt
+```
+
+!!! tip
+
+    In the Gitpod environment, you can also find the output file in the file explorer, and view its contents by clicking on it.
+
+### Takeaway
+
+You know how to run a simple command in the terminal that outputs some text, and optionally, how to make it write the output to a file.
+
+### What's next?
+
+Learn how to turn that into a step in a Nextflow pipeline.
+
+---
+
+## 1. Very first Nextflow run
+
+Now we're going to run a script (named `hello-world.nf`) that does the same thing as before (write 'Hello World!' to a file) but with Nextflow.
+
+!!! info
+
+    We're intentionally not looking at the script yet. Seeing the result _before_ we look inside the machine will help us understand what each part does.
+
+#### 1. Run the workflow
+
+```
+nextflow run hello-world.nf
+```
+
+You should see something like this:
+
+```
+N E X T F L O W  ~  version 23.10.1
+Launching `hello-world.nf` [mighty_murdock] DSL2 - revision: 80e92a677c
+executor >  local (1)
+[4e/6ba912] process > sayHello [100%] 1 of 1 ✔
+```
+
+Congratulations, you ran your first Nextflow pipeline!
+
+The most important thing here is the last line, which reports that the `sayHello` process was executed once, successfully. At the start of the line, you can find the name of the work directory that was created for the process execution.
+
+Browse the work directory in the file explorer to find the log files and any outputs created by the process. You should find the following files:
+
+-   **`.command.begin`**: Metadata related to the beginning of the execution of the process
+-   **`.command.err`**: Error messages emitted by the process (stderr)
+-   **`.command.log`**: Complete log output emitted by the process
+-   **`.command.out`**: Regular output by the process (stdout)
+-   **`.command.sh`**: The command that was run by the process call
+-   **`.exitcode`**: The exit code resulting from the command
+
+In this case, look for your output in the `.command.out` file.
+
+!!! tip
+
+    Some of the specifics will be different in your log output. For example, here `[mighty_murdock]` and `[4e/6ba912]` are randomly generated names, so those will be different every time.
+
+### Takeaway
+
+You know how to run a simple Nextflow script and navigate the outputs.
+
+### What's next?
+
+Learn how to interpret the Nextflow code.
+
+---
+
+## 2. Interpret the Hello World script
+
+Let's open the script and look at how it's structured.
+
+#### 1. Double click on the file in the file explorer to open it in the editor pane
+
+The first block of code describes a **process** called `sayHello` that writes its output to `stdout`:
+
+```
+process sayHello {
+
+    output:
+        stdout
+
+    """
+    echo 'Hello World!'
+    """
+}
+```
+
+The second block of code describes the **workflow** itself, which consists of one call to the `sayHello` process.
+
+```
+workflow {
+    sayHello()
+}
+```
+
+#### 2. Add a comment block above the process to document what it does in plain English
+
+```
+/*
+ * Use echo to print 'Hello World!' to standard out
+ */
+process sayHello {
+```
+
+#### 3. Add an in-line comment above the process call
+
+```
+workflow {
+
+    // emit a greeting
+    sayHello()
+}
+```
+
+### Takeaway
+
+You know how to interpret the simplest possible Nextflow script and add comments to document it.
+
+### What's next?
+
+Learn how to make it output a named file.
+
+---
+
+## 3. Send the output to a file
+
+This is the same thing we did earlier when running directly in the terminal. In a real-world pipeline, this is like having a command that specifies an output file as part of its normal syntax. We'll see examples of that later.
+
+#### 1. Change the process command to output a named file
+
+_Before:_
+
+```
+echo 'Hello World!'
+```
+
+_After:_
+
+```
+echo 'Hello World!' > output.txt
+```
+
+#### 2. Change the output declaration in the process
+
+_Before:_
+
+```
+    output:
+        stdout
+```
+
+_After:_
+
+```
+    output:
+        path 'output.txt'
+```
+
+#### 3. Run the workflow again
+
+```
+nextflow run hello-world.nf
+```
+
+The log output should be very similar to the first time you ran the workflow:
+
+```
+N E X T F L O W  ~  version 23.10.1
+Launching `scripts/hello-world.nf` [disturbed_cajal] DSL2 - revision: 9512241567
+executor >  local (1)
+[ab/c61321] process > sayHello [100%] 1 of 1 ✔
+```
+
+Like you did before, find the work directory in the file explorer. Find the `output.txt` output file and click on it to open it and verify that it contains the greeting as expected.
+
+!!! warning
+
+    This example is brittle because we hardcoded the output filename in two separate places. If we change one but not the other, the script will break.
+
+### Takeaway
+
+You know how to send outputs to a specific named file.
+
+### What's next?
+
+Learn how to pass parameters to the workflow from the command line.
+
+---
+
+## 4. Use a command line parameter for naming the output file
+
+Here we introduce `params` (short for 'parameters') as the construct that holds command line arguments. This is useful because there will be many parameters such as filenames and processing options that you want to decide at the time you run the pipeline, and you don't want to have to edit the script itself every time.
+
+#### 1. Change the output declaration in the process to use a parameter
+
+_Before:_
+
+```
+    output:
+        path 'output.txt'
+```
+
+_After:_
+
+```
+    output:
+        path params.output_file
+```
+
+#### 2. Change the process command to use the parameter too
+
+_Before:_
+
+```
+echo 'Hello World!' > output.txt
+```
+
+_After:_
+
+```
+echo 'Hello World!' > $params.output_file
+```
+
+#### 3. Run the workflow again with the `--output_file` parameter
+
+```
+nextflow run hello-world.nf --output_file 'output.txt'
+```
+
+The log output should start looking very familiar:
+
+```
+N E X T F L O W  ~  version 23.10.1
+Launching `hello-world.nf` [evil_bose] DSL2 - revision: 6907ac9da2
+executor >  local (1)
+[46/e4ff05] process > sayHello [100%] 1 of 1 ✔
+```
+
+Follow the same procedure as before to find the `output.txt` output file. If you want to convince yourself that the parameter is working as intended, feel free to repeat this step with a different output filename.
+
+!!! warning
+
+    If you forget to add the output filename parameter, you get a warning and the output file is called `null`. If you add it but don't give it a value, the output file is called `true`.
+
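+A common defensive pattern (a sketch, not part of the tutorial script) is to fail fast with a clear message when a required parameter is missing, instead of silently producing a file named `null`:
+
+```
+// hypothetical guard placed near the top of the script
+if( !params.output_file ) {
+    exit 1, 'Please provide an output filename with --output_file'
+}
+```
+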
+!!! tip
+
+    Command-line arguments take a single dash (`-`) for Nextflow options and two dashes (`--`) for pipeline parameters.
+
+### Takeaway
+
+You know how to use a command line parameter to set the output filename.
+
+### What's next?
+
+Learn how to set a default value in case we leave out the parameter.
+
+---
+
+## 5. Set a default value for a command line parameter
+
+In many cases, it makes sense to supply a default value for a given parameter, so that you don't have to specify it for every run of the workflow. Let's initialize the `output_file` parameter with a default value.
+
+#### 1. Add the parameter declaration at the top of the script (with a comment block as a free bonus)
+
+```
+/*
+ * Pipeline parameters
+ */
+params.output_file = 'output.txt'
+```
+
+#### 2. Run the workflow again without specifying the parameter
+
+```
+nextflow run hello-world.nf
+```
+
+Still looking pretty much the same...
+
+```
+N E X T F L O W  ~  version 23.10.1
+Launching `hello-world.nf` [tiny_elion] DSL2 - revision: 7ad1cd6bfe
+executor >  local (1)
+[8b/1f9ded] process > sayHello [100%] 1 of 1 ✔
+```
+
+Check the output in the work directory, and... Tadaa! It works, Nextflow used the default value to name the output. But wait, what happens now if we provide the parameter in the command line?
+
+#### 3. Run the workflow again with the `--output_file` parameter on the command line using a DIFFERENT filename
+
+```
+nextflow run hello-world.nf --output_file 'output-cli.txt'
+```
+
+Nextflow's not complaining, that's a good sign:
+
+```
+N E X T F L O W  ~  version 23.10.1
+Launching `hello-world.nf` [exotic_lichterman] DSL2 - revision: 7ad1cd6bfe
+executor >  local (1)
+[36/47354a] process > sayHello [100%] 1 of 1 ✔
+```
+
+Check the output directory and look for the output with the new filename. Tadaa again! The value of the parameter we passed on the command line overrode the value we gave the variable in the script. In fact, parameters can be set in several different ways; if the same parameter is set in multiple places, its value is determined based on the order of precedence described [here](https://www.nextflow.io/docs/latest/config.html).
+
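+For example, the same default could instead be set in the `nextflow.config` file (a minimal sketch): a value set there takes precedence over the default written in the script, and the command-line `--output_file` value overrides both:
+
+```
+// hypothetical nextflow.config entry
+params.output_file = 'output.txt'
+```
+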
+!!! tip
+
+    You can put the parameter declaration inside the workflow block if you prefer. Whatever you choose, try to group similar things in the same place so you don't end up with declarations all over the place.
+
+### Takeaway
+
+You know how to handle command line parameters and set default values.
+
+### What's next?
+
+Learn how to add in variable inputs.
+
+---
+
+## 6. Add in variable inputs
+
+So far, we've been emitting a greeting hardcoded into the process command. Now we're going to add some flexibility by introducing channels as the construct that holds the data we want to feed as input to a process. We're going to start with the simplest kind of channel, a value channel.
+
+!!! tip
+
+    You can build [different kinds of channels](https://www.nextflow.io/docs/latest/channel.html#channel-types) depending on the shape of the input data; we'll cover how to deal with other kinds of fairly simple inputs later, but more complex input channel types are out of scope for this training.
+
+#### 1. Create an input channel (with a bonus in-line comment)
+
+_Before:_
+
+```
+workflow {
+
+    // emit a greeting
+    sayHello()
+}
+```
+
+_After:_
+
+```
+workflow {
+
+    // create a channel for inputs
+    greeting_ch = Channel.of('Hello world!')
+
+    // emit a greeting
+    sayHello()
+}
+```
+
+#### 2. Add the channel as input to the process call
+
+_Before:_
+
+```
+    // emit a greeting
+    sayHello()
+```
+
+_After:_
+
+```
+    // emit a greeting
+    sayHello(greeting_ch)
+```
+
+#### 3. Add an input definition to the process block
+
+_Before:_
+
+```
+process sayHello {
+
+    output:
+        path params.output_file
+```
+
+_After:_
+
+```
+process sayHello {
+
+    input:
+        val greeting
+
+    output:
+        path params.output_file
+```
+
+#### 4. Edit the process command to use the input variable
+
+_Before:_
+
+```
+    """
+    echo 'Hello World!' > $params.output_file
+    """
+```
+
+_After:_
+
+```
+    """
+    echo '$greeting' > $params.output_file
+    """
+```
+
+#### 5. Run the workflow command again
+
+```
+nextflow run hello-world.nf
+```
+
+If you made all four edits correctly, you should get another successful execution:
+
+```
+N E X T F L O W  ~  version 23.10.1
+Launching `hello-world.nf` [maniac_euler] DSL2 - revision: 73bfbe197f
+executor >  local (1)
+[57/aee130] process > sayHello (1) [100%] 1 of 1 ✔
+```
+
+The result is still the same as previously; so far we're just progressively tweaking the internal plumbing to increase the flexibility of our workflow while achieving the same end result.
+
+### Takeaway
+
+You know how to use a simple channel to provide an input to a process.
+
+### What's next?
+
+Learn how to pass inputs from the command line.
+
+---
+
+## 7. Use params for inputs too
+
+We want to be able to specify the input from the command line because that is the piece that will almost always be different in subsequent runs of the pipeline. Good news: we can use the same `params` construct we used for the output filename.
+
+#### 1. Edit the input channel declaration to use a parameter
+
+_Before:_
+
+```
+    // create a channel for inputs
+    greeting_ch = Channel.of('Hello world!')
+```
+
+_After:_
+
+```
+    // create a channel for inputs
+    greeting_ch = Channel.of(params.greeting)
+```
+
+#### 2. Run the workflow again with the `--greeting` parameter
+
+```
+nextflow run hello-world.nf --greeting 'Bonjour le monde!'
+```
+
+In case you're wondering, yes it's normal to have dreams where the Nextflow log output scrolls endlessly in front of you after running through a training session... Or is that just me?
+
+```
+N E X T F L O W  ~  version 23.10.1
+Launching `hello-world.nf` [hopeful_laplace] DSL2 - revision: a8ed9a6202
+executor >  local (1)
+[83/dfbbbc] process > sayHello (1) [100%] 1 of 1 ✔
+```
+
+Be sure to open up the output file to check that you now have the new version of the greeting. Voilà!
+
+!!! note
+
+    The current form of the script doesn't declare a default for `greeting`, so that parameter is REQUIRED on the command line. If we wanted, we could set a default value by adding, for example, `params.greeting = 'Holà el mundo!'` at the top of the script (just like we did for the output filename). But it's less common to want a default value for the input data.
+
+### Takeaway
+
+You know how to set up an input variable for a process and supply a value in the command line.
+
+### What's next?
+
+Learn how to add in a second process and chain them together.
+
+---
+
+## 8. Add a second step to the workflow
+
+Most real-world workflows involve more than one step. Here we introduce a second process that converts the text to uppercase (all-caps), using the classic UNIX one-liner `tr '[a-z]' '[A-Z]'`.
+
+We're going to run the command by itself in the terminal first to verify that it works as expected without any of the workflow code getting in the way of clarity, just like we did at the start with the Hello World. Then we'll write a process that does the same thing, and finally we'll connect the two processes so the output of the first serves as input to the second.
+
+#### 1. Run the command in the terminal by itself
+
+```
+echo 'Hello World' | tr '[a-z]' '[A-Z]'
+```
+
+The output is simply the uppercase version of the text string:
+
+```
+HELLO WORLD
+```
+
+#### 2. Make the command take a file as input and write the output to a file
+
+```
+cat output.txt | tr '[a-z]' '[A-Z]' > UPPER-output.txt
+```
+
+Now the `HELLO WORLD` output is in the new output file, `UPPER-output.txt`.
+
+#### 3. Turn that into a process definition (documented with a comment block)
+
+```
+/*
+ * Use a text replace utility to convert the greeting to uppercase
+ */
+process convertToUpper {
+    input:
+        path input_file
+
+    output:
+        path "UPPER-${input_file}"
+
+    """
+    cat $input_file | tr '[a-z]' '[A-Z]' > UPPER-${input_file}
+    """
+}
+```
+
+#### 4. Add a call to the new process in the workflow block
+
+```
+workflow {
+
+    // create a channel for inputs
+    greeting_ch = Channel.of(params.greeting)
+
+    // emit a greeting
+    sayHello(greeting_ch)
+
+    // convert the greeting to uppercase
+    convertToUpper()
+}
+```
+
+#### 5. Pass the output of the first process to the second process
+
+```
+    // convert the greeting to uppercase
+    convertToUpper(sayHello.out)
+```
+
+#### 6. Run the same workflow command as before
+
+```
+nextflow run hello-world.nf --greeting 'Hello World!'
+```
+
+Oh, how exciting! There is now an extra line in the log output, which corresponds to the second process we've added:
+
+```
+N E X T F L O W  ~  version 23.10.1
+Launching `hello-world.nf` [kickass_pasteur] DSL2 - revision: d15b2c482c
+executor >  local (2)
+[da/8d9221] process > sayHello (1)       [100%] 1 of 1 ✔
+[01/2b32ee] process > convertToUpper (1) [100%] 1 of 1 ✔
+```
+
+This time the workflow produced two work directories; one per process. Check out the work directory of the second process, where you should find two different output files listed. If you look carefully, you'll notice one of them (the output of the first process) has a little arrow icon on the right; that signifies it's a symbolic link. It points to the location where that file lives in the work directory of the first process.
+
+!!! note
+
+    As a little bonus, we composed the second output filename based on the first one. Very important: you must use double quotes around the filename expression (NOT single quotes), otherwise the variable will be treated as a literal string instead of being interpolated.
+
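+The same quoting rule exists in the shell itself, so you can convince yourself in the terminal (a quick aside, not part of the workflow code):
+
+```
+# variables expand inside double quotes but stay literal inside single quotes
+greeting="Hello"
+echo "double-quoted: ${greeting}-output.txt"
+echo 'single-quoted: ${greeting}-output.txt'
+```
+
+The first `echo` prints the filename with `Hello` substituted in; the second prints the literal text `${greeting}` unchanged.
+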
+### Takeaway
+
+You know how to add a second step that takes the output of the first as input.
+
+### What's next?
+
+Learn how to make the workflow run on a list of input values.
+
+---
+
+## 9. Modify the workflow to run on a list of inputs
+
+Workflows typically run on batches of inputs that we want to process in bulk. Here we upgrade the workflow to accept a list of inputs. For simplicity, we go back to hardcoding the greetings instead of using a parameter for the input.
+
+#### 1. Modify the channel to be a list of greetings (hardcoded for now)
+
+_Before:_
+
+```
+    // create a channel for inputs
+    greeting_ch = Channel.of(params.greeting)
+```
+
+_After:_
+
+```
+    // create a channel for inputs
+    greeting_ch = Channel.of('Hello','Bonjour','Holà')
+```
+
+#### 2. Modify the first process to generate dynamic filenames so the final filenames will be unique
+
+_Before:_
+
+```
+process sayHello {
+    input:
+        val greeting
+
+    output:
+        path params.output_file
+
+    """
+    echo '$greeting' > $params.output_file
+    """
+}
+```
+
+_After:_
+
+```
+process sayHello {
+    input:
+        val greeting
+
+    output:
+        path "${greeting}-${params.output_file}"
+
+    """
+    echo '$greeting' > '$greeting-$params.output_file'
+    """
+}
+```
+
+!!! note
+
+    In practice, naming files based on the data input itself is almost always impractical; the better way to generate dynamic filenames is to use a samplesheet and create a map of metadata (aka metamap) from which we can grab an appropriate identifier to generate the filenames. We'll show how to do that later in this training.
+
+#### 3. Run the command and look at the log output
+
+```
+nextflow run hello-world.nf
+```
+
+How many log lines do you expect to see in the terminal? And how many do you actually see?
+
+```
+N E X T F L O W  ~  version 23.10.1
+Launching `hello-world.nf` [cranky_hypatia] DSL2 - revision: 719dae218c
+executor >  local (6)
+[6c/91aa50] process > sayHello (3)       [100%] 3 of 3 ✔
+[90/80111c] process > convertToUpper (3) [100%] 3 of 3 ✔
+```
+
+Something's wrong! The log lines seem to indicate each process was executed three times (corresponding to the three input elements we provided) but we're only seeing two work directories instead of six.
+
+This is because, by default, the ANSI logging system condenses the log output from multiple calls to the same process into a single, continuously updated line. Fortunately, we can disable that behavior.
+
+#### 4. Run the command again with the `-ansi-log false` option
+
+```
+nextflow run hello-world.nf -ansi-log false
+```
+
+This time it works fine: we see six work directories listed in the terminal:
+
+```
+N E X T F L O W  ~  version 23.10.1
+Launching `hello-world.nf` [disturbed_panini] DSL2 - revision: 719dae218c
+[8c/77b534] Submitted process > sayHello (1)
+[b5/f0bf7e] Submitted process > sayHello (2)
+[a8/457f9b] Submitted process > sayHello (3)
+[3d/1bb4e6] Submitted process > convertToUpper (2)
+[fa/58fbb1] Submitted process > convertToUpper (1)
+[90/e88919] Submitted process > convertToUpper (3)
+```
+
+That's much better; at least for this number of processes. For a complex pipeline, or a large list of inputs, having the full list output to the terminal might get a bit overwhelming.
+
+!!! tip
+
+    Another way to show that all six calls are happening is to delete all the work directories before you run again. Then you'll see the six new ones pop up.
+
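+If you want to try that, the cleanup is a one-liner (assuming you run it from the `hello-nextflow` folder, where Nextflow creates its `work` directory):
+
+```
+rm -rf work
+```
+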
+### Takeaway
+
+You know how to feed multiple inputs through a value channel.
+
+### What's next?
+
+Learn how to make the workflow take a file that contains the list of input values.
+
+---
+
+## 10. Modify the workflow to run on a file that contains a list of input values
+
+In most cases, when we run on multiple inputs, the input values are contained in a file. Here we're going to use a file where each value is on a new line.
+
+#### 1. Modify the channel declaration to take an input file (through a parameter) instead of hardcoded values
+
+_Before:_
+
+```
+    // create a channel for inputs
+    greeting_ch = Channel.of('Hello','Bonjour','Holà')
+```
+
+_After:_
+
+```
+    // create a channel for inputs from a file
+    greeting_ch = Channel.fromPath(params.input_file).splitText() { it.trim() }
+```
+
+#### 2. Run the workflow with the `-ansi-log false` option and an `--input_file` parameter
+
+```
+nextflow run hello-world.nf -ansi-log false --input_file greetings.txt
+```
+
+Once again we see each process get executed three times:
+
+```
+N E X T F L O W  ~  version 23.10.1
+Launching `hello-world.nf` [small_albattani] DSL2 - revision: 5cea973c3c
+[45/18d159] Submitted process > sayHello (1)
+[cf/094ea1] Submitted process > sayHello (3)
+[27/e3ea5b] Submitted process > sayHello (2)
+[7d/63672f] Submitted process > convertToUpper (1)
+[62/3184ed] Submitted process > convertToUpper (2)
+[02/f0ff38] Submitted process > convertToUpper (3)
+```
+
+Looking at the outputs, we see each greeting was correctly extracted and processed through the workflow. We've achieved the same result as the previous step, but now we have a lot more flexibility to add more elements to the list of greetings we want to process.
+
+!!! tip
+
+    Nextflow offers a variety of predefined operators and functions for reading data in from common file formats and applying text transformations to it. In this example, we used the `fromPath()` channel factory with the `splitText()` operator to read each line as a separate value, then we used a closure to apply the `trim()` function to strip the newline (`\n`) character from each element.
+
+!!! tip
+
+    But don't worry if this feels like a lot to grapple with all of a sudden! This is just meant to be a little peek at the kind of things you will learn in later training modules.
+
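+The line-splitting and trimming described above can be sketched in plain shell (illustrative only; `greetings-demo.txt` is a throwaway stand-in created by the snippet):
+
+```
+# read a file line by line, stripping the trailing newline from each value,
+# similar in spirit to Channel.fromPath(...).splitText() { it.trim() }
+printf 'Hello\nBonjour\nHola\n' > greetings-demo.txt
+while IFS= read -r greeting; do
+    echo "greeting: ${greeting}"
+done < greetings-demo.txt
+```
+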
+### Takeaway
+
+You know how to provide inputs in a file.
+
+### What's next?
+
+Celebrate your success and take a break! Then, move on to Part 2 of this training to learn how to apply what you've learned to an actual data analysis use case.
diff --git a/docs/hello_nextflow/03_hello_gatk.md b/docs/hello_nextflow/03_hello_gatk.md
new file mode 100644
index 000000000..c293da026
--- /dev/null
+++ b/docs/hello_nextflow/03_hello_gatk.md
@@ -0,0 +1,787 @@
+# Part 2: Hello GATK
+
+The [GATK](https://gatk.broadinstitute.org/) (Genome Analysis Toolkit) is a widely used software package developed by the Broad Institute to analyze high-throughput sequencing data. We're going to use GATK and a related tool, [Samtools](https://www.htslib.org/), in a very basic pipeline that identifies genomic variants through a method called **variant calling**.
+
+![GATK pipeline](img/gatk-pipeline.png)
+
+!!! note
+
+    Don't worry if you're not familiar with GATK or genomics in general. We'll summarize the necessary concepts as we go, and the workflow implementation principles we demonstrate here apply broadly to any command line tool that takes in some input files and produces some output files.
+
+A full variant calling pipeline typically involves a lot of steps. For simplicity, we are only going to look at the core variant calling steps.
+
+### Method overview
+
+1. Generate an index file for each BAM input file using Samtools
+2. Run the GATK HaplotypeCaller on each BAM input file to generate per-sample variant calls in GVCF (Genomic Variant Call Format)
+
+![Variant calling](img/haplotype-caller.png)
+
+### Dataset
+
+-   **A reference genome** consisting of human chromosome 20 (from hg19/b37) and its accessory files (index and sequence dictionary). The reference files are compressed to keep the Gitpod workspace small, so we'll need to decompress them in order to use them.
+-   **Three whole genome sequencing samples** corresponding to a family trio (mother, father and son), which have been subset to a small portion on chromosome 20 to keep the file sizes small. The sequencing data is in [BAM](https://samtools.github.io/hts-specs/SAMv1.pdf) (Binary Alignment Map) format, i.e. genome sequencing reads that have already been mapped to the reference genome.
+-   **A list of genomic intervals**, i.e. coordinates on the genome where our samples have data suitable for calling variants.
+
+---
+
+## 0. Warmup: Run Samtools and GATK directly
+
+Just like in the Hello World example, we want to try out the commands manually before we attempt to wrap them in a workflow. The difference here is that we're going to use Docker containers to obtain and run the tools.
+
+### 0.1. Index a BAM input file with Samtools
+
+#### 0.1.1. Pull the samtools container
+
+```
+docker pull quay.io/biocontainers/samtools:1.19.2--h50ea8bc_1
+```
+
+#### 0.1.2. Spin up the container interactively
+
+```
+docker run -it -v ./data:/data quay.io/biocontainers/samtools:1.19.2--h50ea8bc_1
+```
+
+#### 0.1.3. Run the indexing command
+
+```
+samtools index data/bam/reads_mother.bam
+```
+
+#### 0.1.4. Check that the BAM index has been produced
+
+```
+ls data/bam/
+```
+
+This should show:
+
+```
+reads_father.bam      reads_mother.bam      reads_mother.bam.bai  reads_son.bam
+```
+
+Where `reads_mother.bam.bai` has been created as an index to `reads_mother.bam`.
+
+#### 0.1.5. Exit the container
+
+```
+exit
+```
+
+### 0.2. Call variants with GATK HaplotypeCaller
+
+#### 0.2.1. Decompress the reference genome files
+
+```
+tar -zxvf data/ref.tar.gz -C data/
+```
+
+#### 0.2.2. Pull the GATK container
+
+```
+docker pull broadinstitute/gatk:4.5.0.0
+```
+
+#### 0.2.3. Spin up the container interactively
+
+```
+docker run -it -v ./data:/data broadinstitute/gatk:4.5.0.0
+```
+
+#### 0.2.4. Run the variant calling command
+
+```
+gatk HaplotypeCaller \
+        -R /data/ref/ref.fasta \
+        -I /data/bam/reads_mother.bam \
+        -O reads_mother.g.vcf \
+        -L /data/intervals.list \
+        -ERC GVCF
+```
+
+#### 0.2.5. Check the contents of the output file
+
+```
+cat reads_mother.g.vcf
+```
+
+---
+
+## 1. Write a single-stage workflow that runs Samtools index on a BAM file
+
+#### 1.1. Define the indexing process
+
+```
+/*
+ * Generate BAM index file
+ */
+process SAMTOOLS_INDEX {
+
+    container 'quay.io/biocontainers/samtools:1.19.2--h50ea8bc_1'
+
+    input:
+        path input_bam
+
+    output:
+        path "${input_bam}.bai"
+
+    """
+    samtools index '$input_bam'
+    """
+}
+```
+
+#### 1.2. Add parameter declarations up top
+
+```
+/*
+ * Pipeline parameters
+ */
+
+// Execution environment setup
+params.baseDir = "/workspace/gitpod/hello-nextflow"
+baseDir = params.baseDir
+
+// Primary input
+params.reads_bam = "${baseDir}/data/bam/reads_mother.bam"
+```
+
+#### 1.3. Add workflow block to run SAMTOOLS_INDEX
+
+```
+workflow {
+
+    // Create input channel (single file via CLI parameter)
+    reads_ch = Channel.fromPath(params.reads_bam)
+
+    // Create index file for input BAM file
+    SAMTOOLS_INDEX(reads_ch)
+}
+```
+
+#### 1.4. Run it to verify you can run the indexing step
+
+```
+nextflow run hello-gatk.nf
+```
+
+Should produce something like:
+
+```
+N E X T F L O W  ~  version 23.10.1
+Launching `hello-gatk.nf` [compassionate_cray] DSL2 - revision: 9b97744397
+executor >  local (1)
+[bf/072bd7] process > SAMTOOLS_INDEX (1) [100%] 1 of 1 ✔
+```
+
+### Takeaway
+
+You know how to wrap a real bioinformatics tool in a single-step Nextflow workflow.
+
+### What's next?
+
+Add a second step that consumes the output of the first.
+
+---
+
+## 2. Add a second step that runs GATK HaplotypeCaller on the indexed BAM file
+
+#### 2.1. Define the variant calling process
+
+```
+/*
+ * Call variants with GATK HaplotypeCaller in GVCF mode
+ */
+process GATK_HAPLOTYPECALLER {
+
+    container "broadinstitute/gatk:4.5.0.0"
+
+    input:
+        path input_bam
+        path input_bam_index
+        path ref_fasta
+        path ref_index
+        path ref_dict
+        path interval_list
+
+    output:
+        path "${input_bam}.g.vcf"
+        path "${input_bam}.g.vcf.idx"
+
+    """
+    gatk HaplotypeCaller \
+        -R ${ref_fasta} \
+        -I ${input_bam} \
+        -O ${input_bam}.g.vcf \
+        -L ${interval_list} \
+        -ERC GVCF
+    """
+}
+```
+
+#### 2.2. Add accessory inputs up top
+
+```
+// Accessory files
+params.genome_reference = "${baseDir}/data/ref/ref.fasta"
+params.genome_reference_index = "${baseDir}/data/ref/ref.fasta.fai"
+params.genome_reference_dict = "${baseDir}/data/ref/ref.dict"
+params.calling_intervals = "${baseDir}/data/intervals.list"
+```
+
+#### 2.3. Add a call to the workflow block to run GATK_HAPLOTYPECALLER
+
+```
+    // Call variants from the indexed BAM file
+    GATK_HAPLOTYPECALLER(
+        reads_ch,
+        SAMTOOLS_INDEX.out,
+        params.genome_reference,
+        params.genome_reference_index,
+        params.genome_reference_dict,
+        params.calling_intervals
+    )
+```
+
+#### 2.4. Run the workflow to verify that the variant calling step works
+
+```
+nextflow run hello-gatk.nf
+```
+
+Now we see the two processes being run:
+
+```
+N E X T F L O W  ~  version 23.10.1
+Launching `hello-gatk.nf` [lethal_keller] DSL2 - revision: 30a64b9325
+executor >  local (2)
+[97/0f85bf] process > SAMTOOLS_INDEX (1)       [100%] 1 of 1 ✔
+[2d/43c247] process > GATK_HAPLOTYPECALLER (1) [100%] 1 of 1 ✔
+```
+
+If you check the work directory, you'll find the output file `reads_mother.bam.g.vcf`. Because this is a small test file, you can click on it to open it and view the contents, which consist of 92 lines of header metadata followed by a list of genomic variant calls, one per line.
+
+!!! note
+
+    A GVCF is a special kind of VCF that contains non-variant records as well as variant calls. The first actual variant call in this file occurs at line 325:
+
+    ```
+    20	10040772	.	C	CT,<NON_REF>	473.03	.	DP=22;ExcessHet=0.0000;MLEAC=2,0;MLEAF=1.00,0.00;RAW_MQandDP=79200,22	GT:AD:DP:GQ:PL:SB	1/1:0,17,0:17:51:487,51,0,488,51,488:0,0,7,10
+    ```
+
+### Takeaway
+
+You know how to make a very basic two-step variant calling workflow.
+
+### What's next?
+
+Make the workflow handle multiple samples in bulk.
+
+---
+
+## 3. Adapt the workflow to run on a batch of samples
+
+#### 3.1. Turn the input param declaration into a list of the three samples
+
+```
+// Primary input
+params.reads_bam = ["${baseDir}/data/bam/reads_mother.bam",
+                    "${baseDir}/data/bam/reads_father.bam",
+                    "${baseDir}/data/bam/reads_son.bam"]
+```
+
+#### 3.2. Run the workflow to verify that it runs on all three samples
+
+```
+nextflow run hello-gatk.nf
+```
+
+Uh-oh! Sometimes it works, but often some of the runs fail with an error like this:
+
+> executor > local (6)
+> [f3/80670d] process > SAMTOOLS_INDEX (1) [100%] 3 of 3 ✔
+> [27/78b83d] process > GATK_HAPLOTYPECALLER (3) [100%] 1 of 1, failed: 1
+> ERROR ~ Error executing process > 'GATK_HAPLOTYPECALLER (1)'
+>
+> Caused by:
+> Process `GATK_HAPLOTYPECALLER (1)` terminated with an error exit status (2)
+>
+> Command executed:
+>
+> gatk HaplotypeCaller -R ref.fasta -I reads_mother.bam -O reads_mother.bam.g.vcf -L intervals-min.list -ERC GVCF
+>
+> Command exit status:
+> 2
+>
+> Command output:
+> (empty)
+>
+> Command error:
+> 04:52:05.954 INFO HaplotypeCaller - Java runtime: OpenJDK 64-Bit Server VM v17.0.9+9-Ubuntu-122.04
+> 04:52:05.955 INFO HaplotypeCaller - Start Date/Time: March 15, 2024 at 4:52:05 AM GMT
+> 04:52:05.955 INFO HaplotypeCaller - ------------------------------------------------------------
+> 04:52:05.955 INFO HaplotypeCaller - ------------------------------------------------------------
+> 04:52:05.956 INFO HaplotypeCaller - HTSJDK Version: 4.1.0
+> 04:52:05.956 INFO HaplotypeCaller - Picard Version: 3.1.1
+> 04:52:05.956 INFO HaplotypeCaller - Built for Spark Version: 3.5.0
+> 04:52:05.957 INFO HaplotypeCaller - HTSJDK Defaults.COMPRESSION_LEVEL : 2
+> 04:52:05.957 INFO HaplotypeCaller - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false
+> 04:52:05.957 INFO HaplotypeCaller - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true
+> 04:52:05.957 INFO HaplotypeCaller - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false
+> 04:52:05.958 INFO HaplotypeCaller - Deflater: IntelDeflater
+> 04:52:05.958 INFO HaplotypeCaller - Inflater: IntelInflater
+> 04:52:05.958 INFO HaplotypeCaller - GCS max retries/reopens: 20
+> 04:52:05.958 INFO HaplotypeCaller - Requester pays: disabled
+> 04:52:05.959 INFO HaplotypeCaller - Initializing engine
+> 04:52:06.563 INFO IntervalArgumentCollection - Processing 20000 bp from intervals
+> 04:52:06.572 INFO HaplotypeCaller - Done initializing engine
+> 04:52:06.575 INFO HaplotypeCallerEngine - Tool is in reference confidence mode and the annotation, the following changes will be made to any specified annotations: 'StrandBiasBySample' will be enabled. 'ChromosomeCounts', 'FisherStrand', 'StrandOddsRatio' and 'QualByDepth' annotations have been disabled
+> 04:52:06.653 INFO NativeLibraryLoader - Loading libgkl_utils.so from jar:file:/gatk/gatk-package-4.5.0.0-local.jar!/com/intel/gkl/native/libgkl_utils.so
+> 04:52:06.656 INFO NativeLibraryLoader - Loading libgkl_smithwaterman.so from jar:file:/gatk/gatk-package-4.5.0.0-local.jar!/com/intel/gkl/native/libgkl_smithwaterman.so
+> 04:52:06.657 INFO SmithWatermanAligner - Using AVX accelerated SmithWaterman implementation
+> 04:52:06.662 INFO HaplotypeCallerEngine - Standard Emitting and Calling confidence set to -0.0 for reference-model confidence output
+> 04:52:06.663 INFO HaplotypeCallerEngine - All sites annotated with PLs forced to true for reference-model confidence output
+> 04:52:06.676 INFO NativeLibraryLoader - Loading libgkl_pairhmm_omp.so from jar:file:/gatk/gatk-package-4.5.0.0-local.jar!/com/intel/gkl/native/libgkl_pairhmm_omp.so
+> 04:52:06.756 INFO IntelPairHmm - Flush-to-zero (FTZ) is enabled when running PairHMM
+> 04:52:06.757 INFO IntelPairHmm - Available threads: 16
+> 04:52:06.757 INFO IntelPairHmm - Requested threads: 4
+> 04:52:06.757 INFO PairHMM - Using the OpenMP multi-threaded AVX-accelerated native PairHMM implementation
+> 04:52:06.954 INFO ProgressMeter - Starting traversal
+> 04:52:06.955 INFO ProgressMeter - Current Locus Elapsed Minutes Regions Processed Regions/Minute
+> 04:52:06.967 INFO VectorLoglessPairHMM - Time spent in setup for JNI call : 0.0
+> 04:52:06.968 INFO PairHMM - Total compute time in PairHMM computeLogLikelihoods() : 0.0
+> 04:52:06.969 INFO SmithWatermanAligner - Total compute time in native Smith-Waterman : 0.00 sec
+> 04:52:06.971 INFO HaplotypeCaller - Shutting down engine
+> [March 15, 2024 at 4:52:06 AM GMT] org.broadinstitute.hellbender.tools.walkers.haplotypecaller.HaplotypeCaller done. Elapsed time: 0.03 minutes.
+> Runtime.totalMemory()=629145600
+>
+> ---
+>
+> A USER ERROR has occurred: Traversal by intervals was requested but some input files are not indexed.
+> Please index all input files:
+>
+> samtools index reads_mother.bam
+>
+> ---
+>
+> Set the system property GATK_STACKTRACE_ON_USER_EXCEPTION (--java-options '-DGATK_STACKTRACE_ON_USER_EXCEPTION=true') to print the stack trace.
+> Using GATK jar /gatk/gatk-package-4.5.0.0-local.jar
+> Running:
+> java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=2 -jar /gatk/gatk-package-4.5.0.0-local.jar HaplotypeCaller -R ref.fasta -I reads_mother.bam -O reads_mother.bam.g.vcf -L intervals-min.list -ERC GVCF
+>
+> Work dir:
+> /workspace/gitpod/nf-training/work/22/611b8c5703daaf459188d79cd68db0
+>
+> Tip: you can try to figure out what's wrong by changing to the process work dir and showing the script file named `.command.sh`
+>
+> -- Check '.nextflow.log' file for details
+
+**Why does this happen?** The outputs of `SAMTOOLS_INDEX` are emitted in whatever order the indexing jobs finish, which is not necessarily the order in which `reads_ch` emits the BAM files. As soon as several samples are in flight, a BAM file can therefore get paired with another sample's index, so the script as written is not safe for running on multiple samples.
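+
+To see the mismatch for yourself, you can temporarily add `view()` calls to the workflow block (a debugging aid only, not part of the final script) and compare the order in which the two channels emit their elements:
+
+```
+    // Debugging aid: print what each channel emits, in arrival order
+    reads_ch.view { bam -> "reads: $bam" }
+    SAMTOOLS_INDEX.out.view { bai -> "index: $bai" }
+```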
+
+#### 3.3. Change the output of the SAMTOOLS_INDEX process into a tuple that keeps the input file and its index together
+
+_Before:_
+
+```
+    output:
+        path "${input_bam}.bai"
+```
+
+_After:_
+
+```
+    output:
+        tuple path(input_bam), path("${input_bam}.bai")
+```
+
+#### 3.4. Change the input to the GATK_HAPLOTYPECALLER process to be a tuple
+
+_Before:_
+
+```
+    input:
+        path input_bam
+        path input_bam_index
+```
+
+_After:_
+
+```
+    input:
+        tuple path(input_bam), path(input_bam_index)
+```
+
+#### 3.5. Update the call to GATK_HAPLOTYPECALLER in the workflow block
+
+_Before:_
+
+```
+    GATK_HAPLOTYPECALLER(
+        reads_ch,
+        SAMTOOLS_INDEX.out,
+```
+
+_After:_
+
+```
+    GATK_HAPLOTYPECALLER(
+        SAMTOOLS_INDEX.out,
+```
+
+#### 3.6. Run the workflow to verify it works correctly on all three samples now
+
+```
+nextflow run hello-gatk.nf -ansi-log false
+```
+
+This time everything should run correctly:
+
+```
+N E X T F L O W  ~  version 23.10.1
+Launching `hello-gatk.nf` [adoring_hopper] DSL2 - revision: 8cad21ea51
+[e0/bbd6ef] Submitted process > SAMTOOLS_INDEX (3)
+[71/d26b2c] Submitted process > SAMTOOLS_INDEX (2)
+[e6/6cad6d] Submitted process > SAMTOOLS_INDEX (1)
+[26/73dac1] Submitted process > GATK_HAPLOTYPECALLER (1)
+[23/12ed10] Submitted process > GATK_HAPLOTYPECALLER (2)
+[be/c4a067] Submitted process > GATK_HAPLOTYPECALLER (3)
+```
+
+### Takeaway
+
+You know how to make a variant calling workflow run on multiple samples (independently).
+
+### What's next?
+
+Make it easier to handle samples in bulk.
+
+---
+
+## 4. Make it nicer to run on arbitrary samples by using a list of files as input
+
+#### 4.1. Create a text file listing the input paths
+
+_sample_bams.txt:_
+
+```
+/workspace/gitpod/hello-nextflow/data/bam/reads_mother.bam
+/workspace/gitpod/hello-nextflow/data/bam/reads_father.bam
+/workspace/gitpod/hello-nextflow/data/bam/reads_son.bam
+```
+
+#### 4.2. Update the parameter default
+
+_Before:_
+
+```
+// Primary input
+params.reads_bam = ["${baseDir}/data/bam/reads_mother.bam",
+                    "${baseDir}/data/bam/reads_father.bam",
+                    "${baseDir}/data/bam/reads_son.bam"]
+```
+
+_After:_
+
+```
+// Primary input (list of input files, one per line)
+params.reads_bam = "${baseDir}/data/sample_bams.txt"
+```
+
+#### 4.3. Update the channel factory to read lines from a file
+
+_Before:_
+
+```
+    // Create input channel
+    reads_ch = Channel.from(params.reads_bam)
+```
+
+_After:_
+
+```
+    // Create input channel from list of input files in plain text
+    reads_ch = Channel.fromPath(params.reads_bam).splitText()
+```
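+
+Note that `splitText()` emits each line with its trailing newline still attached. The training setup tolerates this, but if you ever hit file-staging errors caused by the stray newline, a simple safeguard is to trim each line:
+
+```
+    // Optional: strip the newline that splitText() leaves on each line
+    reads_ch = Channel.fromPath(params.reads_bam)
+                      .splitText()
+                      .map { line -> line.trim() }
+```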
+
+#### 4.4. Run the workflow to verify that it works correctly
+
+```
+nextflow run hello-gatk.nf -ansi-log false
+```
+
+This should produce essentially the same result as before:
+
+```
+N E X T F L O W  ~  version 23.10.1
+Launching `hello-gatk.nf` [kickass_faggin] DSL2 - revision: dcfa9f34e3
+[ff/0c08e6] Submitted process > SAMTOOLS_INDEX (2)
+[75/bcae76] Submitted process > SAMTOOLS_INDEX (1)
+[df/75d25a] Submitted process > SAMTOOLS_INDEX (3)
+[00/295d75] Submitted process > GATK_HAPLOTYPECALLER (1)
+[06/89c1d1] Submitted process > GATK_HAPLOTYPECALLER (2)
+[58/866482] Submitted process > GATK_HAPLOTYPECALLER (3)
+```
+
+### Takeaway
+
+You know how to make a variant calling workflow handle a list of input samples.
+
+### What's next?
+
+Turn the list of input files into a samplesheet by including some metadata.
+
+---
+
+## 5. Upgrade to using a (primitive) samplesheet
+
+This is a very common pattern in Nextflow pipelines.
+
+#### 5.1. Add a header line and the sample IDs to a copy of the sample list, in CSV format
+
+_samplesheet.csv:_
+
+```
+id,reads_bam
+NA12878,/workspace/gitpod/hello-nextflow/data/bam/reads_mother.bam
+NA12877,/workspace/gitpod/hello-nextflow/data/bam/reads_father.bam
+NA12882,/workspace/gitpod/hello-nextflow/data/bam/reads_son.bam
+```
+
+#### 5.2. Update the parameter default
+
+_Before:_
+
+```
+// Primary input (list of input files, one sample per line)
+params.reads_bam = "${baseDir}/data/sample_bams.txt"
+```
+
+_After:_
+
+```
+// Primary input (samplesheet in CSV format with ID and file path, one sample per line)
+params.reads_bam = "${baseDir}/data/samplesheet.csv"
+```
+
+#### 5.3. Update the channel factory to parse a CSV file
+
+_Before:_
+
+```
+    // Create input channel from list of input files in plain text
+    reads_ch = Channel.fromPath(params.reads_bam).splitText()
+```
+
+_After:_
+
+```
+    // Create input channel from samplesheet in CSV format
+    reads_ch = Channel.fromPath(params.reads_bam)
+                        .splitCsv(header: true)
+                        .map{row -> [row.id, file(row.reads_bam)]}
+```
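+
+To check what the mapped channel contains, you can temporarily append a `view()` call; each element should be a two-element tuple of sample ID and file object, e.g. `[NA12878, .../reads_mother.bam]`:
+
+```
+    // Temporary sanity check: print each [id, bam] tuple
+    reads_ch.view()
+```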
+
+#### 5.4. Add the sample ID to the SAMTOOLS_INDEX input definition
+
+_Before:_
+
+```
+    input:
+        path input_bam
+```
+
+_After:_
+
+```
+    input:
+        tuple val(id), path(input_bam)
+```
+
+#### 5.5. Run the workflow to verify that it works
+
+```
+nextflow run hello-gatk.nf -ansi-log false
+```
+
+If everything is wired up correctly, it should produce essentially the same result.
+
+```
+N E X T F L O W  ~  version 23.10.1
+Launching `hello-gatk.nf` [extravagant_panini] DSL2 - revision: 56accbf948
+[19/00f4a5] Submitted process > SAMTOOLS_INDEX (3)
+[4d/532d60] Submitted process > SAMTOOLS_INDEX (1)
+[08/5628d6] Submitted process > SAMTOOLS_INDEX (2)
+[18/21a0ae] Submitted process > GATK_HAPLOTYPECALLER (1)
+[f0/4e8155] Submitted process > GATK_HAPLOTYPECALLER (2)
+[d5/73e1c4] Submitted process > GATK_HAPLOTYPECALLER (3)
+```
+
+### Takeaway
+
+You know how to make a variant calling workflow handle a basic samplesheet.
+
+### What's next?
+
+Add a joint genotyping step that combines the data from all the samples.
+
+---
+
+## 6. Stretch goal: Add joint genotyping step
+
+To complicate matters a little, the GATK variant calling method calls for a consolidation step where we combine and re-analyze the variant calls obtained per sample in order to obtain definitive 'joint' variant calls for a group or _cohort_ of samples (in this case, the family trio).
+
+![Joint analysis](img/joint-calling.png)
+
+This involves using a GATK tool called GenomicsDBImport that combines the per-sample calls into a sort of mini-database, followed by another GATK tool, GenotypeGVCFs, which performs the actual 'joint genotyping' analysis. These two tools can be run in series within the same process.
+
+One slight complication is that these tools require the use of a sample map that lists per-sample GVCF files, which is different enough from a samplesheet that we need to generate it separately. And for that, we need to pass the sample ID between processes.
+
+!!! tip
+
+    For a more sophisticated and efficient method of metadata propagation, see the topic of [meta maps](https://training.nextflow.io/advanced/metadata/).
+
+#### 6.2. Add the sample ID to the tuple emitted by SAMTOOLS_INDEX
+
+_Before:_
+
+```
+    output:
+        tuple path(input_bam), path("${input_bam}.bai")
+```
+
+_After:_
+
+```
+    output:
+        tuple val(id), path(input_bam), path("${input_bam}.bai")
+```
+
+#### 6.3. Add the sample ID to the GATK_HAPLOTYPECALLER process input and output definitions
+
+_Before:_
+
+```
+    input:
+        tuple path(input_bam), path(input_bam_index)
+        ...
+
+    output:
+        path "${input_bam}.g.vcf"
+        path "${input_bam}.g.vcf.idx"
+```
+
+_After:_
+
+```
+    input:
+        tuple val(id), path(input_bam), path(input_bam_index)
+        ...
+
+    output:
+        tuple val(id), path("${input_bam}.g.vcf"), path("${input_bam}.g.vcf.idx")
+```
+
+#### 6.4. Generate a sample map based on the output of GATK_HAPLOTYPECALLER
+
+```
+    // Create a sample map of the output GVCFs
+    sample_map = GATK_HAPLOTYPECALLER.out.collectFile(){ id, gvcf, idx ->
+            ["${params.cohort_name}_map.tsv", "${id}\t${gvcf}\t${idx}\n"]
+    }
+```
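+
+Here `collectFile` takes the `[file name, line]` pair returned by the closure for each element and appends lines that share a file name into a single file. Assuming the default cohort name `family_trio` set further down, the resulting `family_trio_map.tsv` would look something like this (work-directory paths shortened for illustration):
+
+```
+NA12878	/workspace/.../reads_mother.bam.g.vcf	/workspace/.../reads_mother.bam.g.vcf.idx
+NA12877	/workspace/.../reads_father.bam.g.vcf	/workspace/.../reads_father.bam.g.vcf.idx
+NA12882	/workspace/.../reads_son.bam.g.vcf	/workspace/.../reads_son.bam.g.vcf.idx
+```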
+
+#### 6.5. Write a process that wraps GenomicsDBImport and GenotypeGVCFs called GATK_JOINTGENOTYPING
+
+```
+/*
+ * Consolidate GVCFs and apply joint genotyping analysis
+ */
+process GATK_JOINTGENOTYPING {
+
+    container "broadinstitute/gatk:4.5.0.0"
+
+    input:
+        path sample_map
+        val cohort_name
+        path ref_fasta
+        path ref_index
+        path ref_dict
+        path interval_list
+
+    output:
+        path "${cohort_name}.joint.vcf"
+        path "${cohort_name}.joint.vcf.idx"
+
+    """
+    gatk GenomicsDBImport \
+        --sample-name-map ${sample_map} \
+        --genomicsdb-workspace-path ${cohort_name}_gdb \
+        -L ${interval_list}
+
+    gatk GenotypeGVCFs \
+        -R ${ref_fasta} \
+        -V gendb://${cohort_name}_gdb \
+        -O ${cohort_name}.joint.vcf \
+        -L ${interval_list}
+    """
+}
+```
+
+#### 6.6. Add call to workflow block to run GATK_JOINTGENOTYPING
+
+```
+    // Consolidate GVCFs and apply joint genotyping analysis
+    GATK_JOINTGENOTYPING(
+        sample_map,
+        params.cohort_name,
+        params.genome_reference,
+        params.genome_reference_index,
+        params.genome_reference_dict,
+        params.calling_intervals
+    )
+```
+
+#### 6.7. Add default value for the cohort name parameter up top
+
+```
+// Base name for final output file
+params.cohort_name = "family_trio"
+```
+
+#### 6.8. Run the workflow to verify that it generates the final VCF output as expected
+
+```
+nextflow run hello-gatk.nf
+```
+
+Now we see the additional process show up in the log output (showing the compact view):
+
+```
+N E X T F L O W  ~  version 23.10.1
+Launching `hello-gatk.nf` [nauseous_thompson] DSL2 - revision: b346a53aae
+executor >  local (7)
+[d1/43979a] process > SAMTOOLS_INDEX (2)       [100%] 3 of 3 ✔
+[20/247592] process > GATK_HAPLOTYPECALLER (3) [100%] 3 of 3 ✔
+[14/7145b6] process > GATK_JOINTGENOTYPING (1) [100%] 1 of 1 ✔
+```
+
+You can find the final output file, `family_trio.joint.vcf`, in the work directory for the last process. Click on it to open it and you'll see 40 lines of metadata header followed by just under 30 jointly genotyped variant records (meaning at least one of the family members has a variant genotype at each genomic position listed).
+
+!!! tip
+
+    Keep in mind the data files covered only a tiny portion of chromosome 20; the real size of a variant callset would be counted in millions of variants. That's why we use only tiny subsets of data for training purposes!
+
+### Takeaway
+
+You know how to make a joint variant calling workflow that outputs a cohort VCF.
+
+### What's next?
+
+Celebrate your success and take an extra long break! This was tough and you deserve it.
+
+In future trainings, you'll learn more sophisticated methods for managing inputs and outputs, including using the `publishDir` directive to save the outputs you care about to a storage directory.
+
+Good luck!
diff --git a/docs/hello_nextflow/img/gatk-pipeline.png b/docs/hello_nextflow/img/gatk-pipeline.png
new file mode 100644
index 000000000..895196ef1
Binary files /dev/null and b/docs/hello_nextflow/img/gatk-pipeline.png differ
diff --git a/docs/hello_nextflow/img/haplotype-caller.png b/docs/hello_nextflow/img/haplotype-caller.png
new file mode 100644
index 000000000..e8eaf45a4
Binary files /dev/null and b/docs/hello_nextflow/img/haplotype-caller.png differ
diff --git a/docs/hello_nextflow/img/joint-calling.png b/docs/hello_nextflow/img/joint-calling.png
new file mode 100644
index 000000000..32d17304d
Binary files /dev/null and b/docs/hello_nextflow/img/joint-calling.png differ
diff --git a/docs/hello_nextflow/img/variants.png b/docs/hello_nextflow/img/variants.png
new file mode 100644
index 000000000..a6effe1bd
Binary files /dev/null and b/docs/hello_nextflow/img/variants.png differ
diff --git a/docs/hello_nextflow/index.md b/docs/hello_nextflow/index.md
new file mode 100644
index 000000000..5dc90e0ee
--- /dev/null
+++ b/docs/hello_nextflow/index.md
@@ -0,0 +1,36 @@
+---
+title: Introduction
+hide:
+    - toc
+---
+
+# Hello Nextflow
+
+### Audience & prerequisites
+
+-   Beginners with Nextflow
+-   Required: basic experience with command line and scripting
+-   Some bioinformatics and genomics concepts will be introduced
+
+### Learning objectives
+
+This training module aims to build basic proficiency in the following areas:
+
+-   Nextflow language:
+
+    -   practical use of core components (sufficient to build a simple multi-step workflow)
+    -   awareness of next-step concepts such as operators and channel factories
+
+-   CLI execution:
+    -   launch a Nextflow workflow locally
+    -   find outputs (results)
+    -   interpret log outputs
+    -   troubleshoot basic issues
+
+## Run it in Gitpod
+
+To make this tutorial easier to run, we prepared a Gitpod environment with everything you need already set up, accessible from your web browser or your code editor (e.g. VS Code). To start, click the button below.
+
+[![Open in GitPod](https://img.shields.io/badge/Gitpod-%20Open%20in%20Gitpod-908a85?logo=gitpod)](https://gitpod.io/#https://github.com/nextflow-io/training)
+
+From there, follow the step-by-step instructions in the following pages.
diff --git a/docs/index.md b/docs/index.md
index bb0792872..e2c6e5f78 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -45,6 +45,16 @@ We have several workshops available on this website - find the one that's right
 
 ## Applied Training
 
+!!! exercise "Hello Nextflow"
+
+    !!! quote inline end ""
+
+        :material-run-fast: This course is a short workshop that introduces you to Nextflow.
+
+    A "learn by doing" tutorial that will take you from running tools on the command line into running your first Nextflow pipelines.
+
+    [Launch the Hello Nextflow :material-arrow-right:](hello_nextflow/index.md){ .md-button }
+
 !!! exercise "RNA-seq Variant Calling Training"
 
     !!! quote inline end ""
diff --git a/hello-nextflow/data/bam/reads_father.bam b/hello-nextflow/data/bam/reads_father.bam
new file mode 100644
index 000000000..75a9d874c
Binary files /dev/null and b/hello-nextflow/data/bam/reads_father.bam differ
diff --git a/hello-nextflow/data/bam/reads_mother.bam b/hello-nextflow/data/bam/reads_mother.bam
new file mode 100644
index 000000000..14f0d5cc6
Binary files /dev/null and b/hello-nextflow/data/bam/reads_mother.bam differ
diff --git a/hello-nextflow/data/bam/reads_son.bam b/hello-nextflow/data/bam/reads_son.bam
new file mode 100644
index 000000000..4460a0c23
Binary files /dev/null and b/hello-nextflow/data/bam/reads_son.bam differ
diff --git a/hello-nextflow/data/greetings.txt b/hello-nextflow/data/greetings.txt
new file mode 100644
index 000000000..aca81a681
--- /dev/null
+++ b/hello-nextflow/data/greetings.txt
@@ -0,0 +1,3 @@
+Hello
+Bonjour
+Holà
\ No newline at end of file
diff --git a/hello-nextflow/data/intervals.list b/hello-nextflow/data/intervals.list
new file mode 100644
index 000000000..cf13e9395
--- /dev/null
+++ b/hello-nextflow/data/intervals.list
@@ -0,0 +1,4 @@
+20:10,040,001-10,045,000
+20:10,045,001-10,050,000
+20:10,050,001-10,055,000
+20:10,055,001-10,060,000
\ No newline at end of file
diff --git a/hello-nextflow/data/ref.tar.gz b/hello-nextflow/data/ref.tar.gz
new file mode 100644
index 000000000..57476241e
Binary files /dev/null and b/hello-nextflow/data/ref.tar.gz differ
diff --git a/hello-nextflow/data/sample_bams.txt b/hello-nextflow/data/sample_bams.txt
new file mode 100644
index 000000000..f9e550db6
--- /dev/null
+++ b/hello-nextflow/data/sample_bams.txt
@@ -0,0 +1,3 @@
+/workspace/gitpod/hello-nextflow/data/bam/reads_mother.bam
+/workspace/gitpod/hello-nextflow/data/bam/reads_father.bam
+/workspace/gitpod/hello-nextflow/data/bam/reads_son.bam
\ No newline at end of file
diff --git a/hello-nextflow/data/samplesheet.csv b/hello-nextflow/data/samplesheet.csv
new file mode 100644
index 000000000..f019170db
--- /dev/null
+++ b/hello-nextflow/data/samplesheet.csv
@@ -0,0 +1,4 @@
+id,reads_bam
+NA12878,/workspace/gitpod/hello-nextflow/data/bam/reads_mother.bam
+NA12877,/workspace/gitpod/hello-nextflow/data/bam/reads_father.bam
+NA12882,/workspace/gitpod/hello-nextflow/data/bam/reads_son.bam
\ No newline at end of file
diff --git a/hello-nextflow/hello-gatk.nf b/hello-nextflow/hello-gatk.nf
new file mode 100644
index 000000000..fe6b4d2a6
--- /dev/null
+++ b/hello-nextflow/hello-gatk.nf
@@ -0,0 +1,30 @@
+/*
+ * Pipeline parameters
+ */
+
+// Execution environment setup
+
+// Primary input
+
+/*
+ * Generate BAM index file
+ */
+process SAMTOOLS_INDEX {
+
+    container 
+
+    input:
+
+    output:
+
+    """
+
+    """
+}
+
+workflow {
+
+    // Create input channel
+
+    // Create index file for input BAM file
+}
\ No newline at end of file
diff --git a/hello-nextflow/hello-world.nf b/hello-nextflow/hello-world.nf
new file mode 100644
index 000000000..680ef79a0
--- /dev/null
+++ b/hello-nextflow/hello-world.nf
@@ -0,0 +1,13 @@
+workflow {
+    sayHello()
+}
+
+process sayHello {
+
+    output: 
+        stdout
+    
+    """
+    echo 'Hello World!'
+    """
+}
\ No newline at end of file
diff --git a/hello-nextflow/nextflow.config b/hello-nextflow/nextflow.config
new file mode 100644
index 000000000..d3af3eaae
--- /dev/null
+++ b/hello-nextflow/nextflow.config
@@ -0,0 +1 @@
+docker.enabled = true
diff --git a/hello-nextflow/scripts/hello-gatk-1.nf b/hello-nextflow/scripts/hello-gatk-1.nf
new file mode 100644
index 000000000..3a5397af2
--- /dev/null
+++ b/hello-nextflow/scripts/hello-gatk-1.nf
@@ -0,0 +1,38 @@
+/*
+ * Pipeline parameters
+ */
+
+// Execution environment setup
+params.baseDir = "/workspace/gitpod/hello-nextflow"
+baseDir = params.baseDir
+
+// Primary input
+params.reads_bam = "${baseDir}/data/bam/reads_mother.bam"
+
+/*
+ * Generate BAM index file
+ */
+process SAMTOOLS_INDEX {
+
+    container 'quay.io/biocontainers/samtools:1.19.2--h50ea8bc_1' 
+
+    input:
+        path input_bam
+
+    output:
+        path "${input_bam}.bai"
+
+    """
+    samtools index '$input_bam'
+
+    """
+}
+
+workflow {
+
+    // Create input channel
+    reads_ch = Channel.from(params.reads_bam)
+
+    // Create index file for input BAM file
+    SAMTOOLS_INDEX(reads_ch)
+}
\ No newline at end of file
diff --git a/hello-nextflow/scripts/hello-gatk-2.nf b/hello-nextflow/scripts/hello-gatk-2.nf
new file mode 100644
index 000000000..55f2e0b7a
--- /dev/null
+++ b/hello-nextflow/scripts/hello-gatk-2.nf
@@ -0,0 +1,83 @@
+/*
+ * Pipeline parameters
+ */
+
+// Execution environment setup
+params.baseDir = "/workspace/gitpod/hello-nextflow"
+baseDir = params.baseDir
+
+// Primary input
+params.reads_bam = "${baseDir}/data/bam/reads_mother.bam"
+
+// Accessory files
+params.genome_reference = "${baseDir}/data/ref/ref.fasta"
+params.genome_reference_index = "${baseDir}/data/ref/ref.fasta.fai"
+params.genome_reference_dict = "${baseDir}/data/ref/ref.dict"
+params.calling_intervals = "${baseDir}/data/intervals.list"
+
+/*
+ * Generate BAM index file
+ */
+process SAMTOOLS_INDEX {
+
+    container 'quay.io/biocontainers/samtools:1.19.2--h50ea8bc_1' 
+
+    input:
+        path input_bam
+
+    output:
+        path "${input_bam}.bai"
+
+    """
+    samtools index '$input_bam'
+
+    """
+}
+
+/*
+ * Call variants with GATK HaplotypeCaller in GVCF mode
+ */
+process GATK_HAPLOTYPECALLER {
+
+    container "broadinstitute/gatk:4.5.0.0"
+
+    input:
+        path input_bam
+        path input_bam_index
+        path ref_fasta
+        path ref_index
+        path ref_dict
+        path interval_list
+
+    output:
+        path "${input_bam}.g.vcf"
+        path "${input_bam}.g.vcf.idx"
+
+    """
+    gatk HaplotypeCaller \
+        -R ${ref_fasta} \
+        -I ${input_bam} \
+        -O ${input_bam}.g.vcf \
+        -L ${interval_list} \
+        -ERC GVCF
+    """
+}
+
+workflow {
+
+    // Create input channel
+    reads_ch = Channel.from(params.reads_bam)
+
+    // Create index file for input BAM file
+    SAMTOOLS_INDEX(reads_ch)
+
+    // Call variants from the indexed BAM file
+    GATK_HAPLOTYPECALLER(
+        reads_ch,
+        SAMTOOLS_INDEX.out,
+        params.genome_reference,
+        params.genome_reference_index,
+        params.genome_reference_dict,
+        params.calling_intervals
+    )
+}
\ No newline at end of file
diff --git a/hello-nextflow/scripts/hello-gatk-3.nf b/hello-nextflow/scripts/hello-gatk-3.nf
new file mode 100644
index 000000000..75a570df9
--- /dev/null
+++ b/hello-nextflow/scripts/hello-gatk-3.nf
@@ -0,0 +1,83 @@
+/*
+ * Pipeline parameters
+ */
+
+// Execution environment setup
+params.baseDir = "/workspace/gitpod/hello-nextflow"
+baseDir = params.baseDir
+
+// Primary input
+params.reads_bam = ["${baseDir}/data/bam/reads_mother.bam",
+                    "${baseDir}/data/bam/reads_father.bam",
+                    "${baseDir}/data/bam/reads_son.bam"]
+
+// Accessory files
+params.genome_reference = "${baseDir}/data/ref/ref.fasta"
+params.genome_reference_index = "${baseDir}/data/ref/ref.fasta.fai"
+params.genome_reference_dict = "${baseDir}/data/ref/ref.dict"
+params.calling_intervals = "${baseDir}/data/intervals.list"
+
+/*
+ * Generate BAM index file
+ */
+process SAMTOOLS_INDEX {
+
+    container 'quay.io/biocontainers/samtools:1.19.2--h50ea8bc_1' 
+
+    input:
+        path input_bam
+
+    output:
+        tuple path(input_bam), path("${input_bam}.bai")
+
+    """
+    samtools index '$input_bam'
+
+    """
+}
+
+/*
+ * Call variants with GATK HaplotypeCaller in GVCF mode
+ */
+process GATK_HAPLOTYPECALLER {
+
+    container "broadinstitute/gatk:4.5.0.0"
+
+    input:
+        tuple path(input_bam), path(input_bam_index)
+        path ref_fasta
+        path ref_index
+        path ref_dict
+        path interval_list
+
+    output:
+        path "${input_bam}.g.vcf"
+        path "${input_bam}.g.vcf.idx"
+
+    """
+    gatk HaplotypeCaller \
+        -R ${ref_fasta} \
+        -I ${input_bam} \
+        -O ${input_bam}.g.vcf \
+        -L ${interval_list} \
+        -ERC GVCF
+    """
+}
+
+workflow {
+
+    // Create input channel
+    reads_ch = Channel.from(params.reads_bam)
+
+    // Create index file for input BAM file
+    SAMTOOLS_INDEX(reads_ch)
+
+    // Call variants from the indexed BAM file
+    GATK_HAPLOTYPECALLER(
+        SAMTOOLS_INDEX.out,
+        params.genome_reference,
+        params.genome_reference_index,
+        params.genome_reference_dict,
+        params.calling_intervals
+    )
+}
\ No newline at end of file
diff --git a/hello-nextflow/scripts/hello-gatk-4.nf b/hello-nextflow/scripts/hello-gatk-4.nf
new file mode 100644
index 000000000..923fe9b5f
--- /dev/null
+++ b/hello-nextflow/scripts/hello-gatk-4.nf
@@ -0,0 +1,81 @@
+/*
+ * Pipeline parameters
+ */
+
+// Execution environment setup
+params.baseDir = "/workspace/gitpod/hello-nextflow"
+baseDir = params.baseDir
+
+// Primary input (list of input files, one per line)
+params.reads_bam = "${baseDir}/data/sample_bams.txt"
+
+// Accessory files
+params.genome_reference = "${baseDir}/data/ref/ref.fasta"
+params.genome_reference_index = "${baseDir}/data/ref/ref.fasta.fai"
+params.genome_reference_dict = "${baseDir}/data/ref/ref.dict"
+params.calling_intervals = "${baseDir}/data/intervals.list"
+
+/*
+ * Generate BAM index file
+ */
+process SAMTOOLS_INDEX {
+
+    container 'quay.io/biocontainers/samtools:1.19.2--h50ea8bc_1' 
+
+    input:
+        path input_bam
+
+    output:
+        tuple path(input_bam), path("${input_bam}.bai")
+
+    """
+    samtools index '$input_bam'
+
+    """
+}
+
+/*
+ * Call variants with GATK HaplotypeCaller in GVCF mode
+ */
+process GATK_HAPLOTYPECALLER {
+
+    container "broadinstitute/gatk:4.5.0.0"
+
+    input:
+        tuple path(input_bam), path(input_bam_index)
+        path ref_fasta
+        path ref_index
+        path ref_dict
+        path interval_list
+
+    output:
+        path "${input_bam}.g.vcf"
+        path "${input_bam}.g.vcf.idx"
+
+    """
+    gatk HaplotypeCaller \
+        -R ${ref_fasta} \
+        -I ${input_bam} \
+        -O ${input_bam}.g.vcf \
+        -L ${interval_list} \
+        -ERC GVCF
+    """
+}
+
+workflow {
+
+    // Create input channel from list of input files in plain text 
+    reads_ch = Channel.fromPath(params.reads_bam).splitText().map { file(it.trim()) }
+
+    // Create index file for input BAM file
+    SAMTOOLS_INDEX(reads_ch)
+
+    // Call variants from the indexed BAM file
+    GATK_HAPLOTYPECALLER(
+        SAMTOOLS_INDEX.out,
+        params.genome_reference,
+        params.genome_reference_index,
+        params.genome_reference_dict,
+        params.calling_intervals
+    )
+}
diff --git a/hello-nextflow/scripts/hello-gatk-5.nf b/hello-nextflow/scripts/hello-gatk-5.nf
new file mode 100644
index 000000000..c6c82e440
--- /dev/null
+++ b/hello-nextflow/scripts/hello-gatk-5.nf
@@ -0,0 +1,83 @@
+/*
+ * Pipeline parameters
+ */
+
+// Execution environment setup
+params.baseDir = "/workspace/gitpod/nf-training/hello-nextflow"
+def baseDir = params.baseDir
+
+// Primary input (samplesheet in CSV format with ID and file path, one sample per line)
+params.reads_bam = "${baseDir}/data/samplesheet.csv"
+
+// Accessory files
+params.genome_reference = "${baseDir}/data/ref/ref.fasta"
+params.genome_reference_index = "${baseDir}/data/ref/ref.fasta.fai"
+params.genome_reference_dict = "${baseDir}/data/ref/ref.dict"
+params.calling_intervals = "${baseDir}/data/intervals.list"
+
+/*
+ * Generate BAM index file
+ */
+process SAMTOOLS_INDEX {
+
+    container 'quay.io/biocontainers/samtools:1.19.2--h50ea8bc_1' 
+
+    input:
+        tuple val(id), path(input_bam)
+
+    output:
+        tuple path(input_bam), path("${input_bam}.bai")
+
+    """
+    samtools index '$input_bam'
+
+    """
+}
+
+/*
+ * Call variants with GATK HaplotypeCaller in GVCF mode
+ */
+process GATK_HAPLOTYPECALLER {
+
+    container "broadinstitute/gatk:4.5.0.0"
+
+    input:
+        tuple path(input_bam), path(input_bam_index)
+        path ref_fasta
+        path ref_index
+        path ref_dict
+        path interval_list
+
+    output:
+        path "${input_bam}.g.vcf"
+        path "${input_bam}.g.vcf.idx"
+
+    """
+    gatk HaplotypeCaller \
+        -R ${ref_fasta} \
+        -I ${input_bam} \
+        -O ${input_bam}.g.vcf \
+        -L ${interval_list} \
+        -ERC GVCF
+    """
+}
+
+workflow {
+
+    // Create input channel from samplesheet in CSV format
+    reads_ch = Channel.fromPath(params.reads_bam)
+                        .splitCsv(header: true)
+                        .map{row -> [row.id, file(row.reads_bam)]}
+
+    // Create index file for input BAM file
+    SAMTOOLS_INDEX(reads_ch)
+
+    // Call variants from the indexed BAM file
+    GATK_HAPLOTYPECALLER(
+        SAMTOOLS_INDEX.out,
+        params.genome_reference,
+        params.genome_reference_index,
+        params.genome_reference_dict,
+        params.calling_intervals
+    )
+}
diff --git a/hello-nextflow/scripts/hello-gatk-6.nf b/hello-nextflow/scripts/hello-gatk-6.nf
new file mode 100644
index 000000000..64681be54
--- /dev/null
+++ b/hello-nextflow/scripts/hello-gatk-6.nf
@@ -0,0 +1,133 @@
+/*
+ * Pipeline parameters
+ */
+
+// Execution environment setup
+params.baseDir = "/workspace/gitpod/nf-training/hello-nextflow"
+def baseDir = params.baseDir
+
+// Primary input
+params.reads_bam = "${baseDir}/data/samplesheet.csv"
+
+// Accessory files
+params.genome_reference = "${baseDir}/data/ref/ref.fasta"
+params.genome_reference_index = "${baseDir}/data/ref/ref.fasta.fai"
+params.genome_reference_dict = "${baseDir}/data/ref/ref.dict"
+params.calling_intervals = "${baseDir}/data/intervals.list"
+
+// Base name for final output file
+params.cohort_name = "family_trio"
+
+/*
+ * Generate BAM index file
+ */
+process SAMTOOLS_INDEX {
+
+    container 'quay.io/biocontainers/samtools:1.19.2--h50ea8bc_1' 
+
+    input:
+        tuple val(id), path(input_bam)
+
+    output:
+        tuple val(id), path(input_bam), path("${input_bam}.bai")
+
+    """
+    samtools index '$input_bam'
+
+    """
+}
+
+/*
+ * Call variants with GATK HaplotypeCaller in GVCF mode
+ */
+process GATK_HAPLOTYPECALLER {
+
+    container "broadinstitute/gatk:4.5.0.0"
+
+    input:
+        tuple val(id), path(input_bam), path(input_bam_index)
+        path ref_fasta
+        path ref_index
+        path ref_dict
+        path interval_list
+
+    output:
+        tuple val(id), path("${input_bam}.g.vcf"), path("${input_bam}.g.vcf.idx")
+
+    """
+    gatk HaplotypeCaller \
+        -R ${ref_fasta} \
+        -I ${input_bam} \
+        -O ${input_bam}.g.vcf \
+        -L ${interval_list} \
+        -ERC GVCF
+    """
+}
+
+/*
+ * Consolidate GVCFs and apply joint genotyping analysis
+ */
+process GATK_JOINTGENOTYPING {
+
+    container "broadinstitute/gatk:4.5.0.0"
+
+    input:
+        path(sample_map)
+        val(cohort_name)
+        path ref_fasta
+        path ref_index
+        path ref_dict
+        path interval_list
+
+    output:
+        path "${cohort_name}.joint.vcf"
+        path "${cohort_name}.joint.vcf.idx"
+
+    """
+    gatk GenomicsDBImport \
+        --sample-name-map ${sample_map} \
+        --genomicsdb-workspace-path ${cohort_name}_gdb \
+        -L ${interval_list}
+
+    gatk GenotypeGVCFs \
+        -R ${ref_fasta} \
+        -V gendb://${cohort_name}_gdb \
+        -O ${cohort_name}.joint.vcf \
+        -L ${interval_list}
+    """
+}
+
+workflow {
+
+    // Create input channel from samplesheet in CSV format (via CLI parameter)
+    reads_ch = Channel.fromPath(params.reads_bam)
+                        .splitCsv(header: true)
+                        .map{row -> [row.id, file(row.reads_bam)]}
+
+    // Create index file for input BAM file
+    SAMTOOLS_INDEX(reads_ch)
+
+    // Call variants from the indexed BAM file
+    GATK_HAPLOTYPECALLER(
+        SAMTOOLS_INDEX.out,
+        params.genome_reference,
+        params.genome_reference_index,
+        params.genome_reference_dict,
+        params.calling_intervals
+    )
+
+    // Create a sample map of the output GVCFs
+    sample_map = GATK_HAPLOTYPECALLER.out.collectFile { id, gvcf, idx ->
+        ["${params.cohort_name}_map.tsv", "${id}\t${gvcf}\t${idx}\n"]
+    }
+
+    // Consolidate GVCFs and apply joint genotyping analysis
+    GATK_JOINTGENOTYPING(
+        sample_map, 
+        params.cohort_name, 
+        params.genome_reference,
+        params.genome_reference_index,
+        params.genome_reference_dict,
+        params.calling_intervals
+    )
+}
diff --git a/hello-nextflow/scripts/hello-world-1.nf b/hello-nextflow/scripts/hello-world-1.nf
new file mode 100644
index 000000000..b1e03749a
--- /dev/null
+++ b/hello-nextflow/scripts/hello-world-1.nf
@@ -0,0 +1,13 @@
+process sayHello {
+
+    output: 
+        stdout
+    
+    """
+    echo 'Hello World!'
+    """
+}
+
+workflow {
+    sayHello()
+}
\ No newline at end of file
diff --git a/hello-nextflow/scripts/hello-world-10.nf b/hello-nextflow/scripts/hello-world-10.nf
new file mode 100644
index 000000000..6db03afa2
--- /dev/null
+++ b/hello-nextflow/scripts/hello-world-10.nf
@@ -0,0 +1,47 @@
+/*
+ * Pipeline parameters
+ */
+params.input_file = 'greetings.txt'
+params.output_file = 'output.txt'
+
+/*
+ * Use echo to print a greeting to a file
+ */
+process sayHello {
+    input:
+        val greeting  
+
+    output: 
+        path "${greeting}-${params.output_file}"
+    
+    """
+    echo '$greeting' > '$greeting-$params.output_file'
+    """
+}
+
+/*
+ * Use a text replace utility to convert the greeting to uppercase
+ */
+process convertToUpper {
+    input:
+        path input_file
+
+    output:
+        path "UPPER-${input_file}"
+
+    """
+    cat $input_file | tr '[a-z]' '[A-Z]' > 'UPPER-${input_file}'
+    """
+}
+
+workflow {
+
+    // create a channel for inputs from a file
+    greeting_ch = Channel.fromPath(params.input_file).splitText().map { it.trim() }
+
+    // emit a greeting
+    sayHello(greeting_ch)
+
+    // convert the greeting to uppercase
+    convertToUpper(sayHello.out)
+}
diff --git a/hello-nextflow/scripts/hello-world-2.nf b/hello-nextflow/scripts/hello-world-2.nf
new file mode 100644
index 000000000..b6d31cc62
--- /dev/null
+++ b/hello-nextflow/scripts/hello-world-2.nf
@@ -0,0 +1,18 @@
+/*
+ * Use echo to print 'Hello World!' to standard out
+ */
+process sayHello {
+
+    output: 
+        stdout
+    
+    """
+    echo 'Hello World!'
+    """
+}
+
+workflow {
+
+    // emit a greeting
+    sayHello()
+}
\ No newline at end of file
diff --git a/hello-nextflow/scripts/hello-world-3.nf b/hello-nextflow/scripts/hello-world-3.nf
new file mode 100644
index 000000000..4f35f6612
--- /dev/null
+++ b/hello-nextflow/scripts/hello-world-3.nf
@@ -0,0 +1,18 @@
+/*
+ * Use echo to print 'Hello World!' to a file
+ */
+process sayHello {
+
+    output: 
+        path 'output.txt'
+    
+    """
+    echo 'Hello World!' > output.txt
+    """
+}
+
+workflow {
+
+    // emit a greeting
+    sayHello()
+}
\ No newline at end of file
diff --git a/hello-nextflow/scripts/hello-world-4.nf b/hello-nextflow/scripts/hello-world-4.nf
new file mode 100644
index 000000000..1055188ec
--- /dev/null
+++ b/hello-nextflow/scripts/hello-world-4.nf
@@ -0,0 +1,18 @@
+/*
+ * Use echo to print 'Hello World!' to a file
+ */
+process sayHello {
+
+    output: 
+        path params.output_file
+    
+    """
+    echo 'Hello World!' > $params.output_file
+    """
+}
+
+workflow {
+
+    // emit a greeting
+    sayHello()
+}
\ No newline at end of file
diff --git a/hello-nextflow/scripts/hello-world-5.nf b/hello-nextflow/scripts/hello-world-5.nf
new file mode 100644
index 000000000..b021e53b3
--- /dev/null
+++ b/hello-nextflow/scripts/hello-world-5.nf
@@ -0,0 +1,23 @@
+/*
+ * Pipeline parameters
+ */
+params.output_file = 'output.txt'
+
+/*
+ * Use echo to print 'Hello World!' to a file
+ */
+process sayHello {
+
+    output: 
+        path params.output_file
+    
+    """
+    echo 'Hello World!' > $params.output_file
+    """
+}
+
+workflow {
+
+    // emit a greeting
+    sayHello()
+}
\ No newline at end of file
diff --git a/hello-nextflow/scripts/hello-world-6.nf b/hello-nextflow/scripts/hello-world-6.nf
new file mode 100644
index 000000000..67ca37987
--- /dev/null
+++ b/hello-nextflow/scripts/hello-world-6.nf
@@ -0,0 +1,28 @@
+/*
+ * Pipeline parameters
+ */
+params.output_file = 'output.txt'
+
+/*
+ * Use echo to print a greeting to a file
+ */
+process sayHello {
+    input:
+        val greeting  
+
+    output: 
+        path params.output_file
+    
+    """
+    echo '$greeting' > $params.output_file
+    """
+}
+
+workflow {
+
+    // create a channel for inputs
+    greeting_ch = Channel.of('Hello world!')
+
+    // emit a greeting
+    sayHello(greeting_ch)
+}
\ No newline at end of file
diff --git a/hello-nextflow/scripts/hello-world-7.nf b/hello-nextflow/scripts/hello-world-7.nf
new file mode 100644
index 000000000..d7c855f9d
--- /dev/null
+++ b/hello-nextflow/scripts/hello-world-7.nf
@@ -0,0 +1,28 @@
+/*
+ * Pipeline parameters
+ */
+params.output_file = 'output.txt'
+
+/*
+ * Use echo to print a greeting to a file
+ */
+process sayHello {
+    input:
+        val greeting  
+
+    output: 
+        path params.output_file
+    
+    """
+    echo '$greeting' > $params.output_file
+    """
+}
+
+workflow {
+
+    // create a channel for inputs
+    greeting_ch = Channel.of(params.greeting)
+
+    // emit a greeting
+    sayHello(greeting_ch)
+}
\ No newline at end of file
diff --git a/hello-nextflow/scripts/hello-world-8.nf b/hello-nextflow/scripts/hello-world-8.nf
new file mode 100644
index 000000000..6c9716074
--- /dev/null
+++ b/hello-nextflow/scripts/hello-world-8.nf
@@ -0,0 +1,46 @@
+/*
+ * Pipeline parameters
+ */
+params.output_file = 'output.txt'
+
+/*
+ * Use echo to print a greeting to a file
+ */
+process sayHello {
+    input:
+        val greeting  
+
+    output: 
+        path params.output_file
+    
+    """
+    echo '$greeting' > $params.output_file
+    """
+}
+
+/*
+ * Use a text replace utility to convert the greeting to uppercase
+ */
+process convertToUpper {
+    input:
+        path input_file
+
+    output:
+        path "UPPER-${input_file}"
+
+    """
+    cat $input_file | tr '[a-z]' '[A-Z]' > 'UPPER-${input_file}'
+    """
+}
+
+workflow {
+
+    // create a channel for inputs
+    greeting_ch = Channel.of(params.greeting)
+
+    // emit a greeting
+    sayHello(greeting_ch)
+
+    // convert the greeting to uppercase
+    convertToUpper(sayHello.out)
+}
diff --git a/hello-nextflow/scripts/hello-world-9.nf b/hello-nextflow/scripts/hello-world-9.nf
new file mode 100644
index 000000000..b7d8d01cb
--- /dev/null
+++ b/hello-nextflow/scripts/hello-world-9.nf
@@ -0,0 +1,46 @@
+/*
+ * Pipeline parameters
+ */
+params.output_file = 'output.txt'
+
+/*
+ * Use echo to print a greeting to a file
+ */
+process sayHello {
+    input:
+        val greeting  
+
+    output: 
+        path "${greeting}-${params.output_file}"
+    
+    """
+    echo '$greeting' > '$greeting-$params.output_file'
+    """
+}
+
+/*
+ * Use a text replace utility to convert the greeting to uppercase
+ */
+process convertToUpper {
+    input:
+        path input_file
+
+    output:
+        path "UPPER-${input_file}"
+
+    """
+    cat $input_file | tr '[a-z]' '[A-Z]' > 'UPPER-${input_file}'
+    """
+}
+
+workflow {
+
+    // create a channel for inputs
+    greeting_ch = Channel.of('Hello','Bonjour','Holà')
+
+    // emit a greeting
+    sayHello(greeting_ch)
+
+    // convert the greeting to uppercase
+    convertToUpper(sayHello.out)
+}
diff --git a/mkdocs.yml b/mkdocs.yml
index 68b491360..1b1a34876 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -38,6 +38,11 @@ nav:
           - hands_on/02_workflow.md
           - hands_on/03_setup.md
           - hands_on/04_implementation.md
+    - Hello Nextflow:
+          - hello_nextflow/index.md
+          - hello_nextflow/01_orientation.md
+          - hello_nextflow/02_hello_world.md
+          - hello_nextflow/03_hello_gatk.md
     - help.md
 
 theme:
@@ -136,12 +141,14 @@ plugins:
           restart_increment_after:
               - hands_on/01_datasets.md
               - advanced/operators.md
+              - hello_nextflow/01_orientation.md
           exclude:
               - index.md
               - help.md
               - basic_training/index.md
               - hands_on/index.md
               - hands_on/solutions/*md
+              - hello_nextflow/*.md
     - i18n:
           docs_structure: suffix
           fallback_to_default: true