
Updating workflows/scRNAseq/pseudobulk-worflow-decoupler-edger from 0.1.1 to 0.1.2 #660

Open: gxydevbot wants to merge 3 commits into base branch main.
Conversation

gxydevbot (Contributor) commented:

Hello! This is an automated update of the following workflow: workflows/scRNAseq/pseudobulk-worflow-decoupler-edger. I created this PR because I think one or more of the component tools are out of date, i.e. there is a newer version available on the ToolShed.

By comparing with the latest versions available on the ToolShed, it seems the following tools are outdated:

  • toolshed.g2.bx.psu.edu/repos/iuc/volcanoplot/volcanoplot/0.0.6 should be updated to toolshed.g2.bx.psu.edu/repos/iuc/volcanoplot/volcanoplot/0.0.7

The workflow release number has been updated from 0.1.1 to 0.1.2.

If you want to skip this change, close this PR without deleting the branch. It will be reopened if another change is detected.
Any commit from an author other than 'planemo-autoupdate' will prevent further auto-updates.
To ignore manual changes and allow autoupdates, delete the branch.


github-actions bot commented Feb 3, 2025

Test Results (powered by Planemo)

Test Summary

| Test State | Count |
|------------|-------|
| Total      | 1     |
| Passed     | 0     |
| Error      | 1     |
| Failure    | 0     |
| Skipped    | 0     |
Errored Tests
  • ❌ pseudo-bulk_edgeR.ga_0

    Execution Problem:

    • Final state of invocation b8611f776925d626 is [failed]
      

    Workflow invocation details

    • Invocation Messages

      • Invocation scheduling failed because an unexpected failure occurred at step 21: 'Failed to create 1 job(s) for workflow step 21: Error executing tool with id 'toolshed.g2.bx.psu.edu/repos/iuc/volcanoplot/volcanoplot/0.0.7': 'shape_col''
    • Steps
      • Step 1: Source AnnData file:

        • step_state: scheduled
      • Step 2: Pseudo-bulk: Fields to merge:

        • step_state: scheduled
      • Step 11: Sanitize factors:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • sed -r --sandbox -e 's/[ --+*^]+/_/g' '/tmp/tmpi12dzmqi/files/2/7/9/dataset_27981267-1656-4980-b2bc-8e931c738eb8.dat' > '/tmp/tmpi12dzmqi/job_working_directory/000/4/outputs/dataset_be758055-c29c-42ca-9449-3a5ce8aefc87.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "c1d2dc4ce1f111efb9ab00224805eb1b"
              chromInfo "/tmp/tmpi12dzmqi/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              replacements [{"__index__": 0, "find_pattern": "[ --+*^]+", "replace_pattern": "_"}]
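The "Sanitize factors" and "Sanitize matrix" steps both run the sed call shown above. Its bracket expression `[ --+*^]` is easy to misread: it contains the ASCII range from space (0x20) through `-` (0x2D), plus the literals `+`, `*`, and `^`, so runs of spaces and most punctuation collapse to a single underscore. A rough Python sketch of the same substitution (the example inputs are illustrative, not from the test data):

```python
import re

# Mirrors: sed -r -e 's/[ --+*^]+/_/g'
# The class covers ' '..'-' (space, !"#$%&'()*+,-) plus '+', '*', '^'.
SANITIZE = re.compile(r"[ --+*^]+")

def sanitize_factor(value: str) -> str:
    """Collapse spaces and punctuation runs to a single underscore."""
    return SANITIZE.sub("_", value)

print(sanitize_factor("type I diabetes"))  # type_I_diabetes
print(sanitize_factor("CD4+ T-cell"))      # CD4_T_cell
```

This keeps factor levels safe to use as column names in the R-based edgeR step downstream.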
      • Step 12: Remove start, end, width:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • python '/tmp/shed_dir/toolshed.g2.bx.psu.edu/repos/iuc/column_remove_by_header/2040e4c2750a/column_remove_by_header/column_remove_by_header.py' -i '/tmp/tmpi12dzmqi/files/e/5/b/dataset_e5b2a2d9-2458-4974-8731-a7df87fb0cc6.dat' -o '/tmp/tmpi12dzmqi/job_working_directory/000/5/outputs/dataset_579dd841-41a8-40e4-964b-56c2c8078bbc.dat' -d '	'  -s '#' --unicode-escaped-cols --columns 'start' 'end' 'width'

            Exit Code:

            • 0

            Standard Output:

            • Kept 10 of 10 columns.
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "tabular"
              __workflow_invocation_uuid__ "c1d2dc4ce1f111efb9ab00224805eb1b"
              chromInfo "/tmp/tmpi12dzmqi/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              headers [{"__index__": 0, "name": "start"}, {"__index__": 1, "name": "end"}, {"__index__": 2, "name": "width"}]
              keep_columns false
              strip_characters "#"
      • Step 13: Sanitize first factor for leading digits:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • awk -v OFS="\t" -v FS="\t" --re-interval --sandbox '{ $2 = gensub( /^([0-9])(.+)/, "GG_\\1\\2", "g", $2 ) ; print $0 ; }' '/tmp/tmpi12dzmqi/files/b/e/7/dataset_be758055-c29c-42ca-9449-3a5ce8aefc87.dat' > '/tmp/tmpi12dzmqi/job_working_directory/000/6/outputs/dataset_9f15b088-566a-4869-b41e-633ad3751a2b.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "c1d2dc4ce1f111efb9ab00224805eb1b"
              chromInfo "/tmp/tmpi12dzmqi/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              replacements [{"__index__": 0, "column": "2", "find_pattern": "^([0-9])(.+)", "replace_pattern": "GG_\\\\1\\\\2"}]
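Step 13's awk `gensub` call prefixes any factor level in column 2 that begins with a digit with `GG_`, since R treats names starting with a digit as invalid identifiers. A Python sketch of the per-value transformation (the sample level names are made up for illustration):

```python
import re

# Mirrors: gensub(/^([0-9])(.+)/, "GG_\\1\\2", "g", $2)
# applied by the workflow to column 2 only.
LEADING_DIGIT = re.compile(r"^([0-9])(.+)")

def prefix_leading_digit(level: str) -> str:
    """Prefix levels that start with a digit so they are valid R names."""
    return LEADING_DIGIT.sub(r"GG_\1\2", level)

print(prefix_leading_digit("2nd_visit"))  # GG_2nd_visit
print(prefix_leading_digit("control"))    # control (unchanged)
```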
      • Step 14: toolshed.g2.bx.psu.edu/repos/bgruening/text_processing/tp_awk_tool/9.3+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • env -i $(which awk) --sandbox -v FS='	' -v OFS='	' --re-interval -f '/tmp/tmpi12dzmqi/job_working_directory/000/7/configs/tmpdo8e5vlc' '/tmp/tmpi12dzmqi/files/9/f/1/dataset_9f15b088-566a-4869-b41e-633ad3751a2b.dat' > '/tmp/tmpi12dzmqi/job_working_directory/000/7/outputs/dataset_8a1c9465-4956-4fe8-9ec4-fbddaae89f5f.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "c1d2dc4ce1f111efb9ab00224805eb1b"
              chromInfo "/tmp/tmpi12dzmqi/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              code "BEGIN { print \"header\" } NR > 1 { if (!seen[$2]++) words[++count]=$2 } END { for (i=1; i<=count; i++) for (j=i+1; j<=count; j++) print words[i]\"-\"words[j] }"
              dbkey "?"
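The awk program in Step 14 builds the contrast list for edgeR: it skips the header row, records the unique values of column 2 in first-seen order, and prints every pairwise combination joined by `-` (after first emitting a literal `header` line). A Python sketch of the same pairing logic, with invented sample rows and the header line omitted:

```python
from itertools import combinations

def contrast_labels(rows: list[list[str]]) -> list[str]:
    """Unique column-2 levels in first-seen order, then all pairs a-b."""
    levels: list[str] = []
    for row in rows[1:]:              # NR > 1: skip the header line
        if row[1] not in levels:      # !seen[$2]++: first occurrence only
            levels.append(row[1])
    return [f"{a}-{b}" for a, b in combinations(levels, 2)]

rows = [["sample", "disease"],
        ["s1", "normal"], ["s2", "disease_X"], ["s3", "normal"]]
print(contrast_labels(rows))  # ['normal-disease_X']
```

With two levels this yields a single contrast; with n levels it yields n*(n-1)/2.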
      • Step 15: toolshed.g2.bx.psu.edu/repos/iuc/edger/edger/3.36.0+galaxy5:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • Rscript '/tmp/shed_dir/toolshed.g2.bx.psu.edu/repos/iuc/edger/ae2aad0a6d50/edger/edger.R'  -R '/tmp/tmpi12dzmqi/job_working_directory/000/8/outputs/dataset_8c5cd1b7-8dcf-44a6-95f5-90edca1bdf50.dat' -o '/tmp/tmpi12dzmqi/job_working_directory/000/8/outputs/dataset_8c5cd1b7-8dcf-44a6-95f5-90edca1bdf50_files'  -m '/tmp/tmpi12dzmqi/files/3/f/1/dataset_3f1e38f7-65c2-442c-a40c-05fba7a19729.dat' -f '/tmp/tmpi12dzmqi/files/9/f/1/dataset_9f15b088-566a-4869-b41e-633ad3751a2b.dat'  -a '/tmp/tmpi12dzmqi/files/5/7/9/dataset_579dd841-41a8-40e4-964b-56c2c8078bbc.dat'  -F '~ 0 + disease'  -C '/tmp/tmpi12dzmqi/files/8/a/1/dataset_8a1c9465-4956-4fe8-9ec4-fbddaae89f5f.dat'    -l '0.0' -p '0.05' -d 'BH' -n 'TMM' -b  && mkdir ./output_dir  && cp '/tmp/tmpi12dzmqi/job_working_directory/000/8/outputs/dataset_8c5cd1b7-8dcf-44a6-95f5-90edca1bdf50_files'/*.tsv output_dir/

            Exit Code:

            • 0

            Standard Error:

            • Warning message:
              In Sys.setlocale("LC_MESSAGES", "en_US.UTF-8") :
                OS reports request to set locale to "en_US.UTF-8" cannot be honored
              Warning message:
              In plot.xy(xy, type, ...) : NAs introduced by coercion
              Warning message:
              In plot.xy(xy, type, ...) : NAs introduced by coercion
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "tabular"
              __workflow_invocation_uuid__ "c1d2dc4ce1f111efb9ab00224805eb1b"
              adv {"lfc": "0.0", "lrtOption": false, "normalisationOption": "TMM", "pAdjust": "BH", "pVal": "0.05", "robOption": true}
              anno {"__current_case__": 0, "annoOpt": "yes", "geneanno": {"values": [{"id": 10, "src": "hda"}]}}
              chromInfo "/tmp/tmpi12dzmqi/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              contrasts {"__current_case__": 1, "cinfo": {"values": [{"id": 12, "src": "hda"}]}, "contrastOpt": "file"}
              dbkey "?"
              f {"filt": {"__current_case__": 1, "filt_select": "no"}}
              formula "~ 0 + disease"
              input {"__current_case__": 1, "counts": {"values": [{"id": 8, "src": "hda"}]}, "fact": {"__current_case__": 0, "ffile": "yes", "finfo": {"values": [{"id": 11, "src": "hda"}]}}, "format": "matrix"}
              out {"normCounts": false, "rdaOption": false, "rscript": false}
      • Step 16: Get contrast labels:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mv '/tmp/tmpi12dzmqi/job_working_directory/000/9/configs/tmpm4aw1vd0' '/tmp/tmpi12dzmqi/job_working_directory/000/9/outputs/dataset_66ab622d-e096-46a3-8bcb-bffa9b95ffd6.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "tabular"
              __workflow_invocation_uuid__ "c1d2dc4ce1f111efb9ab00224805eb1b"
              chromInfo "/tmp/tmpi12dzmqi/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              input_collection {"values": [{"id": 1, "src": "hdca"}]}
      • Step 17: Select gene symbols, logFC, PValue and FDR:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • python '/tmp/shed_dir/toolshed.g2.bx.psu.edu/repos/iuc/column_remove_by_header/2040e4c2750a/column_remove_by_header/column_remove_by_header.py' -i '/tmp/tmpi12dzmqi/files/9/9/3/dataset_993f186b-f6a8-4c97-b2ae-f33264c8a2f4.dat' -o '/tmp/tmpi12dzmqi/job_working_directory/000/10/outputs/dataset_30a5cbb5-3a92-4e0a-a3eb-30ed10cf2f95.dat' -d '	' --keep -s '#' --unicode-escaped-cols --columns 'gene_symbol' 'logFC' 'PValue' 'FDR'

            Exit Code:

            • 0

            Standard Output:

            • Kept 4 of 15 columns.
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "tabular"
              __workflow_invocation_uuid__ "c1d2dc4ce1f111efb9ab00224805eb1b"
              chromInfo "/tmp/tmpi12dzmqi/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              headers [{"__index__": 0, "name": "gene_symbol"}, {"__index__": 1, "name": "logFC"}, {"__index__": 2, "name": "PValue"}, {"__index__": 3, "name": "FDR"}]
              keep_columns true
              strip_characters "#"
      • Step 18: toolshed.g2.bx.psu.edu/repos/bgruening/text_processing/tp_replace_in_line/9.3+galaxy1:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • sed -r --sandbox -e 's/edgeR_//g' '/tmp/tmpi12dzmqi/files/6/6/a/dataset_66ab622d-e096-46a3-8bcb-bffa9b95ffd6.dat' > '/tmp/tmpi12dzmqi/job_working_directory/000/11/outputs/dataset_e2544aa0-df7b-405e-a0d9-87dfa968a23f.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "c1d2dc4ce1f111efb9ab00224805eb1b"
              chromInfo "/tmp/tmpi12dzmqi/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              replacements [{"__index__": 0, "find_pattern": "edgeR_", "replace_pattern": ""}]
      • Step 19: Split contrasts:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir ./out && python '/tmp/shed_dir/toolshed.g2.bx.psu.edu/repos/bgruening/split_file_to_collection/2dae863c8f42/split_file_to_collection/split_file_to_collection.py' --out ./out --in '/tmp/tmpi12dzmqi/files/e/2/5/dataset_e2544aa0-df7b-405e-a0d9-87dfa968a23f.dat' --ftype 'txt' --chunksize 1 --batch --file_names 'split_file' --file_ext 'txt'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "txt"
              __workflow_invocation_uuid__ "c1d2dc4ce1f111efb9ab00224805eb1b"
              chromInfo "/tmp/tmpi12dzmqi/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              split_parms {"__current_case__": 5, "input": {"values": [{"id": 17, "src": "hda"}]}, "newfilenames": "split_file", "select_allocate": {"__current_case__": 1, "allocate": "batch"}, "select_ftype": "txt", "select_mode": {"__current_case__": 0, "chunksize": "1", "mode": "chunk"}}
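Step 19 uses split_file_to_collection with `--chunksize 1`, so each contrast label ends up in its own file and the downstream volcano-plot steps can run once per contrast. A minimal sketch of that one-line-per-file split (file naming scheme assumed, not taken from the tool's source):

```python
import tempfile
from pathlib import Path

def split_lines(text: str, out_dir: Path, stem: str = "split_file") -> list[Path]:
    """Write each input line to its own file, like --chunksize 1 --batch."""
    out_dir.mkdir(exist_ok=True)
    paths = []
    for i, line in enumerate(text.splitlines()):
        p = out_dir / f"{stem}_{i}.txt"
        p.write_text(line + "\n")
        paths.append(p)
    return paths

out = split_lines("normal-disease_X\ncondA-condB\n", Path(tempfile.mkdtemp()))
print([p.name for p in out])  # ['split_file_0.txt', 'split_file_1.txt']
```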
      • Step 20: Contrast as parameters:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • cd ../; python _evaluate_expression_.py

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "txt"
              __workflow_invocation_uuid__ "c1d2dc4ce1f111efb9ab00224805eb1b"
              chromInfo "/tmp/tmpi12dzmqi/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              param_type "text"
              remove_newlines true
      • Step 3: Group by column:

        • step_state: scheduled
      • Step 21: Unlabelled step:

        • step_state: new
      • Step 4: Sample key column:

        • step_state: scheduled
      • Step 5: Name Your Raw Counts Layer:

        • step_state: scheduled
      • Step 6: Factor fields:

        • step_state: scheduled
      • Step 7: Formula:

        • step_state: scheduled
      • Step 8: Gene symbol column:

        • step_state: scheduled
      • Step 9: toolshed.g2.bx.psu.edu/repos/ebi-gxa/decoupler_pseudobulk/decoupler_pseudobulk/1.4.0+galaxy8:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • mkdir deseq_output_dir && mkdir plots_output_dir && python '/tmp/shed_dir/toolshed.g2.bx.psu.edu/repos/ebi-gxa/decoupler_pseudobulk/09c833d9b03b/decoupler_pseudobulk/decoupler_pseudobulk.py' '/tmp/tmpi12dzmqi/files/0/0/e/dataset_00ef2a77-fa5a-4b64-af4c-04d9bdfd33b6.dat' --groupby 'cell_type' --sample_key 'individual' --layer 'counts' --mode 'sum' --min_cells 10 --save_path plots_output_dir --min_counts 10 --min_counts_per_sample_marking 20 --min_total_counts 1000 --filter_expr --factor_fields 'disease' --deseq2_output_path deseq_output_dir --plot_samples_figsize 13 13 --plot_filtering_figsize 13 13

            Exit Code:

            • 0

            Standard Output:

            • Using mode: sum
              Created pseudo-bulk AnnData, checking if fields still make sense.
              If this fails this check, it might mean that you asked for factors that are not compatible with you sample identifiers (ie. asked for phase in the factors, but each sample contains more than one phase, try joining fields).
              Factors requested are adequate for the pseudo-bulked AnnData!
              

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "c1d2dc4ce1f111efb9ab00224805eb1b"
              adata_obs_fields_to_merge None
              chromInfo "/tmp/tmpi12dzmqi/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              factor_fields "disease"
              filter_expr true
              filter_per_contrast {"__current_case__": 1, "filter": "no"}
              groupby "cell_type"
              layer "counts"
              min_cells "10"
              min_counts "10"
              min_counts_per_sample "20"
              min_total_counts "1000"
              mode "sum"
              plot_filtering_figsize "13 13"
              plot_samples_figsize "13 13"
              produce_anndata false
              produce_plots true
              sample_key "individual"
              use_raw false
      • Step 10: Sanitize matrix:

        • step_state: scheduled

        • Jobs
          • Job 1:

            • Job state is ok

            Command Line:

            • sed -r --sandbox -e 's/[ --+*^]+/_/g' '/tmp/tmpi12dzmqi/files/0/d/6/dataset_0d65b43c-581e-4f16-be44-d6f3f4a458a0.dat' > '/tmp/tmpi12dzmqi/job_working_directory/000/3/outputs/dataset_3f1e38f7-65c2-442c-a40c-05fba7a19729.dat'

            Exit Code:

            • 0

            Traceback:

            Job Parameters:

            • Job parameter Parameter value
              __input_ext "input"
              __workflow_invocation_uuid__ "c1d2dc4ce1f111efb9ab00224805eb1b"
              chromInfo "/tmp/tmpi12dzmqi/galaxy-dev/tool-data/shared/ucsc/chrom/?.len"
              dbkey "?"
              replacements [{"__index__": 0, "find_pattern": "[ --+*^]+", "replace_pattern": "_"}]
    • Other invocation details
      • error_message

        • Final state of invocation b8611f776925d626 is [failed]
      • history_id

        • b8611f776925d626
      • history_state

        • ok
      • invocation_id

        • b8611f776925d626
      • invocation_state

        • failed
      • messages

        • [{'details': "Failed to create 1 job(s) for workflow step 21: Error executing tool with id 'toolshed.g2.bx.psu.edu/repos/iuc/volcanoplot/volcanoplot/0.0.7': 'shape_col'", 'reason': 'unexpected_failure', 'workflow_step_id': 20}]
      • workflow_id

        • b8611f776925d626
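The bare quoted `'shape_col'` in the error message is the typical signature of an uncaught Python KeyError. A plausible (unverified) reading is that volcanoplot 0.0.7 introduced a `shape_col` parameter that this workflow, saved against 0.0.6, never supplies, so the tool's parameter handling fails before any job is created. A hypothetical illustration, not the actual Galaxy code:

```python
# Hypothetical parameter dict a 0.0.6-era workflow might pass; 'shape_col'
# is assumed to be a new key required by the 0.0.7 wrapper.
old_workflow_params = {"fdr_col": 4, "pval_col": 3, "lfc_col": 2}

try:
    shape = old_workflow_params["shape_col"]   # raises KeyError
except KeyError as err:
    # str(err) keeps the quotes, matching the log's bare 'shape_col'
    print(f"Error executing tool: {err}")      # Error executing tool: 'shape_col'

# A tolerant wrapper would read the key with a default instead:
shape = old_workflow_params.get("shape_col")   # None, no exception
```

If that reading is right, the fix belongs in the workflow's step parameters (or in the tool wrapper providing a default), not in the autoupdate itself.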


github-actions bot commented Feb 6, 2025

Test Results (powered by Planemo)

Test Summary

| Test State | Count |
|------------|-------|
| Total      | 1     |
| Passed     | 0     |
| Error      | 1     |
| Failure    | 0     |
| Skipped    | 0     |
Errored Tests
  • ❌ pseudo-bulk_edgeR.ga_0

    Execution Problem:

    • Unexpected HTTP status code: 400: {"err_msg":"Workflow was not invoked; the following required tools are not installed: toolshed.g2.bx.psu.edu/repos/iuc/volcanoplot/volcanoplot/ (version 0.0.7)","err_code":0}
      

1 similar comment: github-actions bot posted the same result on Feb 7, 2025 (workflow not invoked because toolshed.g2.bx.psu.edu/repos/iuc/volcanoplot/volcanoplot/ version 0.0.7 was not installed).

gxydevbot (Contributor, Author) commented:

There are new updates; if you want to integrate them, close the PR and delete the branch.
