Error on very large fastq file #12
Comments
What is the command you are using? If you are setting …
Following your instructions, I counted the lines with wc -l:
wc -l Sample.R1.fastq
Dividing by 4 gave me 637160468 reads, so I ran:
fastq_pair -t 637160468 Sample.R1.fastq Sample.R2.fastq
Is it OK to use a smaller number than the actual number of reads?
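For reference, that workflow can be run as a short script. This is a minimal sketch assuming fastq_pair is on the PATH and the paired files are named Sample.R1.fastq and Sample.R2.fastq, as in the comment above:

#!/usr/bin/env bash
# Each FASTQ record spans four lines, so lines / 4 gives the read count.
lines=$(wc -l < Sample.R1.fastq)
reads=$(( lines / 4 ))
echo "Sample.R1.fastq contains ${reads} reads"
# Use the read count as the hash table size, as described above.
fastq_pair -t "${reads}" Sample.R1.fastq Sample.R2.fastq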
That should have been OK (depending on the specific system you used), but yes, I suspect you are hitting an integer overflow. It is fine to use a smaller number than the actual number of reads. Try dividing the number by 4 (159290117); in that case you will end up with, on average, four sequences per bucket.
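A sketch of that suggestion using the numbers from this thread; the guard against 2147483647 (the largest signed 32-bit value) is only a precaution based on the overflow suspected above, not a documented fastq_pair limit:

#!/usr/bin/env bash
reads=637160468                   # read count reported above
t=$(( reads / 4 ))                # 159290117, about four reads per hash bucket
# Precaution (assumption): keep -t within the signed 32-bit range.
[ "${t}" -le 2147483647 ] || { echo "-t ${t} may overflow a 32-bit integer" >&2; exit 1; }
fastq_pair -t "${t}" Sample.R1.fastq Sample.R2.fastq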
Thank you. It worked! This is the very program I have been looking for.
Excellent!
Hello,
I am analyzing a very large fastq file (~240 GB for each pair).
I get this error when running fastq-pair:
"We cannot allocate the memory for a table size of -436581356. Please try a smaller value for -t"
Could you suggest a solution for this?
Thanks
Sam
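A note on the negative number in the error above: it is consistent with the integer overflow suspected in the comments, where a table size too large for a signed 32-bit integer wraps around to a negative value. The exact value that overflowed here is not known; the snippet below only illustrates the wraparound arithmetic, using the one 32-bit pattern that reads back as -436581356:

# Signed 32-bit wraparound: values above 2147483647 read back as negative.
n=3858385940
wrapped=$(( n >= 2147483648 ? n - 4294967296 : n ))
echo "${wrapped}"   # prints -436581356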