
Why do I have to destroy before I reserve. #23

Open
arianitu opened this issue Dec 17, 2013 · 1 comment

Comments

@arianitu

After reserving a job, I cannot reserve again until I delete the job. That means I can't do any jobs in parallel.

What if a job takes 40 seconds because of some asynchronous call out to the internet? I can't handle any other jobs in the meantime, until that call returns and I can call destroy.

What I've been doing is calling reserve again right after a reserve returns:

client.reserve(function handleJob(error, jobId, payload) {
    // immediately issue the next reserve on the same connection
    setImmediate(function() {
        client.reserve(handleJob);
    });

    someCallThatTakesALongTimeToReturn(function() {
        client.destroy(jobId, ...);
    });
});

and for whatever reason, that does not work. I have to do this instead:

client.reserve(function handleJob(error, jobId, payload) {
    someCallThatTakesALongTimeToReturn(function() {
        client.destroy(jobId, function() {
            client.reserve(handleJob);
        });
    });
});

Is there a particular reason it has to work this way? Can beanstalkd not handle parallel requests?

@arianitu
Author

I think the right way to solve this is to spawn more workers, each with its own connection. That's fine, but there should be clear documentation that this is how it works.
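For reference, here is a minimal sketch of that worker-pool pattern. The client API is only assumed (a fivebeans-style `reserve`/`destroy` callback interface), and `FakeClient` is a hypothetical in-memory stand-in so the sketch runs without a beanstalkd server. Each worker holds its own "connection" and runs its own serial reserve → work → destroy loop, so N workers keep N jobs in flight at once:

```javascript
// Hypothetical sketch of the "spawn more workers" pattern.
// FakeClient is an invented in-memory stand-in for a beanstalkd client
// (fivebeans-style callback API); each instance models one connection
// that runs one reserve -> destroy cycle at a time.

var queue = [{ id: 1, body: 'a' }, { id: 2, body: 'b' }, { id: 3, body: 'c' }];
var finished = [];

function FakeClient() {}
FakeClient.prototype.reserve = function (cb) {
  var job = queue.shift();
  // a real client would block here until a job arrives; the fake
  // simply never calls back when the queue is empty
  if (job) setImmediate(function () { cb(null, job.id, job.body); });
};
FakeClient.prototype.destroy = function (jobId, cb) {
  finished.push(jobId);
  setImmediate(cb);
};

function startWorker() {
  var client = new FakeClient(); // one connection per worker
  (function loop() {
    client.reserve(function (error, jobId, payload) {
      // simulate slow asynchronous work, then delete and reserve again
      setTimeout(function () {
        client.destroy(jobId, loop);
      }, 20);
    });
  })();
}

// three workers => three jobs are in flight at once,
// even though each individual connection is strictly serial
for (var i = 0; i < 3; i++) startWorker();
```

With a single worker the three simulated jobs would run back to back; with three workers they overlap, which is the parallelism that the serial reserve → destroy loop on one connection cannot provide.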
