Using celery with multiple queues, retries, and scheduled tasks

In this post, I'll show how to work with multiple queues, scheduled tasks, and how to retry when something goes wrong.
If you don't know how to use Celery, read this post first: https://fernandofreitasalves.com/executing-time-consuming-tasks-asynchronously-with-django-and-celery/
Retrying a task
Let's say your task depends on an external API or connects to another web service and, for any reason, it's raising a ConnectionError. It's plausible to think that after a few seconds the API, web service, or whatever you are using may be back on track and working again. In cases like this, you may want to catch the exception and retry your task.
from celery import shared_task


@shared_task(bind=True, max_retries=3)  # you can decide the max_retries here
def access_awful_system(self, my_obj_id):
    from core.models import Object
    from requests import ConnectionError

    o = Object.objects.get(pk=my_obj_id)
    # if we hit a ConnectionError, try again in 180 seconds
    try:
        o.access_awful_system()
    except ConnectionError as exc:
        self.retry(exc=exc, countdown=180)  # the task goes back to the queue
The self.retry inside the function is what's interesting here. That's possible thanks to bind=True on the shared_task decorator. It turns our function access_awful_system into a method of the Task class, and it forces us to use self as the first argument of the function as well.
Another nice way to retry a function is using exponential backoff:
self.retry(exc=exc, countdown=2 ** self.request.retries)
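If you're on Celery 4.x or newer, you can also let Celery do the catching and the backoff for you with autoretry_for and retry_backoff. A minimal sketch of an alternative version of the same task (the decorator options here are my choice for this example, not from the original post):

from celery import shared_task
from requests import ConnectionError


# Celery 4.x options: retry automatically on ConnectionError, waiting
# 1s, 2s, 4s, 8s... between attempts (exponential backoff), up to 5 retries.
@shared_task(autoretry_for=(ConnectionError,),
             retry_backoff=True,
             retry_kwargs={'max_retries': 5})
def access_awful_system(my_obj_id):
    from core.models import Object
    Object.objects.get(pk=my_obj_id).access_awful_system()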
ETA — Scheduling a task for later
Now, imagine that your application has to call an asynchronous task, but needs to wait one hour before running it.
In this case, we just need to call the task using the ETA (estimated time of arrival) property, which means your task will be executed any time after the ETA. To be precise, not exactly at the ETA time, because it depends on whether there are workers available at that moment. (If you want to schedule tasks exactly as you do in crontab, you may want to take a look at CeleryBeat.)
from django.utils import timezone
from datetime import timedelta

now = timezone.now()
# later is one hour from now
later = now + timedelta(hours=1)

access_awful_system.apply_async((object_id,), eta=later)
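If all you need is a relative delay rather than an absolute time, apply_async also accepts a countdown in seconds, which does the same thing without building the datetime yourself:

# same effect with a relative delay: run roughly one hour from now
access_awful_system.apply_async((object_id,), countdown=3600)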
Using more queues
When you execute Celery, it creates a queue on your broker (in the last blog post it was RabbitMQ). If you have a few asynchronous tasks and you use just the Celery default queue, all tasks will be going to the same queue.
Suppose that we have another task called too_long_task and one more called quick_task, and imagine that we have one single queue and four workers.
In that scenario, imagine if the producer sends ten messages to the queue to be executed by too_long_task and, right after that, it produces ten more messages to quick_task. What is going to happen? All your workers may be occupied executing too_long_task, which went first on the queue, and you won't have any workers left for quick_task.
The solution for this is routing each task using named queues.
# CELERY ROUTES
CELERY_ROUTES = {
    'core.tasks.too_long_task': {'queue': 'too_long_queue'},
    'core.tasks.quick_task': {'queue': 'quick_queue'},
}
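CELERY_ROUTES is the old uppercase setting name; if you're on Celery 4+ with the new lowercase settings, the same routing can be declared like this (assuming app is your Celery instance):

# assuming `app` is your Celery() instance
app.conf.task_routes = {
    'core.tasks.too_long_task': {'queue': 'too_long_queue'},
    'core.tasks.quick_task': {'queue': 'quick_queue'},
}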
Now we can split the workers, determining which queue they will be consuming.
# For too long queue
celery --app=proj_name worker -Q too_long_queue -c 2

# For quick queue
celery --app=proj_name worker -Q quick_queue -c 2

I'm using two workers for each queue, but it depends on your system.
As in the last post, you may want to run it on Supervisord.
There are a lot of interesting things to do with your workers here.
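For example, you can also pick the queue at call time instead of (or in addition to) the routing table, since apply_async accepts a queue argument. A small sketch, assuming quick_task takes no arguments:

from core.tasks import quick_task

# send this particular call straight to the quick queue,
# regardless of what the routing table says
quick_task.apply_async(queue='quick_queue')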
Calling Sequential Tasks
Another common issue is having to call two asynchronous tasks one after the other. It can happen in a lot of scenarios, e.g. if the second task uses the result of the first task as a parameter.
You can use chain to do that:
from celery import chain
from tasks import first_task, second_task

chain(first_task.s(meu_objeto_id) | second_task.s())
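For reference, inside a chain the return value of each task is passed as the first argument of the next one, so the tasks themselves look like ordinary tasks. A small sketch of what first_task and second_task might look like (stand-ins, not code from the original post):

from celery import shared_task


@shared_task
def first_task(obj_id):
    # do some work and return something useful for the next task
    return obj_id * 2


@shared_task
def second_task(result_from_first):
    # in a chain, this receives first_task's return value as its first argument
    return result_from_first + 1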
The chain is a task too, so you can use parameters on apply_async, for instance, using an ETA:

chain(salvar_dados.s(meu_objeto_id) | trabalhar_dados.s()).apply_async(eta=depois)

Ignoring the results from ResultBackend
If you just use tasks to execute something that doesn't need the return value from the task, you can ignore the results and improve your performance.
If you're just saving something on your models, you'd like to use this in your settings.py:
CELERY_IGNORE_RESULT = True
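If you only want this behaviour for specific tasks rather than globally, the same thing can be set per task on the decorator:

from celery import shared_task


@shared_task(ignore_result=True)
def quick_task():
    # nothing will be stored in the result backend for this task
    pass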
Sources:
Super Bônus
Celery Messaging at Scale at Instagram — Pycon 2013
Originally published at Fernando Alves.
Source: https://hackernoon.com/using-celery-with-multiple-queues-retries-and-scheduled-tasks-589fe9a4f9ba