python – Celeryd multi with supervisord

Trying to run celery multi under supervisord (3.2.2).

It seems supervisord cannot handle it; a single celery worker works fine.

Here is my supervisord configuration:

celery multi v3.1.20 (Cipater)
> Starting nodes...
    > celery1@parzee-dev-app-sfo1: OK
Stale pidfile exists. Removing it.
    > celery2@parzee-dev-app-sfo1: OK
Stale pidfile exists. Removing it.

celeryd.conf

; ==================================
;  celery worker supervisor example
; ==================================

[program:celery]
; Set full path to celery program if using virtualenv
command=/usr/local/src/imbue/application/imbue/supervisorctl/celeryd/celeryd.sh
process_name = %(program_name)s%(process_num)d@%(host_node_name)s
directory=/usr/local/src/imbue/application/imbue/conf/
numprocs=2
stderr_logfile=/usr/local/src/imbue/application/imbue/log/celeryd.err
logfile=/usr/local/src/imbue/application/imbue/log/celeryd.log
stdout_logfile_backups = 10
stderr_logfile_backups = 10
stdout_logfile_maxbytes = 50MB
stderr_logfile_maxbytes = 50MB
autostart=true
autorestart=false
startsecs=10

I use the following supervisord variables to mimic the way I normally start celery:

> %(program_name)s
> %(process_num)d
> @
> %(host_node_name)s
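As a sanity check, supervisord expands process_name with Python %-style formatting, so the template above can be previewed directly (the variable values below are illustrative, not read from a running supervisord):

```python
# Preview how supervisord expands process_name via %-style formatting.
template = "%(program_name)s%(process_num)d@%(host_node_name)s"
name = template % {
    "program_name": "celery",
    "process_num": 1,
    "host_node_name": "parzee-dev-app-sfo1",
}
print(name)  # celery1@parzee-dev-app-sfo1
```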

Supervisorctl

supervisorctl
celery:celery1@parzee-dev-app-sfo1   FATAL     Exited too quickly (process log may have details)
celery:celery2@parzee-dev-app-sfo1   FATAL     Exited too quickly (process log may have details)

I tried changing this value in /usr/local/lib/python2.7/dist-packages/supervisor/options.py from 0 to 1:

numprocs_start = integer(get(section,'numprocs_start',1))
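Patching supervisor's source should not be necessary: numprocs_start is a regular per-program option, so the same renumbering can be done in celeryd.conf itself (a sketch, not tested against this exact setup):

```ini
[program:celery]
; ... existing settings from celeryd.conf ...
numprocs=2
numprocs_start=1  ; process_num now runs 1..2 instead of 0..1
```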

I still get:

celery:celery1@parzee-dev-app-sfo1   FATAL     Exited too quickly (process log may have details)
celery:celery2@parzee-dev-app-sfo1   EXITED    May 14 12:47 AM

Celery is starting, but supervisord is not tracking it.

root@parzee-dev-app-sfo1:/etc/supervisor#

ps -ef | grep celery
root      2728     1  1 00:46 ?        00:00:02 [celeryd: celery1@parzee-dev-app-sfo1:MainProcess] -active- (worker -c 16 -n celery1@parzee-dev-app-sfo1 --loglevel=DEBUG -P processes --logfile=/usr/local/src/imbue/application/imbue/log/celeryd.log --pidfile=/usr/local/src/imbue/application/imbue/log/1.pid)
root      2973     1  1 00:46 ?        00:00:02 [celeryd: celery2@parzee-dev-app-sfo1:MainProcess] -active- (worker -c 16 -n celery2@parzee-dev-app-sfo1 --loglevel=DEBUG -P processes --logfile=/usr/local/src/imbue/application/imbue/log/celeryd.log --pidfile=/usr/local/src/imbue/application/imbue/log/2.pid)

celery.sh

source ~/.profile
CELERY_LOGFILE=/usr/local/src/imbue/application/imbue/log/celeryd.log
CELERYD_OPTS=" --loglevel=DEBUG"
CELERY_WORKERS=2
CELERY_PROCESSES=16
cd /usr/local/src/imbue/application/imbue/conf
exec celery multi start $CELERY_WORKERS -P processes -c $CELERY_PROCESSES -n celeryd@${HOSTNAME} -f $CELERY_LOGFILE $CELERYD_OPTS

Related:
> Running celeryd_multi with supervisor
> How to use Supervisor + Django + Celery with multiple Queues and Workers?

Solution: Since supervisor monitors (starts/stops/restarts) processes, the process should run in the foreground (it must not daemonize).

Celery multi daemonizes itself, so it cannot be run under supervisor.

You can create a separate process for each worker and put them in a group:

[program:worker1]
command=celery worker -l info -n worker1

[program:worker2]
command=celery worker -l info -n worker2

[group:workers]
programs=worker1,worker2
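If the number of workers changes often, the per-worker sections can be generated rather than hand-written. A minimal sketch (section contents copied from the example above; `render_supervisor_conf` is a hypothetical helper, not part of celery or supervisor):

```python
def render_supervisor_conf(n_workers):
    """Render one [program:workerN] section per worker plus a [group:workers]
    section, mirroring the hand-written config above."""
    sections = []
    for i in range(1, n_workers + 1):
        sections.append(
            "[program:worker%d]\n"
            "command=celery worker -l info -n worker%d\n" % (i, i)
        )
    names = ",".join("worker%d" % i for i in range(1, n_workers + 1))
    sections.append("[group:workers]\nprograms=%s\n" % names)
    return "\n".join(sections)

print(render_supervisor_conf(2))
```

Writing the result to a file under /etc/supervisor/conf.d/ and running `supervisorctl update` would then pick up the new sections.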

You can also write a shell script like the following, which makes a daemonizing process effectively run in the foreground:

#!/usr/bin/env bash
set -eu

pidfile="/var/run/your-daemon.pid"

# Proxy signals to the daemon
function kill_app(){
    kill $(cat $pidfile)
    exit 0 # exit okay
}
trap "kill_app" SIGINT SIGTERM

# Launch daemon
celery multi start 2 -l INFO
sleep 2

# Loop while the pidfile and the process exist
while [ -f $pidfile ] && kill -0 $(cat $pidfile) ; do
    sleep 0.5
done
exit 1000 # exit unexpected
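The liveness test in the loop above, `kill -0 $(cat $pidfile)`, has a direct Python equivalent: signal 0 probes for process existence without delivering anything. A sketch (`pid_alive` is a hypothetical helper, not part of celery or supervisor):

```python
import os

def pid_alive(pidfile):
    """True if pidfile exists, contains an integer, and that PID names a
    live process we may signal. os.kill(pid, 0) sends no signal; it only
    checks existence, mirroring `kill -0` in the shell script."""
    try:
        with open(pidfile) as f:
            pid = int(f.read().strip())
        os.kill(pid, 0)
        return True
    except (OSError, ValueError):
        return False
```

A Python wrapper could poll this in a loop, just as the shell script polls `kill -0`, and exit once the worker's pidfile goes stale.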

Original source: http://outofmemory.cn/langs/1194990.html
