[Solved] Flask-RQ2 + redis background process not working

While working on:

[Solved] Add a background job in Flask to send a notification when an event expires

the code at that point was:

/Users/crifan/dev/dev_root/daryun/SIPEvents/sourcecode/sipevents/sipevents/__init__.py

from flask_rq2 import RQ
rq = RQ(app)
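
For reference, a minimal sketch of this initialization with an explicit Redis connection; RQ_REDIS_URL is Flask-RQ2's config key for this, and the localhost address below is an assumed value, adjust it to your own Redis:

from flask import Flask
from flask_rq2 import RQ

app = Flask(__name__)
# assumed local Redis; Flask-RQ2 reads the connection info from RQ_REDIS_URL
app.config['RQ_REDIS_URL'] = 'redis://localhost:6379/0'
rq = RQ(app)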

/Users/crifan/dev/dev_root/daryun/SIPEvents/sourcecode/sipevents/sipevents/views.py

############################################################
# Flask-RQ2
############################################################
from . import rq
############################################################
# Flask-RQ2 related
############################################################
from datetime import timedelta

@rq.job
def add(x, y):
    gLog.debug("x=%s, y=%s", x, y)
    return x + y

@app.route('/creat_event', methods=['GET', 'POST'])
@login_required
def creat_event():
    gLog.debug("after create event, try use rq2 to run background work")
    jobAdd = add.queue(1, 2)
    gLog.debug("jobAdd=%s", jobAdd)
    scheduledJobAdd = add.schedule(timedelta(seconds=20), 3, 4) # queue job in 20 seconds
    gLog.debug("scheduledJobAdd=%s", scheduledJobAdd)

Then I ran it, but redis showed no related output,

and the add function produced no corresponding output either.

The corresponding log was:

after create event, try use rq2 to run background work

DEBUG in views [/root/html/SIPEvents/sipevents/views.py:917]:
jobAdd=<Job 7eb9911e-84c4-46a9-9a0f-dd60382c863b: sipevents.views.add(1, 2)>

DEBUG in views [/root/html/SIPEvents/sipevents/views.py:920]:
scheduledJobAdd=<Job 2de60c91-ce23-40d2-a0c1-c5bbfb31e166: sipevents.views.add(3, 4)>

Then I stopped my Flask app and checked the redis log to see whether anything related had been written there. It had not:

(SIPEvents) ➜  SIPEvents cat redis.log 
nohup: ignoring input
19594:C 04 Sep 12:18:23.733 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
19594:M 04 Sep 12:18:23.736 # Creating Server TCP listening socket *:6379: unable to bind socket
nohup: ignoring input
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 3.2.3 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 19659
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'
19659:M 04 Sep 12:20:34.528 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
19659:M 04 Sep 12:20:34.529 # Server started, Redis version 3.2.3
19659:M 04 Sep 12:20:34.529 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
19659:M 04 Sep 12:20:34.529 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
19659:M 04 Sep 12:20:34.529 * DB loaded from disk: 0.000 seconds
19659:M 04 Sep 12:20:34.529 * The server is now ready to accept connections on port 6379

Could it be that:

the add job itself produces no output, so there is simply nothing to see?

But there is a log call in it:

@rq.job
def add(x, y):
    gLog.debug("x=%s, y=%s", x, y)
    return x + y

Next, confirm that redis itself is working properly:

the process is running in the background and responds to ping:

➜  ~ netstat -plntu
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   
tcp        0      0 0.0.0.0:10022               0.0.0.0:*                   LISTEN      17286/docker-proxy  
tcp        0      0 0.0.0.0:9000                0.0.0.0:*                   LISTEN      19706/docker-proxy  
tcp        0      0 0.0.0.0:3306                0.0.0.0:*                   LISTEN      17220/mysqld        
tcp        0      0 127.0.0.1:6379              0.0.0.0:*                   LISTEN      19659/redis-server  
tcp        0      0 0.0.0.0:9003                0.0.0.0:*                   LISTEN      19669/docker-proxy  
tcp        0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      19015/nginx         
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      1335/sshd           
tcp        0      0 0.0.0.0:3000                0.0.0.0:*                   LISTEN      17240/docker-proxy  
tcp        0      0 0.0.0.0:8088                0.0.0.0:*                   LISTEN      29962/docker-proxy  
tcp        0      0 0.0.0.0:5432                0.0.0.0:*                   LISTEN      18421/docker-proxy  
tcp        0      0 0.0.0.0:8000                0.0.0.0:*                   LISTEN      19505/docker-proxy  
tcp        0      0 0.0.0.0:32771               0.0.0.0:*                   LISTEN      8748/docker-proxy   
tcp        0      0 0.0.0.0:32772               0.0.0.0:*                   LISTEN      8782/docker-proxy   
udp        0      0 192.168.42.1:123            0.0.0.0:*                               1346/ntpd           
udp        0      0 115.29.173.126:123          0.0.0.0:*                               1346/ntpd           
udp        0      0 10.161.170.247:123          0.0.0.0:*                               1346/ntpd           
udp        0      0 127.0.0.1:123               0.0.0.0:*                               1346/ntpd           
udp        0      0 0.0.0.0:123                 0.0.0.0:*                               1346/ntpd           
➜  ~ redis-cli
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> get myname
"crifan"
127.0.0.1:6379>
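
The same check can also be done from Python with the redis package that shows up in pip list below; a small sketch, connection values assumed:

from redis import Redis

r = Redis(host='127.0.0.1', port=6379)
print(r.ping())           # True if the server is reachable
print(r.get('myname'))    # b'crifan', the key queried above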

Then I remembered:

there is also the file log from before; check whether there is any output in it.

Nothing was found there either.

Flask-RQ not working

How to run the rq worker inside Flask (without blocking) instead of starting it from the command line? – V2EX

A big batch of RQ questions: number of workers, working modes, non-standalone running... – V2EX

Flask-RQ2 not run

Flask-RQ2 queue not work

Flask-RQ2 schedule

Flask-RQ2 job scheduler not work

python – Issue when running schedule with Flask – Stack Overflow

cron – How to run recurring task in the Python Flask framework? – Stack Overflow

python – How to stop Flask from initialising twice in Debug Mode? – Stack Overflow

First, try this:

/Users/crifan/dev/dev_root/daryun/SIPEvents/sourcecode/sipevents/run.py

from sipevents import app

if __name__ == '__main__':
    app.run(debug=True, use_reloader=False)

Then run it and see whether the app still gets initialized multiple times.

It seems it still does (with gunicorn -w 4, each of the four worker processes imports the app once, so the __init__ logs below repeat four times):

(SIPEvents) ➜  SIPEvents gunicorn -w 4 -b 127.0.0.1:8080 run:app
[2016-09-04 14:50:02 +0000] [21143] [INFO] Starting gunicorn 19.6.0
[2016-09-04 14:50:02 +0000] [21143] [INFO] Listening at: http://127.0.0.1:8080 (21143)
[2016-09-04 14:50:02 +0000] [21143] [INFO] Using worker: sync
[2016-09-04 14:50:02 +0000] [21148] [INFO] Booting worker with pid: 21148
[2016-09-04 14:50:02 +0000] [21149] [INFO] Booting worker with pid: 21149
[2016-09-04 14:50:03 +0000] [21152] [INFO] Booting worker with pid: 21152
[2016-09-04 14:50:03 +0000] [21153] [INFO] Booting worker with pid: 21153

DEBUG in __init__ [/root/html/SIPEvents/sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [/root/html/SIPEvents/sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7f9aaae6ef50>

DEBUG in __init__ [/root/html/SIPEvents/sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [/root/html/SIPEvents/sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7f9aaae6ef90>

DEBUG in __init__ [/root/html/SIPEvents/sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [/root/html/SIPEvents/sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7f9aaae78050>

DEBUG in __init__ [/root/html/SIPEvents/sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [/root/html/SIPEvents/sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7f9aaae780d0>

I saw that:

Flask-RQ2 — Flask-RQ2 16.0.2 documentation

says that Flask-RQ2 is an extension on top of:

RQ: Documentation

So my feeling was:

could it be that RQ itself is not installed here?

Let's check:

(SIPEvents) ➜  SIPEvents pip list
alembic (0.8.7)
click (6.6)
croniter (0.3.12)
enum34 (1.1.6)
Flask (0.11.1)
Flask-Images (2.1.2)
Flask-Login (0.3.2)
Flask-Migrate (2.0.0)
Flask-Redis (0.3.0)
Flask-RQ2 (16.0.2)
Flask-Script (2.0.5)
Flask-SQLAlchemy (2.1)
gunicorn (19.6.0)
itsdangerous (0.24)
Jinja2 (2.8)
Mako (1.0.4)
MarkupSafe (0.23)
PIL (1.1.7)
Pillow (3.3.1)
pillowcase (2.0.0)
pip (8.1.2)
pycrypto (2.6.1)
python-dateutil (2.5.3)
python-editor (1.0.1)
redis (2.10.5)
requests (2.6.0)
rq (0.6.0)
rq-scheduler (0.7.0)
setuptools (25.2.0)
six (1.10.0)
SQLAlchemy (1.1.0b3)
uWSGI (2.0.13.1)
wechat-sdk (0.6.4)
Werkzeug (0.11.10)
wheel (0.29.0)
xmltodict (0.10.2)

rq is indeed already installed:

rq (0.6.0)
rq-scheduler (0.7.0)

So perhaps, as described at:

nvie/rq: Simple job queues for Python

I need to run:

rq worker

That does seem to make the jobs visible, or at least some of them.

For the jobs created after creating a new event:

DEBUG in views [/root/html/SIPEvents/sipevents/views.py:915]:
after create event, try use rq2 to run background work

DEBUG in views [/root/html/SIPEvents/sipevents/views.py:917]:
jobAdd=<Job 89542863-6eaf-4c7e-848f-334114781e9f: sipevents.views.add(1, 2)>

DEBUG in views [/root/html/SIPEvents/sipevents/views.py:920]:
scheduledJobAdd=<Job 6107a556-bb39-447c-b819-39741f731271: sipevents.views.add(3, 4)>

Running rq worker, I saw:

(SIPEvents) ➜  SIPEvents which rq
/root/Envs/SIPEvents/bin/rq
(SIPEvents) ➜  SIPEvents rq help
Usage: rq [OPTIONS] COMMAND [ARGS]...
Error: No such command "help".
(SIPEvents) ➜  SIPEvents rq worker
15:02:45 RQ worker u’rq:worker:AY140128113754462e2eZ.21641′ started, version 0.6.0
15:02:45 Cleaning registries for queue: default
15:02:45 
15:02:45 *** Listening on default…
15:02:45 default: sipevents.views.add(1, 2) (7eb9911e-84c4-46a9-9a0f-dd60382c863b)

DEBUG in __init__ [./sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [./sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7f45dc510e10>

DEBUG in views [./sipevents/views.py:363]:
x=1, y=2

15:02:45 default: Job OK (7eb9911e-84c4-46a9-9a0f-dd60382c863b)
15:02:45 Result is kept for 500 seconds
15:02:45 
15:02:45 *** Listening on default…
15:02:45 default: sipevents.views.add(1, 2) (af518d07-028c-4f8e-9ee3-6e814ccd652a)

DEBUG in __init__ [./sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [./sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7f45dc511e10>

DEBUG in views [./sipevents/views.py:363]:
x=1, y=2

15:02:46 default: Job OK (af518d07-028c-4f8e-9ee3-6e814ccd652a)
15:02:46 Result is kept for 500 seconds
15:02:46 
15:02:46 *** Listening on default…
15:02:46 default: sipevents.views.add(1, 2) (89542863-6eaf-4c7e-848f-334114781e9f)

DEBUG in __init__ [./sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [./sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7f45dc511e10>

DEBUG in views [./sipevents/views.py:363]:
x=1, y=2

15:02:46 default: Job OK (89542863-6eaf-4c7e-848f-334114781e9f)
15:02:46 Result is kept for 500 seconds
15:02:46 
15:02:46 *** Listening on default…

→

So here,

add.queue(1, 2)

took effect.

But:

add.schedule(timedelta(seconds=20), 3, 4)

did not.

Could it be that the:

rq-scheduler

environment/tool has to be running before the scheduled job's output can show up?

So I searched for:

flask python rq-scheduler

ui/rq-scheduler: A light library that adds job scheduling capabilities to RQ (Redis Queue)

Tested again:

after create event, try use rq2 to run background work

DEBUG in views [/root/html/SIPEvents/sipevents/views.py:917]:
jobAdd=<Job 8dff6c46-0fe3-4ff1-908e-b1b13c21ffc6: sipevents.views.add(1, 2)>

DEBUG in views [/root/html/SIPEvents/sipevents/views.py:920]:
scheduledJobAdd=<Job 0d158d5d-3ce9-47d5-92d8-b9c64a1149be: sipevents.views.add(3, 4)>

But it did not take effect:

(SIPEvents) ➜  SIPEvents rq worker                                
15:11:49 RQ worker u’rq:worker:AY140128113754462e2eZ.21889′ started, version 0.6.0
15:11:49 Cleaning registries for queue: default
15:11:49 
15:11:49 *** Listening on default…

And:

(SIPEvents) ➜  SIPEvents rqscheduler
15:13:23 Running RQ scheduler…
15:13:23 Checking for scheduled jobs…
^C15:13:28 Shutting down RQ scheduler…

→ My impression:

first,

rq must already be running in the background:

rq worker

and likewise

rqscheduler must also already be running in the background:

rqscheduler

and only then do you use the Flask-RQ2 job's:

queue to push a job onto the queue → the rq worker will then pick it up?

schedule to add a job to the scheduling queue → rqscheduler will detect it and schedule it to run (see the sketch below).
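
Conceptually, this is the division of labor Flask-RQ2 builds on; a rough sketch using the underlying rq / rq-scheduler APIs directly (not the post's actual code, and it assumes a local Redis on 6379):

from datetime import timedelta
from redis import Redis
from rq import Queue
from rq_scheduler import Scheduler

def add(x, y):
    return x + y

redis_conn = Redis()                                     # assumed local Redis
queue = Queue('default', connection=redis_conn)          # what `rq worker` listens on
scheduler = Scheduler(queue_name='default', connection=redis_conn)  # what `rqscheduler` drives

queue.enqueue(add, 1, 2)                                 # ~ add.queue(1, 2): run as soon as a worker is free
scheduler.enqueue_in(timedelta(seconds=20), add, 3, 4)   # ~ add.schedule(...): enqueue 20 seconds from now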

Let's try it:

1. Run rq worker in the background

nohup rq worker > rq_worker.log 2>&1 &

2. Run rqscheduler in the background

nohup rqscheduler >rqscheduler.log 2>&1 &
(SIPEvents) ➜  SIPEvents nohup rq worker > rq_worker.log 2>&1 &
[1] 22360
(SIPEvents) ➜  SIPEvents ps aux | grep rq                      
root         3  0.0  0.0      0     0 ?        S    Jun05   0:29 [ksoftirqd/0]
root        43  0.0  0.0      0     0 ?        S    Jun05   0:16 [ksoftirqd/1]
root     22360  3.0  0.3 199684 12676 pts/1    SN   15:36   0:00 /root/Envs/SIPEvents/bin/python /root/Envs/SIPEvents/bin/rq worker
root     22371  0.0  0.0 103368   820 pts/1    S+   15:36   0:00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn rq
(SIPEvents) ➜  SIPEvents nohup rqscheduler >rqscheduler.log 2>&1 &
[2] 22377
(SIPEvents) ➜  SIPEvents ps aux | grep rq                         
root         3  0.0  0.0      0     0 ?        S    Jun05   0:29 [ksoftirqd/0]
root        43  0.0  0.0      0     0 ?        S    Jun05   0:16 [ksoftirqd/1]
root     22360  0.6  0.3 199684 12676 pts/1    SN   15:36   0:00 /root/Envs/SIPEvents/bin/python /root/Envs/SIPEvents/bin/rq worker
root     22377  1.6  0.3 194612 12252 pts/1    SN   15:36   0:00 /root/Envs/SIPEvents/bin/python /root/Envs/SIPEvents/bin/rqscheduler
root     22387  0.0  0.0 103368   824 pts/1    S+   15:36   0:00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn rq
(SIPEvents) ➜  SIPEvents netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   
tcp        0      0 0.0.0.0:10022               0.0.0.0:*                   LISTEN      17286/docker-proxy  
tcp        0      0 0.0.0.0:9000                0.0.0.0:*                   LISTEN      19706/docker-proxy  
tcp        0      0 0.0.0.0:3306                0.0.0.0:*                   LISTEN      17220/mysqld        
tcp        0      0 127.0.0.1:6379              0.0.0.0:*                   LISTEN      19659/redis-server  
tcp        0      0 0.0.0.0:9003                0.0.0.0:*                   LISTEN      19669/docker-proxy  
tcp        0      0 127.0.0.1:8080              0.0.0.0:*                   LISTEN      21835/python        
tcp        0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      19015/nginx         
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      1335/sshd           
tcp        0      0 0.0.0.0:3000                0.0.0.0:*                   LISTEN      17240/docker-proxy  
tcp        0      0 0.0.0.0:8088                0.0.0.0:*                   LISTEN      29962/docker-proxy  
tcp        0      0 0.0.0.0:5432                0.0.0.0:*                   LISTEN      18421/docker-proxy  
tcp        0      0 0.0.0.0:8000                0.0.0.0:*                   LISTEN      19505/docker-proxy  
tcp        0      0 0.0.0.0:32771               0.0.0.0:*                   LISTEN      8748/docker-proxy   
tcp        0      0 0.0.0.0:32772               0.0.0.0:*                   LISTEN      8782/docker-proxy   
udp        0      0 192.168.42.1:123            0.0.0.0:*                               1346/ntpd           
udp        0      0 115.29.173.126:123          0.0.0.0:*                               1346/ntpd           
udp        0      0 10.161.170.247:123          0.0.0.0:*                               1346/ntpd           
udp        0      0 127.0.0.1:123               0.0.0.0:*                               1346/ntpd           
udp        0      0 0.0.0.0:123                 0.0.0.0:*                               1346/ntpd 

At this point I have made sure that:

redis-server is running and healthy,

rq worker is running,

and rqscheduler is running.

(SIPEvents) ➜  SIPEvents ll
total 68K
-rw-r--r-- 1 root root  833 Sep  2 16:54 config.py
-rw-r--r-- 1 root root  348 Sep  2 16:59 config.pyc
-rw-r--r-- 1 root root 6.8K Aug 30 17:53 db_create.py
-rw-r--r-- 1 root root  773 Aug 29 17:57 db_manager.py
-rw-r--r-- 1 root root  454 Sep  4 15:36 dump.rdb
drwxr-xr-x 2 root root 4.0K Sep  4 15:11 instance
drwxr-xr-x 2 root root 4.0K Sep  2 16:39 logs
-rw-r--r-- 1 root root 5.1K Sep  4 15:36 redis.log
-rw-r--r-- 1 root root  552 Sep  4 14:57 requirements.txt
-rw-r--r-- 1 root root  135 Sep  4 15:37 rqscheduler.log
-rw-r--r-- 1 root root  200 Sep  4 15:36 rq_worker.log
-rw-r--r-- 1 root root  122 Sep  4 14:50 run.py
-rw-r--r-- 1 root root  255 Sep  4 14:52 run.pyc
drwxr-xr-x 4 root root 4.0K Sep  4 11:06 sipevents
drwxr-xr-x 2 root root 4.0K Aug 21 11:04 toDel

Each one's log goes to its own file:

redis.log

rq_worker.log

rqscheduler.log

Then start my own Flask app again:

(SIPEvents) ➜  SIPEvents gunicorn -w 4 -b 127.0.0.1:8080 run:app &
[3] 22520
(SIPEvents) ➜  SIPEvents [2016-09-04 15:39:07 +0000] [22520] [INFO] Starting gunicorn 19.6.0
[2016-09-04 15:39:07 +0000] [22520] [INFO] Listening at: http://127.0.0.1:8080 (22520)
[2016-09-04 15:39:07 +0000] [22520] [INFO] Using worker: sync
[2016-09-04 15:39:07 +0000] [22529] [INFO] Booting worker with pid: 22529
[2016-09-04 15:39:07 +0000] [22530] [INFO] Booting worker with pid: 22530
[2016-09-04 15:39:07 +0000] [22533] [INFO] Booting worker with pid: 22533
[2016-09-04 15:39:07 +0000] [22536] [INFO] Booting worker with pid: 22536

DEBUG in __init__ [/root/html/SIPEvents/sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [/root/html/SIPEvents/sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7f6b381d6f10>

DEBUG in __init__ [/root/html/SIPEvents/sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [/root/html/SIPEvents/sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7f6b381d6f90>

DEBUG in __init__ [/root/html/SIPEvents/sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [/root/html/SIPEvents/sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7f6b381df050>

DEBUG in __init__ [/root/html/SIPEvents/sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [/root/html/SIPEvents/sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7f6b381df0d0>

Then create an event.

The corresponding jobs are:

after create event, try use rq2 to run background work

DEBUG in views [/root/html/SIPEvents/sipevents/views.py:917]:
jobAdd=<Job 335febfd-c334-496c-b428-0ee7a7410c73: sipevents.views.add(1, 2)>

DEBUG in views [/root/html/SIPEvents/sipevents/views.py:920]:
scheduledJobAdd=<Job bab35ea6-f39d-4fea-aaba-18ee9ca9c7c3: sipevents.views.add(3, 4)>

Then check the logs to see whether the corresponding jobs actually ran.

Clearly:

(SIPEvents) ➜  SIPEvents cat rq_worker.log 
nohup: ignoring input
15:36:03 RQ worker u’rq:worker:AY140128113754462e2eZ.22360′ started, version 0.6.0
15:36:03 Cleaning registries for queue: default
15:36:03 
15:36:03 *** Listening on default…
15:40:12 default: sipevents.views.add(1, 2) (335febfd-c334-496c-b428-0ee7a7410c73)

DEBUG in __init__ [./sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [./sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7eff3b253dd0>

DEBUG in views [./sipevents/views.py:363]:
x=1, y=2

15:40:13 default: Job OK (335febfd-c334-496c-b428-0ee7a7410c73)
15:40:13 Result is kept for 500 seconds
15:40:13 
15:40:13 *** Listening on default…
(SIPEvents) ➜  SIPEvents cat rqscheduler.log 
nohup: ignoring input
15:36:12 Running RQ scheduler…
15:36:12 Checking for scheduled jobs…
15:37:12 Checking for scheduled jobs…
15:38:12 Checking for scheduled jobs…
15:39:12 Checking for scheduled jobs…
15:40:12 Checking for scheduled jobs…

So still:

the rq worker did its work,

but rqscheduler did not.

flask rqscheduler

flask rqscheduler not work

Using rqscheduler to run tasks on a schedule | 阿小信's blog

how do you package your flask/rq workers with your webapp ?:flask

Quite a while later, I found that the scheduled rq job:

the 3+4 add

had in fact run:

(SIPEvents) ➜  SIPEvents cat rq_worker.log 
nohup: ignoring input
15:36:03 RQ worker u’rq:worker:AY140128113754462e2eZ.22360′ started, version 0.6.0
15:36:03 Cleaning registries for queue: default
15:36:03 
15:36:03 *** Listening on default…
15:40:12 default: sipevents.views.add(1, 2) (335febfd-c334-496c-b428-0ee7a7410c73)

DEBUG in __init__ [./sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [./sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7eff3b253dd0>

DEBUG in views [./sipevents/views.py:363]:
x=1, y=2

15:40:13 default: Job OK (335febfd-c334-496c-b428-0ee7a7410c73)
15:40:13 Result is kept for 500 seconds
15:40:13 
15:40:13 *** Listening on default…
15:41:12 default: sipevents.views.add(3, 4) (bab35ea6-f39d-4fea-aaba-18ee9ca9c7c3)

DEBUG in __init__ [./sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [./sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7eff3b253dd0>

DEBUG in views [./sipevents/views.py:363]:
x=3, y=4

15:41:13 default: Job OK (bab35ea6-f39d-4fea-aaba-18ee9ca9c7c3)
15:41:13 Result is kept for 500 seconds
15:41:13 
15:41:13 *** Listening on default…

→ But it did not run every 20 seconds as expected;

it ran only once in total.

Meanwhile the rqscheduler log shows nothing beyond its periodic checks:

(SIPEvents) ➜  SIPEvents cat rqscheduler.log 
nohup: ignoring input
15:36:12 Running RQ scheduler…
15:36:12 Checking for scheduled jobs…
15:37:12 Checking for scheduled jobs…
15:38:12 Checking for scheduled jobs…
15:39:12 Checking for scheduled jobs…
15:40:12 Checking for scheduled jobs…
15:41:12 Checking for scheduled jobs…
15:42:12 Checking for scheduled jobs…
15:43:12 Checking for scheduled jobs…
15:44:13 Checking for scheduled jobs…
15:45:13 Checking for scheduled jobs…
15:46:13 Checking for scheduled jobs…
15:47:13 Checking for scheduled jobs…
15:48:13 Checking for scheduled jobs…
15:49:13 Checking for scheduled jobs…
15:50:13 Checking for scheduled jobs…

And the corresponding Flask file log also shows it:

(SIPEvents) ➜  SIPEvents tail logs/sipevents.log 
。。。
[2016-09-04 15:40:13,735 DEBUG views.py:805 index] tomorrowEventList=[]
[2016-09-04 15:40:13,737 DEBUG views.py:809 index] futureEventList=[]
[2016-09-04 15:41:12,979 DEBUG __init__.py:47 <module>] db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>
[2016-09-04 15:41:12,982 DEBUG __init__.py:62 <module>] type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7eff3b253dd0>
[2016-09-04 15:41:13,146 DEBUG views.py:363 add] x=3, y=4

→

Could it be that:

add.schedule(timedelta(seconds=20), 3, 4)

by itself means:

run only once, and the time to run is 20 seconds from now?

flask python rq-scheduler

Ask HN: What you use for task scheduling in Python + Flask? | Hacker News

Python-RQ best practices?:flask

Then I found the explanation in the official docs:

ui/rq-scheduler: A light library that adds job scheduling capabilities to RQ (Redis Queue)

Instead of taking a datetime object, this method expects a timedelta and schedules the job to run at X seconds/minutes/hours/days/weeks later. For example, if we want to monitor how popular a tweet is a few times during the course of the day, we could do something like:
from datetime import timedelta
# Schedule a job to run 10 minutes, 1 hour and 1 day later
scheduler.enqueue_in(timedelta(minutes=10), count_retweets, tweet_id)
scheduler.enqueue_in(timedelta(hours=1), count_retweets, tweet_id)
scheduler.enqueue_in(timedelta(days=1), count_retweets, tweet_id)
IMPORTANT: You should always use UTC datetime when working with RQ Scheduler.

→ So clearly, passing a timedelta means:

run once after that amount of time,

not:

run repeatedly at that interval.

Then, going back to the Flask-RQ2 docs:

API — Flask-RQ2 16.0.2 documentation

schedule(time_or_delta, *args, **kwargs)

A function to schedule running a RQ job at a given time or after a given timespan:

@rq.job
def add(x, y):
    return x + y

add.schedule(timedelta(hours=2), 1, 2)
add.schedule(datetime(2016, 12, 31, 23, 59, 59), 1, 2)
add.schedule(timedelta(days=14), 1, 2, repeat=1)

which likewise explains that the job runs

at the given time, or once after the given timespan.

Looking back at the example in the official Flask-RQ2 docs:

Flask-RQ2 — Flask-RQ2 16.0.2 documentation

# queue job in 60 seconds
add.schedule(timedelta(seconds=60), 1, 2)
# queue job in 14 days and then repeat once 14 days later
add.schedule(timedelta(days=14), 1, 2, repeat=1)

Only then did I notice that:

queue job in 60 seconds

actually means:

put this job on the queue 60 seconds from now (to be picked up and executed then),

whereas I had misread it before as:

put it on the queue every 60 seconds and have it executed each time...

As for the other parameter, repeat, neither the Flask-RQ2 docs:

Flask-RQ2 — Flask-RQ2 16.0.2 documentation

nor the API reference:

API — Flask-RQ2 16.0.2 documentation

explains its usage in any detail:

for example, what value to give repeat if you want the job to loop forever.

Since Flask-RQ2 is a wrapper around rq-scheduler,

go look at rq-scheduler's own docs:

ui/rq-scheduler: A light library that adds job scheduling capabilities to RQ (Redis Queue)

which do explain repeat:

Periodic & Repeated Jobs

As of version 0.3, RQ Scheduler also supports creating periodic and repeated jobs. You can do this via the schedule method. Note that this feature needs RQ >= 0.3.1.
This is how you do it:
scheduler.schedule(
    scheduled_time=datetime.utcnow(),  # Time for first execution, in UTC timezone
    func=func,                         # Function to be queued
    args=[arg1, arg2],                 # Arguments passed into function when executed
    kwargs={'foo': 'bar'},             # Keyword arguments passed into function when executed
    interval=60,                       # Time before the function is called again, in seconds
    repeat=10                          # Repeat this number of times (None means repeat forever)
)
IMPORTANT NOTE: If you set up a repeated job, you must make sure that you either do not set a result_ttl value or you set a value larger than the interval. Otherwise, the entry with the job details will expire and the job will not get re-scheduled.

So, to have the job run in an endless loop, set:

repeat=None

and, it seems, at the same time also set:

interval=20

in order to get:

run every 20 seconds, forever.

Let's try it:

        #scheduledJobAdd = add.schedule(timedelta(seconds=20), 3, 4) # queue job after 20 seconds
        # run after 1 second , then repeat forever every 20 seconds
        scheduledJobAdd = add.schedule(timedelta(seconds=1), 3, 4, repeat=None, interval=20)
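
Side note, not part of the original test: with repeat=None the job repeats forever, and every such call registers another repeating job, so you may want to keep the returned job object and cancel it once it is no longer needed. A sketch, assuming Flask-RQ2's get_scheduler() accessor and rq-scheduler's Scheduler.cancel():

# cancel a previously scheduled repeating job when it is no longer needed
scheduler = rq.get_scheduler()       # the underlying rq-scheduler Scheduler
scheduler.cancel(scheduledJobAdd)    # accepts the Job object (or its id)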

The two jobs are then:

after create event, try use rq2 to run background work

DEBUG in views [/root/html/SIPEvents/sipevents/views.py:917]:
jobAdd=<Job 1a6ad39d-17c1-41a8-8ff2-b209275f9042: sipevents.views.add(1, 2)>

DEBUG in views [/root/html/SIPEvents/sipevents/views.py:922]:
scheduledJobAdd=<Job ef96b3dc-0a60-464b-afd8-31aa559df11c: sipevents.views.add(3, 4)>

A few minutes later, check the log:

/Users/crifan/dev/dev_root/daryun/SIPEvents/sourcecode/sipevents/rq_worker.log

16:20:16 default: sipevents.views.add(3, 4) (ef96b3dc-0a60-464b-afd8-31aa559df11c)

DEBUG in __init__ [./sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [./sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7eff3b252dd0>

DEBUG in views [./sipevents/views.py:363]:
x=3, y=4

16:20:16 default: Job OK (ef96b3dc-0a60-464b-afd8-31aa559df11c)
16:20:16 Result is kept for 500 seconds
16:20:16
16:20:16 *** Listening on default…
16:21:16 default: sipevents.views.add(3, 4) (ef96b3dc-0a60-464b-afd8-31aa559df11c)

DEBUG in __init__ [./sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [./sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7eff3b252dd0>

DEBUG in views [./sipevents/views.py:363]:
x=3, y=4

16:21:16 default: Job OK (ef96b3dc-0a60-464b-afd8-31aa559df11c)
16:21:16 Result is kept for 500 seconds
16:21:16
16:21:16 *** Listening on default…
16:22:16 default: sipevents.views.add(3, 4) (ef96b3dc-0a60-464b-afd8-31aa559df11c)

DEBUG in __init__ [./sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [./sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7eff3b252dd0>

DEBUG in views [./sipevents/views.py:363]:
x=3, y=4

16:22:16 default: Job OK (ef96b3dc-0a60-464b-afd8-31aa559df11c)
16:22:16 Result is kept for 500 seconds
16:22:16
16:22:16 *** Listening on default…

→ From this you can see:

it does not run every 20 seconds here, but only once every 60 seconds (one minute).

Then I remembered what

Using rqscheduler to run tasks on a schedule | 阿小信's blog

mentioned:

rqscheduler -i 2

and also checked the help:

(SIPEvents) ➜  SIPEvents rqscheduler --help
usage: rqscheduler [-h] [-b] [-H HOST] [-p PORT] [-d DB] [-P PASSWORD]
                   [--verbose] [--url URL] [-i INTERVAL] [--path PATH]
                   [--pid FILE]
Runs RQ scheduler
optional arguments:
  -h, --help            show this help message and exit
  -b, --burst           Run in burst mode (quit after all work is done)
  -H HOST, --host HOST  Redis host
  -p PORT, --port PORT  Redis port number
  -d DB, --db DB        Redis database
  -P PASSWORD, --password PASSWORD
                        Redis password
  --verbose, -v         Show more output
  --url URL, -u URL     URL describing Redis connection details. Overrides
                        other connection arguments if supplied.
  -i INTERVAL, --interval INTERVAL
                        How often the scheduler checks for new jobs to add to
                        the queue (in seconds, can be floating-point for more
                        precision).
  --path PATH           Specify the import path.
  --pid FILE            A filename to use for the PID file.

In other words:

rqscheduler supports an -i option that sets its polling interval,

which defaults to 60 seconds = 1 minute

→ which is exactly why the job above, although the code asked for 20 seconds, actually ran only once per minute.

(SIPEvents) ➜  SIPEvents rq --help
Usage: rq [OPTIONS] COMMAND [ARGS]...
  RQ command line tool.
Options:
  --help  Show this message and exit.
Commands:
  empty    Empty given queues.
  info     RQ command-line monitor.
  requeue  Requeue failed jobs.
  resume   Resumes processing of queues, that where...
  suspend  Suspends all workers, to resume run `rq...
  worker   Starts an RQ worker.
(SIPEvents) ➜  SIPEvents rq help worker
Usage: rq [OPTIONS] COMMAND [ARGS]...
Error: No such command "help".
(SIPEvents) ➜  SIPEvents rq worker --help
Usage: rq worker [OPTIONS] [QUEUES]...
  Starts an RQ worker.
Options:
  -u, --url TEXT            URL describing Redis connection details.
  -c, --config TEXT         Module containing RQ settings.
  -b, --burst               Run in burst mode (quit after all work is done)
  -n, --name TEXT           Specify a different name
  -w, --worker-class TEXT   RQ Worker class to use
  -j, --job-class TEXT      RQ Job class to use
  --queue-class TEXT        RQ Queue class to use
  -P, --path TEXT           Specify the import path.
  --results-ttl INTEGER     Default results timeout to be used
  --worker-ttl INTEGER      Default worker timeout to be used
  -v, --verbose             Show more output
  -q, --quiet               Show less output
  --sentry-dsn TEXT         Report exceptions to this Sentry DSN
  --exception-handler TEXT  Exception handler(s) to use
  --pid TEXT                Write the process ID number to a file at the
                            specified path
  --help                    Show this message and exit.

So let's try it:

first kill the existing rqscheduler, then restart it with the -i option:

(SIPEvents) ➜  SIPEvents ps aux | grep rq
root         3  0.0  0.0      0     0 ?        S    Jun05   0:29 [ksoftirqd/0]
root        43  0.0  0.0      0     0 ?        S    Jun05   0:16 [ksoftirqd/1]
root     22360  0.0  0.3 199684 12916 pts/1    SN   15:36   0:00 /root/Envs/SIPEvents/bin/python /root/Envs/SIPEvents/bin/rq worker
root     22377  0.0  0.3 194612 12368 pts/1    SN   15:36   0:00 /root/Envs/SIPEvents/bin/python /root/Envs/SIPEvents/bin/rqscheduler
root     23339  0.0  0.0 103368   820 pts/5    S+   16:29   0:00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn rq
(SIPEvents) ➜  SIPEvents kill -9 22377
(SIPEvents) ➜  SIPEvents nohup rqscheduler -i 5 >> rqscheduler.log 2>&1 &
[2] 23451
(SIPEvents) ➜  SIPEvents ps aux | grep rq                                
root         3  0.0  0.0      0     0 ?        S    Jun05   0:29 [ksoftirqd/0]
root        43  0.0  0.0      0     0 ?        S    Jun05   0:16 [ksoftirqd/1]
root     22360  0.0  0.3 199684 12920 pts/1    SN   15:36   0:00 /root/Envs/SIPEvents/bin/python /root/Envs/SIPEvents/bin/rq worker
root     23451  2.2  0.3 194612 12376 pts/5    SN   16:31   0:00 /root/Envs/SIPEvents/bin/python /root/Envs/SIPEvents/bin/rqscheduler -i 5
root     23465  0.0  0.0 103368   820 pts/5    R+   16:31   0:00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn rq

I also deliberately changed the job to an even shorter interval of 15 seconds:

        # run after 1 second , then repeat forever every 15 seconds
        scheduledJobAdd = add.schedule(timedelta(seconds=1), 3, 4, repeat=None, interval=15)

Then run it again:

after create event, try use rq2 to run background work

DEBUG in views [/root/html/SIPEvents/sipevents/views.py:917]:
jobAdd=<Job 25a29a4d-c685-4c76-bdfa-6a801e92d170: sipevents.views.add(1, 2)>

DEBUG in views [/root/html/SIPEvents/sipevents/views.py:922]:
scheduledJobAdd=<Job 5b4dabb9-5d4e-447b-96cd-4f9d7be7d489: sipevents.views.add(3, 4)>

From:

/Users/crifan/dev/dev_root/daryun/SIPEvents/sourcecode/sipevents/rqscheduler.log

we can see:

16:35:26 Checking for scheduled jobs…
16:35:31 Checking for scheduled jobs…
16:35:37 Checking for scheduled jobs…
16:35:42 Checking for scheduled jobs…
16:35:47 Checking for scheduled jobs…
16:35:52 Checking for scheduled jobs…
16:35:57 Checking for scheduled jobs…
16:36:02 Checking for scheduled jobs…
16:36:07 Checking for scheduled jobs…

It now checks every 5 seconds, so the -i 5 option took effect.

/Users/crifan/dev/dev_root/daryun/SIPEvents/sourcecode/sipevents/rq_worker.log

16:35:26 default: sipevents.views.add(3, 4) (5b4dabb9-5d4e-447b-96cd-4f9d7be7d489)

DEBUG in __init__ [./sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [./sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7eff3b252dd0>

DEBUG in views [./sipevents/views.py:363]:
x=3, y=4

16:35:27 default: Job OK (5b4dabb9-5d4e-447b-96cd-4f9d7be7d489)
16:35:27 Result is kept for 500 seconds
16:35:27
16:35:27 *** Listening on default…
16:35:27 default: sipevents.views.add(3, 4) (ef96b3dc-0a60-464b-afd8-31aa559df11c)

DEBUG in __init__ [./sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [./sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7eff3b252dd0>

DEBUG in views [./sipevents/views.py:363]:
x=3, y=4

16:35:27 default: Job OK (ef96b3dc-0a60-464b-afd8-31aa559df11c)
16:35:27 Result is kept for 500 seconds
16:35:27
16:35:27 *** Listening on default…
16:35:42 default: sipevents.views.add(3, 4) (5b4dabb9-5d4e-447b-96cd-4f9d7be7d489)

DEBUG in __init__ [./sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [./sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7eff3b252dd0>

DEBUG in views [./sipevents/views.py:363]:
x=3, y=4

16:35:42 default: Job OK (5b4dabb9-5d4e-447b-96cd-4f9d7be7d489)
16:35:42 Result is kept for 500 seconds
16:35:42
16:35:42 *** Listening on default…
16:35:47 default: sipevents.views.add(3, 4) (ef96b3dc-0a60-464b-afd8-31aa559df11c)

DEBUG in __init__ [./sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [./sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7eff3b252dd0>

DEBUG in views [./sipevents/views.py:363]:
x=3, y=4

16:35:47 default: Job OK (ef96b3dc-0a60-464b-afd8-31aa559df11c)
16:35:47 Result is kept for 500 seconds
16:35:47
16:35:47 *** Listening on default…
16:35:57 default: sipevents.views.add(3, 4) (5b4dabb9-5d4e-447b-96cd-4f9d7be7d489)

DEBUG in __init__ [./sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [./sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7eff3b252dd0>

DEBUG in views [./sipevents/views.py:363]:
x=3, y=4

16:35:57 default: Job OK (5b4dabb9-5d4e-447b-96cd-4f9d7be7d489)
16:35:57 Result is kept for 500 seconds
16:35:57
16:35:57 *** Listening on default…
16:36:07 default: sipevents.views.add(3, 4) (ef96b3dc-0a60-464b-afd8-31aa559df11c)

DEBUG in __init__ [./sipevents/__init__.py:47]:
db=<SQLAlchemy engine=’sqlite:////usr/share/nginx/html/SIPEvents/instance/sipevents.db’>

DEBUG in __init__ [./sipevents/__init__.py:62]:
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7eff3b252dd0>

DEBUG in views [./sipevents/views.py:363]:
x=3, y=4

16:36:07 default: Job OK (ef96b3dc-0a60-464b-afd8-31aa559df11c)
16:36:07 Result is kept for 500 seconds
16:36:07 Cleaning registries for queue: default
16:36:07
16:36:07 *** Listening on default…

As well as:

(SIPEvents) ➜  SIPEvents tail rq_worker.log

DEBUG in views [./sipevents/views.py:363]:
x=3, y=4

16:40:13 default: Job OK (5b4dabb9-5d4e-447b-96cd-4f9d7be7d489)
16:40:13 Result is kept for 500 seconds
16:40:13 
16:40:13 *** Listening on default…
16:40:28 default: sipevents.views.add(3, 4) (5b4dabb9-5d4e-447b-96cd-4f9d7be7d489)
(SIPEvents) ➜  SIPEvents tail rq_worker.log
type(rq)=<class ‘flask_rq2.app.RQ’>, rq=<flask_rq2.app.RQ object at 0x7eff3b252dd0>

DEBUG in views [./sipevents/views.py:363]:
x=3, y=4

16:40:29 default: Job OK (ef96b3dc-0a60-464b-afd8-31aa559df11c)
16:40:29 Result is kept for 500 seconds
16:40:29 
16:40:29 *** Listening on default…

From this you can see:

the job really is running repeatedly in the background, over and over.

[Summary]

Flask-RQ2 supports both:

rq worker

and:

rqscheduler

Time-consuming work that is pushed to the background to run asynchronously, whose result is then saved somewhere (redis, another database, etc.) so that other code can fetch it from that place when needed, can be called a background (long-running, asynchronous) task.

→ Flask-RQ2 supports this kind of background task internally via rq worker.

→ The corresponding code:

add.queue(1, 2)

relies on

rq worker

running in the background, listening for jobs added to this queue

and then executing them (see the sketch below).
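
For the "save the result somewhere and fetch it back later" part, the rq Job returned by add.queue() can be looked up again by its id; a sketch (how and where you store the id is up to you, values assumed):

from redis import Redis
from rq.job import Job

job = add.queue(1, 2)          # returns an rq Job
job_id = job.id                # keep this somewhere (database, session, ...)

# ... later, possibly in another request or process:
fetched = Job.fetch(job_id, connection=Redis())
print(fetched.get_status())    # 'queued' / 'started' / 'finished' / 'failed'
print(fetched.result)          # None until the worker finishes; kept only for result_ttl (500s here)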

The other kind is:

periodic, recurring tasks that need to run at a specific time, possibly over and over on a fixed cycle; these have to be scheduled for execution at particular times and can be called scheduled tasks.

→ Flask-RQ2 supports this kind of scheduled task internally via rqscheduler.

→ The corresponding code:

add.schedule(timedelta(seconds=20), 3, 4)
add.schedule(timedelta(seconds=1), 3, 4, repeat=None, interval=15)

relies on

rqscheduler

running in the background, waking up at a fixed interval (set with the -i option)

to check whether any scheduled job is due;

if so, it hands the job over to the rq worker to execute.

→

So you could put it this way:

in the end, it is always the rq worker that does the actual work;

the only difference is:

add.queue means do the work right away,

add.schedule means do the work when the given time arrives (for example 30 seconds from now, or every 15 seconds from now on).

→

rq worker: responsible for actually doing the work.

rqscheduler: responsible for scheduling; when the time comes, it hands the work over to the rq worker.

Question 1: Why does rq worker have to be running for Flask-RQ2 to work properly?

Detailed explanation:

The official Flask-RQ2 docs:

Flask-RQ2 — Flask-RQ2 16.0.2 documentation

already state that Flask-RQ2 is just an extension on top of:

RQ: Simple job queues for Python

→ Which means:

1. Installing Flask-RQ2 automatically pulls in the RQ-related dependencies:

rq (0.6.0)

rq-scheduler (0.7.0)

→ And since RQ == Redis Queue is itself built on Redis for queue and job management,

the redis dependency is installed automatically as well:

redis (2.10.5)

2. Flask-RQ2 needs RQ's background service running in order to work fully.

→

So, for background job handling to work end to end, you need to

run RQ's background service:

rq worker

Note its output:

Listening for work on default

What it does is:

listen on the default rq queue ('default')?

If a job is added to that queue, it executes the job.

This appears to correspond to the default queues configuration in:

API — Flask-RQ2 16.0.2 documentation

namely the queues setting:

queues = ['default']

List of queue names for RQ to work on.
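
So if you ever want jobs on more than the single 'default' queue, the queue list can be extended and the worker told which queues to listen on; a sketch (the extra queue name is an assumption):

# in the Flask app setup: tell Flask-RQ2 about an extra queue
app.config['RQ_QUEUES'] = ['default', 'high']

# then start a worker that listens on both, highest priority first:
#   rq worker high default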

Question 2: Why does rqscheduler have to be running for Flask-RQ2 to work properly?

ui/rq-scheduler: A light library that adds job scheduling capabilities to RQ (Redis Queue)

The corresponding code:

add.schedule(timedelta(seconds=1), 3, 4, repeat=None, interval=15)

internally adds the job to rqscheduler;

rqscheduler wakes up periodically to check whether any job's time has come,

and if so, hands that job to the rq worker to run.

→

So there also always has to be a scheduling process running in the background:

rqscheduler

which is responsible for scheduling jobs for execution (see the sketch below).
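
To see what rqscheduler currently has lined up, the underlying Scheduler can be asked for its jobs; a sketch, assuming Flask-RQ2's get_scheduler() accessor and rq-scheduler's get_jobs():

scheduler = rq.get_scheduler()
for job in scheduler.get_jobs():
    print(job.id, job.func_name, job.meta)   # rq-scheduler keeps interval/repeat info in job.meta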

Question 3: What is the relationship between Flask-RQ2 and Flask-RQ?

The official Flask-RQ2 docs:

Flask-RQ2 — Flask-RQ2 16.0.2 documentation

already say that

Flask-RQ2 borrows the ideas of Matt Wright's Flask-RQ:

mattupstate/flask-rq: RQ (Redis Queue) integration for Flask applications

And what is the idea behind Flask-RQ?

Simply this: simple, convenient, easy to use.

But because Flask-RQ was still not convenient enough,

jezdez built Flask-RQ2 as a redo, to make it easier to use Redis Queue for background tasks, scheduled tasks, and so on.

In short:

Flask-RQ2 and Flask-RQ are both extensions built on RQ,

and personally I find Flask-RQ2 more convenient and pleasant to use than Flask-RQ.

[Steps to get Flask-RQ2 fully working]

1. Run rq worker in the background

A concrete command you can use:

nohup rq worker > rq_worker.log 2>&1 &

If you are not familiar with nohup, see:

[Organized] What it means to prefix a command with nohup

If you are not familiar with 2>&1, see:

[Organized] The greater-than sign in Linux redirection and the meaning of 2>&1

2. Run rqscheduler in the background

nohup rqscheduler -i 5 >rqscheduler.log 2>&1 &

Here,

-i 5 means an interval of 5 → rqscheduler wakes up every 5 seconds to check whether any scheduled job is due.

If not set, the default interval is 60 seconds = 1 minute.

→ Decide for your own situation whether you need to pass -i to change the default.

→ The reason I need -i here is that

during testing I have a job that should run every 15 seconds,

so the -i polling interval has to be shorter than my job's interval;

otherwise the job effectively runs only once per the default 60 seconds = 1 minute (see the example below).
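
For example: with the default 60-second polling and interval=20 in the job, the job can only be picked up on each 60-second check, so it effectively runs about once a minute; with rqscheduler -i 5 and interval=15 it runs roughly every 15 seconds, as seen above.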

Also:

regarding schedule's parameters,

Flask-RQ2 does not explain them in detail;

you can consult

rq-scheduler's own docs:

ui/rq-scheduler: A light library that adds job scheduling capabilities to RQ (Redis Queue)

For example:

interval: Time before the function is called again, in seconds

repeat: Repeat this number of times (None means repeat forever)

Note:

IMPORTANT NOTE: If you set up a repeated job, you must make sure that you either do not set a result_ttl value or you set a value larger than the interval. Otherwise, the entry with the job details will expire and the job will not get re-scheduled.

For example:

@rq.job
def add(x, y):
    gLog.debug("x=%s, y=%s", x, y)
    return x + y

add.schedule(timedelta(seconds=1), 3, 4, repeat=None, interval=15)

means: run 1 second from now, and after that keep running every 15 seconds, forever.
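
Related to the IMPORTANT NOTE above about result_ttl: when using rq-scheduler's schedule() directly (scheduler being a Scheduler instance, e.g. from rq.get_scheduler()), the repeated job can be given a result_ttl larger than its interval so the job entry does not expire before it is re-scheduled; a sketch, the 60-second value being an assumption:

from datetime import datetime

scheduler.schedule(
    scheduled_time=datetime.utcnow(),  # first run: now (UTC, per rq-scheduler's note)
    func=add,
    args=[3, 4],
    interval=15,                       # run again every 15 seconds
    repeat=None,                       # None means repeat forever
    result_ttl=60,                     # keep each result longer than the interval
)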

How to see the output:

look in

rq_worker.log

and

rqscheduler.log

for the corresponding log output.

At this point, Flask-RQ2 is running normally and executing the jobs we added.
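
Tying this back to the original goal (sending a notification when an event expires), the same pattern would look roughly like the sketch below; the function name, the notification call and the event fields are assumptions for illustration, not the post's actual code:

@rq.job
def send_expire_notification(event_id):
    gLog.debug("send expire notification for event_id=%s", event_id)
    # look up the event here and push the WeChat/email notification

# when the event is created, schedule the reminder for its end time (UTC!)
# send_expire_notification.schedule(event.end_time_utc, event.id)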
