Discussion:
Django Channels: High CPU usage by workers and 503 responses.
RAUSHAN RAJ
2017-06-15 11:19:27 UTC
Hi,
I am using HAProxy --> 2 Daphne interface servers --> Redis channel layer,
and I am running 28 workers on 10 cores.
I was load testing with artillery:
artillery quick --duration 60 --rate 100 -n 10 url
After only a few seconds I start getting errors like
504, 503, ECONNRESET, ETIMEDOUT.
I ran the same load test against a Tornado server on a 4-CPU instance and it
completed without a single error.
What went wrong? Any ideas?

My supervisor config:
[program:worker]
directory = /home/fingoo/Documents/coding/projects/sentieosocket
command = python manage.py runworker --settings="sentieosocket.settings"
process_name=%(program_name)s_%(process_num)02d
numprocs = 7
autostart=true
autorestart=true
stderr_logfile=/var/log/worker.err1.log
stdout_logfile=/var/log/worker.out1.log

[program:daphne9000]
directory = /home/fingoo/Documents/coding/projects/sentieosocket
command = daphne -b 0.0.0.0 -p 9000 sentieosocket.asgi:channel_layer --ping-timeout 120 --ping-interval 30 --websocket_connect_timeout 8 --access-log /var/log/daphne
autostart=true
autorestart=true
stderr_logfile=/var/log/daphne9000.err1.log
stdout_logfile=/var/log/daphne9000.out1.log


1.) Django (1.11.1), django-redis-sessions (0.5.6), Twisted (17.1.0),
daphne (1.2.0), autobahn (17.5.1), asgi-redis (1.4.0)
2.) This is the Redis channel layer config (a single Redis server running on a 2-CPU instance):

CHANNEL_LAYERS = {
    "default": {
        # This example app uses the Redis channel layer implementation asgi_redis
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [(REDIS_HOST, 6379)],
            "capacity": 500,  # tried with 1000 also
        },
        "ROUTING": "sentieosocket.routing.channel_routing",
    },
}
3.) The errors raised by the workers are (a sketch of catching this exception in a consumer follows the list):

  File "/usr/local/lib/python2.7/dist-packages/channels/management/commands/runworker.py", line 83, in handle
    worker.run()
  File "/usr/local/lib/python2.7/dist-packages/channels/worker.py", line 151, in run
    consumer_finished.send(sender=self.__class__)
  File "/usr/local/lib/python2.7/dist-packages/django/dispatch/dispatcher.py", line 193, in send
    for receiver in self._live_receivers(sender)
  File "/usr/local/lib/python2.7/dist-packages/channels/message.py", line 93, in send_and_flush
    sender.send(message, immediately=True)
  File "/usr/local/lib/python2.7/dist-packages/channels/channel.py", line 44, in send
    self.channel_layer.send(self.name, content)
  File "/usr/local/lib/python2.7/dist-packages/asgi_redis/core.py", line 184, in send
    raise self.ChannelFull
asgiref.base_layer.ChannelFull
4.) Running 28 workers on 10 cores.
5.) With a small number of concurrent users the configuration runs fine.
But, as mentioned above, the same load test against a Tornado server (4 cores)
ran cleanly, while with Django/Daphne I am giving 10 cores to the workers and
still getting 503 and 504 responses from HAProxy. Maybe I have misunderstood
something and misconfigured my Django Channels settings. The workers hit 100%
CPU whenever the load is increased even a little.
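
For reference, a minimal sketch of where that exception could be caught in a
channels 1.x style consumer (ws_message and the ack payload here are purely
illustrative, not my real consumer code):

def ws_message(message):
    # Sending on a channel raises ChannelFull once the layer's configured
    # capacity (500 above) is exhausted, which is what the traceback shows.
    try:
        message.reply_channel.send({"text": "ack"}, immediately=True)
    except message.channel_layer.ChannelFull:
        # The layer is saturated: drop the reply (or back off and retry)
        # instead of letting the worker die with an unhandled exception.
        pass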
Andrew Godwin
2017-06-15 13:34:38 UTC
The error you have looks like https://github.com/django/channels/issues/643,
which was fixed in the master branch but not released yet. Could you
install channels from the master git branch and see if that improves things?
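A quick way to try that (assuming pip and access to GitHub) is something like:
pip install -U git+https://github.com/django/channels.git@master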

Andrew
RAUSHAN RAJ
2017-06-15 18:46:52 UTC
I just installed channels==1.1.4 and it's working like a charm. The issue is
gone. Thanks Andrew.
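For anyone else who hits this, the upgrade was just a matter of something like:
pip install -U channels==1.1.4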