Running Scrapy spiders in a Celery task

I have a Django site where a scrape happens when a user requests it, and my code kicks off a standalone Scrapy spider script in a new process. Naturally, this doesn't scale as the number of users increases.

Something like this:

class StandAloneSpider(Spider):
    # a regular spider

settings.overrides['LOG_ENABLED'] = True
# more settings can be changed...

crawler = CrawlerProcess(settings)

spider = StandAloneSpider()

crawler.crawl(spider)
crawler.start()

I’ve decided to use Celery and use workers to queue up the crawl requests.

However, I’m running into issues with Twisted reactors not being able to restart. The first and second spider runs successfully, but subsequent spiders throw the ReactorNotRestartable error.

Can anyone share any tips for running spiders within the Celery framework?

Solutions:


Solution 1

Okay, here is how I got Scrapy working with my Django project that uses Celery for queuing up what to crawl. The actual workaround came primarily from joehillen’s code, located here.

First, the task file:

from celery import task

@task
def crawl_domain(domain_pk):
    from crawl import domain_crawl
    return domain_crawl(domain_pk)

Then the “crawl” file (imported above):

from multiprocessing import Process
from scrapy.crawler import CrawlerProcess
from scrapy.conf import settings
from spider import DomainSpider
from models import Domain

class DomainCrawlerScript():

    def __init__(self):
        self.crawler = CrawlerProcess(settings)
        self.crawler.install()
        self.crawler.configure()

    def _crawl(self, domain_pk):
        # collect the urls for this domain, then run the spider over them
        domain = Domain.objects.get(pk=domain_pk)
        urls = []
        for page in domain.pages.all():
            urls.append(page.url)
        self.crawler.crawl(DomainSpider(urls))
        self.crawler.start()
        self.crawler.stop()

    def crawl(self, domain_pk):
        # run the crawl in a fresh process so the Twisted reactor
        # starts cleanly every time
        p = Process(target=self._crawl, args=[domain_pk])
        p.start()
        p.join()

crawler = DomainCrawlerScript()

def domain_crawl(domain_pk):
    crawler.crawl(domain_pk)

The trick here is the “from multiprocessing import Process”: this gets around the “ReactorNotRestartable” issue in the Twisted framework. So basically, the Celery task calls the “domain_crawl” function, which reuses the “DomainCrawlerScript” object over and over to interface with your Scrapy spider. (I am aware that my example is a little redundant, but I did this for a reason in my setup with multiple versions of Python [my Django webserver actually uses Python 2.4 and my worker servers use Python 2.7].)
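The essence of the trick can be sketched with nothing but the standard library: any work that may only happen once per process (like starting Twisted's reactor) is pushed into a short-lived child process, so every call begins with a clean slate. The function names below are illustrative stand-ins, not from the original code.

```python
from multiprocessing import Process, Queue

def reactor_bound_work(result_queue):
    # Stand-in for crawler.start(): in real Scrapy this starts the
    # Twisted reactor, which refuses to run twice in one process.
    result_queue.put("crawl finished")

def crawl(result_queue):
    # Run the reactor-bound work in a fresh child process; when the
    # process exits, the "used up" reactor dies with it.
    p = Process(target=reactor_bound_work, args=(result_queue,))
    p.start()
    p.join()

if __name__ == "__main__":
    q = Queue()
    crawl(q)
    crawl(q)  # a second call also works: new process, new "reactor"
```

Each `crawl()` call blocks until its child exits, which is why the Celery worker can keep reusing the same parent-process object indefinitely.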

In my example here, “DomainSpider” is just a modified Scrapy Spider that takes in a list of urls and sets them as the “start_urls”.
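A minimal sketch of what such a spider might look like. The `Spider` base class below is a stand-in so the snippet is self-contained (in the real project it comes from Scrapy), and the class body is hypothetical; only the "list of urls becomes start_urls" behavior is taken from the answer.

```python
class Spider:
    # Stand-in for scrapy's Spider base class, just so this
    # sketch runs without Scrapy installed.
    name = None

class DomainSpider(Spider):
    # Hypothetical version of the DomainSpider from Solution 1:
    # it accepts a list of urls and exposes them as start_urls,
    # which Scrapy reads to schedule the first requests.
    name = "domain"

    def __init__(self, urls):
        self.start_urls = urls
```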

Hope this helps!

Solution 2

I set CELERYD_MAX_TASKS_PER_CHILD to 1 in the settings file and that took care of the issue. The worker daemon starts a new process after each spider run, which gives every crawl a fresh reactor.
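In the Celery settings module, that is a single line. Note that CELERYD_MAX_TASKS_PER_CHILD is the pre-4.0 spelling used in this answer; newer Celery releases use the lowercase name worker_max_tasks_per_child for the same setting.

```python
# Recycle each worker process after one task, so every spider run
# gets a process whose Twisted reactor has never been started.
CELERYD_MAX_TASKS_PER_CHILD = 1

# Equivalent lowercase setting in Celery 4.0+:
# worker_max_tasks_per_child = 1
```

The trade-off is process startup cost on every task, which is usually negligible next to the crawl itself.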


All methods were sourced from external answers and are licensed under cc by-sa 2.5, cc by-sa 3.0 and cc by-sa 4.0.
