After the spider finishes, you will need to stop the Twisted reactor. You can do this by listening for the spider_closed signal:
from twisted.internet import reactor
from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy.xlib.pydispatch import dispatcher
from testspiders.spiders.followall import FollowAllSpider

def stop_reactor():
    reactor.stop()

dispatcher.connect(stop_reactor, signal=signals.spider_closed)
spider = FollowAllSpider(domain='scrapinghub.com')
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
log.msg('Running reactor...')
reactor.run()  # the script will block here until the spider is closed
log.msg('Reactor stopped.')
The command-line log output will look something like this:
stav@maia:/srv/scrapy/testspiders$ ./api
2013-02-10 14:49:38-0600 [scrapy] INFO: Running reactor...
2013-02-10 14:49:47-0600 [followall] INFO: Closing spider (finished)
2013-02-10 14:49:47-0600 [followall] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 23934,
    ...}
2013-02-10 14:49:47-0600 [followall] INFO: Spider closed (finished)
2013-02-10 14:49:47-0600 [scrapy] INFO: Reactor stopped.
stav@maia:/srv/scrapy/testspiders$