Running a Scrapy crawl from a script: the crawl blocks the script until it finishes

Once the spider has finished, you will need to stop the Twisted reactor yourself. You can do this by connecting a handler to the spider_closed signal:

from twisted.internet import reactor
from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy.xlib.pydispatch import dispatcher
from testspiders.spiders.followall import FollowAllSpider

def stop_reactor():
    reactor.stop()

# Stop the reactor as soon as the spider signals that it has closed.
dispatcher.connect(stop_reactor, signal=signals.spider_closed)

# Set up and start the crawl.
spider = FollowAllSpider(domain='scrapinghub.com')
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()

log.start()
log.msg('Running reactor...')
reactor.run()  # the script will block here until the spider is closed
log.msg('Reactor stopped.')

The command-line log output looks something like this:

stav@maia:/srv/scrapy/testspiders$ ./api
2013-02-10 14:49:38-0600 [scrapy] INFO: Running reactor...
2013-02-10 14:49:47-0600 [followall] INFO: Closing spider (finished)
2013-02-10 14:49:47-0600 [followall] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 23934,
    ...}
2013-02-10 14:49:47-0600 [followall] INFO: Spider closed (finished)
2013-02-10 14:49:47-0600 [scrapy] INFO: Reactor stopped.
stav@maia:/srv/scrapy/testspiders$
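Note that the script above targets the legacy pre-1.0 Scrapy API: scrapy.log, scrapy.xlib.pydispatch, and this bare Crawler usage were removed in later releases. On Scrapy 1.0 and later, the same block-until-finished behavior is provided by CrawlerProcess, which starts and stops the reactor for you. Below is a minimal sketch, assuming the same FollowAllSpider from the testspiders project is importable:

# Equivalent script for Scrapy >= 1.0 (a sketch, not the original example).
from scrapy.crawler import CrawlerProcess
from testspiders.spiders.followall import FollowAllSpider

process = CrawlerProcess(settings={'LOG_LEVEL': 'INFO'})
process.crawl(FollowAllSpider, domain='scrapinghub.com')
process.start()  # blocks here until the spider is closed, then stops the reactor
print('Reactor stopped.')

Here CrawlerProcess handles reactor shutdown internally, so the explicit dispatcher.connect/stop_reactor pair from the older API is no longer needed.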

