A Scrapy Pitfall in Python



Running a Scrapy spider, the crawl was blocked and the following error was logged:

2018-01-08 18:37:14 [scrapy.middleware] INFO: Enabled item pipelines: []
2018-01-08 18:37:14 [scrapy.core.engine] INFO: Spider opened
2018-01-08 18:37:14 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-01-08 18:37:14 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-01-08 18:37:23 [scrapy.core.engine] DEBUG: Crawled (403) <GET https://accounts.douban.com/login> (referer: None)
2018-01-08 18:37:23 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <403 https://accounts.douban.com/login>: HTTP status code is not handled or not allowed
2018-01-08 18:37:23 [scrapy.core.engine] INFO: Closing spider (finished)
2018-01-08 18:37:23 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 222,
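
The 403 here is returned before any page is scraped: the site rejects requests that identify themselves with Scrapy's default User-Agent (by default something like "Scrapy/VERSION (+https://scrapy.org)"). The original post only shows the log, not the spider, but a minimal sketch of a spider that would produce output like the above might look as follows (the spider name and callback are illustrative assumptions):

import scrapy

class DoubanLoginSpider(scrapy.Spider):
    # illustrative spider, reconstructed from the URL in the log above
    name = "douban_login"
    start_urls = ["https://accounts.douban.com/login"]

    def parse(self, response):
        # with the default User-Agent this callback is never reached:
        # the 403 response is dropped by HttpErrorMiddleware first
        self.logger.info("Crawled %s", response.url)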

Solution:

Add a user agent in settings.py:

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5' 
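
If you prefer not to change the project-wide setting, the same user agent can be scoped to a single spider through Scrapy's custom_settings class attribute (a minimal sketch, reusing the illustrative spider from above; not from the original post):

import scrapy

class DoubanLoginSpider(scrapy.Spider):
    name = "douban_login"
    start_urls = ["https://accounts.douban.com/login"]

    # per-spider override: applies only to this spider, leaving settings.py untouched
    custom_settings = {
        "USER_AGENT": (
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) "
            "AppleWebKit/536.5 (KHTML, like Gecko) "
            "Chrome/19.0.1084.54 Safari/536.5"
        ),
    }

    def parse(self, response):
        # with a browser-like User-Agent the 403 should no longer occur
        self.logger.info("Got %s (status %s)", response.url, response.status)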

Done.

 

