Scrapy Spider Example: Xiaohuar (校花网)


Overview

I have been learning web scraping for a while, and today I use the Scrapy framework to download the images from xiaohuar.com to local disk. Compared with crawling pages using the requests library, the Scrapy framework offers considerably higher performance.

Scrapy's official definition: Scrapy is an application framework for crawling websites and extracting structured data, which can be used for a wide range of useful applications, like data mining, information processing, or historical archival.

Create a Scrapy Project

After installing the Scrapy framework, create the project directly from the command line:

E:\ScrapyDemo>scrapy startproject xiaohuar
New Scrapy project 'xiaohuar', using template directory 'c:\users\lei\appdata\local\programs\python\python35\lib\site-packages\scrapy\templates\project', created in:
    E:\ScrapyDemo\xiaohuar

You can start your first spider with:
    cd xiaohuar
    scrapy genspider example example.com
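
For reference, the project skeleton that Scrapy 1.4 generates typically looks like the tree below (the exact files can vary slightly between versions):

xiaohuar/
    scrapy.cfg            # deploy configuration file
    xiaohuar/             # the project's Python module
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider and downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # directory where the spiders live
            __init__.py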

Create a Scrapy Spider

Creating the project automatically creates a directory with the same name as the project. Change into that directory and run the following command:

E:\ScrapyDemo\xiaohuar>scrapy genspider -t basic xiaohua xiaohuar.com
Created spider 'xiaohua' using template 'basic' in module:
  xiaohuar.spiders.xiaohua

In this command, "xiaohua" is the file name of the generated spider *.py file, and "xiaohuar.com" is the URL of the site to crawl; both can be changed in the code later.
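
The basic template only produces a minimal skeleton, roughly like the sketch below (the exact boilerplate depends on the Scrapy version); the next section replaces its start_urls and parse() with the real logic:

# -*- coding: utf-8 -*-
import scrapy


class XiaohuaSpider(scrapy.Spider):
    name = 'xiaohua'
    allowed_domains = ['xiaohuar.com']
    start_urls = ['http://xiaohuar.com/']

    def parse(self, response):
        pass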

Write the Spider Code

Edit the file xiaohua.py under E:\ScrapyDemo\xiaohuar\xiaohuar\spiders. The main work is configuring the start URLs and defining how each fetched page is parsed.

# -*- coding: utf-8 -*-
import re

import scrapy
from scrapy.http import Request


class XiaohuaSpider(scrapy.Spider):
    name = 'xiaohua'
    allowed_domains = ['xiaohuar.com']
    # Build the 43 list-page URLs at class-definition time
    start_urls = []
    for i in range(43):
        url = "http://www.xiaohuar.com/list-1-%s.html" % i
        start_urls.append(url)

    def parse(self, response):
        if "www.xiaohuar.com/list-1" in response.url:
            # The downloaded HTML source
            html = response.text
            # Images are referenced in the page as, e.g.:
            # src="/d/file/20160126/905e563421921adf9b6fb4408ec4e72f.jpg"
            # The regex matches all of them and returns a list of
            # relative image paths.
            img_urls = re.findall(r'/d/file/\d+/\w+\.jpg', html)

            # Request each image in turn
            for img_url in img_urls:
                # Complete the image URL if it is a relative path
                if "http://" not in img_url:
                    img_url = "http://www.xiaohuar.com%s" % img_url
                # Yield the request; the response comes back to parse()
                yield Request(img_url)
        else:
            # This branch handles the image responses
            url = response.url
            # File name under which to save the image
            title = re.findall(r'\w+\.jpg', url)[0]
            # Save the image; the raw string keeps the backslashes from
            # being read as escape sequences (see "Saving the Images" below)
            with open(r'E:\xiaohua_img\%s' % title, 'wb') as f:
                f.write(response.body)
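
One practical note: open() raises FileNotFoundError if the target directory does not exist, so E:\xiaohua_img should be created before the first run. A minimal sketch:

import os

# Create the save directory ahead of time (exist_ok needs Python 3.2+)
os.makedirs(r'E:\xiaohua_img', exist_ok=True)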

A regular expression is used here to match the image addresses. Other sites work much the same way, but you need to analyze the actual page source of each site you crawl.
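
As an alternative to the regex, Scrapy's built-in selectors can extract the same links; the following sketch (not part of the original spider) shows the matching step of parse() rewritten with a CSS selector. It grabs every .jpg referenced by an <img> tag, so it is somewhat less targeted than the regex above:

# Inside parse(): pull the src attribute of every <img> tag.
for src in response.css('img::attr(src)').extract():
    if src.endswith('.jpg'):
        # urljoin resolves relative paths such as /d/file/... against
        # the page URL, replacing the manual "http://..." check.
        yield Request(response.urljoin(src))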

Run the Spider

E:\ScrapyDemo\xiaohuar>scrapy crawl xiaohua
2017-10-22 22:30:11 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: xiaohuar)
2017-10-22 22:30:11 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'xiaohuar', 'SPIDER_MODULES': ['xiaohuar.spiders'], 'ROBOTSTXT_OBEY': True, 'NEWSPIDER_MODULE': 'xiaohuar.spiders'}
2017-10-22 22:30:11 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.logstats.LogStats']
2017-10-22 22:30:12 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-10-22 22:30:12 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-10-22 22:30:12 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-10-22 22:30:12 [scrapy.core.engine] INFO: Spider opened
2017-10-22 22:30:12 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-10-22 22:30:12 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-10-22 22:30:12 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/robots.txt> (referer: None)
2017-10-22 22:30:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/list-1-0.html> (referer: None)
2017-10-22 22:30:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170721/cb96f1b106b3db4a6bfcf3d2e880dea0.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170824/dcc166b0eba6a37e05424cfc29023121.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170916/7f78145b1ca162eb814fbc03ad24fbc1.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170919/2f728d0f110a21fea95ce13e0b010d06.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170819/9c3dfeef7e08cc0303ce233e4ddafa7f.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170917/715515e7fe1f1cb9fd388bbbb00467c2.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170628/f3d06ef49965aedbe18286a2f221fd9f.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170513/6121e3e90ff3ba4c9398121bda1dd582.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170516/6e295fe48c33245be858c40d37fb5ee6.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170707/f7ca636f73937e33836e765b7261f036.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170528/b352258c83776b9a2462277dec375d0c.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170527/4a7a7f1e6b69f126292b981c90110d0a.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170715/61110ba027f004fb503ff09cdee44d0c.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170520/dd21a21751e24a8f161792b66011688c.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170529/8140c4ad797ca01f5e99d09c82dd8a42.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170603/e55f77fb3aa3c7f118a46eeef5c0fbbf.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170529/e5902d4d3e40829f9a0d30f7488eab84.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170604/ec3794d0d42b538bf4461a84dac32509.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170603/c34b29f68e8f96d44c63fe29bf4a66b8.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170701/fb18711a6af87f30942d6a19f6da6b3e.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170619/e0456729d4dcbea569a1acbc6a47ab69.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170626/0ab1d89f54c90df477a90aa533ceea36.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] INFO: Closing spider (finished)
2017-10-22 22:30:15 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 8785,
 'downloader/request_count': 24,
 'downloader/request_method_count/GET': 24,
 'downloader/response_bytes': 2278896,
 'downloader/response_count': 24,
 'downloader/response_status_count/200': 24,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 10, 22, 14, 30, 15, 892287),
 'log_count/DEBUG': 25,
 'log_count/INFO': 7,
 'request_depth_max': 1,
 'response_received_count': 24,
 'scheduler/dequeued': 23,
 'scheduler/dequeued/memory': 23,
 'scheduler/enqueued': 23,
 'scheduler/enqueued/memory': 23,
 'start_time': datetime.datetime(2017, 10, 22, 14, 30, 12, 698874)}
2017-10-22 22:30:15 [scrapy.core.engine] INFO: Spider closed (finished)
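
Note that the very first request in the log fetches robots.txt: the generated settings.py enables ROBOTSTXT_OBEY by default, as the "Overridden settings" line above confirms. The relevant setting looks like this:

# settings.py
# Obey robots.txt rules (the generated default in recent Scrapy versions)
ROBOTSTXT_OBEY = True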

Saving the Images

When building the save path, the backslash "\" in a Windows path has to be escaped (or written in a raw string); otherwise Python reads it as the start of an escape sequence, as the interactive session below shows.

>>> import requests
>>> r = requests.get("https://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1508693697147&di=23eb655d8e450f84cf39453bc1029bc0&imgtype=0&src=http%3A%2F%2Fb.hiphotos.baidu.com%2Fimage%2Fpic%2Fitem%2Fc9fcc3cec3fdfc038b027f7bde3f8794a5c226fe.jpg")
>>> open("E:\xiaohua_img\1.jpg", 'wb').write(r.content)
  File "<stdin>", line 1
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \xXX escape
>>> open("E:\\xiaohua_img\1.jpg", 'wb').write(r.content)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OSError: [Errno 22] Invalid argument: 'E:\\xiaohua_img\x01.jpg'
>>> open("E:\\xiaohua_img\\1.jpg", 'wb').write(r.content)
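
The usual fixes are a raw string or os.path.join, either of which sidesteps the escaping problem; a minimal sketch:

import os

title = '1.jpg'
# Option 1: a raw string leaves the backslashes alone
path = r'E:\xiaohua_img\%s' % title
# Option 2: let the standard library assemble the path
path = os.path.join(r'E:\xiaohua_img', title)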

That is the whole of this Scrapy spider example for Xiaohuar (校花网). I hope it gives you a useful reference, and I hope you will keep supporting 编程小技巧.
