A simple spider scraper implemented with Scrapy


Overview: this article presents a simple spider scraper implemented with Scrapy, shared for your reference. The complete example follows:

# Standard Python library imports

# 3rd party imports
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector

# My imports
from poetry_analysis.items import PoetryAnalysisItem

HTML_FILE_NAME = r'.+\.html'


class PoetryParser(object):
    """
    Provides a common parsing method for poems formatted this one specific way.
    """
    date_pattern = r'(\d{2} \w{3,9} \d{4})'

    def parse_poem(self, response):
        hxs = HtmlXPathSelector(response)
        item = PoetryAnalysisItem()
        # All poetry text is in pre tags
        text = hxs.select('//pre/text()').extract()
        item['text'] = ''.join(text)
        item['url'] = response.url
        # head/title contains: "<title> - a poem by <author>"
        title_text = hxs.select('//head/title/text()').extract()[0]
        item['title'], item['author'] = title_text.split(' - ')
        item['author'] = item['author'].replace('a poem by', '')
        for key in ['title', 'author']:
            item[key] = item[key].strip()
        # date_pattern is a class attribute, so it must be accessed via self
        item['date'] = hxs.select("//p[@class='small']/text()").re(self.date_pattern)
        return item


class PoetrySpider(CrawlSpider, PoetryParser):
    name = 'example.com_poetry'
    allowed_domains = ['www.example.com']
    root_path = 'someuser/poetry/'
    start_urls = ['http://www.example.com/someuser/poetry/recent/',
                  'http://www.example.com/someuser/poetry/less_recent/']
    rules = [Rule(SgmlLinkExtractor(allow=[start_urls[0] + HTML_FILE_NAME]),
                  callback='parse_poem'),
             Rule(SgmlLinkExtractor(allow=[start_urls[1] + HTML_FILE_NAME]),
                  callback='parse_poem')]
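The spider imports PoetryAnalysisItem from poetry_analysis.items, but the article never shows that module. Below is a minimal sketch of what it could look like, declaring only the five fields that parse_poem assigns; the field list is inferred from the parsing code above, not taken from the original project.

# poetry_analysis/items.py -- hypothetical sketch; the original item
# definition is not shown in the article.
from scrapy.item import Item, Field


class PoetryAnalysisItem(Item):
    text = Field()    # full poem text, joined from all <pre> blocks
    url = Field()     # page the poem was scraped from
    title = Field()   # poem title, taken from <head><title>
    author = Field()  # author name, with "a poem by" stripped
    date = Field()    # date string(s) matched by date_pattern

With both files in a standard Scrapy project, the spider is run by its name: scrapy crawl example.com_poetry. Note that this example targets an old Scrapy API: scrapy.contrib and SgmlLinkExtractor have since been deprecated and removed, and on current releases the equivalents are scrapy.spiders.CrawlSpider and Rule, scrapy.linkextractors.LinkExtractor, and response.xpath() in place of HtmlXPathSelector.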

We hope what is described here is helpful for your Python programming.
