Scraping Images from 彼岸图网 (netbian.com) with Python


Overview

This post walks through a short Python script that crawls anime wallpapers from http://www.netbian.com/ and saves them to a local folder.


Four modules are used:
import time
import requests
from lxml import etree
import os

If you don't have them, search online for how to install them.
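Of the four, time and os ship with Python's standard library; only requests and lxml are third-party packages. Assuming pip is available, a typical install command is:

pip install requests lxml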

# encoding=utf-8
import time
import requests
from lxml import etree
import os

# Crawler for http://www.netbian.com/
if __name__ == '__main__':
    filePath = './保存图片'  # local folder the images are saved into ("保存图片" = "saved images")
    if not os.path.exists(filePath):
        os.mkdir(filePath)
    page_next = 'http://www.netbian.com/dongman/index.htm'  # first page of the anime section
    header = {  # spoof a browser User-Agent
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36 Edg/89.0.774.77"
    }
    # NOTE: the attribute filters inside the XPath expressions were lost when this post was
    # republished; the class names "list", "page", "prev" and "pic" below are assumptions
    # about the site's markup at the time, so adjust them if they no longer match.
    for _ in range(1, 133):
        page_text = requests.get(url=page_next, headers=header)
        page_text.encoding = 'gbk'  # the site is GBK-encoded
        tree = etree.HTML(page_text.text)
        li_list = tree.xpath('//div[@class="list"]//li')  # one <li> per wallpaper thumbnail
        _next = tree.xpath('//div[@class="page"]/a[@class="prev"]/@href')  # prev/next page links
        if len(_next) == 1:
            page_next = 'http://www.netbian.com/' + _next[0]  # first page: only "next" exists
        else:
            page_next = 'http://www.netbian.com/' + _next[1]  # later pages: [0] is "prev", [1] is "next"
        for li in li_list:
            time.sleep(0.3)  # be polite: pause between requests
            href = li.xpath('./a/@href')
            if href:
                href = href[0]
            else:
                continue
            if href == 'https://pic.netbian.com/':  # advertisement entry, skip it
                continue
            page_url = 'http://www.netbian.com/' + href
            title = li.xpath('./a/img/@alt')[0]
            span_index = title.find(' ')  # the alt text is "<name> <resolution/tags>"; keep only the name
            if span_index == -1:
                span_index = len(title)
            title = filePath + '/' + title[:span_index] + '.jpg'
            img_page = requests.get(url=page_url, headers=header)
            _tree = etree.HTML(img_page.text)
            img_url = _tree.xpath('//div[@class="pic"]//a/img/@src')[0]
            try:
                img_file = requests.get(img_url, headers=header, stream=True)
                if img_file.status_code == 200:
                    with open(title, 'wb') as f:
                        f.write(img_file.content)
                    print(title + ' downloaded successfully')
            except Exception:
                print('Request raised an exception, skipping this image')
    print('Over: all downloads finished')
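To make the list-page step easier to follow, here is a minimal, self-contained sketch of what the two XPath queries extract, run against a hand-written HTML fragment. Both the fragment and the class names are assumptions that mirror the selectors above, not markup captured from the live site.

from lxml import etree

# A made-up fragment mimicking the assumed structure of a netbian list page.
sample = '''
<div class="list">
  <ul>
    <li><a href="/desk/23000.htm"><img src="small23000.jpg" alt="夏日 海边 4k壁纸"></a></li>
    <li><a href="/desk/23001.htm"><img src="small23001.jpg" alt="星空 山脉 风景"></a></li>
  </ul>
</div>
'''

tree = etree.HTML(sample)
for li in tree.xpath('//div[@class="list"]//li'):
    href = li.xpath('./a/@href')[0]    # relative link to the wallpaper's detail page
    alt = li.xpath('./a/img/@alt')[0]  # alt text: "<name> <resolution/tags>"
    name = alt.split(' ', 1)[0]        # keep only the part before the first space, as the script does
    print(href, name)                  # e.g. "/desk/23000.htm 夏日"

Running it prints one href/name pair per <li>; these are exactly the values the full script feeds into the detail-page request and the output file name.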


Summary

That covers the whole script for scraping images from netbian.com with Python; hopefully it helps with whatever scraping problem brought you here.
