This article shares the full code for a Python crawler that scrapes Taobao product listings, for your reference. The details are as follows.
1. Objective
Open the Taobao search page, search for the keyword 耐克 (Nike), and scrape each result's title, link, price, city, Wangwang ID (seller account), and number of buyers; then follow each link into the item detail page and scrape the sales volume, style number, and so on.
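Before the full script, here is a minimal sketch of how the search URL is built. It is not part of the original article: the helper name search_url is made up for illustration, and it assumes the 2017-era Taobao search endpoint used in the script below, where the s parameter is an item offset that advances 44 results per page and q is the URL-encoded keyword (耐克 encodes to %E8%80%90%E5%85%8B):

# -*- coding: utf-8 -*-
# Python 2, matching the environment of the full script below
import urllib

def search_url(keyword, page):
    # Taobao pages its results by item offset: 44 items per page,
    # so page i starts at s = i * 44; q is the URL-encoded keyword
    q = urllib.quote(keyword)
    return "https://s.taobao.com/search?q=" + q + "&ie=utf8&s=" + str(page * 44)

print search_url("耐克", 0)   # first page
print search_url("耐克", 2)   # third page (offset 88)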
2. Results
The crawler prints each field to the console as it is scraped and writes the assembled table to an Excel workbook.
3. Source code
# encoding: utf-8
import sys
reload(sys)
sys.setdefaultencoding('utf-8')

import time
import re
import pandas as pd
from lxml import etree
from selenium import webdriver

time1 = time.time()

# Drive a headless PhantomJS browser
driver = webdriver.PhantomJS(executable_path='D:/Python27/Scripts/phantomjs.exe')

# Lists that collect one field each
Title = []
price = []
city = []
shop_name = []
num = []
link = []
sale = []
number = []

# Search keyword: 耐克 (Nike), URL-encoded as UTF-8
keyword = "%E8%80%90%E5%85%8B"

for i in range(0, 1):  # only the first results page; widen the range for more pages
    try:
        print "...............crawling page " + str(i) + "..........................."
        # Taobao paginates 44 items at a time via the s parameter
        url = "https://s.taobao.com/search?q=" + keyword + "&imgfile=&js=1&stats_click=search_radio_all%3A1&initiative_id=staobaoz_20170710&ie=utf8&bcoffset=4&ntoffset=4&p4ppushleft=1%2C48&s=" + str(i * 44)
        driver.get(url)
        time.sleep(5)
        html = driver.page_source
        selector = etree.HTML(html)

        # NOTE: the class attributes below were stripped from the original post;
        # the values here match the Taobao result page of that era and are a
        # reconstruction -- verify them against the live page before relying on them
        Title1 = selector.xpath('//div[@class="row row-2 title"]/a')
        for each in Title1:
            print each.xpath('string(.)').strip()
            Title.append(each.xpath('string(.)').strip())

        price1 = selector.xpath('//div[@class="price g_price g_price-highlight"]/strong/text()')
        for each in price1:
            print each
            price.append(each)

        city1 = selector.xpath('//div[@class="location"]/text()')
        for each in city1:
            print each
            city.append(each)

        num1 = selector.xpath('//div[@class="deal-cnt"]/text()')
        for each in num1:
            print each
            num.append(each)

        shop_name1 = selector.xpath('//div[@class="shop"]/a/span[2]/text()')
        for each in shop_name1:
            print each
            shop_name.append(each)

        # Second level: follow each item link to its detail page
        link1 = selector.xpath('//div[@class="row row-2 title"]/a/@href')
        for each in link1:
            kk = "https://" + each
            link.append("https://" + each)
            if "https" in each:
                print each
                driver.get(each)
            else:
                print "https://" + each
                driver.get("https://" + each)
            time.sleep(3)
            html2 = driver.page_source
            selector2 = etree.HTML(html2)

            # Sales count: Tmall pages expose it under J_DetailMeta,
            # Taobao pages under J_SellCounter
            sale1 = selector2.xpath('//*[@id="J_DetailMeta"]/div[1]/div[1]/div/ul/li[1]/div/span[2]/text()')
            for each in sale1:
                print each
                sale.append(each)
            sale2 = selector2.xpath('//strong[@id="J_SellCounter"]/text()')
            for each in sale2:
                print each
                sale.append(each)

            # Style number (款号/货号) from the attribute list; the regex matches
            # any "...号: value</li>" attribute line
            if "tmall" in kk:
                number1 = re.findall('<ul id="J_AttrUL">(.*?)</ul>', html2, re.S)
                for each in number1:
                    m = re.findall('>*号: (.*?)</li>', str(each).strip(), re.S)
                    if len(m) > 0:
                        for each1 in m:
                            print each1
                            number.append(each1)
                    else:
                        number.append("Null")
            if "taobao" in kk:
                # class name reconstructed; the original post lost this attribute
                number2 = re.findall('<ul class="attributes-list">(.*?)</ul>', html2, re.S)
                for each in number2:
                    h = re.findall('>*号: (.*?)</li>', str(each).strip(), re.S)
                    if len(h) > 0:
                        for each2 in h:
                            print each2
                            number.append(each2)
                    else:
                        number.append("Null")
            if "click" in kk:  # ad/redirect links carry no attribute list
                number.append("Null")
    except:
        pass

print len(Title), len(city), len(price), len(num), len(shop_name), len(link), len(sale), len(number)

# Assemble the data frame and export it to Excel
data1 = pd.DataFrame({"标题": Title, "价格": price, "旺旺": shop_name, "城市": city, "付款人数": num, "链接": link, "销量": sale, "款号": number})
print data1

writer = pd.ExcelWriter(r'C:\taobao_spider2.xlsx', engine='xlsxwriter', options={'strings_to_urls': False})
data1.to_excel(writer, index=False)
writer.close()

time2 = time.time()
print u'ok, crawler finished!'
print u'total time: ' + str(time2 - time1) + 's'

# Shut down the browser
driver.close()
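After a run, the exported workbook can be sanity-checked by loading it back with pandas. This snippet is not in the original post; it assumes the output path used above and that an Excel reader such as xlrd is installed:

# -*- coding: utf-8 -*-
# Quick sanity check of the exported file (assumes the path used above)
import pandas as pd

df = pd.read_excel(r'C:\taobao_spider2.xlsx')
print df.shape              # (rows, columns) -- one row per scraped item
print df.columns.tolist()   # the eight exported columns
print df.head()

One design note: PhantomJS has since been deprecated and Selenium no longer supports it; headless Chrome or Firefox is the usual substitute today, and the rest of the script works the same way once the driver line is swapped.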
That's all for this article. I hope it serves as a useful reference for your own study.