You can also solve the problem with ScrapyJS (no need for Selenium or a real browser): the library provides Scrapy + JavaScript integration using Splash.
Follow the installation instructions for Splash and ScrapyJS, then start the Splash Docker container:
$ docker run -p 8050:8050 scrapinghub/splash
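Before wiring Splash into Scrapy it is worth confirming that the container actually answers. A minimal check, assuming Splash is reachable on localhost:8050 and using the requests library (both assumptions of mine, not part of the original answer):

import requests

# Ask Splash to render a page through its standard render.html endpoint;
# the response body is the HTML after JavaScript has run.
resp = requests.get(
    'http://localhost:8050/render.html',
    params={'url': 'http://example.com', 'wait': 0.5},
)
resp.raise_for_status()
print(resp.text[:200])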
Put the following settings into settings.py:
SPLASH_URL = 'http://192.168.59.103:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapyjs.SplashMiddleware': 725,
}

DUPEFILTER_CLASS = 'scrapyjs.SplashAwareDupeFilter'
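A side note, not from the original answer: an address like 192.168.59.103 usually points at a boot2docker/Docker Machine VM. If Docker runs natively on the same host as Scrapy (an assumption), the container started above is typically reachable at localhost instead:

SPLASH_URL = 'http://localhost:8050'  # Splash running in a local Docker container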
The example spider below sends every request through Splash's render.html endpoint and reads the size options from the rendered page:

# -*- coding: utf-8 -*-
import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"
    allowed_domains = ["koovs.com"]
    start_urls = (
        'http://www.koovs.com/only-onlall-stripe-ls-shirt-59554.html?from=category-651&skuid=236376',
    )

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, self.parse, meta={
                'splash': {
                    'endpoint': 'render.html',
                    'args': {'wait': 0.5}
                }
            })

    def parse(self, response):
        for option in response.css("div.select-size select.sizeOptions option")[1:]:
            print(option.xpath("text()").extract())
Here is what gets printed on the console:
[u'S / 34 -- Not Available']
[u'L / 40 -- Not Available']
[u'L / 42']
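As a follow-up sketch (not part of the original answer): instead of printing, the same selector can feed structured items. The version below is a drop-in replacement for the parse() method of the spider above; the field names are made up, and it assumes the option texts look like the console output shown here, where available sizes carry no ' -- Not Available' suffix.

    def parse(self, response):
        # Same CSS selector as before; split "S / 34 -- Not Available" into
        # a size and a note. Available sizes have an empty note.
        for option in response.css("div.select-size select.sizeOptions option")[1:]:
            texts = option.xpath("text()").extract()
            if not texts:
                continue
            size, _, note = texts[0].partition(' -- ')
            yield {
                'size': size.strip(),           # e.g. 'L / 42'
                'available': not note.strip(),  # False when marked 'Not Available'
            }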