Why Can't a Python 2.7 Crawler Get All the Cookies?


Just use the Session() object from requests:

```python
s = requests.Session()
r = s.post(url, data)  # log in
s.get(url1)            # later requests on s keep the logged-in state
```

With cookielib and urllib2 (Python 2), loading a saved cookie file works the same way:

```python
ck = cookielib.MozillaCookieJar()
ck.load('#cookpath')  # path to the saved cookie file
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(ck))
```
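The cookielib/urllib2 names above exist only in Python 2; in Python 3 they live in http.cookiejar and urllib.request. A self-contained sketch (using a throwaway local server instead of a real site, with made-up cookie names) showing that the jar captures every Set-Cookie header the server sends:

```python
import threading
import urllib.request
from http.cookiejar import CookieJar
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Send two cookies so we can check that the jar captures both.
        self.send_response(200)
        self.send_header('Set-Cookie', 'sessionid=abc123')
        self.send_header('Set-Cookie', 'csrftoken=xyz789')
        self.send_header('Content-Length', '2')
        self.end_headers()
        self.wfile.write(b'ok')

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(('127.0.0.1', 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

jar = CookieJar()  # http.cookiejar replaces Python 2's cookielib
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
opener.open('http://127.0.0.1:%d/' % server.server_port)
server.shutdown()

print(sorted(c.name for c in jar))  # ['csrftoken', 'sessionid']
```

Note that a jar like this only ever sees cookies delivered in HTTP headers; cookies set by JavaScript after the page loads will not appear, which is one reason a header-only crawler can come up short.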

scrapy.FormRequest

login.py

```python
import scrapy

class LoginSpider(scrapy.Spider):
    name = 'login_spider'
    start_urls = ['http://example.com/login']  # replace with the real login page

    def parse(self, response):
        return [
            scrapy.FormRequest.from_response(
                response,
                # change the 'username'/'password' keys to match the name
                # attributes of the actual page's form fields
                formdata={'username': 'your_username',
                          'password': 'your_password'},
                callback=self.after_login)]

    def after_login(self, response):
        # code that runs after logging in
        pass
```
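What makes FormRequest.from_response convenient is that it parses the login form and carries over hidden fields (CSRF tokens and the like) before applying your formdata. A rough stdlib sketch of that merging step, assuming a simple single-form page (the token value and field names are made up):

```python
from html.parser import HTMLParser

class FormFieldParser(HTMLParser):
    """Collect name/value pairs of <input> elements, roughly as
    FormRequest.from_response does before merging in user formdata."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag == 'input':
            d = dict(attrs)
            if 'name' in d:
                self.fields[d['name']] = d.get('value', '')

html = '''
<form action="/login" method="post">
  <input type="hidden" name="csrf_token" value="token123">
  <input type="text" name="username">
  <input type="password" name="password">
</form>
'''

parser = FormFieldParser()
parser.feed(html)
formdata = dict(parser.fields)
formdata.update({'username': 'your_username', 'password': 'your_password'})
print(formdata['csrf_token'])  # the hidden token survives the merge: token123
```

Posting only username and password while dropping the hidden token is a classic reason a hand-rolled login request fails where from_response succeeds.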

Logging in with selenium to get cookies

get_cookie_by_selenium.py

```python
import pickle
import time

from selenium import webdriver

def get_cookies():
    url = 'http://test.com'
    web_driver = webdriver.Chrome()
    web_driver.get(url)
    username = web_driver.find_element_by_id('login-email')
    username.send_keys('username')
    password = web_driver.find_element_by_id('login-password')
    password.send_keys('password')
    login_button = web_driver.find_element_by_id('login-submit')
    login_button.click()
    time.sleep(3)  # wait for the login request to finish
    cookies = web_driver.get_cookies()
    web_driver.close()
    return cookies

if __name__ == '__main__':
    cookies = get_cookies()
    pickle.dump(cookies, open('cookies.pkl', 'wb'))
```
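get_cookies() returns a list of dicts with keys such as name, value, domain and path. If you later need them as a raw Cookie request header (for urllib, say), joining the name=value pairs is enough. A small sketch using sample data shaped like selenium's output (all values here are made up):

```python
# Sample data in the shape selenium's get_cookies() returns (values invented).
cookies = [
    {'name': 'sessionid', 'value': 'abc123', 'domain': '.test.com', 'path': '/'},
    {'name': 'csrftoken', 'value': 'xyz789', 'domain': '.test.com', 'path': '/'},
]

def to_cookie_header(cookies):
    """Serialize cookie dicts into a single Cookie: header value."""
    return '; '.join('%s=%s' % (c['name'], c['value']) for c in cookies)

header = to_cookie_header(cookies)
print(header)  # sessionid=abc123; csrftoken=xyz789
```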

Reading cookies from the browser (Firefox on Ubuntu as an example)

get_cookie_by_firefox.py

```python
import sqlite3
import pickle

def get_cookie_by_firefox():
    cookie_path = '/home/name/.mozilla/firefox/bqtvfe08.default/cookies.sqlite'
    with sqlite3.connect(cookie_path) as conn:
        sql = 'select name, value from moz_cookies where baseDomain="test.com"'
        cur = conn.cursor()
        cookies = [{'name': name, 'value': value}
                   for name, value in cur.execute(sql).fetchall()]
    return cookies

if __name__ == '__main__':
    cookies = get_cookie_by_firefox()
    pickle.dump(cookies, open('cookies.pkl', 'wb'))
```
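The moz_cookies query above can be exercised without a real Firefox profile by building the same table in an in-memory database. A self-contained sketch (schema reduced to the columns the query touches, rows invented; a parameterized query replaces the inline double-quoted string):

```python
import sqlite3

with sqlite3.connect(':memory:') as conn:
    # Minimal stand-in for Firefox's moz_cookies table.
    conn.execute('create table moz_cookies (baseDomain text, name text, value text)')
    conn.executemany('insert into moz_cookies values (?, ?, ?)', [
        ('test.com', 'sessionid', 'abc123'),
        ('test.com', 'csrftoken', 'xyz789'),
        ('other.com', 'tracker', 'nope'),  # different site, must be filtered out
    ])
    cur = conn.cursor()
    rows = cur.execute('select name, value from moz_cookies '
                       'where baseDomain = ?', ('test.com',)).fetchall()
    cookies = [{'name': name, 'value': value} for name, value in rows]

print(len(cookies))  # 2
```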

Using the saved cookies with scrapy

```python
cookies = pickle.load(open('cookies.pkl', 'rb'))
yield scrapy.Request(url, cookies=cookies, callback=self.parse)
```

Using the saved cookies with requests

```python
cookies = pickle.load(open('cookies.pkl', 'rb'))

s = requests.Session()
for cookie in cookies:
    s.cookies.set(cookie['name'], cookie['value'])
```
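Restoring saved cookies into a session, as above, has a stdlib analogue for urllib-based code: build http.cookiejar.Cookie objects and add them to a CookieJar. The constructor is verbose, so a helper is worth it; a sketch with invented cookie data and a placeholder domain:

```python
from http.cookiejar import Cookie, CookieJar

def make_cookie(name, value, domain='test.com'):
    """Build an http.cookiejar.Cookie from a name/value pair (stdlib analogue
    of s.cookies.set() in requests; the default domain is a placeholder)."""
    return Cookie(
        version=0, name=name, value=value,
        port=None, port_specified=False,
        domain=domain, domain_specified=True, domain_initial_dot=False,
        path='/', path_specified=True,
        secure=False, expires=None, discard=True,
        comment=None, comment_url=None, rest={}, rfc2109=False)

jar = CookieJar()
for c in [{'name': 'sessionid', 'value': 'abc123'},
          {'name': 'csrftoken', 'value': 'xyz789'}]:
    jar.set_cookie(make_cookie(c['name'], c['value']))

print(len(jar))  # 2
```

A jar filled this way can then be passed to urllib.request.HTTPCookieProcessor, mirroring the requests flow.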

Using the saved cookies with selenium

```python
import pickle

from selenium import webdriver

cookies = pickle.load(open('cookies.pkl', 'rb'))

w = webdriver.Chrome()
# Adding the cookies directly raises an error. The workaround below (visit the
# domain first, then clear its cookies) works; there may be better solutions.
# -- start --
w.get('http://www.test.com')
w.delete_all_cookies()
# -- end --
for cookie in cookies:
    w.add_cookie(cookie)
```

That covers why a Python 2.7 crawler may fail to get all the cookies, along with the related questions of whether Python has a simple way to read IE's cookies and how to crawl sites that require login with Python. If you want to read more on these topics, follow us; your support keeps the updates coming!

Source: 内存溢出 (outofmemory.cn), http://outofmemory.cn/web/9357681.html