Web crawler: fetching drug administration (NMPA) data with requests

The script below POSTs to the getXkzsList interface of the NMPA portal (scxk.nmpa.gov.cn) to page through the license list, collects the ID field of every entry, then POSTs each ID to the getXkzsById interface to fetch the corresponding detail record, and finally dumps all detail records into yaopin.json.

import requests
import json

if __name__ == '__main__':
    # Interface that returns one page of the license list as JSON.
    url = 'http://scxk.nmpa.gov.cn:81/xk/itownet/portalAction.do?method=getXkzsList'
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36'
    }
    id_list = []       # IDs collected from the list pages
    all_id_list = []   # detail dictionaries fetched for each ID
    for page in range(1, 5):
        page = str(page)  # note: the form fields are sent as strings
        data = {
            'on': 'true',
            'page': page,
            'pageSize': '15',
            'productName': '',
            'conditionType': '1',
            'applyname': '',
            'applysn': '',
        }

        response = requests.post(url=url, data=data, headers=headers)
        dic_data = response.json()
        # Each entry of the 'list' field carries the ID needed for the detail request.
        for dic in dic_data['list']:
            id_list.append(dic['ID'])

    # Request the detail record for every collected ID.
    for ID in id_list:
        post_url = 'http://scxk.nmpa.gov.cn:81/xk/itownet/portalAction.do?method=getXkzsById'
        data = {
            'id': ID
        }
        response = requests.post(url=post_url, data=data, headers=headers)
        dic = response.json()
        all_id_list.append(dic)

    # Write all detail records to a JSON file; ensure_ascii=False keeps Chinese text readable.
    with open('./yaopin.json', 'w', encoding='utf-8') as fp:
        json.dump(all_id_list, fp=fp, ensure_ascii=False)
    print('over!!!')
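
After the script finishes, every detail dictionary returned by getXkzsById sits in yaopin.json. The short sketch below is one way to load the file back and check what was saved; it only assumes that the script above has already run and produced ./yaopin.json, and it does not assume any particular field names in the detail records.

import json

# A minimal sketch for inspecting the output, assuming the crawler above has
# already written './yaopin.json' to the current directory.
with open('./yaopin.json', 'r', encoding='utf-8') as fp:
    records = json.load(fp)

print(f'number of detail records: {len(records)}')
if records:
    # Print the field names of the first record; the exact keys depend on
    # what the getXkzsById interface returns, so none are assumed here.
    print('fields:', list(records[0].keys()))

For a more robust run it also helps to pass a timeout to requests.post and to call response.raise_for_status() before response.json(), so that a failed request surfaces as a clear HTTP error instead of a JSON decoding error.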
