This article demonstrates a method for downloading files with multiple threads in Python, shared here for reference. The implementation is as follows:
import httplib
import urllib2
from threading import Thread
from Queue import Queue
from time import sleep

proxy = 'your proxy'
opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxy}))
urllib2.install_opener(opener)

# Crawl the list pages and collect software IDs.
IDs = {}
for i in range(1, 110):
    try:
        ListUrl = "http://www.someweb.net/sort/List_8_%d.shtml" % (i)
        print ListUrl
        page = urllib2.urlopen(ListUrl).read()
        speUrl = "http://www.someweb.net/soft/"
        speUrlLen = len(speUrl)
        IDx = page.find(speUrl, 0)
        while IDx != -1:
            dotIDx = page.find(".", IDx + speUrlLen)
            if dotIDx != -1:
                ID = page[IDx + speUrlLen:dotIDx]
                IDs[ID] = 1
            IDx = page.find(speUrl, IDx + speUrlLen)
    except:
        pass

q = Queue()
NUM = 5        # number of worker threads
FailedID = []  # IDs whose download failed

def do_something_using(ID):
    try:
        url = "http://www.someweb.net/download.php?softID=%s&type=dx" % (ID)
        # Send a HEAD request through the proxy to get the redirect target.
        h2 = httplib.HTTPConnection("your proxy", "your port")
        h2.request("HEAD", url)
        resp = h2.getresponse()
        location = resp.getheader("location")
        sContent = urllib2.urlopen(location).read()
        savePath = r"C:\someweb\%s.rar" % (ID)
        f = open(savePath, 'wb')
        f.write(sContent)
        f.close()
        print savePath + " saved"
    except:
        FailedID.append(ID)

def working():
    while True:
        arguments = q.get()
        do_something_using(arguments)
        sleep(1)
        q.task_done()

# Start the daemon worker threads, enqueue all IDs, and wait for completion.
for i in range(NUM):
    t = Thread(target=working)
    t.setDaemon(True)
    t.start()

for ID in IDs:
    q.put(ID)
q.join()
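The code above is written for Python 2 (urllib2, httplib, Queue). The same worker-pool pattern carries over to Python 3 almost unchanged. Below is a minimal sketch, with the actual network download replaced by a placeholder fetch function (an assumption for illustration) so the thread-and-queue structure stands out:

```python
import threading
from queue import Queue

NUM_WORKERS = 5

def fetch(item_id, results):
    # Placeholder for the real download step; here it just records the ID.
    results.append(item_id)

def worker(q, results):
    while True:
        item_id = q.get()
        try:
            fetch(item_id, results)
        finally:
            q.task_done()  # always mark the item done, even on failure

def download_all(ids, num_workers=NUM_WORKERS):
    q = Queue()
    results = []
    # Daemon workers exit automatically when the main thread finishes.
    for _ in range(num_workers):
        t = threading.Thread(target=worker, args=(q, results), daemon=True)
        t.start()
    for item_id in ids:
        q.put(item_id)
    q.join()  # block until every queued item has been processed
    return results

print(sorted(download_all(["1", "2", "3"])))  # prints ['1', '2', '3']
```

Note that wrapping the work in try/finally around q.task_done() is safer than the original version, where an exception before task_done() would make q.join() hang forever.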
Hopefully this article is helpful to readers working on Python programming.