There doesn't seem to be a (simple) way to kill a thread in Python.
Here is a simple example that runs several HTTP requests in parallel:
import threading

def crawl():
    import urllib2
    data = urllib2.urlopen("http://www.google.com/").read()
    print "Read google.com"

threads = []
for n in range(10):
    thread = threading.Thread(target=crawl)
    thread.start()
    threads.append(thread)

# wait until all ten threads are finished
print "Waiting..."
for thread in threads:
    thread.join()
print "Complete."
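Since threads can't be forcibly killed, the standard workaround is a cooperative stop flag: the worker periodically checks a `threading.Event` and exits on its own when asked. A minimal Python 3 sketch (the `worker` function and the 0.01 s sleep are placeholders for real work):

```python
import threading
import time

def worker(stop_event):
    # Loop until the main thread asks us to stop; checking the
    # flag between units of work is what makes the thread stoppable.
    while not stop_event.is_set():
        time.sleep(0.01)  # stands in for one unit of real work

stop_event = threading.Event()
thread = threading.Thread(target=worker, args=(stop_event,))
thread.start()

stop_event.set()        # ask the worker to exit
thread.join(timeout=1)  # returns quickly because the worker cooperates
print("worker alive:", thread.is_alive())  # → worker alive: False
```

This only works if the worker checks the flag often; a thread blocked inside a long `urlopen()` call won't notice the flag until the call returns, which is exactly why the process-based approach below can be attractive.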
For some extra overhead, you can use the more powerful multiprocessing approach, which does let you terminate the thread-like processes.
I've extended the example above to use it. I hope it helps:
import multiprocessing

def crawl(result_queue):
    import urllib2
    data = urllib2.urlopen("http://news.ycombinator.com/").read()
    print "Requested..."
    if "result found (for example)":
        result_queue.put("result!")
    print "Read site."

processes = []
result_queue = multiprocessing.Queue()
for n in range(4):
    # start 4 processes crawling for the result
    process = multiprocessing.Process(target=crawl, args=[result_queue])
    process.start()
    processes.append(process)

print "Waiting for result..."
result = result_queue.get()  # waits until any of the processes have `.put()` a result

for process in processes:
    # then kill them all off
    process.terminate()

print "Got result:", result
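For reference, here is the same kill-via-`terminate()` pattern in Python 3 syntax, with the HTTP fetch replaced by a short sleep so the sketch runs without network access (the sleep duration and the literal "result!" string are placeholders):

```python
import multiprocessing
import time

def crawl(result_queue):
    # Placeholder for the real HTTP fetch: pretend to work, then report.
    time.sleep(0.1)
    result_queue.put("result!")

# Note: on Windows/macOS (spawn start method) this setup code must
# live under an `if __name__ == "__main__":` guard.
result_queue = multiprocessing.Queue()
processes = []
for n in range(4):
    process = multiprocessing.Process(target=crawl, args=(result_queue,))
    process.start()
    processes.append(process)

result = result_queue.get()    # blocks until any worker has put() a result
for process in processes:
    process.terminate()        # processes, unlike threads, can be killed
for process in processes:
    process.join()             # reap the terminated workers

print("Got result:", result)   # → Got result: result!
```

Keep in mind that `terminate()` kills a process abruptly: any cleanup code in the worker is skipped, so it's best used only once the result you wanted has already arrived, as here.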