How do I upload string chunks larger than 2147483647 bytes?

Your question has already been raised on the requests bug tracker; their suggestion is to use a streaming upload. If that doesn't work, you might see whether a chunk-encoded request does.
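
To make the idea concrete: the point of streaming is that requests never has to hold the whole body as one in-memory string, which is the pattern that runs into the 2147483647-byte (INT_MAX) limit. Below is a minimal sketch of that difference; it is not your code, and url and path are placeholders:

import requests

# This builds the entire payload as one giant string before sending,
# which is the pattern that hits the 2147483647-byte limit:
with open(path, 'rb') as f:
    requests.put(url, data=f.read())

# This passes the open file object instead, so requests reads and sends
# the body incrementally rather than as a single in-memory string:
with open(path, 'rb') as f:
    requests.put(url, data=f)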

[Edit]

An example based on your original code:

# Using `with` here will handle closing the file implicitly
with open(attachment_path, 'rb') as file_to_upload:
    r = requests.put(
        "{base}problems/{pid}/{atype}/{path}".format(
            base=self._baseurl,
            # It's better to use consistent naming; search PEP-8 for standard Python conventions.
            pid=problem_id,
            atype=attachment_type,
            path=urllib.quote(os.path.basename(attachment_path)),
        ),
        headers=headers,
        # Note that you're passing the file object, NOT the contents of the file:
        data=file_to_upload,
        # Hard to say whether this is a good idea with a large file upload
        timeout=300,
    )

I can't promise that it will run as-is, since I can't actually test it, but it should be close. The bug-tracker comment I linked to also mentions that sending multiple headers may cause problems, so if the headers you're specifying really are required, this approach may not work.
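
If you want to see which headers requests will set on its own for a streamed body (so you can spot any of yours that duplicate them, such as Content-Length), one way is to build a prepared request and inspect it. This is just a sketch of that idea, not something from the bug-tracker thread, and the URL is a placeholder:

import requests

# Sketch: inspect the headers requests generates for a file-object body,
# so you can check whether your own headers duplicate any of them.
with open(attachment_path, 'rb') as f:
    prepared = requests.Request('PUT', 'https://example.com/upload', data=f).prepare()
    print(prepared.headers)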

Regarding chunk encoding: that should be your second choice. Your code didn't specify 'rb' as the mode for open(...), so changing that should probably make the code above work. If it doesn't, you can try this:

def read_in_chunks():
    # If you're going to chunk anyway, doesn't it seem like smaller ones than this would be a good idea?
    chunk_size = 30720 * 30720
    # I don't know how correct this is; if it doesn't work as expected, you'll need to debug
    with open(attachment_path, 'rb') as file_object:
        while True:
            data = file_object.read(chunk_size)
            if not data:
                break
            yield data

# Same request as above, just using the function to chunk explicitly; see the `data` param
r = requests.put(
    "{base}problems/{pid}/{atype}/{path}".format(
        base=self._baseurl,
        pid=problem_id,
        atype=attachment_type,
        path=urllib.quote(os.path.basename(attachment_path)),
    ),
    headers=headers,
    # Call the chunk function here and the request will be chunked as you specify
    data=read_in_chunks(),
    timeout=300,
)
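
One small caveat on the snippets above: urllib.quote is where that function lives in Python 2. If you run this under Python 3 it has moved, so the path-escaping line would need to change as sketched below (everything else can stay the same):

# Python 3 only: quote() moved into urllib.parse
import os
from urllib.parse import quote

path = quote(os.path.basename(attachment_path))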

