Calling a PyTorch model from C++ on Windows 10


1. Export the PyTorch model (.pth) to a TorchScript .pt file

"""Export a .pth checkpoint to TorchScript format."""

import torch
from model.model import parsingNet



def main():
    # Instantiate the network exactly as in the test/inference code
    # (constructor arguments are omitted here; use the same ones as in training).
    net = parsingNet()
    state_dict = torch.load("./model/ep099.pth", map_location='cpu')['model']
    net.load_state_dict(state_dict, strict=False)
    net.cuda()
    net.eval()

    # An example input you would normally provide to your model's forward() method.
    example = torch.rand(1, 3, 288, 800).cuda()

    # Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
    traced_script_module = torch.jit.trace(net, example)
    output = traced_script_module(torch.ones(1, 3, 288, 800).cuda())
    traced_script_module.save("./model/best.pt")

    # The traced ScriptModule can now be evaluated identically to a regular PyTorch module.
    print(output)


if __name__ == "__main__":
    main()
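Before moving to C++, it is worth sanity-checking the exported file by loading it back with `torch.jit.load` and comparing outputs against the traced module. The sketch below uses a small CPU-only toy network as a stand-in for `parsingNet` (the toy model and the file name `toy.pt` are placeholders, not part of the original project), so it runs on any machine:

```python
import torch
import torch.nn as nn

# Toy stand-in for the real network; everything runs on CPU here.
net = nn.Sequential(nn.Conv2d(3, 4, kernel_size=3, padding=1), nn.ReLU())
net.eval()

# Trace and save, mirroring the export script above.
example = torch.rand(1, 3, 16, 16)
traced = torch.jit.trace(net, example)
traced.save("toy.pt")

# Load the .pt back the same way the C++ side will, and compare outputs.
reloaded = torch.jit.load("toy.pt")
x = torch.ones(1, 3, 16, 16)
assert torch.allclose(traced(x), reloaded(x))
print("round-trip OK")
```

If the round trip fails here, the problem is in the export step, not in the C++ integration.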

2. Configure libtorch in VS2019

Note: the libtorch version must match the PyTorch version used to train and export the model.
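As an alternative to hand-editing the VS2019 project properties, the libtorch distribution ships CMake config files that set the include paths and libraries for you. A minimal `CMakeLists.txt` sketch (the extraction path `C:/libtorch` and the target name are placeholders):

```cmake
cmake_minimum_required(VERSION 3.18)
project(torch_demo)

# Point CMake at the unzipped libtorch folder (placeholder path).
set(CMAKE_PREFIX_PATH "C:/libtorch")
find_package(Torch REQUIRED)

add_executable(torch_demo main.cpp)
target_link_libraries(torch_demo "${TORCH_LIBRARIES}")
set_property(TARGET torch_demo PROPERTY CXX_STANDARD 17)
```

Configuring with `cmake -G "Visual Studio 16 2019" ..` then generates a VS2019 solution with the libtorch settings already applied.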

3. Call the PyTorch model from C++

#include <torch/script.h>
#include <iostream>

int main(void)
{
	// torch::jit::load throws a c10::Error if the file cannot be loaded,
	// so no null check is needed on the returned Module.
	torch::jit::script::Module module = torch::jit::load("best.pt");
	module.to(torch::kCUDA);

	std::cout << "Model is loaded!" << std::endl;

	// Create a vector of inputs.
	std::vector<torch::jit::IValue> inputs;
	inputs.push_back(torch::ones({ 1, 3, 288, 800 }).cuda());

	// Execute the model and turn its output into a tensor.
	at::Tensor result = module.forward(inputs).toTensor();

	std::cout << result << std::endl;

	system("pause");

	return 0;
}
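When the C++ output looks wrong, a quick way to isolate the problem is to run the same all-ones input through the exported file in Python and compare the tensors. A minimal CPU sketch (it creates a hypothetical `toy.pt` so it is self-contained; with the real model you would load `best.pt` and keep the `.cuda()` calls):

```python
import torch
import torch.nn as nn

# Create a small TorchScript file so the sketch is self-contained;
# with the real model you would load "best.pt" instead.
torch.jit.trace(nn.Conv2d(3, 2, 1), torch.rand(1, 3, 8, 8)).save("toy.pt")

# Mirror of the C++ code: load, build the all-ones input, run forward().
module = torch.jit.load("toy.pt")
x = torch.ones(1, 3, 8, 8)   # same idea as torch::ones({1, 3, 288, 800})
result = module(x)
print(result.shape)
```

If the Python and C++ results differ for identical inputs, the usual suspects are a libtorch/PyTorch version mismatch or the module being left on the wrong device.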


Original article: http://outofmemory.cn/zaji/5702893.html
