
Failure of ONNX InferenceSession with an ONNX model exported from PyTorch

I am trying to export a custom PyTorch model to ONNX to perform inference, but without success... The tricky thing here is that I'm trying to use the script-based exporter, as shown in the example here, in order to call a function from my model.

I can export the model without any complaint, but when trying to start an InferenceSession I get the following error:

Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from ner.onnx failed:Type Error: Type parameter (T) bound to different types (tensor(int64) and tensor(float) in node (Concat_1260).

I tried to identify the root cause of the problem, and it seems to be the use of torch.matmul() in the following function (which is quite nasty, because I'm trying to use only PyTorch operators):

@torch.jit.script
def valid_sequence_output(sequence_output, valid_mask):
    # Zero out the hidden states at positions where valid_mask == 0
    X = torch.where(valid_mask.unsqueeze(-1) == 1, sequence_output, torch.zeros_like(sequence_output))
    bs, max_len, _ = X.shape

    # Unique (batch, row) index pairs of the remaining non-zero entries
    tu = torch.unique(torch.nonzero(X)[:, :2], dim=0)
    batch_axis = tu[:, 0]
    rows_axis = tu[:, 1]

    # For each valid entry, its target column is the running count of valid
    # entries seen so far within the same batch
    a = torch.arange(bs).repeat(batch_axis.shape).reshape(batch_axis.shape[0], -1)
    a = torch.transpose(a, 0, 1)

    T = torch.cumsum(torch.where(batch_axis == a, torch.ones_like(a), torch.zeros_like(a)), dim=1) - 1
    cols_axis = T[batch_axis, torch.arange(batch_axis.shape[0])]

    # Scatter matrix that compacts the valid rows to the front of each sequence
    A = torch.zeros((bs, max_len, max_len))
    A[(batch_axis, cols_axis, rows_axis)] = 1.0

    valid_output = torch.matmul(A, X)
    valid_attention_mask = torch.where(valid_output[:, :, 0] != 0, torch.ones_like(valid_mask),
                                       torch.zeros_like(valid_mask))
    return valid_output, valid_attention_mask

It seems like torch.matmul isn't supported (according to the docs), so I tried a bunch of workarounds (e.g. A.matmul(X), torch.baddbmm), but I still get the same issue...
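For reference, the error usually means two tensors of different dtypes feed the same ONNX op: here A is created as float32 by torch.zeros, while tensors coming from torch.arange/torch.nonzero are int64. One thing worth trying (an assumption, since I can't run your exact model) is constructing everything that reaches the matmul with an explicit, shared dtype. A minimal sketch with illustrative shapes:

```python
import torch

# Illustrative shapes; the key point is creating A with the same dtype as X
# so the exported graph never mixes int64 and float tensors in one op.
X = torch.randn(2, 4, 8)                   # float32 hidden states
A = torch.zeros((2, 4, 4), dtype=X.dtype)  # explicit dtype, not the default
idx = torch.arange(4)                      # int64 is fine for indexing only
A[0, idx, idx] = 1.0                       # batch 0: identity permutation
out = torch.matmul(A, X)                   # both operands are float32
```

Here batch 0 is passed through unchanged and batch 1 (all-zero scatter matrix) comes out as zeros, so the compaction pattern is the same as in the function above, just without any dtype mixing.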

Any suggestions on how to fix this behavior would be awesome :D Thanks for your help!

This points to a model conversion issue. Please open an issue against the Torch exporter feature. A type parameter (T) has to be bound to the same type for the model to be valid, and ORT is basically complaining about this.
