
[Question] Problem with the dynamic_gru op #278

Open
silence-will20 opened this issue Feb 13, 2023 · 0 comments

Comments


silence-will20 commented Feb 13, 2023

Code snippet:

        conv_1 = pfl_mpc.layers.conv2d(
            input=x, num_filters=6, filter_size=3, act='relu', padding=1)
        pool_1 = pfl_mpc.layers.pool2d(
            input=conv_1, pool_size=2, pool_stride=2)
        conv_2 = pfl_mpc.layers.conv2d(
            input=pool_1, num_filters=12, filter_size=3, act='relu', padding=1)
        pool_2 = pfl_mpc.layers.pool2d(
            input=conv_2, pool_size=2, pool_stride=2)
        f = pfl_mpc.layers.fc(pool_2, 1, 3, act='relu')
        f = layers.squeeze(f, [4])
        f = layers.transpose(f, [0, 1, 3, 2])
        for i in range(BATCH_SIZE):  # pull the samples out of the batch one at a time
            f_t = f[:, i:i+1, :, :]
            f_t = layers.squeeze(f_t, [1])
            sequenceout_t = pfl_mpc.layers.fc(input=f_t, size=12*3)
            sequenceout_t = layers.transpose(sequenceout_t, [1, 0, 2])
            sequenceout_t = pfl_mpc.layers.dynamic_gru(input=sequenceout_t, size=12)  # feed the data into dynamic_gru; this is the call that raises the error, since it only accepts input with lod_level=1
            if i == 0:
                sequenceout = layers.unsqueeze(sequenceout_t, [1])
            else:
                sequenceout_t = layers.unsqueeze(sequenceout_t, [1])
                sequenceout = layers.concat(input=[sequenceout, sequenceout_t], axis=1)
                sequenceout_t = layers.squeeze(sequenceout_t, [1])
        featureout = pfl_mpc.layers.batch_norm(input=sequenceout, act='relu')
        fc_out = pfl_mpc.layers.fc(input=featureout, size=100)
        fc_out = pfl_mpc.layers.fc(input=fc_out, size=10)
        cost, softmax = pfl_mpc.layers.softmax_with_cross_entropy(
            logits=fc_out, label=y, soft_label=True, return_softmax=True)
        return cost, softmax
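For reference, the shape manipulation the loop above performs (unsqueeze each per-sample GRU output on axis 1, then concatenate along that axis) can be sketched in plain NumPy. The dimensions below are hypothetical stand-ins, and the MPC ops are replaced by random placeholder arrays; this only illustrates the tensor shapes, not the actual computation:

```python
import numpy as np

BATCH_SIZE = 4
SEQ_LEN, HIDDEN = 5, 12  # hypothetical dimensions, not from the original code

outs = []
for i in range(BATCH_SIZE):
    # stand-in for sequenceout_t, the per-sample GRU output
    step_out = np.random.rand(SEQ_LEN, HIDDEN)
    # unsqueeze axis 1, mirroring layers.unsqueeze(sequenceout_t, [1])
    outs.append(step_out[:, np.newaxis, :])
# mirror layers.concat(..., axis=1): result has shape (SEQ_LEN, BATCH_SIZE, HIDDEN)
sequenceout = np.concatenate(outs, axis=1)
```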

1. The error:
Expected lods.size() == 1UL, but received lods.size():0 != 1UL:1.
In the error trace, the statement responsible for checking lods.size() is:

      auto lods = lod_tensor.lod();
      PADDLE_ENFORCE_EQ(lods.size(), 1UL, "Only support one level sequence now.");

From this we can see that lods.size() here is the size of the 0th dimension of the lod array; checking whether it equals 1UL is checking whether the 0th dimension has size 1, i.e. whether the input is of the lod_level=1 type.
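In other words, a LoD is a list of levels, each level being a list of offsets that mark sequence boundaries; a dense tensor with no LoD information has zero levels, which is exactly what triggers the error. A plain-Python sketch of the C++ check above (not Paddle's actual API):

```python
def check_lod(lod):
    # mirrors PADDLE_ENFORCE_EQ(lods.size(), 1UL, "Only support one level sequence now.")
    if len(lod) != 1:
        raise ValueError(
            "Only support one level sequence now. "
            f"Expected lods.size() == 1UL, but received lods.size():{len(lod)}")

# lod_level = 1: one level of offsets, here three sequences of lengths 3, 2, 4
check_lod([[0, 3, 5, 9]])  # passes

# a dense tensor carries no LoD info, so lods.size() == 0 and the check fails
try:
    check_lod([])
except ValueError as e:
    print(e)
```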

2. Data of the MpcVariable type in paddlefl cannot call LoDTensor's member functions, so the lod information cannot be directly modified or inherited. What is the relationship between these two types? Can they be converted to each other, and can the lod array be generated automatically?
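On the "automatically generate the lod array" part of the question: in Paddle, a level-1 LoD is just the cumulative offsets of the per-sequence lengths, so when those lengths are known the offsets can be computed directly. A plain-Python sketch (not PaddleFL's API):

```python
from itertools import accumulate

def lengths_to_lod(seq_lens):
    """Convert per-sequence lengths into a level-1 LoD offset array,
    the format that dynamic_gru expects. A sketch, not Paddle's API."""
    return [[0] + list(accumulate(seq_lens))]

# three sequences of lengths 3, 2 and 4 -> offsets [[0, 3, 5, 9]]
print(lengths_to_lod([3, 2, 4]))
```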
