Conversation

@wilson-seok wilson-seok commented Aug 28, 2025

Description of the issue(symptom, root-cause, how it was resolved)

  • The customer's dynamic-shape model shows an accuracy drop compared to its static-shape counterpart.
  • The model uses auto_pad (SAME_UPPER) in its convolution and pooling ops. During graph transformation the padding values are updated for the static-shape model, but the dynamic-shape model never gets that update. The oneDNN primitive creator does recalculate and update padding, but it did not consider auto_pad (SAME_UPPER/SAME_LOWER).
  • This PR implements the SAME_UPPER/SAME_LOWER cases for convolution and pooling.
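For context, SAME_UPPER/SAME_LOWER padding can only be finalized once the concrete input size is known, which is why the dynamic-shape path must recompute it at primitive creation time. A minimal sketch of that calculation (illustrative helper, not the plugin's actual API; `dilation` here is the conventional 1-based factor, while oneDNN stores it zero-based, hence the `dilation[i] + 1` in the diff):

```cpp
// Illustrative sketch (not the OpenVINO GPU plugin's real API) of how
// SAME_UPPER / SAME_LOWER padding is derived once the input size is known.
struct Padding { long pad_l; long pad_r; };

Padding same_padding(long in_size, long kernel, long stride, long dilation,
                     bool same_upper) {
    long kernel_range = 1 + (kernel - 1) * dilation;   // effective kernel extent
    long out_size = (in_size + stride - 1) / stride;   // SAME: ceil(in / stride)
    long total = (out_size - 1) * stride + kernel_range - in_size;
    if (total < 0) total = 0;
    long small = total / 2;
    long big = total - small;
    // SAME_UPPER places the extra pixel (odd total) at the end,
    // SAME_LOWER places it at the beginning.
    return same_upper ? Padding{small, big} : Padding{big, small};
}
```

For example, input size 6 with kernel 3, stride 2, dilation 1 needs one pixel of padding in total: SAME_UPPER yields (pad_l, pad_r) = (0, 1), SAME_LOWER yields (1, 0).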

The code and line that caused this issue (if it is not changed directly)

Reproduction step and snapshot (if applicable. Do not attach for customer model)

  • $ ./benchmark_app -d GPU.1 -m ~/task/blackmagic/Blackmagic_Dynamic_v2/DynamicModels/PersonMask_ps_mask_faster.xml -i ~/task/blackmagic/Blackmagic_Dynamic_v2/InputNpys/PersonMask_ps_mask_faster_input_0.npy -data_shape [1,14,607,1080] -hint none -nstreams 1 -niter 1

Problematic graph

(graph screenshots omitted)

Checklist

  • [v] Is it a proper fix? (not a workaround)
  • [ ] Did you include a test case for this fix, if necessary?
  • [ ] Did you review existing tests that could be extended to cover this scenario? Which tests did you review?

Tickets:

  • id

@wilson-seok wilson-seok requested review from a team as code owners August 28, 2025 14:16
@github-actions github-actions bot added the category: GPU OpenVINO GPU plugin label Aug 28, 2025
@@ -68,7 +69,16 @@ static std::shared_ptr<dnnl::convolution_forward::primitive_desc> get_convolutio
     auto is = input_md.get_dims()[2 + i];
     auto ks = weights_md.get_dims()[weights_offset];
     auto kernel_range = 1 + (ks - 1) * (dilation[i] + 1);
-    pad_r[i] = (os - 1) * stride[i] - is + kernel_range - pad_l[i];
+    auto padding = (os - 1) * stride[i] - is + kernel_range - pad_l[i];
Should there be `- pad_l[i]` here?
The name seems to imply that this is the total padding, since we then subtract from this value to obtain the left/right paddings.
Or did I misunderstand?
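A quick arithmetic check on the question above may help (helper names and values are illustrative, not from the PR): because the expression already subtracts pad_l[i], it evaluates to the right-hand padding directly, whereas the total padding would omit that subtraction.

```cpp
// Illustrative helpers, not the PR's code. The diff computes
//   padding = (os - 1) * stride - is + kernel_range - pad_l
// which equals total_padding - pad_l, i.e. the right-hand padding.
long total_padding(long os, long stride, long is, long kernel_range) {
    return (os - 1) * stride + kernel_range - is;
}

long padding_expr(long os, long stride, long is, long kernel_range, long pad_l) {
    return (os - 1) * stride - is + kernel_range - pad_l;
}
```

For example, with os = 3, stride = 2, is = 6, kernel_range = 3 and pad_l = 0, both `padding_expr` and `total_padding - pad_l` evaluate to 1.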
