optimum/executorch/attentions/whisper_attention.py
Lines changed: 23 additions & 28 deletions
@@ -13,7 +13,7 @@
 # limitations under the License.

 # Export friendly cross attention implementation for Whisper. Adopted
-# from https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/modeling_whisper.py#L241
+# from https://github.com/huggingface/transformers/blob/454c0a7ccf33f7fc13e3e2eb9b188a5c09ab708b/src/transformers/models/whisper/modeling_whisper.py#L241
 # Rewritten to replace if branches with torch.cond. Note that unlike
 # the original WhisperAttention, this implementation only works for
 # cross attention (where `key_value_states` is not None).
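
The torch.cond rewrite that the comment refers to follows the standard pattern for making a data-dependent branch exportable: both sides of the `if` become functions with identical signatures and matching output shapes, and `torch.cond` selects between them at runtime so `torch.export` can trace both paths. Below is a minimal sketch of that pattern only; the names (`is_updated`, `cached_k`, `k_proj_weight`, `cross_attn_keys`) are hypothetical stand-ins, not the PR's actual attention code.

```python
import torch


def recompute(key_value_states, k_proj_weight, cached_k):
    # Cache is stale: project the encoder states to fresh keys.
    return key_value_states @ k_proj_weight


def reuse(key_value_states, k_proj_weight, cached_k):
    # Cache is valid: return the previously computed keys.
    # Must match recompute()'s output shape and dtype for torch.cond.
    return cached_k


def cross_attn_keys(is_updated, key_value_states, k_proj_weight, cached_k):
    # Eager equivalent:
    #     cached_k if is_updated else key_value_states @ k_proj_weight
    # torch.cond traces both branches, so export does not have to
    # specialize on the runtime value of `is_updated`.
    return torch.cond(
        is_updated, reuse, recompute,
        (key_value_states, k_proj_weight, cached_k),
    )


kv = torch.randn(1, 4, 8)   # (batch, encoder_seq, d_model)
w = torch.randn(8, 8)       # key projection weight
cached = kv @ w             # previously cached key projection
assert torch.equal(cross_attn_keys(torch.tensor(True), kv, w, cached), cached)
```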