Commit 8976ceb

Refactor check_auto_docstring using AST (#41432)
* refactor check_auto_docstring with AST
* use dataclass for ASTIndexes
* simplify and improve readability
* fix missing imports
* fix modular
* fix modular issues
1 parent c01e711 commit 8976ceb
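
The refactored checker itself presumably lives in the fourth changed file, which is not included in this excerpt. As a rough, hypothetical sketch of the approach the commit title describes (parse the source with Python's standard-library ast module and collect what the docstring check needs into a dataclass), one could build an index like the one below. FunctionIndex and index_functions are illustrative names only, not the ASTIndexes dataclass or the code added by this commit.

# Hypothetical sketch only: index function signatures and docstrings via the AST,
# so a docstring checker can compare documented arguments against real parameters
# instead of scraping raw text. Not the implementation added in this commit.
import ast
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class FunctionIndex:
    """Signature and docstring information collected for one function definition."""
    name: str
    lineno: int
    arg_names: List[str] = field(default_factory=list)
    docstring: Optional[str] = None


def index_functions(source: str) -> List[FunctionIndex]:
    """Walk the parsed module and record every function's arguments and docstring."""
    indexes = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # *args/**kwargs are ignored here for brevity.
            args = node.args.posonlyargs + node.args.args + node.args.kwonlyargs
            indexes.append(
                FunctionIndex(
                    name=node.name,
                    lineno=node.lineno,
                    arg_names=[a.arg for a in args],
                    docstring=ast.get_docstring(node),
                )
            )
    return indexes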

File tree

4 files changed: +243 -214 lines


src/transformers/models/glm4v/modeling_glm4v.py

Lines changed: 0 additions & 3 deletions

@@ -1418,14 +1418,11 @@ def forward(
         pixel_values_videos: Optional[torch.FloatTensor] = None,
         image_grid_thw: Optional[torch.LongTensor] = None,
         video_grid_thw: Optional[torch.LongTensor] = None,
-        rope_deltas: Optional[torch.LongTensor] = None,
         cache_position: Optional[torch.LongTensor] = None,
         logits_to_keep: Union[int, torch.Tensor] = 0,
         **kwargs: Unpack[TransformersKwargs],
     ) -> Union[tuple, Glm4vCausalLMOutputWithPast]:
         r"""
-        rope_deltas (`torch.LongTensor` of shape `(batch_size, )`, *optional*):
-            The rope index difference between sequence length and multimodal rope.
         labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
             Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
             config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored

src/transformers/models/glm4v/modular_glm4v.py

Lines changed: 0 additions & 3 deletions

@@ -1341,14 +1341,11 @@ def forward(
         pixel_values_videos: Optional[torch.FloatTensor] = None,
         image_grid_thw: Optional[torch.LongTensor] = None,
         video_grid_thw: Optional[torch.LongTensor] = None,
-        rope_deltas: Optional[torch.LongTensor] = None,
         cache_position: Optional[torch.LongTensor] = None,
         logits_to_keep: Union[int, torch.Tensor] = 0,
         **kwargs: Unpack[TransformersKwargs],
     ) -> Union[tuple, Glm4vCausalLMOutputWithPast]:
         r"""
-        rope_deltas (`torch.LongTensor` of shape `(batch_size, )`, *optional*):
-            The rope index difference between sequence length and multimodal rope.
         labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
             Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
             config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored

src/transformers/models/glm4v_moe/modeling_glm4v_moe.py

Lines changed: 0 additions & 2 deletions

@@ -1638,8 +1638,6 @@ def forward(
         **kwargs: Unpack[TransformersKwargs],
     ) -> Union[tuple, Glm4vMoeCausalLMOutputWithPast]:
         r"""
-        rope_deltas (`torch.LongTensor` of shape `(batch_size, )`, *optional*):
-            The rope index difference between sequence length and multimodal rope.
         labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
             Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
             config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
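
In the glm4v_moe hunk just above, only the docstring entry is deleted: rope_deltas is documented but is not a declared parameter of that forward signature. That kind of signature/docstring mismatch is what an AST-based check can detect mechanically. Continuing the hypothetical sketch from earlier (the argument-line pattern is deliberately simplified and the names are illustrative, not the repository's API):

# Hypothetical continuation of the earlier sketch: report documented arguments
# that no longer exist in the signature, the kind of stale entry removed here.
import re
from typing import List, Optional

# Simplified pattern for docstring argument headers such as
# "rope_deltas (`torch.LongTensor` of shape `(batch_size, )`, *optional*):"
DOC_ARG_RE = re.compile(r"^(\w+) \(", re.MULTILINE)


def stale_docstring_args(arg_names: List[str], docstring: Optional[str]) -> List[str]:
    """Return documented argument names that are absent from the actual signature."""
    documented = DOC_ARG_RE.findall(docstring or "")
    known = set(arg_names)
    return [name for name in documented if name not in known]

Applied to the glm4v_moe forward above, a check along these lines would flag rope_deltas as documented but not declared, which matches the deletions made in this commit.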
