
Commit c80ea9b

Fix Sphinx tool errors to unblock azure-ai-projects 2.0.0b1 release (#43950)
1 parent: 0bc6951

3 files changed: +43 −72 lines

sdk/ai/azure-ai-projects/azure/ai/projects/models/_enums.py

Lines changed: 6 additions & 6 deletions

@@ -656,14 +656,14 @@ class ServiceTier(str, Enum, metaclass=CaseInsensitiveEnumMeta):
 """Specifies the processing type used for serving the request.

 * If set to 'auto', then the request will be processed with the service tier
-configured in the Project settings. Unless otherwise configured, the Project will use
-'default'.
+configured in the Project settings. Unless otherwise configured, the Project will use
+'default'.
 * If set to 'default', then the request will be processed with the standard
-pricing and performance for the selected model.
+pricing and performance for the selected model.
 * If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)'
-or 'priority', then the request will be processed with the corresponding service
-tier. [Contact sales](https://openai.com/contact-sales) to learn more about Priority
-processing.
+or 'priority', then the request will be processed with the corresponding service
+tier. [Contact sales](https://openai.com/contact-sales) to learn more about Priority
+processing.
 * When not set, the default behavior is 'auto'.

 When the ``service_tier`` parameter is set, the response body will include the ``service_tier``
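Sphinx failures like the ones this commit fixes typically come from reStructuredText docstrings whose bullet continuation lines are mis-indented; docutils then reports errors such as "Unexpected indentation" or "Bullet list ends without a blank line". A minimal illustrative sketch of the corrected pattern (the enum body and member values below are assumptions for illustration, not the SDK's actual code):

```python
from enum import Enum


class ServiceTier(str, Enum):
    """Specifies the processing type used for serving the request.

    * If set to 'auto', the request is processed with the service tier
      configured in the Project settings.
    * If set to 'default', the request is processed with the standard
      pricing and performance for the selected model.

    Continuation lines must align with the text after the bullet marker
    ('* '); any other indentation makes docutils/Sphinx emit warnings.
    """

    AUTO = "auto"
    DEFAULT = "default"
```

Because the enum mixes in `str`, members compare equal to their string values, e.g. `ServiceTier.AUTO == "auto"`.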

sdk/ai/azure-ai-projects/azure/ai/projects/models/_models.py

Lines changed: 37 additions & 65 deletions
@@ -10068,7 +10068,6 @@ class Response(_Model):
 :ivar metadata: Set of 16 key-value pairs that can be attached to an object. This can be
 useful for storing additional information about the object in a structured
 format, and querying for objects via API or the dashboard.
-
 Keys are strings with a maximum length of 64 characters. Values are strings
 with a maximum length of 512 characters. Required.
 :vartype metadata: dict[str, str]
@@ -10081,7 +10080,6 @@ class Response(_Model):
 where the model considers the results of the tokens with top_p probability
 mass. So 0.1 means only the tokens comprising the top 10% probability mass
 are considered.
-
 We generally recommend altering this or ``temperature`` but not both. Required.
 :vartype top_p: float
 :ivar user: A unique identifier representing your end-user, which can help OpenAI to monitor
@@ -10119,38 +10117,29 @@ class Response(_Model):
 and `Structured Outputs <https://platform.openai.com/docs/guides/structured-outputs>`_.
 :vartype text: ~azure.ai.projects.models.ResponseText
 :ivar tools: An array of tools the model may call while generating a response. You
-can specify which tool to use by setting the ``tool_choice`` parameter.
-
-The two categories of tools you can provide the model are:
-
+can specify which tool to use by setting the _tool_choice_ parameter.
+The two categories of tools you can provide the model are:
 
-
-* **Built-in tools**: Tools that are provided by OpenAI that extend the
-model's capabilities, like [web
-search](https://platform.openai.com/docs/guides/tools-web-search)
-or [file search](https://platform.openai.com/docs/guides/tools-file-search). Learn more about
-[built-in tools](https://platform.openai.com/docs/guides/tools).
-* **Function calls (custom tools)**: Functions that are defined by you,
-enabling the model to call your own code. Learn more about
-[function calling](https://platform.openai.com/docs/guides/function-calling).
+* Built-in tools: Tools that are provided by OpenAI that extend the
+model's capabilities, like web search or file search.
+* Function calls (custom tools): Functions that are defined by you,
+enabling the model to call your own code.
 :vartype tools: list[~azure.ai.projects.models.Tool]
 :ivar tool_choice: How the model should select which tool (or tools) to use when generating
-a response. See the ``tools`` parameter to see how to specify which tools
+a response. See the tools parameter to see how to specify which tools
 the model can call. Is either a Union[str, "_models.ToolChoiceOptions"] type or a
 ToolChoiceObject type.
 :vartype tool_choice: str or ~azure.ai.projects.models.ToolChoiceOptions or
 ~azure.ai.projects.models.ToolChoiceObject
 :ivar prompt:
 :vartype prompt: ~azure.ai.projects.models.Prompt
 :ivar truncation: The truncation strategy to use for the model response.
-
 * `auto`: If the context of this response and previous ones exceeds
-the model's context window size, the model will truncate the
-response to fit the context window by dropping input items in the
-middle of the conversation.
+the model's context window size, the model will truncate the
+response to fit the context window by dropping input items in the
+middle of the conversation.
 * `disabled` (default): If a model response will exceed the context window
-size for a model, the request will fail with a 400 error. Is either a Literal["auto"] type or a
-Literal["disabled"] type.
+size for a model, the request will fail with a 400 error. Is either a Literal["auto"] type or a Literal["disabled"] type.
 :vartype truncation: str or str
 :ivar id: Unique identifier for this Response. Required.
 :vartype id: str
@@ -10169,18 +10158,13 @@ class Response(_Model):
 :ivar incomplete_details: Details about why the response is incomplete. Required.
 :vartype incomplete_details: ~azure.ai.projects.models.ResponseIncompleteDetails1
 :ivar output: An array of content items generated by the model.
-
-
-
-* The length and order of items in the `output` array is dependent
-on the model's response.
+* The length and order of items in the `output` array is dependent on the model's response.
 * Rather than accessing the first item in the `output` array and
-assuming it's an `assistant` message with the content generated by
-the model, you might consider using the `output_text` property where
-supported in SDKs. Required.
+assuming it's an `assistant` message with the content generated by
+the model, you might consider using the `output_text` property where
+supported in SDKs. Required.
 :vartype output: list[~azure.ai.projects.models.ItemResource]
 :ivar instructions: A system (or developer) message inserted into the model's context.
-
 When using along with ``previous_response_id``, the instructions from a previous
 response will not be carried over to the next response. This makes it simple
 to swap out system (or developer) messages in new responses. Required. Is either a str type or
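The `output` docstring's advice (prefer `output_text` over indexing `output[0]`) can be illustrated with a small stand-in model; the classes and field names below are hypothetical, not the actual azure-ai-projects types:

```python
from dataclasses import dataclass, field


@dataclass
class Message:
    role: str
    text: str


@dataclass
class FakeResponse:
    # The order of items depends on the model: reasoning or tool-call
    # items may precede the assistant message.
    output: list[Message] = field(default_factory=list)

    @property
    def output_text(self) -> str:
        # Concatenate text from assistant messages only, regardless of
        # where they appear in the output array.
        return "".join(m.text for m in self.output if m.role == "assistant")


resp = FakeResponse(output=[Message("reasoning", "..."), Message("assistant", "Hello!")])
assert resp.output[0].role != "assistant"  # naive output[0] access would grab the wrong item
assert resp.output_text == "Hello!"
```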
@@ -10207,7 +10191,6 @@ class Response(_Model):
 """Set of 16 key-value pairs that can be attached to an object. This can be
 useful for storing additional information about the object in a structured
 format, and querying for objects via API or the dashboard.
-
 Keys are strings with a maximum length of 64 characters. Values are strings
 with a maximum length of 512 characters. Required."""
 temperature: float = rest_field(visibility=["read", "create", "update", "delete", "query"])
@@ -10219,7 +10202,6 @@ class Response(_Model):
 where the model considers the results of the tokens with top_p probability
 mass. So 0.1 means only the tokens comprising the top 10% probability mass
 are considered.
-
 We generally recommend altering this or ``temperature`` but not both. Required."""
 user: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
 """A unique identifier representing your end-user, which can help OpenAI to monitor and detect
@@ -10257,39 +10239,33 @@ class Response(_Model):
 and `Structured Outputs <https://platform.openai.com/docs/guides/structured-outputs>`_."""
 tools: Optional[list["_models.Tool"]] = rest_field(visibility=["read", "create", "update", "delete", "query"])
 """An array of tools the model may call while generating a response. You
-can specify which tool to use by setting the ``tool_choice`` parameter.
-
-The two categories of tools you can provide the model are:
-
-
-
-* **Built-in tools**: Tools that are provided by OpenAI that extend the
-model's capabilities, like [web
-search](https://platform.openai.com/docs/guides/tools-web-search)
-or [file search](https://platform.openai.com/docs/guides/tools-file-search). Learn more about
-[built-in tools](https://platform.openai.com/docs/guides/tools).
-* **Function calls (custom tools)**: Functions that are defined by you,
-enabling the model to call your own code. Learn more about
-[function calling](https://platform.openai.com/docs/guides/function-calling)."""
+can specify which tool to use by setting the tool_choice parameter.
+The two categories of tools you can provide the model are:
+
+* Built-in tools: Tools that are provided by OpenAI that extend the
+model's capabilities, like web search or file search. Learn more about
+built-in tools at https://platform.openai.com/docs/guides/tools.
+* Function calls (custom tools): Functions that are defined by you,
+enabling the model to call your own code. Learn more about
+function calling at https://platform.openai.com/docs/guides/function-calling."""
 tool_choice: Optional[Union[str, "_models.ToolChoiceOptions", "_models.ToolChoiceObject"]] = rest_field(
     visibility=["read", "create", "update", "delete", "query"]
 )
 """How the model should select which tool (or tools) to use when generating
-a response. See the ``tools`` parameter to see how to specify which tools
+a response. See the tools parameter to see how to specify which tools
 the model can call. Is either a Union[str, \"_models.ToolChoiceOptions\"] type or a
 ToolChoiceObject type."""
 prompt: Optional["_models.Prompt"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
 truncation: Optional[Literal["auto", "disabled"]] = rest_field(
     visibility=["read", "create", "update", "delete", "query"]
 )
 """The truncation strategy to use for the model response.
-
-* `auto`: If the context of this response and previous ones exceeds
-the model's context window size, the model will truncate the
-response to fit the context window by dropping input items in the
-middle of the conversation.
-* `disabled` (default): If a model response will exceed the context window
-size for a model, the request will fail with a 400 error. Is either a Literal[\"auto\"] type or
+* `auto`: If the context of this response and previous ones exceeds
+the model's context window size, the model will truncate the
+response to fit the context window by dropping input items in the
+middle of the conversation.
+* `disabled` (default): If a model response will exceed the context window
+size for a model, the request will fail with a 400 error. Is either a Literal[\"auto\"] type or
 a Literal[\"disabled\"] type."""
 id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
 """Unique identifier for this Response. Required."""
@@ -10315,20 +10291,16 @@ class Response(_Model):
 """Details about why the response is incomplete. Required."""
 output: list["_models.ItemResource"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
 """An array of content items generated by the model.
-
-
-
-* The length and order of items in the `output` array is dependent
-on the model's response.
-* Rather than accessing the first item in the `output` array and
-assuming it's an `assistant` message with the content generated by
-the model, you might consider using the `output_text` property where
-supported in SDKs. Required."""
+* The length and order of items in the `output` array is dependent
+on the model's response.
+* Rather than accessing the first item in the `output` array and
+assuming it's an `assistant` message with the content generated by
+the model, you might consider using the `output_text` property where
+supported in SDKs. Required."""
 instructions: Union[str, list["_models.ItemParam"]] = rest_field(
     visibility=["read", "create", "update", "delete", "query"]
 )
 """A system (or developer) message inserted into the model's context.
-
 When using along with ``previous_response_id``, the instructions from a previous
 response will not be carried over to the next response. This makes it simple
 to swap out system (or developer) messages in new responses. Required. Is either a str type or
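The `truncation` docstring describes two behaviors: `auto` drops input items from the middle of the conversation until the context fits, while `disabled` makes the request fail with a 400 error. A toy simulation of that contract (the function and numbers are illustrative assumptions, not SDK or service logic):

```python
from typing import Literal


def fit_context(items: list[str], window: int,
                truncation: Literal["auto", "disabled"] = "disabled") -> list[str]:
    """Toy model of the documented contract: 'auto' drops input items
    from the middle of the conversation until it fits; 'disabled' fails
    instead (the real service returns a 400 error)."""
    if len(items) <= window:
        return items
    if truncation == "disabled":
        raise ValueError("context window exceeded (HTTP 400 in the real service)")
    kept = items[:]
    while len(kept) > window:
        kept.pop(len(kept) // 2)  # drop from the middle of the conversation
    return kept


convo = ["sys", "u1", "a1", "u2", "a2", "u3"]
assert fit_context(convo, window=4, truncation="auto") == ["sys", "u1", "a2", "u3"]
```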

sdk/ai/azure-ai-projects/pyproject.toml

Lines changed: 0 additions & 1 deletion

@@ -63,7 +63,6 @@ pytyped = ["py.typed"]

 [tool.azure-sdk-build]
 verifytypes = false
-sphinx = false

 [tool.mypy]
 exclude = [
