
Commit 0c17d42

tonybove-apple and above3 authored
Docs - Sphinx Indent Errors in convert() Docstring (#2053)
Co-authored-by: above3 <anthony_bove@apple.com>
1 parent 6284782 commit 0c17d42

File tree

1 file changed: +49, -41 lines


coremltools/converters/_converters_entry.py

Lines changed: 49 additions & 41 deletions
@@ -78,7 +78,7 @@ def convert(
 ):
     """
     Convert a TensorFlow or PyTorch model to the Core ML model format as either
-    a neural network or an `ML program <https://coremltools.readme.io/docs/ml-programs>`_.
+    a neural network or an `ML program <https://apple.github.io/coremltools/docs-guides/source/convert-to-ml-program.html>`_.
     Some parameters and requirements differ for TensorFlow and PyTorch
     conversions.
 
@@ -155,6 +155,7 @@ def convert(
         )
 
         * TensorFlow 1 and 2 (including tf.keras):
+
           - The ``inputs`` parameter is optional. If not provided, the inputs
             are placeholder nodes in the model (if the model is a frozen graph)
             or function inputs (if the model is a ``tf.function``).
@@ -168,27 +169,24 @@ def convert(
           - If ``dtype`` is not specified, it defaults to the ``dtype`` of the
             inputs in the TF model.
           - For ``minimum_deployment_target >= ct.target.macOS13``, and with ``compute_precision`` in float 16 precision.
-            When ``inputs`` not provided or ``dtype`` not specified. The float 32 inputs defaults to float 16.
+            When ``inputs`` not provided or ``dtype`` not specified, the float 32 inputs default to float 16.
 
         * PyTorch:
 
           - TorchScript Models:
             - The ``inputs`` parameter is required.
-            - Number of elements in ``inputs`` must match the number of inputs
-              of the PyTorch model.
+            - Number of elements in ``inputs`` must match the number of inputs of the PyTorch model.
             - ``inputs`` may be a nested list or tuple.
             - ``TensorType`` and ``ImageType`` must have the ``shape`` specified.
             - If the ``name`` argument is specified with ``TensorType`` or
-              ``ImageType``, the converted Core ML model will have inputs with
-              the same name.
+              ``ImageType``, the converted Core ML model will have inputs with the same name.
             - If ``dtype`` is missing:
               * For ``minimum_deployment_target <= ct.target.macOS12``, it defaults to float 32.
               * For ``minimum_deployment_target >= ct.target.macOS13``, and with ``compute_precision`` in float 16 precision.
                 It defaults to float 16.
-
           - Torch Exported Models:
-            - The ``inputs`` parameter is not supported. The ``inputs`` parameter is
-              inferred from the Torch `ExportedProgram <https://pytorch.org/docs/stable/export.html#torch.export.ExportedProgram>`_.
+            - The ``inputs`` parameter is not supported.
+            - The ``inputs`` parameter is inferred from the Torch `ExportedProgram <https://pytorch.org/docs/stable/export.html#torch.export.ExportedProgram>`_.
 
     outputs : list of ``TensorType`` or ``ImageType`` (optional)
 
@@ -230,7 +228,7 @@ def convert(
             will be converted up to that node.
           - For ``minimum_deployment_target >= ct.target.macOS13``, and with ``compute_precision`` in float 16 precision.
             If ``dtype`` not specified, the outputs inferred of type float 32
-            defaults to float 16.
+            default to float 16.
 
         * PyTorch: TorchScript Models
           - If specified, the length of the list must match the number of
@@ -240,11 +238,11 @@ def convert(
           - For ``minimum_deployment_target >= ct.target.macOS13``,
            and with ``compute_precision`` in float 16 precision.
           - If ``dtype`` not specified, the outputs inferred of type float 32
-            defaults to float 16.
+            default to float 16.
 
         * PyTorch: Torch Exported Models:
           - The ``outputs`` parameter is not supported.
-            The ``outputs`` parameter is inferred from Torch `ExportedProgram <https://pytorch.org/docs/stable/export.html#torch.export.ExportedProgram>`_.
+          - The ``outputs`` parameter is inferred from Torch `ExportedProgram <https://pytorch.org/docs/stable/export.html#torch.export.ExportedProgram>`_.
 
     classifier_config : ClassifierConfig class (optional)
         The configuration if the MLModel is intended to be a classifier.
@@ -254,17 +252,21 @@ def convert(
         The value of this parameter determines the type of the model
         representation produced by the converter. To learn about the differences
         between ML programs and neural networks, see
-        `ML Programs <https://coremltools.readme.io/docs/ml-programs>`_.
+        `ML Programs <https://apple.github.io/coremltools/docs-guides/source/convert-to-ml-program.html>`_.
 
         - The converter produces a neural network (``neuralnetwork``) if:
-          ::
+
+          .. sourcecode:: python
+
             minimum_deployment_target <= coremltools.target.iOS14/
                                          coremltools.target.macOS11/
                                          coremltools.target.watchOS7/
                                          coremltools.target.tvOS14:
 
         - The converter produces an ML program (``mlprogram``) if:
-          ::
+
+          .. sourcecode:: python
+
             minimum_deployment_target >= coremltools.target.iOS15/
                                          coremltools.target.macOS12/
                                          coremltools.target.watchOS8/
@@ -275,7 +277,9 @@ def convert(
           model type with as minimum of a deployment target as possible.
         - If this parameter is specified and ``convert_to`` is also specified,
           they must be compatible. The following are examples of invalid values:
-          ::
+
+          .. sourcecode:: python
+
             # Invalid:
             convert_to="mlprogram", minimum_deployment_target=coremltools.target.iOS14
@@ -287,7 +291,7 @@ def convert(
         The value of this parameter determines the type of the model
         representation produced by the converter. To learn about the
         differences between ML programs and neural networks, see
-        `ML Programs <https://coremltools.readme.io/docs/ml-programs>`_.
+        `ML Programs <https://apple.github.io/coremltools/docs-guides/source/convert-to-ml-program.html>`_.
 
         - ``'mlprogram'`` : Returns an MLModel (``coremltools.models.MLModel``)
           containing a MILSpec.Program proto, which is the Core ML program format.
@@ -303,7 +307,9 @@ def convert(
         (``coremltools.converters.mil.Program``). An MIL program is primarily
         used for debugging and inspection. It can be converted to an MLModel for
         execution by using one of the following:
-        ::
+
+        .. sourcecode:: python
+
            ct.convert(mil_program, convert_to="neuralnetwork")
            ct.convert(mil_program, convert_to="mlprogram")
 
@@ -320,7 +326,9 @@ def convert(
         applied to produce a float 16 program; that is, a program in which all
         the intermediate float tensors are of type float 16 (for ops that
         support that type).
-        ::
+
+        .. sourcecode:: python
+
            coremltools.transform.FP16ComputePrecision(op_selector=
                                                       lambda op:True)
@@ -344,7 +352,9 @@ def skip_real_div_ops(op):
         but you can customize this.
 
         For example:
-        ::
+
+        .. sourcecode:: python
+
            coremltools.transform.FP16ComputePrecision(op_selector=
                                                       lambda op: op.op_type != "linear")
@@ -394,14 +404,13 @@ def skip_real_div_ops(op):
         conversion, the model is loaded with the provided set of compute units and
         returned.
 
-        An enum with the following possible values.
-        - ``coremltools.ComputeUnit.ALL``: Use all compute units available, including the
-          neural engine.
-        - ``coremltools.ComputeUnit.CPU_ONLY``: Limit the model to only use the CPU.
-        - ``coremltools.ComputeUnit.CPU_AND_GPU``: Use both the CPU and GPU, but not the
-          neural engine.
-        - ``coremltools.ComputeUnit.CPU_AND_NE``: Use both the CPU and neural engine, but
-          not the GPU. Available only for macOS >= 13.0.
+        An enum with the following possible values:
+
+        * ``coremltools.ComputeUnit.ALL``: Use all compute units available, including the neural engine.
+        * ``coremltools.ComputeUnit.CPU_ONLY``: Limit the model to only use the CPU.
+        * ``coremltools.ComputeUnit.CPU_AND_GPU``: Use both the CPU and GPU, but not the neural engine.
+        * ``coremltools.ComputeUnit.CPU_AND_NE``: Use both the CPU and neural engine, but
+          not the GPU. Available only for macOS >= 13.0.
 
     package_dir : str
         Post conversion, the model is saved at a temporary location and
@@ -415,11 +424,10 @@ def skip_real_div_ops(op):
     debug : bool
         This flag should generally be ``False`` except for debugging purposes.
         Setting this flag to ``True`` produces the following behavior:
-        - For Torch conversion, it will print the list of supported and
-          unsupported ops found in the model if conversion fails due to an
-          unsupported op.
-        - For Tensorflow conversion, it will cause to display extra logging
-          and visualizations.
+
+        * For Torch conversion, it will print the list of supported and
+          unsupported ops found in the model if conversion fails due to an unsupported op.
+        * For Tensorflow conversion, it will cause to display extra logging and visualizations.
 
     pass_pipeline : PassPipeline
         Manage graph passes. You can control which graph passes to run and the order of the
@@ -431,18 +439,18 @@ def skip_real_div_ops(op):
 
         .. sourcecode:: python
 
-          pipeline = ct.PassPipeline()
-          pipeline.remove_passes({"common::fuse_conv_batchnorm"})
-          mlmodel = ct.convert(model, pass_pipeline=pipeline)
+           pipeline = ct.PassPipeline()
+           pipeline.remove_passes({"common::fuse_conv_batchnorm"})
+           mlmodel = ct.convert(model, pass_pipeline=pipeline)
 
         * To avoid folding too-large ``const`` ops that lead to a large model, set pass option
           as shown in the following example:
 
         .. sourcecode:: python
 
-          pipeline = ct.PassPipeline()
-          pipeline.set_options("common::const_elimination", {"skip_const_by_size": "1e6"})
-          mlmodel = ct.convert(model, pass_pipeline=pipeline)
+           pipeline = ct.PassPipeline()
+           pipeline.set_options("common::const_elimination", {"skip_const_by_size": "1e6"})
+           mlmodel = ct.convert(model, pass_pipeline=pipeline)
 
         We also provide a set of predefined pass pipelines that you can directly call.
 
@@ -520,7 +528,7 @@ def skip_real_div_ops(op):
     >>> results = mlmodel.predict({"input": example_input.numpy()})
    >>> print(results['1651']) # 1651 is the node name given by PyTorch's JIT
 
-    See `Conversion Options <https://coremltools.readme.io/docs/neural-network-conversion>`_ for
+    See `Conversion Options <https://apple.github.io/coremltools/docs-guides/source/conversion-options.html>`_ for
     more advanced options.
     """
     _check_deployment_target(minimum_deployment_target)
@@ -989,7 +997,7 @@ def _determine_target(convert_to, minimum_deployment_target):
         "ct.target.iOS15 (which is same as ct.target.macOS12). "
         "Note: the model will not run on systems older than iOS15/macOS12/watchOS8/tvOS15. "
         "In order to make your model run on older system, please set the 'minimum_deployment_target' to iOS14/iOS13. "
-        "Details please see the link: https://coremltools.readme.io/docs/unified-conversion-api#target-conversion-formats"
+        "Details please see the link: https://apple.github.io/coremltools/docs-guides/source/target-conversion-formats.html"
     )
     if minimum_deployment_target is not None:
         if convert_to == "mlprogram" and minimum_deployment_target < AvailableTarget.iOS15:
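
The backend-selection rule that the updated docstring documents (targets up through iOS14/macOS11/watchOS7/tvOS14 produce a ``neuralnetwork`` model; iOS15/macOS12/watchOS8/tvOS15 and later produce an ``mlprogram``) can be sketched as a small pure-Python helper. Note that ``pick_backend`` and the version table below are hypothetical illustrations of the documented rule, not part of the coremltools API:

```python
# Hypothetical sketch of the rule stated in the convert() docstring:
# deployment targets up through iOS14/macOS11/watchOS7/tvOS14 yield a
# "neuralnetwork" model; iOS15/macOS12/watchOS8/tvOS15 and later yield
# an "mlprogram". Neither this table nor pick_backend() is coremltools API.

# OS-version boundary at which the ML program backend becomes the default.
_MLPROGRAM_MIN = {"iOS": 15, "macOS": 12, "watchOS": 8, "tvOS": 15}


def pick_backend(platform: str, version: int) -> str:
    """Return the model type convert() would produce for a deployment target."""
    if platform not in _MLPROGRAM_MIN:
        raise ValueError(f"unknown platform: {platform}")
    return "mlprogram" if version >= _MLPROGRAM_MIN[platform] else "neuralnetwork"


print(pick_backend("iOS", 14))    # neuralnetwork
print(pick_backend("macOS", 13))  # mlprogram
```

This also mirrors the warning fixed in `_determine_target` above: requesting ``convert_to="mlprogram"`` with ``minimum_deployment_target`` below iOS15 is an invalid combination.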
