@@ -32,13 +32,15 @@ Table of Contents
 4. `TensorFlow SageMaker Estimators <#tensorflow-sagemaker-estimators>`__
 5. `Chainer SageMaker Estimators <#chainer-sagemaker-estimators>`__
 6. `PyTorch SageMaker Estimators <#pytorch-sagemaker-estimators>`__
-7. `AWS SageMaker Estimators <#aws-sagemaker-estimators>`__
-8. `BYO Docker Containers with SageMaker Estimators <#byo-docker-containers-with-sagemaker-estimators>`__
-9. `SageMaker Automatic Model Tuning <#sagemaker-automatic-model-tuning>`__
-10. `SageMaker Batch Transform <#sagemaker-batch-transform>`__
-11. `Secure Training and Inference with VPC <#secure-training-and-inference-with-vpc>`__
-12. `BYO Model <#byo-model>`__
-13. `SageMaker Workflow <#sagemaker-workflow>`__
+7. `SageMaker SparkML Serving <#sagemaker-sparkml-serving>`__
+8. `AWS SageMaker Estimators <#aws-sagemaker-estimators>`__
+9. `BYO Docker Containers with SageMaker Estimators <#byo-docker-containers-with-sagemaker-estimators>`__
+10. `SageMaker Automatic Model Tuning <#sagemaker-automatic-model-tuning>`__
+11. `SageMaker Batch Transform <#sagemaker-batch-transform>`__
+12. `Secure Training and Inference with VPC <#secure-training-and-inference-with-vpc>`__
+13. `BYO Model <#byo-model>`__
+14. `Inference Pipelines <#inference-pipelines>`__
+15. `SageMaker Workflow <#sagemaker-workflow>`__
 
 
 Installing the SageMaker Python SDK
@@ -374,7 +376,7 @@ For more information, see `TensorFlow SageMaker Estimators and Models`_.
 
 
 Chainer SageMaker Estimators
--------------------------------
+----------------------------
 
 By using Chainer SageMaker ``Estimators``, you can train and host Chainer models on Amazon SageMaker.
 
@@ -390,7 +392,7 @@ For more information about Chainer SageMaker ``Estimators``, see `Chainer SageM
 
 
 PyTorch SageMaker Estimators
--------------------------------
+----------------------------
 
 With PyTorch SageMaker ``Estimators``, you can train and host PyTorch models on Amazon SageMaker.
 
@@ -408,6 +410,39 @@ For more information about PyTorch SageMaker ``Estimators``, see `PyTorch SageMa
 .. _PyTorch SageMaker Estimators and Models: src/sagemaker/pytorch/README.rst
 
 
+SageMaker SparkML Serving
+-------------------------
+
+With SageMaker SparkML Serving, you can now perform predictions against a SparkML model in SageMaker.
+In order to host a SparkML model in SageMaker, it should be serialized with the ``MLeap`` library.
+
+For more information on MLeap, see https://github.com/combust/mleap.
+
+Supported major version of Spark: 2.2 (MLeap version - 0.9.6)
+
+Here is an example of how to create an instance of the ``SparkMLModel`` class and use the ``deploy()`` method to create an
+endpoint that can be used to perform predictions against your trained SparkML model.
+
+.. code:: python
+
+    sparkml_model = SparkMLModel(model_data='s3://path/to/model.tar.gz', env={'SAGEMAKER_SPARKML_SCHEMA': schema})
+    model_name = 'sparkml-model'
+    endpoint_name = 'sparkml-endpoint'
+    predictor = sparkml_model.deploy(initial_instance_count=1, instance_type='ml.c4.xlarge', endpoint_name=endpoint_name)
+
+Once the model is deployed, we can invoke the endpoint with a ``CSV`` payload like this:
+
+.. code:: python
+
+    payload = 'field_1,field_2,field_3,field_4,field_5'
+    predictor.predict(payload)
+
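The payload is a plain comma-separated string of feature values. As a quick sketch (the values here are hypothetical, not tied to any particular model), it can be assembled from a Python list:

```python
# Build a CSV payload from a list of feature values (hypothetical values);
# str() handles mixed numeric and string fields.
features = [5.1, 3.5, 1.4, 'setosa']
payload = ','.join(str(v) for v in features)
print(payload)  # 5.1,3.5,1.4,setosa
```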
+
+For more information about the different ``content-type`` and ``Accept`` formats as well as the structure of the
+``schema`` that SageMaker SparkML Serving recognizes, please see `SageMaker SparkML Serving Container`_.
+
+.. _SageMaker SparkML Serving Container: https://github.com/aws/sagemaker-sparkml-serving-container
+
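
The ``schema`` passed above via the ``SAGEMAKER_SPARKML_SCHEMA`` environment variable is a JSON document describing the model's input and output columns. As a sketch only (the column names and exact key layout here are assumptions; the container repository linked above is the authoritative reference for the format), it might be built like this:

```python
import json

# Hypothetical input/output column description for a SparkML model;
# key names and types are illustrative assumptions -- consult the
# SageMaker SparkML Serving container docs for the exact format.
schema = {
    'input': [
        {'name': 'field_1', 'type': 'double'},
        {'name': 'field_2', 'type': 'string'},
    ],
    'output': {'name': 'prediction', 'type': 'double'},
}
schema_json = json.dumps(schema)  # environment variables expect a string
```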
 AWS SageMaker Estimators
 ------------------------
 Amazon SageMaker provides several built-in machine learning algorithms that you can use to solve a variety of problems.
@@ -709,11 +744,45 @@ This returns a predictor the same way an ``Estimator`` does when ``deploy()`` is
 A full example is available in the `Amazon SageMaker examples repository <https://github.com/awslabs/amazon-sagemaker-examples/tree/master/advanced_functionality/mxnet_mnist_byom>`__.
 
 
+Inference Pipelines
+-------------------
+You can create a Pipeline for real-time or batch inference comprising one or multiple model containers. This helps
+you deploy an ML pipeline behind a single endpoint, so that a single API call performs pre-processing, model scoring,
+and post-processing on your data before returning it as the response.
+
+For this, you have to create a ``PipelineModel``, which takes a list of ``Model`` objects. Calling ``deploy()`` on the
+``PipelineModel`` gives you an endpoint that can be invoked to perform a prediction on a data point against
+the ML pipeline.
+
+.. code:: python
+
+    xgb_image = get_image_uri(sess.boto_region_name, 'xgboost', repo_version='latest')
+    xgb_model = Model(model_data='s3://path/to/model.tar.gz', image=xgb_image)
+    sparkml_model = SparkMLModel(model_data='s3://path/to/model.tar.gz', env={'SAGEMAKER_SPARKML_SCHEMA': schema})
+
+    model_name = 'inference-pipeline-model'
+    endpoint_name = 'inference-pipeline-endpoint'
+    sm_model = PipelineModel(name=model_name, role=sagemaker_role, models=[sparkml_model, xgb_model])
+
+This defines a ``PipelineModel`` consisting of a SparkML model and an XGBoost model stacked sequentially. For more
+information about how to train an XGBoost model, please refer to the XGBoost notebook here_.
+
+.. _here: https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html#xgboost-sample-notebooks
+
+.. code:: python
+
+    sm_model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge', endpoint_name=endpoint_name)
+
+This returns a predictor the same way an ``Estimator`` does when ``deploy()`` is called. Whenever you make an inference
+request using this predictor, you should pass the data that the first container expects, and the predictor will return the
+output from the last container.
+
+
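Conceptually, the containers are chained so that each container's output becomes the next container's input, and the caller only sees the first stage's input format and the last stage's output. A toy, pure-Python sketch of that data flow (the functions are hypothetical stand-ins, not SageMaker APIs):

```python
# Toy illustration of sequential chaining in an inference pipeline:
# each stage's output feeds the next stage.
def preprocess(csv_row):          # stands in for the SparkML container
    return [float(v) for v in csv_row.split(',')]

def score(features):              # stands in for the XGBoost container
    return sum(features)          # placeholder "model"

def pipeline(payload, stages):
    for stage in stages:
        payload = stage(payload)  # output of one stage is input to the next
    return payload

result = pipeline('1.0,2.0,3.0', [preprocess, score])
print(result)  # 6.0
```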
 SageMaker Workflow
 ------------------
 
 You can use Apache Airflow to author, schedule and monitor SageMaker workflows.
 
 For more information, see `SageMaker Workflow in Apache Airflow`_.
 
-.. _SageMaker Workflow in Apache Airflow: src/sagemaker/workflow/README.rst
+.. _SageMaker Workflow in Apache Airflow: src/sagemaker/workflow/README.rst