#600: Amazon SageMaker Multi Model Endpoints

Published: July 3, 2023, 10:02 a.m.

Amazon SageMaker Multi-Model Endpoints (MME) is a fully managed capability of SageMaker Inference that allows customers to deploy thousands of models on a single endpoint and save costs by sharing the instances the endpoint runs on across all the models. Until recently, MME was supported only for machine learning (ML) models that run on CPU instances. Now, customers can use MME to deploy thousands of ML models on GPU-based instances as well, and potentially save costs by 90%. MME dynamically loads and unloads models from GPU memory based on incoming traffic to the endpoint. Customers save costs with MME because the GPU instances are shared by thousands of models. Customers can run ML models from multiple ML frameworks, including PyTorch, TensorFlow, XGBoost, and ONNX. To get started, customers can use the NVIDIA Triton™ Inference Server and deploy models on SageMaker's GPU instances in "multi-model" mode. Once the MME is created, customers specify the ML model from which they want to obtain inference when invoking the endpoint. Multi-Model Endpoints for GPU are available in all AWS Regions where Amazon SageMaker is available.

To learn more, check out:
- Our launch blog: https://go.aws/3NwtJyh
- Amazon SageMaker website: https://go.aws/44uCdNr
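As a rough sketch of how invocation works: with an MME, the request names the target model artifact, and SageMaker loads that model on demand. The endpoint name, model path, and payload below are placeholder assumptions, and the boto3 call is shown only in a comment so the sketch runs without AWS credentials.

```python
# Minimal sketch: building an invoke request for a SageMaker multi-model
# endpoint. The real call goes through boto3's "sagemaker-runtime" client;
# here we only assemble the request parameters, which keeps the sketch
# runnable without an AWS account. All names below are hypothetical.
import json


def build_mme_request(endpoint_name, target_model, payload):
    """Build kwargs for sagemaker-runtime invoke_endpoint.

    TargetModel selects which of the many models hosted on the endpoint
    should serve this request; MME loads it into GPU memory on demand.
    """
    return {
        "EndpointName": endpoint_name,
        "ContentType": "application/json",
        # Path of the model artifact relative to the endpoint's S3 prefix
        "TargetModel": target_model,
        "Body": json.dumps(payload),
    }


request = build_mme_request(
    "my-gpu-mme",                 # hypothetical endpoint name
    "resnet50/model.tar.gz",      # hypothetical model artifact
    {"inputs": [[0.1, 0.2, 0.3]]},
)
# With credentials configured, the actual call would be:
#   boto3.client("sagemaker-runtime").invoke_endpoint(**request)
print(request["TargetModel"])
```

The key point is that the same endpoint serves every model; only the `TargetModel` parameter changes between requests.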