Cloud Client

What is Xplainable Cloud?

Xplainable Cloud is a hosted service that lets you persist and load models and preprocessing pipelines, and collaborate on them within teams and organisations. Persisted models can also be deployed as API endpoints in seconds. The cloud service is accessible via a web interface for managing organisations, teams, and users, and provides an excellent interface for visualising model explainers and metrics. You can find more information about Xplainable Cloud at https://www.xplainable.io.

What is the Cloud Client?

The cloud client is built into the xplainable Python package. It connects to Xplainable Cloud and queries the API, enabling you to manage your account, models, and deployments from within Python.

Initialising a session

To initialise a session, you must first generate an API key at Xplainable Cloud <https://beta.xplainable.io>.

Copyright Xplainable Pty Ltd, 2023

xplainable.client.init.initialise(api_key=None, hostname='https://api.xplainable.io')[source]

Initialise the client with an API Key.

API Keys can be generated from https://beta.xplainable.io with a valid account.

Example

>>> import xplainable as xp
>>> import os
>>> xp.initialise(api_key=os.environ['XP_API_KEY'])
Returns:

The user's account information.

Return type:

dict

Querying the API

Once you have successfully connected to Xplainable Cloud, you can use the client to query the API. The client is accessible by running:

import xplainable as xp
import os

# Initialise your session
xp.initialise(api_key=os.environ['XP_API_KEY'])

# Query the API
xp.client.list_models()
class xplainable.client.client.Client(api_key, hostname='https://api.xplainable.io')[source]

Bases: object

A client for interfacing with the Xplainable web API (Xplainable Cloud).

Access models, preprocessors and user data from Xplainable Cloud. API keys can be generated at https://beta.xplainable.io.

Parameters:

api_key (str) – A valid api key.

activate_deployment(deployment_id)[source]

Activates a model deployment.

Parameters:

deployment_id (str) – The deployment id

add_deployment_middleware(deployment_id, func, name, description=None)[source]

Adds a middleware function to a deployment, or replaces an existing one.

Parameters:
  • deployment_id (str) – The deployment id

  • func (function) – The middleware function

  • name (str) – The name of the middleware function

  • description (str, optional) – A description of the middleware function
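As an illustration, a middleware function might enrich each request payload before inference. The function below is a sketch, not part of the documented API; its name is hypothetical, and it assumes middleware receives and returns the request payload as a dict:

```python
import datetime

def add_request_timestamp(payload: dict) -> dict:
    # Hypothetical middleware: stamps the request payload with the time
    # it was received, then passes it on unchanged otherwise.
    payload["request_time"] = datetime.datetime.now(
        datetime.timezone.utc).isoformat()
    return payload

# Registering it against a deployment (requires an initialised client):
# xp.client.add_deployment_middleware(
#     deployment_id,
#     add_request_timestamp,
#     "add-request-timestamp",
#     description="Stamps each payload with the request time",
# )
```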

create_model_id(model, model_name: str, model_description: str) → str[source]

Creates a new model and returns the model id.

Parameters:
  • model_name (str) – The name of the model

  • model_description (str) – The description of the model

  • model (XClassifier | XRegressor) – The model to create.

Returns:

The model id

Return type:

str

create_model_version(model, model_id: str, x: DataFrame, y: Series) → str[source]

Creates a new model version and returns the version id.

Parameters:
  • model (XClassifier | XRegressor) – The model to create a version of

  • model_id (str) – The model id

  • x (DataFrame) – The feature data used to fit the model

  • y (Series) – The target data used to fit the model

Returns:

The model version id

Return type:

str
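Taken together, create_model_id and create_model_version register a model and persist a fitted version of it. A minimal sketch of wiring the two calls up, with a hypothetical helper name and an initialised client assumed:

```python
def persist_model(client, model, name, description, x, y):
    # Hypothetical helper: register the model, then store the fitted
    # version against it. `client` is an initialised xplainable client;
    # `x` and `y` are the data the model was fitted on.
    model_id = client.create_model_id(model, name, description)
    version_id = client.create_model_version(model, model_id, x, y)
    return model_id, version_id

# Usage (requires an initialised session and a fitted model):
# model_id, version_id = persist_model(
#     xp.client, model, "Churn Classifier", "Predicts customer churn", x, y)
```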

create_preprocessor_id(preprocessor_name: str, preprocessor_description: str) → str[source]

Creates a new preprocessor and returns the preprocessor id.

Parameters:
  • preprocessor_name (str) – The name of the preprocessor

  • preprocessor_description (str) – The description of the preprocessor

Returns:

The preprocessor id

Return type:

str

create_preprocessor_version(preprocessor_id: str, pipeline: list, df: DataFrame | None = None) → str[source]

Creates a new preprocessor version and returns the version id.

Parameters:
  • preprocessor_id (str) – The preprocessor id

  • pipeline (list) – The list of preprocessing pipeline stages

  • df (DataFrame, optional) – An optional sample DataFrame

Returns:

The preprocessor version id

Return type:

str
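As with models, create_preprocessor_id and create_preprocessor_version pair up to register a preprocessor and persist its pipeline. The helper below is a hypothetical sketch assuming an initialised client:

```python
def persist_preprocessor(client, name, description, stages, df=None):
    # Hypothetical helper: create the preprocessor record, then store the
    # pipeline stages as its first version. `stages` is the list of
    # pipeline stages the preprocessor was built from.
    preprocessor_id = client.create_preprocessor_id(name, description)
    version_id = client.create_preprocessor_version(preprocessor_id, stages, df)
    return preprocessor_id, version_id

# Usage (requires an initialised session and a built pipeline):
# preprocessor_id, version_id = persist_preprocessor(
#     xp.client, "Churn preprocessing", "Cleans raw churn data", stages)
```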

deactivate_deployment(deployment_id)[source]

Deactivates a model deployment.

Parameters:

deployment_id (str) – The deployment id

delete_deployment_middleware(deployment_id)[source]

Deletes a middleware function from a deployment.

Parameters:

deployment_id (str) – The deployment id

deploy(model_id: str, version_id: str, hostname: str = 'https://inference.xplainable.io', location: str = 'syd', raw_output: bool = True) → dict[source]

Deploys a model partition to xplainable cloud.

The hostname should be the URL of the inference server, for example https://inference.xplainable.io.

Parameters:
  • model_id (str) – The model id

  • version_id (str) – The version id

  • hostname (str) – The URL of the inference server

  • location (str, optional) – The location of the inference server. Defaults to 'syd'.

  • raw_output (bool, optional) – Returns the deployment details as a raw dictionary. Defaults to True.

Returns:

deployment status and details.

Return type:

dict
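A deployment created with deploy() can then be switched on with activate_deployment. The helper below chains the two calls; it is a sketch, and the "deployment_id" key it reads from the response is an assumption about the payload shape:

```python
def deploy_and_activate(client, model_id, version_id):
    # Hypothetical helper: deploy a model version, then activate the
    # resulting deployment so it can serve requests. The "deployment_id"
    # key is an assumed field of the deploy() response.
    deployment = client.deploy(model_id, version_id)
    client.activate_deployment(deployment["deployment_id"])
    return deployment

# Usage (requires an initialised session):
# deployment = deploy_and_activate(xp.client, model_id, version_id)
```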

generate_deploy_key(description: str, deployment_id: str, days_until_expiry: float = 90, clipboard: bool = True, surpress_output: bool = False) → None[source]

Generates a deploy key for a model deployment.

Parameters:
  • description (str) – Description of the deploy key use case.

  • deployment_id (str) – The deployment id.

  • days_until_expiry (float) – The number of days until the key expires. Defaults to 90.

  • clipboard (bool) – Copies the key to the clipboard. Defaults to True.

  • surpress_output (bool) – Suppresses console output. Defaults to False.

Returns:

No key is returned. The key is copied to the clipboard.

Return type:

None

generate_example_deployment_payload(deployment_id)[source]

Generates an example deployment payload for a deployment.

Parameters:

deployment_id (str) – The deployment id.

get_user_data() → dict[source]

Retrieves the user data for the active user.

Returns:

User data

Return type:

dict

list_deployments()[source]

Lists all deployments of the active user’s team.

Returns:

Dictionary of deployments.

Return type:

dict

list_model_versions(model_id: int) → list[source]

Lists all versions of a model.

Parameters:

model_id (int) – The model id

Returns:

List of model versions.

Return type:

list

list_models() → list[source]

Lists all models of the active user’s team.

Returns:

List of saved models.

Return type:

list

list_preprocessor_versions(preprocessor_id: int) → list[source]

Lists all versions of a preprocessor.

Parameters:

preprocessor_id (int) – The preprocessor id

Returns:

List of preprocessor versions.

Return type:

list

list_preprocessors() → list[source]

Lists all preprocessors of the active user’s team.

Returns:

List of preprocessors.

Return type:

list

load_classifier(model_id: int, version_id: int, model=None)[source]

Loads a binary classification model by model_id and version_id

Parameters:
  • model_id (int) – A valid model_id

  • version_id (int) – A valid version_id

  • model (PartitionedClassifier) – An existing model to add partitions to

Returns:

The loaded xplainable classifier

Return type:

xplainable.PartitionedClassifier
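Combining list_model_versions with load_classifier gives a convenient way to fetch the most recent version of a model. The helper below is a sketch; the "version_id" key and the chronological ordering of the returned versions are assumptions about the payload shape:

```python
def load_latest_classifier(client, model_id):
    # Hypothetical helper: pick the most recent version of a model and
    # load it. Assumes list_model_versions() returns versions in
    # chronological order and that each entry carries a "version_id" key.
    versions = client.list_model_versions(model_id)
    latest_version_id = versions[-1]["version_id"]
    return client.load_classifier(model_id, latest_version_id)

# Usage (requires an initialised session):
# clf = load_latest_classifier(xp.client, model_id)
```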

load_preprocessor(preprocessor_id: int, version_id: int, gui_object: bool = False, response_only: bool = False)[source]

Loads a preprocessor by preprocessor_id and version_id.

Parameters:
  • preprocessor_id (int) – The preprocessor id

  • version_id (int) – The version id

  • response_only (bool, optional) – If True, returns only the preprocessor metadata. Defaults to False.

Returns:

The loaded pipeline

Return type:

xplainable.preprocessing.pipeline.Pipeline

load_regressor(model_id: int, version_id: int, model=None)[source]

Loads a regression model by model_id and version_id

Parameters:
  • model_id (int) – A valid model_id

  • version_id (int) – A valid version_id

  • model (PartitionedRegressor) – An existing model to add partitions to

Returns:

The loaded xplainable regressor

Return type:

xplainable.PartitionedRegressor