Version: 0.10.x

Status

GET /status

Proxy for the status endpoint on the inference API; returns all information about the app and its current status.

Request
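A minimal sketch of calling the endpoint with Python's requests library, assuming no request parameters or body are needed and that the server is reachable at http://localhost:3000 (port 3000 is the configuration default shown below); adjust the host and port for your deployment:

    import requests

    # GET /status takes no parameters or request body (assumption based on the
    # empty request section above).
    # The base URL is an assumption; substitute your server's host and port.
    BASE_URL = "http://localhost:3000"

    response = requests.get(f"{BASE_URL}/status", timeout=5)
    response.raise_for_status()

    body = response.json()
    print(body["config"])   # the app configuration
    print(body["status"])   # runtime status; may be null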

Responses

Server is running; the backend configuration is returned in the response body.

Schema

    config (object, required)
    The app configuration

        batch_duration_millis (int64, required). Default value: 100
        echo (boolean, required)
        enable_metrics (boolean, required). Default value: true
        heartbeat_check_interval (int64, required). Default value: 1
        launch_management_server (boolean, required). Default value: true
        launch_sagemaker_server (boolean, required). Default value: true
        launch_vertex_server (boolean, required). Default value: true
        management_port (int32, required). Default value: 3001
        max_batch_size (integer, required). Default value: 8
        port (int32, required). Default value: 3000
        vertex_port (int32, required). Default value: 3002

    status (object, nullable)

        dead_readers (object, required)
        An object mapping arbitrary property names to ReaderInfo values:

            backend (string, required)
            consumer_group (string, required)
            model_name (string, required)
            model_type (string, required)
            pids (int32[], required)

        input_model_paths (object, required)
        An object mapping arbitrary property names to strings.

        last_heartbeat (object, required)
        An object mapping arbitrary property names to LastHeartbeat values:

            heartbeat_timestamp (int64, required)
            heartbeat_wait_interval (int64, required)

        live_readers (object, required)
        An object mapping arbitrary property names to ReaderInfo values (same fields as dead_readers).

        loading_readers (object, required)
        An object mapping arbitrary property names to ReaderInfo values (same fields as dead_readers).
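As an illustrative sketch of consuming a response that matches this schema, the snippet below reads the configuration and walks the reader maps. The field names come from the schema above; the base URL and the printed summary are assumptions:

    import requests

    # Base URL is an assumption; adjust for your deployment.
    body = requests.get("http://localhost:3000/status", timeout=5).json()

    config = body["config"]
    print(f"inference port: {config['port']}, management port: {config['management_port']}")

    status = body["status"]
    if status is None:
        # status is nullable per the schema
        print("no runtime status reported")
    else:
        reader_groups = {
            "live": status["live_readers"],
            "loading": status["loading_readers"],
            "dead": status["dead_readers"],
        }
        for group, readers in reader_groups.items():
            for name, info in readers.items():
                print(f"[{group}] {name}: model={info['model_name']} "
                      f"backend={info['backend']} pids={info['pids']}")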