Status

GET /status
Proxy for the status endpoint on the inference API. Returns all information about the app and its current status.
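A call to this endpoint might look like the following sketch. The base URL is an assumption (the documented default `port` is 3000; adjust for your deployment), and the use of Python's urllib is illustrative, not part of the API.

```python
import json
import urllib.request

def fetch_status(base_url: str = "http://localhost:3000") -> dict:
    """GET /status and decode the JSON response body.

    base_url is an assumption: point it at wherever the app is
    listening (the documented default "port" is 3000).
    """
    with urllib.request.urlopen(f"{base_url}/status") as resp:
        return json.loads(resp.read().decode("utf-8"))
```

A 200 response decodes to an object with `config` and `status` keys as described below; a 503 means the server is not available.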
Request

Responses
- 200
- 503
200: Server is running; the backend configuration is returned in the body.

Content type: application/json

Schema:
config (object, required)
The app configuration. Default values:

  batch_duration_millis: 100
  body_size_limit_bytes: 2097152
  enable_metrics: true
  heartbeat_check_interval: 1
  launch_management_server: true
  launch_openai_server: true
  launch_sagemaker_server: true
  launch_vertex_server: true
  management_port: 3001
  max_batch_size: 8
  openai_port: 3003
  port: 3000
  repository_path: file:///path/to/home/artefacts
  vertex_port: 3002
status (object, nullable)

  dead_readers (object, required): map from property name to ReaderInfo
  last_heartbeat (object, required): map from property name to LastHeartbeat
  live_readers (object, required): map from property name to ReaderInfo
  loading_readers (object, required): map from property name to ReaderInfo
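The schema above can be mirrored as typed dictionaries for client-side code. This is an illustrative sketch: the bodies of ReaderInfo and LastHeartbeat are not detailed in this section, so they are modeled as open dicts.

```python
from typing import Any, Dict, Optional, TypedDict

class Status(TypedDict):
    """The nullable "status" object; keys of each map are property names."""
    dead_readers: Dict[str, Any]     # property name -> ReaderInfo
    last_heartbeat: Dict[str, Any]   # property name -> LastHeartbeat
    live_readers: Dict[str, Any]     # property name -> ReaderInfo
    loading_readers: Dict[str, Any]  # property name -> ReaderInfo

class StatusResponse(TypedDict):
    """Top-level shape of a 200 response body."""
    config: Dict[str, Any]   # the app configuration
    status: Optional[Status] # nullable
```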
Example (from schema):

{
  "config": {
    "allow_remote_images": false,
    "batch_duration_millis": 100,
    "body_size_limit_bytes": 2097152,
    "echo": false,
    "enable_metrics": true,
    "heartbeat_check_interval": 1,
    "launch_management_server": true,
    "launch_openai_server": true,
    "launch_sagemaker_server": true,
    "launch_vertex_server": true,
    "management_port": 3001,
    "max_batch_size": 8,
    "openai_port": 3003,
    "port": 3000,
    "repository_path": "file:///path/to/home/artefacts",
    "vertex_port": 3002
  },
  "status": {
    "dead_readers": {},
    "last_heartbeat": {},
    "live_readers": {},
    "loading_readers": {}
  }
}
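A client can summarize the reader maps in a response like the one above; note that `status` is nullable and must be guarded. `summarize_readers` is a hypothetical helper, not part of the API; the payload is the documented example.

```python
# The documented example response (reader maps empty, config abridged).
example = {
    "config": {"port": 3000, "max_batch_size": 8},
    "status": {
        "dead_readers": {},
        "last_heartbeat": {},
        "live_readers": {},
        "loading_readers": {},
    },
}

def summarize_readers(payload: dict) -> dict:
    """Count readers in each lifecycle state; "status" may be null."""
    status = payload.get("status") or {}
    return {
        state: len(status.get(state, {}))
        for state in ("live_readers", "loading_readers", "dead_readers")
    }

print(summarize_readers(example))
# {'live_readers': 0, 'loading_readers': 0, 'dead_readers': 0}
```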
503: Server is not available.