Liveness
Analyzes a single photo to determine whether it likely came from a live person (not a screen, printed photo, or mask).
No template enrollment or comparison is performed — this is a standalone liveness check.
Endpoint
POST https://cloud.ooto-ai.com/api/v1.0/liveness
Request Format
Method: POST
Content-Type: multipart/form-data
Query Parameters: check_deepfake
Query Parameters
Name            Type     Required  Description
check_deepfake  Boolean  No        Enable deepfake detection
Authentication Headers
To access the API, you need to include the following headers in your request:
APP-ID: Your application's unique identifier.
APP-KEY: Your application's authentication key.
Form Data
Field  Type  Required  Description
photo  File  Yes       JPEG or PNG image with one clear human face
Example Request (cURL)
curl -X POST --location 'https://cloud.ooto-ai.com/api/v1.0/liveness?check_deepfake=true' \
--header 'APP-ID: <put_app_id_here>' \
--header 'APP-KEY: <put_app_key_here>' \
--form 'photo=@"/path/to/photo"'
Replace <put_app_id_here> and <put_app_key_here> with your actual credentials, and /path/to/photo with the path to your selfie image.
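For reference, the same request can be sent from Python with the requests library. This is a minimal sketch that reuses the placeholders from the cURL example above; adjust the credentials, file path, and timeout to your environment.

import requests

APP_ID = "<put_app_id_here>"
APP_KEY = "<put_app_key_here>"
URL = "https://cloud.ooto-ai.com/api/v1.0/liveness"

with open("/path/to/photo", "rb") as photo_file:
    response = requests.post(
        URL,
        params={"check_deepfake": "true"},            # optional query parameter
        headers={"APP-ID": APP_ID, "APP-KEY": APP_KEY},
        files={"photo": photo_file},                  # multipart/form-data field
        timeout=30,
    )

response.raise_for_status()
print(response.json())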
Successful Response (HTTP 200)
{
"transactionId": "618fe889-13ea-41c4-9c4f-1baa6b7dfb94",
"result": {
"liveness": {
"score": 0.9657026651967726,
"fine": true
},
"deepfake": {
"score": 0.9491408169269562,
"fine": true
},
"quality": {
"pitch": 5.798101723194122,
"yaw": -2.267319895327091,
"roll": -0.3308084886521101,
"uniformity": {
"value": 0.7976967765996249,
"fine": true
},
"exposure": {
"value": 0.6352452907096943,
"fine": true
},
"contrast": {
"value": 0.780912373462111,
"fine": true
},
"flare": {
"score": 0.026086460798978806,
"fine": true
},
"blur": {
"score": 0.000007942797310533933,
"fine": true
},
"macroblocks": {
"score": 8.378465921055067e-9,
"fine": true
},
"distortion": {
"score": 0.907139241695404,
"fine": false
},
"occlusion": {
"score": 0.0005318471812643111,
"fine": true
},
"emotion": {
"score": 0.020526384934782982,
"fine": true
},
"leftEyeClosed": {
"score": 0.004296362400054932,
"fine": true
},
"rightEyeClosed": {
"score": 0.000009238719940185547,
"fine": true
},
"crfiqa": {
"score": 0.5999192595481873,
"fine": true
}
},
"demography": {
"age": 59,
"gender": "male",
"race": "latino hispanic"
},
"box": {
"x": 655,
"y": 1083,
"w": 937,
"h": 1210
},
"landmarks": [
[
912,
1580
],
[
1344,
1583
],
[
1125,
1835
],
[
946,
1996
],
[
1297,
1999
]
]
}
}
Field Explanation
Field      Description
score      Liveness score from 0.0 to 1.0; higher means more likely live
fine       true if the score passes the internal threshold (usually ≥ 0.75)
quality    Image quality metrics for the detected face
box        Bounding box of the detected face: x, y (top-left corner), w, h (width and height) in pixels
landmarks  Facial keypoints as [x, y] pixel pairs (five points in the example response)
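As an illustration of how these fields might be consumed, the sketch below gates acceptance on the flags of a parsed response. The policy shown (liveness must pass, deepfake must pass when present, and every quality metric must be fine) is an assumption for the example, not a documented requirement.

def accept_selfie(result: dict) -> bool:
    """Example acceptance policy for a parsed liveness result (assumption, not API-defined)."""
    if not result["liveness"]["fine"]:
        return False
    # The deepfake block is presumably present only when check_deepfake=true was requested.
    deepfake = result.get("deepfake")
    if deepfake is not None and not deepfake["fine"]:
        return False
    for metric in result.get("quality", {}).values():
        # Only sub-objects carry a "fine" flag; pitch/yaw/roll are plain numbers.
        if isinstance(metric, dict) and not metric.get("fine", True):
            return False
    return True

# Usage with the response shown above:
# payload = response.json()
# ok = accept_selfie(payload["result"])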
Error Response (HTTP 400)
{
"transactionId": "efb66e50-4c87-493d-b026-543dacdbe314",
"result": {
"status": "error",
"code": 5,
"info": "can not detect face"
}
}
Engine Errors
Code  Info
1     photo should not be empty
2     wrong mime-type in input data
3     photo size is 0 bytes
4     can not decode image, check it is valid jpeg or png file
5     can not detect face
6     more than one face detected on photo
9     can not extract features from sample, probably it is too small
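A client can branch on these codes when the response body carries an error object. The grouping below (which codes mean "retake the photo" versus "fix the uploaded file") is an illustrative assumption, not part of the API contract.

RETAKE_CODES = {5, 6}            # face not found / more than one face in the photo
BAD_FILE_CODES = {1, 2, 3, 4, 9} # empty, wrong type, zero bytes, undecodable, or too small

def handle_engine_error(body: dict) -> str:
    """Map an error response body to a user-facing message (example handling only)."""
    err = body.get("result", {})
    code, info = err.get("code"), err.get("info", "")
    if code in RETAKE_CODES:
        return f"Please retake the photo: {info}"
    if code in BAD_FILE_CODES:
        return f"Please upload a valid JPEG or PNG selfie: {info}"
    return f"Unexpected engine error {code}: {info}"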
Notes
Use in real-time flows to detect screen/photo attacks
Input must contain exactly one frontal face
Can be used before enrolling or verifying identity