Classification
Run inference on classification models hosted on Roboflow.
Infer on Local and Hosted Images
To install dependencies, run pip install inference-sdk.
from inference_sdk import InferenceHTTPClient
CLIENT = InferenceHTTPClient(
api_url="https://classify.roboflow.com",
api_key="API_KEY"
)
result = CLIENT.infer("your_image.jpg", model_id="vehicle-classification-eapcd/2")
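If your image is already hosted elsewhere, a minimal sketch looks like the following (this assumes CLIENT.infer also accepts a publicly reachable image URL in place of a local file path; the URL below is a placeholder):
# Hosted image: pass a URL instead of a local path
# (assumption: the client accepts URL strings; the URL is a placeholder)
result = CLIENT.infer(
    "https://example.com/your_image.jpg",
    model_id="vehicle-classification-eapcd/2"
)
print(result)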
Node.js
We use axios to perform the POST request in this example, so first run npm install axios to install the dependency.
Inferring on a Local Image
const axios = require("axios");
const fs = require("fs");
const image = fs.readFileSync("YOUR_IMAGE.jpg", {
encoding: "base64"
});
axios({
method: "POST",
url: "https://classify.roboflow.com/your-model/42",
params: {
api_key: "YOUR_KEY"
},
data: image,
headers: {
"Content-Type": "application/x-www-form-urlencoded"
}
})
.then(function(response) {
console.log(response.data);
})
.catch(function(error) {
console.log(error.message);
});
Uploading a Local Image Using base64
import UIKit
// Load Image and Convert to Base64
let image = UIImage(named: "your-image-path") // path to image to upload ex: image.jpg
let imageData = image?.jpegData(compressionQuality: 1)
let fileContent = imageData?.base64EncodedString()
let postData = fileContent!.data(using: .utf8)
// Initialize Inference Server Request with API_KEY, Model, and Model Version
var request = URLRequest(url: URL(string: "https://classify.roboflow.com/your-model/your-model-version?api_key=YOUR_APIKEY&name=YOUR_IMAGE.jpg")!,timeoutInterval: Double.infinity)
request.addValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")
request.httpMethod = "POST"
request.httpBody = postData
// Execute Post Request
URLSession.shared.dataTask(with: request, completionHandler: { data, response, error in
// Parse Response to String
guard let data = data else {
print(String(describing: error))
return
}
// Convert Response String to Dictionary
do {
let dict = try JSONSerialization.jsonObject(with: data, options: []) as? [String: Any]
} catch {
print(error.localizedDescription)
}
// Print String Response
print(String(data: data, encoding: .utf8)!)
}).resume()
Response Object Formats
Single-Label Classification
The hosted API inference route returns a JSON object containing an array of predictions. Each prediction has the following properties:
time = total time, in seconds, to process the image and return predictions
image = an object that holds the width and height of the predicted image
  width = the width of the predicted image
  height = the height of the predicted image
predictions = collection of all predicted classes and their associated confidence values for the prediction
  class = the label of the classification
  confidence = the model's confidence that the image contains objects of the detected classification
top = highest confidence predicted class
confidence = highest predicted confidence score
image_path = path of the predicted image
prediction_type = the model type used to perform inference, ClassificationModel in this case
// an example JSON object
{
"time": 0.19064618100037478,
"image": {
"width": 210,
"height": 113
},
"predictions": [
{
"class": "real-image",
"confidence": 0.7149
},
{
"class": "illustration",
"confidence": 0.2851
}
],
"top": "real-image",
"confidence": 0.7149,
"image_path": "/cropped-images-1.jpg",
"prediction_type": "ClassificationModel"
}
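As a minimal sketch, assuming the response above has been parsed into a Python dict named result (for example, the value returned by CLIENT.infer earlier), the fields can be read like this:
# result holds the parsed single-label response shown above
# (assumption: it is already a Python dict, e.g. via json.loads or the SDK)
print(result["top"], result["confidence"])   # highest confidence class and its score
for prediction in result["predictions"]:
    print(prediction["class"], prediction["confidence"])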
Multi-Label Classification
The hosted API inference route returns a JSON object containing an array of predictions. Each prediction has the following properties:
time = total time, in seconds, to process the image and return predictions
image = an object that holds the width and height of the predicted image
  width = the width of the predicted image
  height = the height of the predicted image
predictions = collection of all predicted classes and their associated confidence values for the prediction
  class = the label of the classification
  confidence = the model's confidence that the image contains objects of the detected classification
predicted_classes = an array that contains a list of all classifications (labels/classes) returned in model predictions
image_path = path of the predicted image
prediction_type = the model type used to perform inference, ClassificationModel in this case
// an example JSON object
{
"time": 0.19291414400004214,
"image": {
"width": 113,
"height": 210
},
"predictions": {
"dent": {
"confidence": 0.5253503322601318
},
"severe": {
"confidence": 0.5804202556610107
}
},
"predicted_classes": [
"dent",
"severe"
],
"image_path": "/car-model-343.jpg",
"prediction_type": "ClassificationModel"
}
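A similar sketch for the multi-label format, where predictions is keyed by class name and predicted_classes lists the classes the model returned (again assuming the response has been parsed into a Python dict named result):
# result holds the parsed multi-label response shown above (assumed dict)
for class_name in result["predicted_classes"]:
    confidence = result["predictions"][class_name]["confidence"]
    print(class_name, confidence)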
API Reference
Using the Inference API
POST https://classify.roboflow.com/:datasetSlug/:versionNumber
You can POST a base64 encoded image directly to your model endpoint, or pass a URL to an image hosted elsewhere via the image query parameter, as shown in the sketch below.
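For example, a minimal sketch using the Python requests library, mirroring the Node.js example above (the model slug, version number, file name, and API key are placeholders):
import base64
import requests

# Placeholders: substitute your own model slug, version, image, and API key.
with open("YOUR_IMAGE.jpg", "rb") as f:
    encoded_image = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    "https://classify.roboflow.com/your-model/42",
    params={"api_key": "YOUR_KEY"},
    data=encoded_image,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(response.json())

# Alternatively, for an image hosted elsewhere, send no body and pass the URL
# via the image query parameter instead:
# params={"api_key": "YOUR_KEY", "image": "https://example.com/your_image.jpg"}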
Path Parameters
datasetSlug
string
The url-safe version of the dataset name. You can find it in the web UI by looking at the URL on the main project view.
versionNumber
string
The version number identifying the version of your dataset.
Query Parameters
api_key
string
Your API key (obtained via your workspace API settings page)
// an example successful JSON response
{
"predictions":{
"bird":{
"confidence":0.5282308459281921
},
"cat":{
"confidence":0.5069406032562256
},
"dog":{
"confidence":0.49514248967170715
}
},
"predicted_classes":[
"bird",
"cat"
]
}
// an example error response (returned, for example, when the api_key is missing or invalid)
{
"message":"Forbidden"
}