Mobile iOS

Deploy your trained Roboflow model in your iOS app

The Roboflow Mobile iOS SDK is a great option if you are developing an iOS application that needs a model running on the edge (iPad or iPhone), whether for faster inference or to unlock a new suite of features, capabilities, and use cases (like augmented reality).

Native mobile applications with custom computer vision models embedded in them allow developers to give their apps the sense of sight.

Task Support

The following task types are supported by the iOS SDK:

Task Type               Supported by iOS SDK Deployment
Object Detection        ✅
Classification          ❌
Instance Segmentation   ❌
Semantic Segmentation   ❌

Deploy a Model to an iOS Device

Supported Hardware and Software

All iOS devices support on-device inference, but devices older than the iPhone 8 (which introduced the A11 Bionic chip and its Neural Engine) will fall back to the less energy-efficient GPU.

Roboflow requires a minimum iOS version of 15.4.

Prototyping

You can develop against the Roboflow Hosted Inference API. It uses the same trained models as on-device inference.
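For example, here is a minimal sketch of calling the Hosted Inference API from Swift with URLSession, assuming an object detection model; MODEL_ID, VERSION, and API_KEY are placeholders for your own project information:

import UIKit

// Sketch: POST a base64-encoded image to the Hosted Inference API.
func detectWithHostedAPI(image: UIImage) {
    guard let jpegData = image.jpegData(compressionQuality: 0.8) else { return }

    var request = URLRequest(url: URL(string: "https://detect.roboflow.com/MODEL_ID/VERSION?api_key=API_KEY")!)
    request.httpMethod = "POST"
    request.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")
    request.httpBody = jpegData.base64EncodedString().data(using: .utf8)

    URLSession.shared.dataTask(with: request) { data, _, error in
        if let error = error {
            print(error.localizedDescription)
        } else if let data = data {
            // The response is JSON containing a "predictions" array.
            print(String(data: data, encoding: .utf8) ?? "")
        }
    }.resume()
}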

Installation

"CocoaPods is built with Ruby and it will be installable with the default Ruby available on macOS. You can use a Ruby Version manager, however, we recommend that you use the standard Ruby available on macOS unless you know what you're doing. Using the default Ruby install will require you to use sudo when installing gems. (This is only an issue for the duration of the gem installation, though.)" - CocoaPods

The "Sudo-less" installation is an option, if you do not want to grant RubyGems admin privileges for this process. However, note that the sudo installation is more typical.

Check that CocoaPods is successfully installed by entering pod --version in your Terminal.

Installing the Roboflow CocoaPod

First, run pod init in your project directory.

Make sure the Podfile specifies platform :ios, '15.4'.

Next, add pod 'Roboflow' to your Podfile.
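With both edits in place, a minimal Podfile might look like this sketch (the target name YourApp is a placeholder for your own app target):

platform :ios, '15.4'

target 'YourApp' do
  use_frameworks!

  pod 'Roboflow'
end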

If you do not have the Xcode Command Line Tools installed, run xcode-select --install in your Terminal.

If the Command Line Tools are already present on your system, this will return: xcode-select: error: command line tools are already installed, use "Software Update" to install updates.

Lastly, run pod install and open the generated .xcworkspace file in Xcode.

  • If pod install returns the error "You may have encountered a bug in the Ruby interpreter or extension libraries," first run brew install cocoapods, then run pod install again and open the generated .xcworkspace file in Xcode.

    • Then verify that CocoaPods installed successfully by entering pod --version in your Terminal.

Using Roboflow in Swift

Open the .xcworkspace file in Xcode.

Next, import the SDK by adding import Roboflow at the top of your Swift file.

Then, create an instance of the Roboflow API with let rf = RoboflowMobile(apiKey: "API_KEY"). When loading a model, replace YOUR-MODEL-VERSION-# with the integer value of your model's version number.

Locating Your Project Information

Completion Handler Usage:

import Roboflow
...
// initialize with your API key
let rf = RoboflowMobile(apiKey: "API_KEY")
var model: RFObjectDetectionModel?
...

// model is your model's ID; modelVersion is the integer version number
rf.load(model: "YOUR-MODEL-ID", modelVersion: YOUR-MODEL-VERSION-#) { [self] model, error, modelName, modelType in
    if error != nil {
        print(error?.localizedDescription as Any)
    } else {
        // configure with your chosen confidence threshold, overlap
        // threshold, and maximum object count
        model?.configure(threshold: threshold, overlap: overlap, maxObjects: maxObjects)
        self.model = model
    }
}
...

// model?.detect takes a UIImage and runs inference on it
let img = UIImage(named: "example.jpeg")
model?.detect(image: img!) { predictions, error in
    if error != nil {
        print(error?.localizedDescription as Any)
    } else {
        print(predictions)
    }
}

Asynchronous Usage:

To use the SDK asynchronously, you must invoke your Roboflow model from within an asynchronous context; a sketch of wrapping these calls in a Task follows the code below.

import Roboflow
...
// initialize with your API key
let rf = RoboflowMobile(apiKey: "API_KEY")
...

// model is your model's ID; replace YOUR-MODEL-VERSION-# with the integer version number
let (model, loadingError, modelName, modelType) = await rf.load(model: "YOUR-MODEL-ID", modelVersion: YOUR-MODEL-VERSION-#)
// check loadingError before force-unwrapping in production code
model!.configure(threshold: threshold, overlap: overlap, maxObjects: maxObjects)
...

// model!.detect takes a UIImage and runs inference on it
let img = UIImage(named: "example.jpeg")
let (predictions, predictionError) = await model!.detect(image: img!)
print(predictions)
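For example, here is a minimal sketch of invoking these calls from a synchronous context, such as a button action, by wrapping them in a Task (this assumes model was loaded and configured as shown above):

// wrap the async calls in a Task when starting from synchronous code
Task {
    let img = UIImage(named: "example.jpeg")
    let (predictions, predictionError) = await model!.detect(image: img!)
    if predictionError != nil {
        print(predictionError!)
    } else {
        print(predictions)
    }
}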

Predictions Format:

x: Float           // center of the object on the x-axis
y: Float           // center of the object on the y-axis
width: Float       // width of the bounding box
height: Float      // height of the bounding box
className: String  // the predicted class label
confidence: Float  // confidence score of the prediction
color: UIColor     // suggested display color for the class
box: CGRect        // the bounding box as a CGRect

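As an illustration, here is a sketch of consuming these fields, assuming the returned predictions expose them as properties:

// Sketch: iterate over returned predictions and read the fields above.
for prediction in predictions ?? [] {
    print("\(prediction.className): \(prediction.confidence)")

    // box is a CGRect in the coordinate space of the input image,
    // suitable for drawing a bounding-box overlay
    let rect = prediction.box
    print("center x: \(rect.midX), center y: \(rect.midY), w: \(rect.width), h: \(rect.height)")
}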

React Native Expo App Example

We also provide an example of integrating this SDK into an Expo app with React Native here. You may find it useful when building your own downstream application.

Be sure that you have both Expo and CocoaPods installed.

  • expo-cli supports the following Node.js versions: >=12.13.0 <15.0.0 (Maintenance LTS) and >=16.0.0 <17.0.0 (Active LTS)

  • The yarn package manager must be installed for Node.js (npm install -g yarn)

Example iOS application - CashCounter

Download CashCounter, our example iOS app that counts US coins and bills, as an example of how you could deploy a computer vision model to an iPhone. You'll see examples of visualizing bounding boxes, FPS, object counting, image upload, and more.
