Lens Studio

Deploy a model to Lens Studio for use in building a Snap Lens.

With a trained model ready in Roboflow, you can deploy your model to SnapML.

Task Support

The following task types are supported for deployment to Lens Studio:

| Task Type | Supported by Lens Studio |
| --- | --- |
| Object Detection | ✅ |
| Classification |  |
| Instance Segmentation |  |
| Semantic Segmentation |  |

Note: Only models trained using Roboflow Train 3.0 are supported. You can confirm whether a model was trained with Roboflow Train 3.0 on the Versions page associated with your model.

Deploy a Model to Lens Studio

Click on “Deploy” in the Roboflow sidebar, then scroll down until you see the “Use with Snap Lens Studio” box. Click “Export to Lens Studio”.

When you click this button, a pop-up will appear showing information about the classes in your model.

These classes are ordered and will be used in the next step for configuring your model in Lens Studio. Take note of the class list for future use.

In addition, two files will be downloaded:

  1. The Roboflow Lens Studio template, which lets you use your weights in an application with minimal configuration, and

  2. Your model weights.

The Roboflow Lens Studio template is 100 MB, so downloading the template may take a few moments depending on your internet connection.
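If you would like to sanity-check the downloaded weights before importing them, you can inspect the file locally. This is a minimal sketch that assumes the exported weights are an ONNX file and uses a placeholder file name; it prints the model's input and output names and static dimensions, which is useful context for the import step later on.

```python
# Minimal sketch: inspect exported weights before importing them into Lens Studio.
# Assumes the export is an ONNX file; the file name below is a placeholder.
import onnx

model = onnx.load("roboflow-weights.onnx")

def static_dims(value_info):
    # Static dimensions of an ONNX tensor; dynamic dimensions show up as 0.
    return [d.dim_value for d in value_info.type.tensor_type.shape.dim]

for inp in model.graph.input:
    print("input:", inp.name, static_dims(inp))
for out in model.graph.output:
    print("output:", out.name, static_dims(out))
```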

If you haven’t already installed Lens Studio, go to the Snap AR website and download the latest version. With Lens Studio installed and the template ready, we can start setting up our model.

For this section, we will use the Roboflow Lens Studio template. However, you can use your model weights in any application with the MLController component.

Configure Model in Lens Studio

Unzip the Roboflow Lens Studio template you downloaded earlier, then open the “Roboflow-Lens-Template.lsproj” file in the unzipped folder.

When you open the application, you will see something like this:

By default, the template uses a coin counting model. For this example, we will use the playing cards model we built earlier. This application draws boxes around each prediction, but you can add your own filters and logic using Lens Studio.

Click the “ML Controller” box at the top of the left sidebar in Lens Studio:

This will open a panel next to the preview window in which you can configure your model for use in the application:

Our demo application is configured for the coin counter example. To use your own model, first click the “ML Model” box:

Then, drag the weights downloaded from Roboflow into the pop-up box:

When you drag in the weights, you will be prompted with some configuration options. In the “Inputs” section of the pop-up, set each “Scale” value to 0.0039. Leave the bias values at their defaults.
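The 0.0039 value is not arbitrary: it is approximately 1/255, which rescales 8-bit pixel intensities (0–255) into the 0–1 range most vision models expect. A quick arithmetic check, assuming the Scale field simply multiplies each input value:

```python
# 1/255 rescales 0-255 pixel intensities to the 0-1 range; rounded to four
# decimal places it gives the 0.0039 entered in the "Inputs" section.
scale = 1 / 255
print(round(scale, 4))          # 0.0039

# A white pixel (255) maps to 1.0 and a black pixel (0) stays at 0.0.
print(255 * scale, 0 * scale)   # 1.0 0.0
```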

Then, click “Import” to import your model.

Configure Classes in Lens Studio

We now have our model loaded into Lens Studio. There is one more step: telling the model which classes we are using.

In the “Class Settings” tab below the ML Model button that we used earlier, you will see a list of classes. These are configured for a coin counter example in our demo project, but if you are working with your own Lens Studio project these values will be blank.

We need to set our class names and labels, with the labels in the order presented in the Roboflow dashboard. Here is an example of setting one of the values for the playing card application:

We need to do this configuration for each class in our model. You must specify all classes in your model so Snap can interpret the information in the model weights.
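If you no longer have the export pop-up open, you can retrieve your project's class list programmatically to cross-check the names you enter in Lens Studio. The sketch below queries the Roboflow REST API; the workspace ID, project ID, and API key are placeholders, and the response is assumed to contain a `classes` field mapping class names to annotation counts. The order shown in the export pop-up remains the authoritative order for Lens Studio.

```python
# Sketch: list a project's classes via the Roboflow REST API for cross-checking.
# Workspace, project, and API key are placeholders; the response shape is assumed.
import requests

WORKSPACE = "your-workspace"   # placeholder
PROJECT = "playing-cards"      # placeholder
API_KEY = "YOUR_API_KEY"       # placeholder

response = requests.get(
    f"https://api.roboflow.com/{WORKSPACE}/{PROJECT}",
    params={"api_key": API_KEY},
    timeout=30,
)
response.raise_for_status()
project_info = response.json().get("project", {})

# "classes" is assumed to map class name -> annotation count.
for class_name in sorted(project_info.get("classes", {})):
    print(class_name)
```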

Now our application is ready to use! You can use the “Preview” box to try your application on your computer, or demo it on your own device using the Pairing with Snapchat feature.