Roboflow Docs
January 2025

Llama Vision 3.2 Support in Workflows


Last updated 12 hours ago


Llama Vision 3.2, a multimodal LLM developed by Meta AI, can now be used in Roboflow Workflows.

You can use the model to ask questions about the contents of images and retrieve a text response.

For example, you could use the block to:

  1. Read the text in an image.

  2. Ask questions about the text in an image.

  3. Classify an image according to a specific prompt.

This response can then be returned by your Workflow, or processed further by other blocks (e.g. the Expression block).

Note: The Llama Vision 3.2 block is configured to use OpenRouter for inference. You will need an OpenRouter API key to use the block.
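As a rough sketch of how such a block fits into a Workflow, the snippet below builds a minimal Workflow definition that routes an input image to a Llama Vision 3.2 step with a text prompt and exposes the model's answer as an output. The block type identifier, parameter names, and output selector here are illustrative assumptions, not the exact Workflows schema — consult the block's documentation in the Workflows editor for the real field names.

```python
# Hypothetical sketch of a Workflow definition using the Llama Vision 3.2
# block. Field names ("type", "prompt", "$steps.llama.output", etc.) are
# assumed for illustration and may differ from the actual schema.
def build_llama_vision_workflow(prompt: str) -> dict:
    return {
        "version": "1.0",
        "inputs": [{"type": "WorkflowImage", "name": "image"}],
        "steps": [
            {
                # Assumed block identifier for the Llama Vision 3.2 block.
                "type": "roboflow_core/llama_3_2_vision@v1",
                "name": "llama",
                "images": "$inputs.image",
                "prompt": prompt,
                # The block uses OpenRouter, so an OpenRouter key is required.
                "api_key": "YOUR_OPENROUTER_API_KEY",
            }
        ],
        "outputs": [
            {
                "type": "JsonField",
                "name": "answer",
                "selector": "$steps.llama.output",
            }
        ],
    }


# Example: a Workflow that reads the text in an image.
workflow = build_llama_vision_workflow("What text appears in this image?")
```

A definition like this would then be run against an image via the Workflows web editor or an inference client, with the model's text response surfaced under the `answer` output.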

Try it in Workflows today.