What is YOLOv7?

YOLOv7 is a state-of-the-art object detection model.

About the model

Here is an overview of the YOLOv7 model:

Date of Release: July 6, 2022
Model Type: Object Detection
Architecture: YOLO, CNN
Framework Used: PyTorch
Annotation Format: YOLOv7 PyTorch TXT
Stars on GitHub: 7,300+

YOLOv7 was released in July 2022 by WongKinYiu and AlexeyAB. It achieves state-of-the-art performance for real-time object detection and is trained to detect the 80 generic classes in the MS COCO dataset.

YOLOv7 inference on image of horses

There are six versions of the model, ranging from the namesake YOLOv7 (fastest, smallest, and least accurate) to the beefy YOLOv7-E6E (slowest, largest, and most accurate); a loading sketch follows the list of differences below.

The differences between the different sizes of the model are:

  • The image input resolution
  • The number of anchors
  • The number of parameters
  • The number of layers
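To try a specific variant, you can load its released weights through PyTorch. Below is a minimal sketch, assuming the hubconf.py entrypoint shipped in the WongKinYiu/yolov7 repository; the weight and image filenames are illustrative:

```python
import torch

# Load a YOLOv7 variant from the official repository via torch.hub.
# Swap "yolov7.pt" for e.g. "yolov7-e6e.pt" to trade speed for accuracy.
model = torch.hub.load("WongKinYiu/yolov7", "custom", "yolov7.pt", trust_repo=True)

# Run inference on an image (path is illustrative).
results = model("horses.jpg")
results.print()  # summary of detected classes, confidences, and boxes
```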
Compare YOLOv7 vs Other Models on COCO

The evaluation of YOLOv7 models shows that they infer faster (x-axis) and with greater accuracy (y-axis) than comparable real-time object detection models. YOLOv7 sits in the upper left of the plot: faster and more accurate than its peer networks.

Evolution of layer aggregation strategies in YOLOv7

Check out YOLOv8, defining a new state-of-the-art in computer vision

YOLOv8 is here, setting a new standard for performance in object detection and image segmentation tasks. Roboflow has developed a library of resources to help you get started with YOLOv8, covering guides on how to train YOLOv8, how the model stacks up against v5 and v7, and more.

Learn about YOLOv8


Model Performance

Model       Test Size   AP (test)   AP50 (test)   AP75 (test)   Batch 1 FPS   Batch 32 Avg. Time
YOLOv7      640         51.4%       69.7%         55.9%         161 fps       2.8 ms
YOLOv7-X    640         53.1%       71.2%         57.8%         114 fps       4.3 ms
YOLOv7-W6   1280        54.9%       72.6%         60.1%         84 fps        7.6 ms
YOLOv7-E6   1280        56.0%       73.5%         61.2%         56 fps        12.3 ms
YOLOv7-D6   1280        56.6%       74.0%         61.8%         44 fps        15.0 ms
YOLOv7-E6E  1280        56.8%       74.4%         62.1%         36 fps        18.7 ms

Explore this model on Roboflow

YOLOv7 Annotation Format

YOLOv7 uses the YOLOv7 PyTorch TXT annotation format. If your annotations are in a different format, you can use Roboflow's annotation conversion tools to get your data into the right format.
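For reference, YOLO-style TXT annotations store one object per line as class_id x_center y_center width height, with box coordinates normalized to the image dimensions. Here is a minimal parsing sketch; the file contents and example values are illustrative:

```python
def parse_yolo_labels(path: str):
    """Read one YOLO TXT label file: each line is
    'class_id x_center y_center width height', normalized to [0, 1]."""
    boxes = []
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue  # skip blank lines
            class_id, x_c, y_c, w, h = line.split()
            boxes.append((int(class_id), float(x_c), float(y_c), float(w), float(h)))
    return boxes

# Example label line (values illustrative): a horse (COCO class 17)
# occupying the middle of the image might be stored as:
# 17 0.50 0.50 0.25 0.40
```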

Convert data between formats

Label data automatically with YOLOv7

You can automatically label a dataset using YOLOv7 with help from Autodistill, an open source package for training computer vision models. You can label a folder of images automatically with only a few lines of code, as in the sketch below.
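Here is a minimal sketch of the standard Autodistill labeling pattern. The autodistill_yolov7 package name is hypothetical; check the Autodistill documentation for the model wrappers that are actually published:

```python
from autodistill.detection import CaptionOntology
from autodistill_yolov7 import YOLOv7  # hypothetical wrapper package

# Map prompts to the class names you want in the output labels.
base_model = YOLOv7(ontology=CaptionOntology({"horse": "horse"}))

# Label every image in the folder; Autodistill writes the annotated
# dataset (images plus YOLO TXT labels) to disk for training.
base_model.label("./images", extension=".jpg")
```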
