
NEW! Roboflow 100: A Multi-Domain Object Detection Benchmark. We compiled 100 datasets from our community spanning a wide range of domains, subjects, and sizes to be used for benchmarking state-of-the-art computer vision models. Learn more: rf100.org Continued... 👇
We observed that there's a disconnect between the types of tasks people are trying to perform in the wild and the types of datasets researchers are benchmarking their models on. Datasets like MS COCO are often used in research to compare models' performance but...
Then those models are used to find galaxies, look at microscope images, or detect manufacturing defects in the wild (often trained on small datasets containing only a few hundred examples). This leads to large discrepancies between models' stated and real-world performance.
We set out to tackle this problem by creating a new set of datasets that mirror many of the same types of challenges that models will face in the real world. We've benchmarked a few models (YOLOv5, YOLOv7, and GLIP) to start, but could use your help measuring others...
Check out the GitHub repo for starter scripts showing how to pull the datasets, fine-tune models, and evaluate them. We're very interested to learn which models do best in which real-world scenarios & to give researchers a new tool to make their models more useful!
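
Not the official starter script, but a rough sketch of what pulling one RF100 dataset and kicking off a YOLOv5 fine-tune can look like with the roboflow pip package. The API key, workspace, and dataset slug below are placeholders, and the train.py call assumes the ultralytics/yolov5 repo is cloned and you're running from its root.

# sketch: download one RF100 dataset in YOLOv5 format, then fine-tune YOLOv5 on it
import subprocess
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")  # placeholder: personal key from app.roboflow.com
# placeholder workspace/project slugs -- swap in the dataset you want to benchmark
project = rf.workspace("your-workspace").project("your-dataset-slug")
dataset = project.version(1).download("yolov5")  # exports images + labels + data.yaml locally

# fine-tune a small YOLOv5 checkpoint on the downloaded dataset
subprocess.run([
    "python", "train.py",
    "--img", "640",
    "--batch", "16",
    "--epochs", "100",
    "--data", f"{dataset.location}/data.yaml",
    "--weights", "yolov5s.pt",
], check=True)

The same downloaded folder can then be pointed at YOLOv5's val.py (or another model's evaluation script) to report mAP per dataset; see the repo's starter scripts for the exact commands used in the benchmark.
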