This project is modified from the official YOLOv5 by Ultralytics to perform a real-time object counting task on the objects detected in each frame.
The modified detect.py script prints the count of all detected objects (using the --print_all flag), as well as the count for an individual class (using --print_class "person"), for each detected image/frame.
- --nosave : do not save the recorded video when feeding from a webcam
- --line-thickness : use 1 for clean bounding boxes
- --source 0 : read from the webcam
- --print_class : print the number of detected objects of the given class, e.g. --print_class "cell phone"
- --print_all : print counts for all detected classes in the image
- --imgsz : for more accurate results, use --imgsz 800
python detect.py --source 0 --imgsz 800 --line-thickness 1 --print_class person --nosave
or
python detect.py --source 0 --line-thickness 1 --print_all --nosave
Quit with Ctrl + C while running.
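The per-frame counting that --print_all and --print_class perform can be sketched as follows. This is an illustrative simplification, not the repo's exact code; the function and labels below are hypothetical, standing in for the class names YOLOv5 assigns to each detection:

```python
from collections import Counter

def count_detections(class_names, target_class=None):
    """Count detected objects per class for one frame.

    class_names: list of class labels, one per detection in the frame.
    target_class: if given, return only that class's count (0 if absent),
                  mirroring --print_class; otherwise mirror --print_all.
    """
    counts = Counter(class_names)
    if target_class is not None:
        return {target_class: counts.get(target_class, 0)}
    return dict(counts)

# Example frame with three detections
frame_labels = ["person", "person", "cell phone"]
print(count_detections(frame_labels))            # all classes
print(count_detections(frame_labels, "person"))  # single class
```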
docker build -t yolov5 .
This will build a Docker image with a pre-launched Jupyter notebook at port 8888.
docker run --rm -it -p 90:8888 -v ${PWD}:/yolo/ --name yolo5 -d yolov5
docker exec -ti yolo5 bash
jupyter notebook --allow-root --notebook-dir=/yolo/ --ip=0.0.0.0 --port=8888 --no-browser
This will run the Jupyter notebook in root mode with the given directory as the default inside the Docker image. The notebook can be accessed at http://localhost:90.
Enter the password root to access the notebook.
Highlights from Original Readme
YOLOv5 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents Ultralytics open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
See the YOLOv5 Docs for full documentation on training, testing and deployment.
Install
Clone repo and install requirements.txt in a Python>=3.7.0 environment, including PyTorch>=1.7.
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install

Inference with detect.py
detect.py runs inference on a variety of sources, downloading models automatically from
the latest YOLOv5 release and saving results to runs/detect.
python detect.py --source 0 # webcam
img.jpg # image
vid.mp4 # video
path/ # directory
'path/*.jpg' # glob
'https://youtu.be/Zgi9g1ksQHc' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream

YOLOv5 classification training supports auto-download of MNIST, Fashion-MNIST, CIFAR10, CIFAR100, Imagenette, Imagewoof, and ImageNet datasets with the --data argument. To start training on MNIST, for example, use --data mnist.
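The way a detect script decides how to treat its --source argument can be sketched roughly as below. This is an assumed simplification for illustration, not the repo's actual dispatch logic:

```python
def classify_source(source: str) -> str:
    """Roughly mirror how --source might be dispatched (illustrative only)."""
    s = str(source).lower()
    if s.isdigit():
        return "webcam"      # e.g. --source 0
    if s.startswith(("rtsp://", "rtmp://", "http://", "https://")):
        return "stream"      # RTSP/RTMP/HTTP, including YouTube URLs
    if "*" in s:
        return "glob"        # e.g. 'path/*.jpg'
    if s.endswith((".jpg", ".jpeg", ".png")):
        return "image"
    if s.endswith((".mp4", ".avi", ".mov")):
        return "video"
    return "directory"       # fall back to treating it as a path

print(classify_source("0"))                             # webcam
print(classify_source("rtsp://example.com/media.mp4"))  # stream
```

Note that the URL check runs before the file-extension check, so a streamed .mp4 URL is treated as a stream rather than a local video file.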
# Single-GPU
python classify/train.py --model yolov5s-cls.pt --data cifar100 --epochs 5 --img 224 --batch 128
# Multi-GPU DDP
python -m torch.distributed.run --nproc_per_node 4 --master_port 1 classify/train.py --model yolov5s-cls.pt --data imagenet --epochs 5 --img 224 --device 0,1,2,3

Validate YOLOv5m-cls accuracy on the ImageNet-1k dataset:
bash data/scripts/get_imagenet.sh --val # download ImageNet val split (6.3G, 50000 images)
python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224 # validate

Use pretrained YOLOv5s-cls.pt to predict bus.jpg:
python classify/predict.py --weights yolov5s-cls.pt --source data/images/bus.jpg

model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s-cls.pt') # load from PyTorch Hub

Export a group of trained YOLOv5s-cls, ResNet and EfficientNet models to ONNX and TensorRT:
python export.py --weights yolov5s-cls.pt resnet50.pt efficientnet_b0.pt --include onnx engine --img 224
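For context on what the classification models above produce: prediction typically turns the model's raw logits into class probabilities via a softmax and reports the top-k classes. A minimal pure-Python sketch, where the logits and class names are made up for illustration:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(logits, names, k=3):
    """Return the k most probable (name, probability) pairs."""
    probs = softmax(logits)
    ranked = sorted(zip(names, probs), key=lambda p: p[1], reverse=True)
    return ranked[:k]

# Hypothetical logits for a 4-class classifier
logits = [2.0, 0.5, 1.0, -1.0]
names = ["bus", "car", "truck", "bicycle"]
for name, p in top_k(logits, names):
    print(f"{name}: {p:.3f}")
```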