Automated Quality Control System for Printed Circuit Board Manufacturing
An end-to-end deep learning system for automated PCB defect detection that combines computer vision with domain expertise. This project demonstrates the practical application of AI in industrial quality control, achieving 91.2% F1-score on multi-label defect classification.
Built as a portfolio project showcasing:
- Deep Learning & Computer Vision expertise
- Industrial ML system design
- Electrical Engineering domain knowledge
- Data-driven problem solving

- 91.2% F1-score on the DeepPCB benchmark dataset
- 6 defect types: open circuits, shorts, mousebites, spurs, spurious copper, pin-holes
- Real-time inference: 20 ms per image (50 images/second)
- Stable training: Dropout + weight-decay regularization
- Production-ready: automated QA report generation
- ROI: 6-12 month payback period
```
Input PCB Image (640×640 px)
            ↓
      Preprocessing
            ↓
 ResNet-18 CNN Backbone
   (Transfer Learning)
            ↓
   Dropout Layer (0.5)
            ↓
Multi-Label Classification
    (6 defect classes)
            ↓
    Sigmoid Activation
            ↓
 Confidence Scores (0-1)
            ↓
 Threshold Decision (0.5)
            ↓
  Quality Control Report
```
| Metric | Score | Industry Target |
|---|---|---|
| F1 Score | 91.2% | 85-95% ✅ |
| Precision | 86.2% | >80% ✅ |
| Recall | 98.3% | >95% ✅ |
| Inference Time | 20 ms/image | <100 ms ✅ |

| Defect Type | F1 Score | Notes |
|---|---|---|
| Open Circuit | 97.7% | Excellent detection ⭐ |
| Short Circuit | 87.7% | Good performance |
| Mousebite | 90.5% | Strong recall |
| Spur | 85.3% | Challenging class |
| Spurious Copper | 95.7% | Very good |
| Pin-hole | 93.2% | Excellent precision |
Complete detection pipeline showing template comparison to AI classification
Quick summary of detection results across multiple samples
Detailed model predictions with confidence scores
Training stability comparison between models
Per-class confusion matrices showing detection accuracy
Detailed precision-recall analysis by defect type
Implemented Dropout (0.5) + L2 Weight Decay (1e-4) to prevent overfitting:
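In code, this amounts to two small changes; a minimal sketch using torchvision's ResNet-18, not the repository's exact training script:

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")

# Dropout (p=0.5) in front of the classification layer
backbone.fc = nn.Sequential(
    nn.Dropout(p=0.5),
    nn.Linear(backbone.fc.in_features, 6),   # 6 defect classes
)

# L2 regularization via Adam's weight_decay argument (1e-4)
optimizer = optim.Adam(backbone.parameters(), lr=1e-3, weight_decay=1e-4)
```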
Result: Training stability improved by 81%
Unlike generic AI models (BLIP), we use domain-specific templates:
```
PCB QUALITY INSPECTION REPORT
───────────────────────────────────────
Status: ⚠️ FAIL - HIGH Severity

DETECTED DEFECTS: 2

• SHORT CIRCUIT (Confidence: 94%)
  → Unintended electrical connection
  → Risk: Component damage, fire hazard

• OPEN CIRCUIT (Confidence: 87%)
  → Discontinuity in electrical path
  → Risk: Non-functional board

RECOMMENDATIONS:
1. URGENT: Do NOT proceed to assembly
2. Review etching process parameters
3. Inspect batch for similar defects
```
Why This Matters: 100% technical accuracy vs. <10% with generic BLIP model
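A minimal sketch of how such a rule-based report can be assembled from the classifier's confidence scores; the wording mirrors the example above, while the function name, the pass/fail rule, and the truncated defect table are illustrative and not taken from the repository:

```python
# Hypothetical rule-based report generator: maps model confidences to
# domain-specific text instead of relying on a generic captioning model.
DEFECT_INFO = {
    "short": ("SHORT CIRCUIT", "Unintended electrical connection",
              "Component damage, fire hazard"),
    "open":  ("OPEN CIRCUIT", "Discontinuity in electrical path",
              "Non-functional board"),
    # ... remaining four classes omitted for brevity
}

def build_report(confidences: dict, threshold: float = 0.5) -> str:
    """Turn per-class confidences into a plain-text QC report."""
    detected = {k: v for k, v in confidences.items()
                if v >= threshold and k in DEFECT_INFO}
    lines = ["PCB QUALITY INSPECTION REPORT",
             f"Status: {'FAIL' if detected else 'PASS'}",
             f"DETECTED DEFECTS: {len(detected)}"]
    for key, conf in sorted(detected.items(), key=lambda kv: -kv[1]):
        name, desc, risk = DEFECT_INFO[key]
        lines += [f"• {name} (Confidence: {conf:.0%})",
                  f"  → {desc}",
                  f"  → Risk: {risk}"]
    return "\n".join(lines)

print(build_report({"short": 0.94, "open": 0.87}))
```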
Deliberately prioritized high recall (98.3%) over precision (86.2%) because:
| Error Type | Business Impact |
|---|---|
| False Negative (missed defect) | Board ships to customer → field failure → $1,000+ cost |
| False Positive (false alarm) | Extra 2-min inspection → $2 cost |
Decision: Better to have false alarms than miss critical defects!
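A back-of-the-envelope check of that decision, using only the per-error costs from the table above:

```python
# Order-of-magnitude costs from the table above
cost_missed_defect = 1_000   # false negative: field failure
cost_false_alarm = 2         # false positive: 2-minute re-inspection

# One missed defect costs roughly as much as 500 false alarms,
# so trading some precision for near-perfect recall is the cheaper policy.
print(cost_missed_defect / cost_false_alarm)   # 500.0
```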
Core Framework:
- Python 3.9+
- PyTorch 2.0+
- torchvision (ResNet-18)
Data Processing:
- OpenCV - Image preprocessing
- NumPy - Numerical operations
- Pandas - Data manipulation
Visualization:
- Matplotlib & Seaborn
- Confusion matrices
- Training curves
Dataset:
- DeepPCB (1,500 PCB image pairs)
- 6 defect classes
- 640×640 px resolution
ResNet-18 with regularized classifier head:
- Pretrained on ImageNet for transfer learning
- Dropout layer (0.5) to prevent overfitting
- Multi-label output for simultaneous defect detection
- Sigmoid activation for independent class probabilities
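A minimal sketch of a model matching these bullets; the class name `PCBDefectClassifier` follows the usage example later in this README, but the repository's actual implementation may differ:

```python
import torch
import torch.nn as nn
from torchvision import models

class PCBDefectClassifier(nn.Module):
    """ResNet-18 backbone with a regularized multi-label head (sketch)."""

    def __init__(self, num_classes: int = 6, dropout: float = 0.5):
        super().__init__()
        # Transfer learning: start from ImageNet-pretrained weights
        self.backbone = models.resnet18(weights="IMAGENET1K_V1")
        in_features = self.backbone.fc.in_features  # 512 for ResNet-18
        self.backbone.fc = nn.Sequential(
            nn.Dropout(p=dropout),
            nn.Linear(in_features, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sigmoid gives an independent probability per defect class,
        # so several defects can be flagged on the same board.
        return torch.sigmoid(self.backbone(x))
```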
- Epochs: 20 (with early stopping)
- Batch size: 16
- Learning rate: 0.001
- Optimizer: Adam with weight decay (1e-4)
- Loss: Binary Cross-Entropy
- Scheduler: ReduceLROnPlateau
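A sketch of how these settings could be wired together in PyTorch, reusing the `PCBDefectClassifier` sketch above; `train_loader`, `val_loader`, and `evaluate` are placeholders, and the early-stopping patience is an assumed value:

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = PCBDefectClassifier()                    # sketched above
criterion = nn.BCELoss()                         # binary cross-entropy on sigmoid outputs
optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min",
                                                 factor=0.1, patience=2)

best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(20):
    model.train()
    for images, labels in train_loader:          # labels: float tensor, shape (batch, 6)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    val_loss = evaluate(model, val_loader, criterion)   # placeholder validation pass
    scheduler.step(val_loss)                     # lower LR when validation loss plateaus

    if val_loss < best_val:                      # simple early stopping on validation loss
        best_val, bad_epochs = val_loss, 0
        torch.save(model.state_dict(), "models/best_model_regularized.pth")
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
```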
Data Augmentation:
- Random horizontal/vertical flips
- Random rotation (±15°)
- Color jitter (brightness, contrast)
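The augmentation listed above, expressed as a torchvision transform pipeline; the resize matches the inference transform shown later, and the jitter strengths are assumed values beyond the stated ±15° rotation:

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),                 # ±15°
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # illustrative strengths
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
```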
```
pcb-defect-detection/
├── notebooks/
│   └── 01_EDA_exploration.ipynb
├── src/
│   └── dataset_utils.py
├── models/
│   ├── best_model.pth
│   ├── best_model_regularized.pth
│   └── .....
├── results/
│   ├── complete_training_comparison.png
│   ├── confusion_matrices_regularized.png
│   ├── per_class_performance.png
│   ├── sample_predictions.png
│   ├── IEEE_Project_Report.md
│   └── .....
├── DeepPCB/
├── requirements.txt
├── .gitignore
└── README.md
```
- Python 3.9+
- 4GB+ RAM
- GPU recommended (optional)
```bash
# Clone repository
git clone https://github.com/yourusername/pcb-defect-detection.git
cd pcb-defect-detection

# Create virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Download DeepPCB dataset
git clone https://github.com/tangsanli5201/DeepPCB.git
```

Use Pretrained Model:
```python
import torch
from torchvision import transforms
from PIL import Image

# Load model (PCBDefectClassifier is the project's model class)
model = PCBDefectClassifier()
model.load_state_dict(torch.load('models/best_model_regularized.pth', map_location='cpu'))
model.eval()

# Prepare image: resize, convert to tensor, normalize with ImageNet statistics
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

img = Image.open('path/to/pcb_image.jpg').convert('RGB')
img_tensor = transform(img).unsqueeze(0)  # add batch dimension

# Predict: the model returns one sigmoid confidence per defect class
with torch.no_grad():
    predictions = model(img_tensor)

# Report every class whose confidence clears the 0.5 threshold
defects = ['open', 'short', 'mousebite', 'spur', 'copper', 'pin-hole']
for defect, conf in zip(defects, predictions[0]):
    if conf > 0.5:
        print(f"{defect}: {conf:.2%} confidence")
```

Experiment: Tried BLIP (vision-language model) for automated defect descriptions.
Result: Complete failure ❌
Input: PCB with open circuit defect
BLIP: "a circuit board with chip chip chip chip..."
Needed: "Open circuit detected in trace at confidence 94%"
Why it failed:
- Trained on natural images (cats, dogs), not technical PCBs
- No PCB-specific vocabulary
- Can't recognize circuit patterns
Lesson: For technical domains, domain knowledge + simple rules > fancy AI without fine-tuning
Original training showed erratic validation loss with dramatic dips. Solutions tried:
| Approach | Impact |
|---|---|
| More epochs (20) | ✅ Better convergence |
| Dropout + Weight Decay | ✅✅ Significant improvement |
Result: Validation loss std dev reduced by 81%
Explored 1000+ threshold combinations. Finding: Default 0.5 was already near-optimal!
This indicates:
- Model calibration is good
- Transfer learning worked well
- No need for complex threshold tuning
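A sketch of the kind of per-class threshold sweep that leads to that conclusion; the function and variable names here are illustrative, not taken from the repository:

```python
import numpy as np
from sklearn.metrics import f1_score

def best_threshold(y_true: np.ndarray, y_prob: np.ndarray) -> tuple[float, float]:
    """Scan candidate thresholds for one class and return (threshold, F1)."""
    candidates = np.linspace(0.05, 0.95, 19)
    scores = [f1_score(y_true, (y_prob >= t).astype(int)) for t in candidates]
    best = int(np.argmax(scores))
    return float(candidates[best]), float(scores[best])

# y_true, y_prob: (n_samples, 6) arrays of labels and sigmoid outputs (placeholders)
# for i, name in enumerate(['open', 'short', 'mousebite', 'spur', 'copper', 'pin-hole']):
#     t, f1 = best_threshold(y_true[:, i], y_prob[:, i])
#     print(f"{name}: best threshold {t:.2f} (F1 {f1:.3f})")
```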
Question: Should we optimize for 92% F1 vs current 91.2%?
Analysis:
- Development time: 8+ hours
- Performance gain: ~2% F1
- Business impact: 11% fewer manual reviews
- Annual savings: ~$300
Decision: Not worth it. Time better spent on deployment prep.
| Aspect | Manual Inspection | Our AI System | Savings |
|---|---|---|---|
| Setup Cost | $0 | $2,000 | - |
| Annual Labor | $120K-300K | $10K maintenance | $110K-290K/year |
| Throughput | 20-30 boards/hour | 2000+ boards/hour | 50-100× faster |
| Detection Rate | ~85% | 98.3% | Fewer field failures |
| Consistency | Varies | 24/7 consistent | No degradation |
ROI Timeline: 6-12 months
- Consumer electronics (smartphones, IoT devices)
- Automotive (ADAS, EV battery management)
- Medical devices (pacemakers, imaging equipment)
- Aerospace & defense (avionics, satellites)
- Contract manufacturers (high-volume production)
- Web Dashboard - Flask/Streamlit UI for inspectors
- Defect Localization - Add bounding boxes (YOLO v8)
- Confidence Calibration - Platt scaling for better probabilities
- Active Learning - Continuously improve with production data
- Explainable AI - GradCAM visualization showing detection reasons
- Ensemble Models - Combine multiple architectures
- Root Cause Analysis - ML model to predict defect causes
- End-to-End Platform - Integrate with MES/ERP systems
- Edge Deployment - On-device inference with TensorRT/ONNX
Honest Assessment:
- Size: 1,500 images vs 100K+ in commercial systems
- Diversity: Single PCB type; may not generalize to flex PCBs, HDI, RF boards
- Class Imbalance: Real manufacturing lines are heavily imbalanced, with clean-to-defect ratios of 100:1 or more
- No Localization: Detects presence, not exact defect location
- Fixed Input Size: 640×640 px; may miss small defects on large boards
- False Alarms: Some false positives (acceptable in QC)
- Novel Defects: Model only knows 6 trained classes
- Environmental Factors: Lighting, camera angle affect performance
- Tang et al., "PCB Defects Detection Using Deep Learning", arXiv 2019
- He et al., "Deep Residual Learning for Image Recognition", CVPR 2016
- DeepPCB: github.com/tangsanli5201/DeepPCB
- PyTorch: pytorch.org
- OpenCV: Image processing
- DeepPCB Team - For open-sourcing the dataset
- PyTorch Community - Excellent framework
- ResNet Authors - Transfer learning foundation
Fardin Hossain Tanmoy
MSc Data Science | New York Institute of Technology
BEng Electrical & Electronics Engineering | University of Southampton
Email: fardintonu@gmail.com
LinkedIn: www.linkedin.com/in/fardin-hossain-tanmoy
GitHub: https://github.com/fardinhossain007
Give it a star ⭐ on GitHub!
Built with 🔥 by an EEE grad exploring the intersection of hardware and AI
"The best AI solution balances accuracy, speed, cost, and interpretability."