Quality Control Automation System for Product Packaging with Embedded AI

Project Domain
This project falls under ESP32-based Embedded AI for Quality Control, focusing on reliably identifying and classifying product packaging conditions (e.g., damaged vs. intact) by integrating computer vision, AI inference, and camera modules using an ESP32 microcontroller.
Meet Our Team
- MUHAMMAD DZAKA AUFA F. (225150200111053)
- MOHAMMAD AMRIZAL K. (225150300111003)
- MOUDY LESTARI TULUS WIDODO (225150300111006)
- AQNA MUMTAZ `ILMI (225150300111022)
- CHANDRA FERDINAN SIHITE (225150300111024)
Problem Statements
- Packaging damage still occurs frequently and is difficult to detect quickly and consistently.
- Manual quality control processes are often inefficient, prone to human error, and unreliable at large scale.
Goals
- Develop an on-device AI model to classify product packaging as either damaged or intact with high accuracy.
- Deploy the AI model on an ESP32 microcontroller, enabling real-time inference using the ESP32-CAM.
- Improve the efficiency and reliability of the quality control process by reducing human intervention and error.
- Create a fault-tolerant system using dual ESP32s for redundancy and improved reliability.
- Enable remote access and monitoring, providing real-time feedback and control over the inspection process.
Solution Statements
- Use an ESP32-CAM to capture real-time images of product packaging for visual inspection.
- Run on-device AI inference with a pre-trained MobileNetV2 model to classify packaging as damaged or intact directly on the ESP32, without cloud dependency.
- Integrate the ESP32 with Wi-Fi to enable remote monitoring, feedback, and system control via the Blynk IoT platform.
- Use a dual-ESP32 redundant system, where a backup microcontroller takes over automatically in case of failure, ensuring high reliability.
- Trigger an actuator (servo) to sort damaged products from intact ones based on inference results.
- Display real-time status and classification results on the Blynk IoT platform.
Prerequisites: Component Preparation
Hardware
- Main ESP32: Runs the system logic, sends status data to Blynk, and controls the servo.
- Backup ESP32: Takes over the main ESP32's tasks automatically if the main unit fails.
- ESP32-CAM: Captures images of the packaging and performs on-device AI inference.
- Conveyor Belt: Moves product packaging past the inspection point.
- Servo Motor SG90: Moves a board that separates boxes detected as defective.
Software
- Edge Impulse: Building, training, and testing AI models.
- Blynk: QC status monitoring and real-time notification.
- Arduino IDE: Writing program code and setting up QC logic.
Dataset

To train the AI model, a custom image dataset was prepared consisting of two main classes:
- Intact packaging
- Damaged packaging (e.g., torn, dented, or misaligned)
The dataset included over 100 labeled images per class, captured in varying lighting and background conditions to enhance model generalization. Data augmentation techniques such as rotation, flipping, zoom, and brightness adjustment were applied to increase dataset diversity.
The model was trained using MobileNetV2 with transfer learning and quantized to int8 TensorFlow Lite format for deployment on the ESP32. Performance was evaluated using accuracy, precision, and confusion matrix on a validation set.
System Workflow
- System Initialization: The ESP32-WROOM-32 and ESP32-CAM are powered on. The system displays the status "Ready" on the OLED and stands by to start the inference process.
- Image Capture: When a product passes in front of the camera, the ESP32-CAM automatically captures an image.
- AI Inference: The captured image is sent to the ESP32-WROOM-32 or processed directly on the ESP32-CAM (if the model is lightweight enough). The AI model classifies the image as:
  - Intact: the product is passed through
  - Damaged: the product is diverted to the reject path
- Decision Making and Action: Based on the classification result:
  - A servo or actuator directs the product to the appropriate path (accepted or rejected)
  - The OLED display shows the classification result and the count of passed vs. rejected products
- Monitoring and Redundancy: The system can be remotely monitored via a dashboard (web/mobile app). If the main ESP32 fails, the backup ESP32 is activated through a watchdog system and takes over the process.
Schematic

This diagram illustrates the flow of data and control in the proposed quality control system using ESP32 microcontrollers. The system is divided into three main stages: Input, Process, and Output.
- Input Stage: The ESP32-CAM module functions as both a sensor and an AI inference engine, capturing images of product packaging and classifying them as either intact or damaged using an embedded neural network model.
- Processing Stage: A secondary ESP32 microcontroller receives the classification results for decision-making. It handles:
  - Communication with the cloud or mobile interface via Wi-Fi.
  - Logic control to determine actuator behavior.
  - Optional redundancy management in case of system faults.
- Output Stage: The results are:
  - Sent to the User Interface (e.g., Blynk) for real-time notifications and system monitoring.
  - Used to control actuators (e.g., servo motors) to separate defective products automatically.
This architecture ensures efficient, real-time decision-making and system feedback in a compact, low-power embedded setup, suitable for industrial quality control applications.
Demo and Evaluation
- Setup: Assemble all hardware components, including the ESP32-CAM, dual ESP32 boards (MAIN & BACKUP), Arduino Nano, L298N motor driver, DC motor for the conveyor belt, and MG996R servo motor. Upload the developed firmware to each microcontroller using the Arduino IDE. Ensure required libraries such as Blynk, Servo, and WiFi are installed.
  This project operates in a Wi-Fi-based IoT environment, where the ESP32 connects to a network to send real-time notifications to the Blynk application. Additionally, the AI model for defect detection is deployed on the ESP32-CAM via the Edge Impulse platform, enabling local inference (embedded AI) without cloud processing.
- Demo: This section demonstrates the full operation of the automated inspection system. The ESP32-CAM captures real-time images of products on a conveyor belt and classifies them using an embedded AI model. Based on the classification result, the main ESP32 controls a servo motor to sort the product into the appropriate lane, either "pass" or "reject." All decisions and system status updates are also sent to the Blynk app for real-time remote monitoring.
- Watch the full demo on YouTube to see the system in action: https://youtu.be/mm5qb9Eb6Bw
- Evaluation: The system was evaluated on three key aspects:
  - Detection Accuracy: The embedded AI model running on the ESP32-CAM was tested with multiple product samples under different lighting and movement conditions. The model achieved reliable classification performance in distinguishing defective from non-defective items.
  - System Responsiveness: Communication between the ESP32-CAM, main ESP32, and backup ESP32 was tested for latency and failover handling. The heartbeat mechanism ensured takeover by the backup ESP32 within 3 seconds of a main-unit failure.
  - App Integration: Integration with the Blynk IoT platform was tested for real-time updates. Notifications about product status and ESP32 availability were successfully pushed, allowing remote monitoring from a smartphone.

  Overall, the system performed reliably under varying conditions, demonstrating robustness in both defect detection and fault-tolerant response.
Conclusion
This project delivers a reliable and intelligent solution for automating quality control of product packaging using ESP32-based embedded AI. By combining real-time image capture via ESP32-CAM, on-device classification using a lightweight CNN model, and remote monitoring over Wi-Fi, the system reduces reliance on manual inspection and minimizes human error, while also increasing scalability and consistency in industrial environments.