{"id":4335,"date":"2025-06-30T10:41:11","date_gmt":"2025-06-30T03:41:11","guid":{"rendered":"https:\/\/filkom.ub.ac.id\/project\/?p=4335"},"modified":"2025-06-30T10:41:11","modified_gmt":"2025-06-30T03:41:11","slug":"bangunin-a-yolov8-based-drowsiness-detection-system-integrated-with-cloud-computing-and-a-fault-tolerant-system","status":"publish","type":"post","link":"https:\/\/filkom.ub.ac.id\/project\/2025\/06\/bangunin-a-yolov8-based-drowsiness-detection-system-integrated-with-cloud-computing-and-a-fault-tolerant-system\/","title":{"rendered":"bangunIN : A YOLOv8-Based Drowsiness Detection System Integrated with Cloud Computing and a Fault-Tolerant System"},"content":{"rendered":"<p><b>Introducing bangunIN\ud83d\udc40<\/b><b><br \/>\n<\/b><b>A YOLOv8-Based Drowsiness Detection System Integrated with Cloud Computing and a Fault-Tolerant System<\/b><\/p>\n<p><img fetchpriority=\"high\" decoding=\"async\" class=\"wp-image-4411 aligncenter\" src=\"https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/gambar-alat-scaled.jpg\" alt=\"\" width=\"249\" height=\"332\" srcset=\"https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/gambar-alat-scaled.jpg 1920w, https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/gambar-alat-225x300.jpg 225w, https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/gambar-alat-768x1024.jpg 768w, https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/gambar-alat-1152x1536.jpg 1152w, https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/gambar-alat-1536x2048.jpg 1536w\" sizes=\"(max-width: 249px) 100vw, 249px\" \/><\/p>\n<hr \/>\n<p>&nbsp;<\/p>\n<p><b>\ud83d\udcddProject Domain:<\/b><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">Project Short Description<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">This project focuses on the development of a driver 
drowsiness detection system using YOLOv8 object detection, embedded within a Raspberry Pi 4, and integrated with Cloud Computing and a Fault Tolerant System. It is designed to enhance driving safety, particularly in public transportation settings such as buses.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">Using a webcam, the system continuously captures facial images of the driver in real time to detect drowsiness indicators such as closed eyes, head tilts, or fatigued facial expressions. The YOLOv8 model, which has been pre-trained specifically for this task, runs locally on the Raspberry Pi (without relying on the internet) and triggers a buzzer and red LED alert when signs of drowsiness are detected.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">Additionally, detection data is sent to a cloud-based dashboard (Blynk) where fleet managers can monitor real-time conditions and review historical logs. The system also features a Fault Tolerant mechanism that automatically switches to a backup camera if the main one fails, and maintains error logs locally to ensure uninterrupted functionality.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">By integrating Artificial Intelligence, IoT, and Cloud services, this solution offers a real-time, reliable, and scalable approach to improving road safety in commercial transportation.<\/span><\/p>\n<hr \/>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400\">\ud83e\uddd1\u200d\ud83e\udd1d\u200d\ud83e\uddd1Meet Our Team<\/span><\/p>\n<p><span style=\"font-weight: 400\">\u00a0 \u00a0 Project Leader\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 Software Engineer\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 UI\/UX\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 Hardware Engineer\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0AI Engineer\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\"><img decoding=\"async\" 
class=\"alignnone wp-image-4477\" src=\"https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/PPT-Laporan-Akhir-FTS.png\" alt=\"\" width=\"834\" height=\"156\" srcset=\"https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/PPT-Laporan-Akhir-FTS.png 1657w, https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/PPT-Laporan-Akhir-FTS-300x56.png 300w, https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/PPT-Laporan-Akhir-FTS-1024x192.png 1024w, https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/PPT-Laporan-Akhir-FTS-768x144.png 768w, https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/PPT-Laporan-Akhir-FTS-1536x287.png 1536w\" sizes=\"(max-width: 834px) 100vw, 834px\" \/>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">\u00a0 \u00a0 \u00a0 \u00a0 Rashid F.\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 Zakaria R.\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 Syakhish N.\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0Michael Y.\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0M. 
Rasyid<\/span><\/p>\n<hr \/>\n<p>&nbsp;<\/p>\n<p><b>\u2757 Problem Statements<\/b><\/p>\n<ul>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\ud83d\ude97 Traffic accidents caused by drowsy drivers remain a serious issue, particularly during peak travel periods like mudik (Indonesia\u2019s annual Eid homecoming exodus).<\/span><\/li>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\ud83d\ude34 Lack of drowsiness detection systems increases the risk of accidents due to driver fatigue.<\/span><\/li>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\u26a0\ufe0f PO Setianegara\u2019s reputation and passenger trust could be at risk due to inadequate safety measures.<\/span><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<hr \/>\n<p>&nbsp;<\/p>\n<p><b>\ud83c\udfaf Goals<\/b><\/p>\n<ul>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udce1 Develop an automated real-time system to detect signs of drowsiness in bus drivers.<\/span><\/li>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\ud83e\udd16 Integrate Embedded AI, Cloud Computing, and Fault Tolerant Systems to enhance detection accuracy and reliability.<\/span><\/li>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udea8 Alert drivers with a buzzer when drowsiness is detected to prevent accidents.<\/span><\/li>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udcbe Store and analyze detection data in the cloud for ongoing monitoring and management.<\/span><\/li>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udd04 Ensure continuous system operation even during component failures by utilizing local storage during disruptions.<\/span><\/li>\n<\/ul>\n<hr \/>\n<p>&nbsp;<\/p>\n<p><b>\ud83d\udde3\ufe0fSolution Statement:<\/b><span style=\"font-weight: 
\u00a0">
400\">\u00a0<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\ud83e\udde0 <\/span><b>Use the YOLOv8 model to detect driver facial expressions in real time.<\/b><b><br \/>\n<\/b><span style=\"font-weight: 400\"> YOLOv8 is used for its high speed and accuracy in detecting objects such as closed eyes, gaze direction, head tilting, and yawning expressions. This model is trained on facial datasets to recognize signs of drowsiness with high precision.<\/span><\/li>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udcf7 <\/span><b>Use a USB camera as the visual input to capture the driver&#8217;s face.<\/b><b><br \/>\n<\/b><span style=\"font-weight: 400\"> The camera is mounted facing the driver and continuously captures facial images, which are then sent to the Raspberry Pi or the cloud for analysis.<\/span><\/li>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udda5\ufe0f <\/span><b>Use Raspberry Pi 4 as the main computing device with Embedded AI.<\/b><b><br \/>\n<\/b><span style=\"font-weight: 400\"> Raspberry Pi 4 processes the images locally using YOLOv8 without relying entirely on cloud connectivity. This allows the system to operate offline through edge processing.<\/span><\/li>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udce1 <\/span><b>Use Cloud Computing for detection result storage, monitoring, and reporting to management.<\/b><b><br \/>\n<\/b><span style=\"font-weight: 400\"> Each detection result, timestamp, and driver status is sent to a cloud platform (such as Blynk Cloud) to enable centralized remote monitoring by management.<\/span><\/li>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udea8 <\/span><b>Use a buzzer to directly alert the driver when drowsiness is detected.<\/b><b><br \/>\n<\/b><span style=\"font-weight: 400\"> If the system detects signs of drowsiness, the buzzer is activated to emit a loud warning sound, prompting the driver to regain alertness and prevent accidents.<\/span><\/li>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udd01 <\/span><b>Implement a Fault Tolerant System with backup models and dual cameras.<\/b><b><br \/>\n<\/b><span style=\"font-weight: 400\"> If the main YOLOv8 model or the primary camera fails, the system automatically switches to a backup model or redundant camera to ensure continuous operation.<\/span><\/li>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udcbe <\/span><b>Use a local buffer on the Raspberry Pi to store data temporarily during cloud disconnection.<\/b><b><br \/>\n<\/b><span style=\"font-weight: 400\"> If the internet connection is lost, detection data is saved locally in a temporary directory and synchronized with the cloud once connectivity is restored.<\/span><\/li>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udcca <\/span><b>Display real-time data through a web dashboard or cloud-based application (e.g., Blynk).<\/b><b><br \/>\n<\/b><span style=\"font-weight: 
400\"> The system shows the driver&#8217;s status (normal or drowsy), detection logs, and virtual LED alerts on a dashboard, allowing management to monitor the system anytime.<\/span><\/li>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udd12 <\/span><b>Use data encryption and authentication for secure cloud transmission.<\/b><b><br \/>\n<\/b><span style=\"font-weight: 400\"> All images and logs are transmitted via secure protocols and accessible only to authorized parties, ensuring compliance with personal data protection regulations.<\/span><\/li>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">\u2699\ufe0f <\/span><b>Use Agile Development methodology and Object-Oriented Programming for modular and efficient system development.<\/b><b><br \/>\n<\/b><span style=\"font-weight: 400\"> Agile enables iterative development with periodic evaluations, while OOP simplifies the management of system modules such as camera, detection, cloud communication, and buzzer control.<\/span><\/li>\n<li style=\"font-weight: 400;text-align: justify\"><span style=\"font-weight: 400\">With all these solutions, the system can detect drowsiness quickly and accurately, remain resilient against failures (fault-tolerant), and assist fleet management in monitoring driver conditions\u2014enhancing public transport safety.<\/span><\/li>\n<\/ul>\n<hr \/>\n<p>&nbsp;<\/p>\n<p><b>\ud83d\udee0\ufe0f\u2060\u2060Prerequisites \u2013 Component Preparation :<\/b><span style=\"font-weight: 400\">\u00a0<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">Before developing and implementing the drowsiness detection system, several hardware and software components must be properly prepared and configured to ensure optimal system performance. The following is a list of essential requirements and setup steps for each component:<\/span><\/p>\n<p style=\"text-align: justify\"><b>\ud83d\udda5\ufe0f 1. 
Raspberry Pi 4 Model B (4GB RAM)<\/b><\/p>\n<p style=\"text-align: justify\"><b>Function: <\/b><span style=\"font-weight: 400\">Main computation unit for the embedded AI system.<\/span><b><br \/>\n<\/b><b> Preparation:<\/b><\/p>\n<ul style=\"text-align: justify\">\n<li style=\"list-style-type: none\">\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Install the operating system (Raspberry Pi OS 64-bit).<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Configure WiFi and SSH access.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Install Python 3, OpenCV, PyTorch, and supporting libraries.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Set up the USB camera as the visual input source.<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p style=\"text-align: justify\"><b>\ud83d\udcf7 2. USB Camera \/ Webcam (Primary and Redundant)<\/b><\/p>\n<p style=\"text-align: justify\"><b>Function: <\/b><span style=\"font-weight: 400\">Captures real-time images of the driver&#8217;s face.<\/span><b><br \/>\n<\/b><b>Preparation:<\/b><\/p>\n<ul style=\"text-align: justify\">\n<li style=\"list-style-type: none\">\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Test USB connectivity and compatibility with the Raspberry Pi.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Set optimal resolution (e.g., 640&#215;480).<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Prepare a backup camera to ensure fault tolerance in case the primary camera fails.<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p style=\"text-align: justify\"><b>\ud83e\udde0 3. 
YOLOv8 Model (Custom Trained)<\/b><\/p>\n<p style=\"text-align: justify\"><b>Function: <\/b><span style=\"font-weight: 400\">Detects drowsiness based on visual features (closed eyes, head tilt, yawning).<\/span><b><br \/>\n<\/b><b>Preparation:<\/b><\/p>\n<ul style=\"text-align: justify\">\n<li style=\"list-style-type: none\">\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Train the model using a dataset of facial expressions and drowsiness indicators.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Export the trained model to <\/span><span style=\"font-weight: 400\">.pt<\/span><span style=\"font-weight: 400\"> format for inference.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Transfer the model to the Raspberry Pi and create a backup directory.<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p style=\"text-align: justify\"><b>\ud83d\udd0a 4. Buzzer (Alarm Output)<\/b><\/p>\n<p style=\"text-align: justify\"><b>Function: <\/b><span style=\"font-weight: 400\">Provides audio warnings when drowsiness is detected.<\/span><b><br \/>\n<\/b><b>Preparation:<\/b><\/p>\n<ul style=\"text-align: justify\">\n<li style=\"list-style-type: none\">\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Connect the buzzer to the Raspberry Pi\u2019s GPIO pins.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Implement a Python script to trigger the buzzer based on detection results.<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p style=\"text-align: justify\"><b>\ud83c\udf10 5. 
WiFi Connectivity &amp; Cloud Platform (Blynk Cloud)<\/b><\/p>\n<p style=\"text-align: justify\"><b>Function:<\/b><span style=\"font-weight: 400\"> Sends detection data to the cloud and displays it on a real-time dashboard.<\/span><b><br \/>\n<\/b><b>Preparation:<\/b><\/p>\n<ul style=\"text-align: justify\">\n<li style=\"list-style-type: none\">\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Register and create a template on<\/span><a href=\"https:\/\/blynk.cloud\"> <span style=\"font-weight: 400\">blynk.cloud<\/span><\/a><span style=\"font-weight: 400\">.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Set up widgets such as driver status, virtual LED, and terminal log.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Store the authentication token and integrate it into the Python code on the Raspberry Pi.<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p style=\"text-align: justify\"><b>\ud83d\udca1 6. LED (Optional)<\/b><\/p>\n<p style=\"text-align: justify\"><b>Function: <\/b><span style=\"font-weight: 400\">Provides visual indicators of system status (e.g., normal, drowsy, offline).<\/span><b><br \/>\n<\/b><b>Preparation:<\/b><\/p>\n<ul style=\"text-align: justify\">\n<li style=\"list-style-type: none\">\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Connect the LED to GPIO pins.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Program the LED to light up based on the system\u2019s detection output.<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p style=\"text-align: justify\"><b>\ud83d\udcbd 7. 
Cloud Storage \/ Logger (Optional)<\/b><\/p>\n<p style=\"text-align: justify\"><b>Function: <\/b><span style=\"font-weight: 400\">Stores detection logs and error reports for historical analysis.<\/span><b><br \/>\n<\/b><b>Preparation:<\/b><\/p>\n<ul>\n<li style=\"list-style-type: none\">\n<ul style=\"text-align: justify\">\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Set up local logging on the Raspberry Pi.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Enable synchronization with cloud services such as Firebase, Google Drive API, or Blynk Terminal.<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p style=\"text-align: justify\"><b>\ud83d\udd10 8. Data Security &amp; Privacy<\/b><\/p>\n<p style=\"text-align: justify\"><b>Function: <\/b><span style=\"font-weight: 400\">Protects facial image data and detection logs from unauthorized access.<\/span><b><br \/>\n<\/b><b>Preparation:<\/b><\/p>\n<ul style=\"text-align: justify\">\n<li style=\"list-style-type: none\">\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Implement end-to-end encryption for data transmissions.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Enable two-factor authentication (2FA) on the cloud dashboard to enhance access control.<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<hr \/>\n<p>&nbsp;<\/p>\n<p><b>\ud83d\udcc4<\/b><b>Dataset :<\/b><b>\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400\">\ud83d\uddbc\ufe0fPrimary Dataset<\/span><\/p>\n<p><img decoding=\"async\" class=\"wp-image-4536 aligncenter\" src=\"https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/dataset-primer.png\" alt=\"\" width=\"488\" height=\"375\" srcset=\"https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/dataset-primer.png 965w, https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/dataset-primer-300x231.png 300w, 
https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/dataset-primer-768x591.png 768w\" sizes=\"(max-width: 488px) 100vw, 488px\" \/><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">All datasets used in this project were annotated using Roboflow, a visual data annotation platform that supports standard formats for object detection model training, including YOLOv8. The annotation process was conducted manually by marking key areas such as the eyes, face, and expressions that indicate signs of drowsiness. By utilizing Roboflow, the data could be prepared in a structured manner, making it directly compatible with the training process.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">To support the training of the drowsiness detection model, the team used a primary dataset sourced from facial images of each team member. Each member was asked to capture images of their face in various states, particularly when fully alert and when simulating drowsiness through actions like closing their eyes, lowering their head, or yawning. In total, hundreds of images were collected, featuring a variety of viewing angles and lighting conditions.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">All collected images were then annotated in Roboflow as described above. 
This dataset formed the fundamental basis for training the model, as it was sourced directly from the system&#8217;s actual users and represents the conditions the model will face upon implementation.<\/span><\/p>\n<p><span style=\"font-weight: 400\">\ud83d\uddbc\ufe0fSecondary Dataset<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-4631\" src=\"https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/dataset-sekunder.png\" alt=\"\" width=\"1422\" height=\"330\" srcset=\"https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/dataset-sekunder.png 1422w, https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/dataset-sekunder-300x70.png 300w, https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/dataset-sekunder-1024x238.png 1024w, https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/dataset-sekunder-768x178.png 768w\" sizes=\"(max-width: 1422px) 100vw, 1422px\" \/><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">In addition to the data collected by the team, a secondary dataset was used to augment the training data and enrich its visual diversity. This dataset consists of facial images sourced from open platforms, such as Roboflow Universe and various other online sources. The faces included are from a diverse and random pool of individuals, not limited to just the project team members.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">This secondary data encompasses a wide range of facial expressions and conditions, such as individuals wearing glasses, those with beards, or people partially wearing masks, along with a broader spectrum of lighting conditions and viewing angles. 
The objective of incorporating this secondary dataset is to enhance the model&#8217;s ability to recognize faces with more varied characteristics, thereby enabling it to perform more effectively in complex, real-world scenarios.<\/span><\/p>\n<p>&nbsp;<\/p>\n<h3><b>\ud83e\udde9 Schematic<\/b><\/h3>\n<p><span style=\"font-weight: 400\">\ud83d\udcc8Workflow :<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udcf7<\/span><span style=\"font-weight: 400\">Camera &amp; Redundant Camera<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">The primary camera is used to capture real-time facial images of the driver. The system is also equipped with a redundant camera that functions as a failover in case the primary camera experiences a failure. The Raspberry Pi automatically switches to the backup camera upon detecting an error in the visual input stream.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udda5\ufe0f<\/span><span style=\"font-weight: 400\">Raspberry Pi<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">The Raspberry Pi serves as the central control unit for the system. This device receives visual input from the camera and then processes the images using a pre-embedded Machine Learning model. Additionally, the Raspberry Pi controls output components such as the LED and buzzer, records detection results in a log, and transmits data to the cloud via a WiFi network. 
Inference processing is performed locally on the device to guarantee system speed and stability, even when disconnected from the internet.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">\ud83e\udde0<\/span><span style=\"font-weight: 400\">Machine Learning Model<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">The YOLOv8 Machine Learning model is utilized to detect signs of drowsiness based on the driver&#8217;s facial images. This model runs locally on the Raspberry Pi to ensure a fast and responsive inference process, even when operating without an internet connection.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udcbe<\/span><span style=\"font-weight: 400\">Logger<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">The system records detection results, fault-tolerant events (such as camera switching), and inference status into a local log file. This information can also be transmitted to the cloud for remote monitoring.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udea8<\/span><span style=\"font-weight: 400\">LED &amp; Buzzer<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">The system provides notifications to the driver via an LED and a buzzer. The LED illuminates as a visual indicator when the driver is detected in a drowsy state, while the buzzer emits an audible warning to immediately alert the driver. 
Both components are controlled directly by the Raspberry Pi based on the classification results from the YOLOv8 model.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udedcWiFi<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">The WiFi connectivity module on the Raspberry Pi is used to transmit data in real-time to the cloud platform, as well as to enable two-way communication between the local system and the monitoring dashboard.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udcca<\/span><span style=\"font-weight: 400\">Blynk Cloud<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">The Blynk Cloud platform functions as a central hub for data storage and visualization. All information regarding detection results and system status is sent here, allowing it to be monitored by management or vehicle supervisors.<\/span><\/p>\n<p><span style=\"font-weight: 400\">\ud83e\uddf7Diagram :<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\" wp-image-4638 aligncenter\" src=\"https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/diagramm.png\" alt=\"\" width=\"273\" height=\"335\" srcset=\"https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/diagramm.png 511w, https:\/\/filkom.ub.ac.id\/project\/wp-content\/uploads\/sites\/3\/2025\/06\/diagramm-244x300.png 244w\" sizes=\"(max-width: 273px) 100vw, 273px\" \/><\/p>\n<hr \/>\n<p>&nbsp;<\/p>\n<p><b>\u2699\ufe0fDemo and Evaluation :\u00a0<\/b><\/p>\n<p style=\"text-align: justify\"><b>\ud83d\udd27 <\/b><span style=\"font-weight: 400\">Demo \u2013 System Workflow (Step-by-Step)<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udd0c System Initialization<\/span><\/p>\n<ul style=\"text-align: justify\">\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Raspberry Pi 4 is powered on and 
automatically executes the main Python script.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">The USB camera is detected and starts capturing the driver\u2019s facial images continuously.<\/span><\/li>\n<\/ul>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">\ud83e\udde0 Real-Time Drowsiness Detection<\/span><\/p>\n<ul style=\"text-align: justify\">\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Captured images are processed by the YOLOv8 model running locally on the Raspberry Pi.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">YOLOv8 detects key drowsiness features such as:<\/span>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Eyes closed for a specific duration<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Head tilting or nodding<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Yawning facial expressions<\/span><\/li>\n<\/ul>\n<\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">If no drowsiness is detected \u2192 system status remains \u201cNormal\u201d.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">If drowsiness is detected \u2192 the buzzer is triggered and data is sent to the cloud.<\/span><\/li>\n<\/ul>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udea8 Warning Alert Triggered<\/span><\/p>\n<ul style=\"text-align: justify\">\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">A loud buzzer activates to alert or wake the driver.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">An optional LED indicator turns red.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">The system sends the \u201cDrowsy\u201d status to Blynk Cloud and displays it in real-time.<\/span><\/li>\n<\/ul>\n<p style=\"text-align: justify\"><span 
style=\"font-weight: 400\">\u2601\ufe0f Cloud Synchronization<\/span><\/p>\n<ul style=\"text-align: justify\">\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Detection results are transmitted via WiFi to the Blynk Cloud platform.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Data such as:<\/span>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Detection timestamp<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Driver status<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Event logs<\/span><span style=\"font-weight: 400\"><br \/>\n<\/span><span style=\"font-weight: 400\"> are displayed directly on the Blynk web or mobile dashboard.<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udd01 Fault Tolerance Scenarios<\/span><\/p>\n<ul style=\"text-align: justify\">\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">If the primary camera fails \u2192 the system automatically switches to a backup camera.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">If the internet connection is lost \u2192 data is temporarily stored in local buffer storage.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">If the main YOLOv8 model is corrupted \u2192 the system loads a backup model from a secondary directory.<\/span><\/li>\n<\/ul>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">\ud83e\udde9 <\/span><b>Video Link:<\/b><b><br \/>\n<\/b><a href=\"https:\/\/drive.google.com\/file\/d\/1Hd3p5Jb8xAX4xQtBbkJM4rxf8jYIBlB4\/view?usp=sharing\"><b>https:\/\/drive.google.com\/file\/d\/1Hd3p5Jb8xAX4xQtBbkJM4rxf8jYIBlB4\/view?usp=sharing<\/b><\/a><\/p>\n<p style=\"text-align: justify\"><b>\ud83d\udcca Evaluation \u2013 Test Results and Analysis<\/b><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 
\u2705 1.">
400\">\u2705 1. Functional System Testing<\/span><\/p>\n<ul style=\"text-align: justify\">\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">The system was tested with drivers in two simulated conditions:<\/span>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Alert: Eyes open, head upright.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Drowsy: Eyes closed for &gt;2 seconds, head tilted or nodding.<\/span><\/li>\n<\/ul>\n<\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Result: The system accurately distinguishes between the two conditions and activates the buzzer only when drowsiness is detected.<\/span><\/li>\n<\/ul>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">\u23f1\ufe0f 2. Response and Latency Testing<\/span><\/p>\n<ul style=\"text-align: justify\">\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Average time from image capture \u2192 detection \u2192 buzzer activation: &lt; 2 seconds<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Conclusion: The system responds in real time; latency stays low, varying mainly with WiFi stability and inference speed.<\/span><\/li>\n<\/ul>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">\ud83c\udfaf 3. 
Detection Accuracy Testing<\/span><\/p>\n<ul style=\"text-align: justify\">\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">The trained YOLOv8 model was evaluated using a dedicated test dataset:<\/span>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Precision: 91.4%<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Recall: 88.2%<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">F1-Score: 89.7%<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">mAP@0.5: 92.6%<\/span><\/li>\n<\/ul>\n<\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Conclusion: The model is reliable and accurate in detecting facial expressions related to driver drowsiness.<\/span><\/li>\n<\/ul>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">\ud83d\udd01 4. Fault-Tolerant System Testing<\/span><\/p>\n<ul style=\"text-align: justify\">\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Internet disconnected: The system continues saving data locally and syncs automatically when reconnected.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Primary camera removed: The system immediately switches to the backup camera.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Main model deleted: The system loads and runs the backup model.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Conclusion: The fault-tolerant mechanisms perform well in maintaining system reliability.<\/span><\/li>\n<\/ul>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">\ud83c\udf10 5. 
Cloud Integration Testing<\/span><\/p>\n<ul style=\"text-align: justify\">\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Detection data successfully appears on Blynk Dashboard:<\/span>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Driver status: \u201cNormal\u201d or \u201cDrowsy\u201d<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Timestamp of each detection<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Virtual LED and terminal logs<\/span><\/li>\n<\/ul>\n<\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Conclusion: Management can monitor driver conditions remotely and seamlessly via the cloud.<\/span><\/li>\n<\/ul>\n<p style=\"text-align: justify\"><b>\ud83d\udccc Evaluation Summary<\/b><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">The drowsiness detection system works <\/span><b>accurately<\/b><span style=\"font-weight: 400\">, <\/span><b>in real time<\/b><span style=\"font-weight: 400\">, and remains <\/span><b>robust under failure conditions<\/b><span style=\"font-weight: 400\">. With real-time alerts (buzzer), seamless cloud integration, and effective fault tolerance, this system is well-suited for real-world deployment in public transportation, particularly for PO Setianegara bus operations.<\/span><\/p>\n<hr \/>\n<p style=\"text-align: justify\"><b>\u2705Conclusion:<\/b><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">This project presents a comprehensive and intelligent solution for detecting driver drowsiness in real time, addressing a critical safety issue in the transportation industry. 
By integrating advanced computer vision with YOLOv8, edge-based processing on Raspberry Pi 4, and seamless cloud connectivity, the system ensures high detection accuracy while maintaining performance under various environmental conditions.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">The implementation of Embedded AI allows for real-time inference directly on the device, enabling the system to function even without an internet connection. In the event of connectivity loss or hardware failure, the built-in Fault Tolerant System automatically switches to backup hardware or model files, ensuring continuous operation and reliability. Meanwhile, the buzzer and LED alert mechanisms offer immediate, local warnings to drivers, helping to reduce the risk of accidents caused by fatigue.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">In addition to real-time feedback, the integration with Blynk Cloud and a web dashboard provides centralized data logging and visibility for management. This supports proactive safety monitoring across vehicle fleets and enables long-term analysis of driver behavior patterns. <\/span><span style=\"font-weight: 400\">Overall, this solution not only supports safer driving through direct, real-time intervention, but also empowers transportation companies to monitor, evaluate, and improve driver performance. The project demonstrates how embedded AI, IoT, and cloud systems can be harmonized to solve real-world problems in a scalable and practical way.<\/span><\/p>\n<p style=\"text-align: justify\"><span style=\"font-weight: 400\">Future developments could focus on enhancing detection in low-light environments using infrared or thermal cameras, personalizing detection sensitivity through individual driver profiles, and implementing adaptive fatigue management strategies to reduce alert fatigue. 
The system could also be integrated into a larger fleet management platform to provide predictive analytics and automated incident reporting.<\/span><\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introducing bangunIN\ud83d\udc40 A YOLOv8-Based Drowsiness Detection System Integrated with Cloud Computing and a Fault-Tolerant System &nbsp; \ud83d\udcddProject Domain: Project Short Description This project focuses on the development of a driver drowsiness detection system using YOLOv8 object detection, embedded within a Raspberry Pi 4, and integrated with Cloud Computing and a Fault Tolerant System. It is&#8230;<\/p>\n","protected":false},"author":349,"featured_media":4411,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_kad_post_transparent":"default","_kad_post_title":"default","_kad_post_layout":"default","_kad_post_sidebar_id":"","_kad_post_content_style":"default","_kad_post_vertical_padding":"default","_kad_post_feature":"","_kad_post_feature_position":"","_kad_post_header":false,"_kad_post_footer":false,"_kad_post_classname":"","footnotes":""},"categories":[9,1],"tags":[],"class_list":["post-4335","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence-of-thing-aiot","category-capstone"],"_links":{"self":[{"href":"https:\/\/filkom.ub.ac.id\/project\/wp-json\/wp\/v2\/posts\/4335","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/filkom.ub.ac.id\/project\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/filkom.ub.ac.id\/project\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/filkom.ub.ac.id\/project\/wp-json\/wp\/v2\/users\/349"}],"replies":[{"embeddable":true,"href":"https:\/\/filkom.ub.ac.id\/project\/wp-json\/wp\/v2\/comments?post=4335"}],"version-history":[{"count":6,"href":"https:\/\/filkom.ub.ac.id\/project\/wp-json\/wp\/v2\/posts\/4335\/revisions"}],"predecessor-v
ersion":[{"id":4777,"href":"https:\/\/filkom.ub.ac.id\/project\/wp-json\/wp\/v2\/posts\/4335\/revisions\/4777"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/filkom.ub.ac.id\/project\/wp-json\/wp\/v2\/media\/4411"}],"wp:attachment":[{"href":"https:\/\/filkom.ub.ac.id\/project\/wp-json\/wp\/v2\/media?parent=4335"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/filkom.ub.ac.id\/project\/wp-json\/wp\/v2\/categories?post=4335"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/filkom.ub.ac.id\/project\/wp-json\/wp\/v2\/tags?post=4335"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
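The "internet disconnected" fault-tolerance scenario described in the post (buffer detections locally, then sync automatically on reconnection) can be sketched as follows. This is a minimal, hypothetical Python sketch: the class name, file path, and JSON-lines storage format are assumptions for illustration, and the actual Blynk upload call is stood in for by a generic `send` callback, since the post does not specify the project's implementation.

```python
import json
import os


class DetectionBuffer:
    """Local buffer for detection events while the WiFi link is down.

    Hypothetical sketch: the real system's storage format and Blynk
    upload mechanism are not specified in the post.
    """

    def __init__(self, path):
        self.path = path

    def store(self, event):
        # Append one JSON object per line, so a crash mid-write loses
        # at most the record currently being written.
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(event) + "\n")

    def flush(self, send):
        # Once reconnected: replay buffered events in order through
        # `send` (e.g. a Blynk upload function), then clear the buffer.
        if not os.path.exists(self.path):
            return 0
        with open(self.path, encoding="utf-8") as f:
            events = [json.loads(line) for line in f if line.strip()]
        for event in events:
            send(event)
        os.remove(self.path)
        return len(events)


# Usage sketch: buffer while offline, then sync once reconnected.
buf = DetectionBuffer("pending_events.jsonl")
buf.store({"timestamp": "2025-06-30T10:41:11", "status": "Drowsy"})
uploaded = []
buf.flush(uploaded.append)  # `uploaded` now holds the buffered event
```

Appending to a local file and replaying it in order keeps the dashboard's event log chronologically consistent after an outage, which matches the behaviour reported in the "Fault Tolerant System Testing" results.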