<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Object Detection |</title><link>https://www.fabricionarcizo.com/tags/object-detection/</link><atom:link href="https://www.fabricionarcizo.com/tags/object-detection/index.xml" rel="self" type="application/rss+xml"/><description>Object Detection</description><generator>HugoBlox Kit (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Sat, 11 Apr 2026 00:00:00 +0000</lastBuildDate><image><url>https://www.fabricionarcizo.com/media/icon_hu_da05098ef60dc2e7.png</url><title>Object Detection</title><link>https://www.fabricionarcizo.com/tags/object-detection/</link></image><item><title>Edge AI in Action: Mastering On-Device Inference</title><link>https://www.fabricionarcizo.com/events/cvpr2026/</link><pubDate>Sat, 11 Apr 2026 00:00:00 +0000</pubDate><guid>https://www.fabricionarcizo.com/events/cvpr2026/</guid><description/></item><item><title>Edge AI in Action: Technologies and Applications</title><link>https://www.fabricionarcizo.com/events/cvpr2025/</link><pubDate>Wed, 11 Jun 2025 00:00:00 +0000</pubDate><guid>https://www.fabricionarcizo.com/events/cvpr2025/</guid><description/></item><item><title>Using Machine Learning to Improve the Whiteboard Experience</title><link>https://www.fabricionarcizo.com/supervisions/sandstrom2022/</link><pubDate>Thu, 23 Jun 2022 00:00:00 +0000</pubDate><guid>https://www.fabricionarcizo.com/supervisions/sandstrom2022/</guid><description>&lt;h3 id="abstract"&gt;&lt;strong&gt;Abstract&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Virtual meetings and conferences are becoming increasingly common and mainstream in the workplace. This shift in the use of technology means that other work practices have to adapt to work with virtual meetings. One such practice is writing on whiteboards. Just as some people prefer to read from a book instead of a monitor, a practice like writing on a whiteboard might never be replaced by writing on a tablet. This raises the question of how the whiteboard can be integrated with the virtual world. There are many ideas and potential solutions, such as text recognition, but most of them require the whiteboard to be detected in the first place. To solve this issue, a whiteboard detection model is proposed, composed of a convolutional neural network that classifies whiteboards in real-time videos through semantic image segmentation, and computer vision that processes the outline of the classified whiteboards into a set of points for further analysis and processing.&lt;/p&gt;</description></item><item><title>Object Tracking System (VidIT)</title><link>https://www.fabricionarcizo.com/supervisions/pil2021/</link><pubDate>Fri, 04 Jun 2021 00:00:00 +0000</pubDate><guid>https://www.fabricionarcizo.com/supervisions/pil2021/</guid><description>&lt;h3 id="abstract"&gt;&lt;strong&gt;Abstract&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;This thesis investigates whether an IT product can increase learning in an online setting. It covers background on learning and the development of VidIT, an automated tracking system powered by a smartphone and an Arduino. The system can track people with the help of a motorized pan-tilt mount. The purpose of VidIT is to enhance learning during COVID-19 by enabling students and teachers to record themselves single-handedly while moving around. A survey, a user test, and a performance test were conducted to gather data on the current situation of online teaching and to evaluate the usability and performance of VidIT. Based on the tests, it was concluded that the resulting system worked as intended. However, some improvements are needed to effectively improve learning and teaching in an online setting. These improvements include, but are not limited to, streaming functionality, movement prediction, and faster computation in the object detection algorithm.&lt;/p&gt;</description></item><item><title>Automated Lecturer-Tracking System</title><link>https://www.fabricionarcizo.com/supervisions/balas2018/</link><pubDate>Mon, 10 Sep 2018 00:00:00 +0000</pubDate><guid>https://www.fabricionarcizo.com/supervisions/balas2018/</guid><description>&lt;h3 id="abstract"&gt;&lt;strong&gt;Abstract&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Technological development has brought significant changes to the educational system, resulting in new means of acquiring knowledge. In this project, we focus on video lectures, which provide significant benefits for all students. We aim to enhance the way video lessons are recorded by introducing a system for automatic lecturer tracking. This thesis introduces a new approach to implementing an automated lecturer-tracking system, using a smartphone as a replacement for both the camera device and the processing unit. The proposed solution uses the YOLO real-time object detection system and tracking algorithms from the iOS Vision framework to detect and track the lecturer. A motorized pan-tilt head rotates the smartphone based on the input the smartphone sends to it. Experimental results show that the system performs the desired lecturer-tracking behavior, eliminating the need for human help in the recording process.&lt;/p&gt;</description></item></channel></rss>