<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Classification |</title><link>https://www.fabricionarcizo.com/tags/classification/</link><atom:link href="https://www.fabricionarcizo.com/tags/classification/index.xml" rel="self" type="application/rss+xml"/><description>Classification</description><generator>HugoBlox Kit (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Fri, 26 May 2023 00:00:00 +0000</lastBuildDate><image><url>https://www.fabricionarcizo.com/media/icon_hu_da05098ef60dc2e7.png</url><title>Classification</title><link>https://www.fabricionarcizo.com/tags/classification/</link></image><item><title>Retrieval, Visualization, and Analysis of Graffiti in Copenhagen</title><link>https://www.fabricionarcizo.com/supervisions/espersen2023/</link><pubDate>Fri, 26 May 2023 00:00:00 +0000</pubDate><guid>https://www.fabricionarcizo.com/supervisions/espersen2023/</guid><description>&lt;h3 id="abstract"&gt;&lt;strong&gt;Abstract&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;This project investigates the relationship between the occurrence of graffiti and various factors in the Amager districts of Copenhagen. The research focuses on social factors such as population density and income groups; however, the relationship between crime and areas with graffiti is also examined. Through geospatial analysis and machine learning models, the project explores patterns and correlations associated with graffiti. The geospatial analysis showed that certain areas have a higher concentration of graffiti, with urban areas exhibiting more than residential areas. The machine learning models showed limited success in predicting the occurrence of graffiti based solely on income and population density but achieved moderate accuracy in identifying graffiti tags. The findings suggest that factors beyond income and population density may contribute to graffiti occurrence, and further research is needed to explore additional factors and improve the predictive models. Overall, this project provides valuable insights into the distribution and potential influencing factors of graffiti, contributing to a better understanding of this urban phenomenon.&lt;/p&gt;</description></item><item><title>Correct Disc Golf Form: Classification of the Backhand Throw using Neural Networks</title><link>https://www.fabricionarcizo.com/supervisions/jensen2022/</link><pubDate>Fri, 24 Jun 2022 00:00:00 +0000</pubDate><guid>https://www.fabricionarcizo.com/supervisions/jensen2022/</guid><description>&lt;h3 id="abstract"&gt;&lt;strong&gt;Abstract&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Form is essential when analyzing and reviewing a backhand disc golf throw. The form defines whether the throw is performed correctly, and the poses of the body define the form. By looking at the body poses, the throw can be classified, critiqued, and improved upon. The form consists of different motions, which are analyzed using 3D data extracted with machine learning solutions from a data set of recorded disc golf throws. By processing the 3D data from recorded throws, the form is classified into three classes that represent the start, middle, and end of the throw. The three classes are shown as clusters using Principal Component Analysis (PCA). The PCA showed more overlapping clusters for the start and middle of the throw than for the end. Classification solutions include a variation of trained LSTM networks and a solution using MediaPipe Pose Classification. The paper concludes that the LSTM models perform faster and more accurately than the solution using MediaPipe Pose Classification when analyzing disc golf throws. However, the classification only provides insight into the different forms and not the quality of the form.&lt;/p&gt;</description></item><item><title>Using Machine Learning to Improve the Whiteboard Experience</title><link>https://www.fabricionarcizo.com/supervisions/sandstrom2022/</link><pubDate>Thu, 23 Jun 2022 00:00:00 +0000</pubDate><guid>https://www.fabricionarcizo.com/supervisions/sandstrom2022/</guid><description>&lt;h3 id="abstract"&gt;&lt;strong&gt;Abstract&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Virtual meetings and conferences are becoming more common and mainstream in the workplace. This shift in the use of technology means that other work practices have to adapt to virtual meetings. One such practice is writing on whiteboards. Just as some people prefer to read from a book instead of a monitor, a practice like writing on a whiteboard might never be replaced by writing on a tablet. This raises the question of how the whiteboard can be integrated with the virtual world. There are many ideas and potential solutions for this, such as text recognition, but most of them require the whiteboard to be detected in the first place. To solve this issue, a whiteboard detection model is proposed. It is composed of a convolutional neural network that classifies whiteboards in real-time videos through semantic image segmentation, and computer vision techniques that process the outline of the classified whiteboards into a set of points that can be used for further analysis and processing.&lt;/p&gt;</description></item><item><title>Machine Learning in Android Applications</title><link>https://www.fabricionarcizo.com/supervisions/karlsson2020/</link><pubDate>Fri, 29 May 2020 00:00:00 +0000</pubDate><guid>https://www.fabricionarcizo.com/supervisions/karlsson2020/</guid><description>&lt;h3 id="abstract"&gt;&lt;strong&gt;Abstract&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;This project is motivated by the ever-increasing popularity of machine learning techniques for solving repetitive everyday tasks, as well as the availability and importance of smartphones in today&amp;rsquo;s society. The combination of the two creates an environment in which the use of machine learning to simplify mundane tasks in mobile applications can be explored. This project is a study of the use of machine learning in the context of such a mobile application, and it specifically uses the Firebase ML Kit mobile SDK in that pursuit. The project includes the development of an application that allows users to generate descriptions of electronic devices they wish to post for sale on online marketplaces. The application utilizes machine learning and natural language generation to present the user with a textual description of the image submitted to the application. This project gives an overview of central machine learning principles and goes into detail about the concepts relevant to solving the problem in question, namely classification and neural networks. It also describes the process of implementing the application, how Firebase ML Kit provides machine learning capabilities, and how SimpleNLG provides natural language generation functionality to the application. The project further reflects on the application created and the use of ML Kit therein.&lt;/p&gt;</description></item><item><title>Computer Vision for the EvoBot</title><link>https://www.fabricionarcizo.com/supervisions/schnack2016/</link><pubDate>Mon, 13 Jun 2016 00:00:00 +0000</pubDate><guid>https://www.fabricionarcizo.com/supervisions/schnack2016/</guid><description>&lt;h3 id="abstract"&gt;&lt;strong&gt;Abstract&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Automated production using robots plays a significant role in chemistry, biotechnology, and microbiology. Robots designed to perform a specific task are, in the long run, cheaper and much more efficient than humans. In research, however, most tasks are done by humans even though many of them are cumbersome and repetitive. What complicates the introduction of robots in research are the small variations that are frequently introduced in the task sequences. In this master&amp;rsquo;s thesis, we contribute to a project called the EvoBot, which seeks to make robots an integral part of research in order to lower costs and speed up the process of experimenting. We develop a proof-of-concept computer vision application for the EvoBot that enables it to find, locate, and classify petri dishes and well plates. We present the design of the system, implemented using the OpenCV framework, along with physical modifications to the EvoBot. In addition to the vision system, we develop a framework to test its precision and accuracy and lay the groundwork for future improvements. We evaluate different approaches based on the accuracy and precision results of the detection methods experimented with. The evaluation indicates that detection without the use of tagging is feasible for use in industry, with the introduction of some future improvements.&lt;/p&gt;</description></item></channel></rss>