<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Neural Networks |</title><link>https://www.fabricionarcizo.com/tags/neural-networks/</link><atom:link href="https://www.fabricionarcizo.com/tags/neural-networks/index.xml" rel="self" type="application/rss+xml"/><description>Neural Networks</description><generator>HugoBlox Kit (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Fri, 24 Jun 2022 00:00:00 +0000</lastBuildDate><image><url>https://www.fabricionarcizo.com/media/icon_hu_da05098ef60dc2e7.png</url><title>Neural Networks</title><link>https://www.fabricionarcizo.com/tags/neural-networks/</link></image><item><title>Correct Disc Golf Form: Classification of the Backhand Throw using Neural Networks</title><link>https://www.fabricionarcizo.com/supervisions/jensen2022/</link><pubDate>Fri, 24 Jun 2022 00:00:00 +0000</pubDate><guid>https://www.fabricionarcizo.com/supervisions/jensen2022/</guid><description>&lt;h3 id="abstract"&gt;&lt;strong&gt;Abstract&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Form is essential when analyzing and reviewing a backhand disc golf throw. The form determines whether the throw is performed correctly, and the poses of the body define the form. By looking at the body poses, the throw can be classified, critiqued, and improved upon. The form consists of different motions, which are analyzed using 3D data collected with machine learning solutions from a data set of recorded disc golf throws. By processing the 3D data from recorded throws, the form is classified into three classes that represent the start, middle, and end of the throw. The three classes are visualized as clusters using Principal Component Analysis (PCA), which showed more overlap between the start and middle clusters than with the end cluster. The classification solutions include variations of trained LSTM networks and a solution based on MediaPipe Pose Classification. The paper concludes that the LSTM models classify disc golf throws faster and more accurately than the MediaPipe Pose Classification solution. However, the classification only identifies the different forms; it does not assess the quality of the form.&lt;/p&gt;</description></item><item><title>Using Machine Learning to Improve the Whiteboard Experience</title><link>https://www.fabricionarcizo.com/supervisions/sandstrom2022/</link><pubDate>Thu, 23 Jun 2022 00:00:00 +0000</pubDate><guid>https://www.fabricionarcizo.com/supervisions/sandstrom2022/</guid><description>&lt;h3 id="abstract"&gt;&lt;strong&gt;Abstract&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Virtual meetings and conferences are becoming more common and more mainstream in the workplace. This shift in the use of technology means that other work practices have to adapt to virtual meetings. One such practice is writing on whiteboards. Just as some people prefer to read from a book instead of a monitor, writing on a whiteboard might never be replaced by writing on a tablet. This raises the question of how the whiteboard can be integrated with the virtual world. There are many ideas and potential solutions, such as text recognition, but most of them require the whiteboard to be detected in the first place. To solve this issue, a whiteboard detection model is proposed. It is composed of a convolutional neural network, which classifies whiteboards in real-time video through semantic image segmentation, and computer vision techniques, which process the outline of the classified whiteboards into a set of points that can be used for further analysis and processing.&lt;/p&gt;</description></item><item><title>Machine Learning in Android Applications</title><link>https://www.fabricionarcizo.com/supervisions/karlsson2020/</link><pubDate>Fri, 29 May 2020 00:00:00 +0000</pubDate><guid>https://www.fabricionarcizo.com/supervisions/karlsson2020/</guid><description>&lt;h3 id="abstract"&gt;&lt;strong&gt;Abstract&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;This project is motivated by the ever-increasing popularity of machine learning techniques for solving repetitive everyday tasks, as well as the availability and importance of smartphones in today&amp;rsquo;s society. The combination of the two creates an environment in which the use of machine learning to simplify mundane tasks in mobile applications can be explored. This project is a study of the use of machine learning in the context of such a mobile application, and specifically uses the Firebase ML Kit mobile SDK in that pursuit. The project includes the development of an application that allows users to generate descriptions of electronic devices they wish to post for sale on online marketplaces. The application uses machine learning and natural language generation to present the user with a textual description of the image submitted to the application. This project gives an overview of central machine learning principles and goes into detail about the concepts relevant to solving the problem in question, namely classification and neural networks. It also describes the process of implementing the application, how Firebase ML Kit provides machine learning capabilities, and how SimpleNLG provides natural language generation functionality to the application. The project further reflects on the application created and the use of ML Kit therein.&lt;/p&gt;</description></item></channel></rss>