Browsing by Subject "neural networks"
Now showing 1 - 6 of 6
Item: A Computational Approach To Cultural Resource Management: Autodetecting Archaeological Features In Satellite Imagery With Convolutional Neural Networks (2019-05-01)
Authors: Willett, Mallory; Walthall, Alex
My thesis proposes the use of convolutional neural networks for the automatic detection of archaeological features in satellite imagery. Cultural heritage sites require constant management, and archaeologists are increasingly turning to satellite imagery to identify and monitor sites from afar. Given the vast amount of visual information in these images and the time required to review them by eye, I propose a different approach to identifying and mapping archaeological features: computer vision, specifically an algorithm called a convolutional neural network, or CNN. By training a CNN on a labeled set of hundreds of instances of the same class of archaeological feature in a landscape, the CNN can learn to identify new instances of that class in previously unseen satellite imagery. This approach reduces the labor required by manual feature extraction or traditional survey, and allows archaeologists to more swiftly identify, and therefore protect, areas of cultural significance. My research on CNNs in other fields, and the inroads made on a proof-of-concept CNN for identifying archaeological features, demonstrate the feasibility of using this type of algorithm to automatically detect archaeological features in satellite imagery.

Item: Exploiting data parallelism in artificial neural networks with Haskell (2009-08)
Authors: Heartsfield, Gregory Lynn; Ghosh, Joydeep; Julien, Christine
Functional parallel programming techniques for feed-forward artificial neural networks trained with backpropagation are analyzed. In particular, the Data Parallel Haskell extension to the Glasgow Haskell Compiler is considered as a tool for achieving data parallelism.
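The data-parallel pattern considered in this item, with per-chunk gradients computed independently and then combined into a single update, can be sketched as follows. The thesis itself uses Data Parallel Haskell; this is only a minimal NumPy sketch of the same idea, with illustrative names and a single sigmoid layer standing in for a full feed-forward network.

```python
import numpy as np

def forward(W, x):
    # single sigmoid layer; a stand-in for a feed-forward network
    return 1.0 / (1.0 + np.exp(-x @ W))

def chunk_gradient(W, x, y):
    # backpropagation through the sigmoid layer for one chunk of examples
    out = forward(W, x)
    delta = (out - y) * out * (1.0 - out)  # dLoss/d(pre-activation)
    return x.T @ delta / len(x)            # average gradient over the chunk

def data_parallel_step(W, x, y, n_chunks=4, lr=0.5):
    # Split the batch and compute each chunk's gradient independently
    # (in Data Parallel Haskell these maps would run in parallel), then
    # combine by a size-weighted average and take one descent step.
    xs = np.array_split(x, n_chunks)
    ys = np.array_split(y, n_chunks)
    grads = [chunk_gradient(W, xc, yc) for xc, yc in zip(xs, ys)]
    sizes = np.array([len(xc) for xc in xs])
    grad = sum(g * s for g, s in zip(grads, sizes)) / sizes.sum()
    return W - lr * grad
```

Because the combined gradient is a size-weighted average of the chunk averages, one step here equals one full-batch gradient-descent step; only the work of computing it is distributed.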
We find much potential and elegance in this method, and determine that a sufficiently large workload is critical to achieving real gains. Several additional features are recommended to increase usability and improve results on small datasets.

Item: Focus On Science, Winter 2002 (University of Texas at Austin, 2002)
Author: University of Texas at Austin

Item: Optimization of Laser Process Parameters Using Machine Learning Algorithms and Performance Comparison (2022)
Authors: Theeda, Sumanth; Ravichander, Bharath Bhushan; Jagdale, Shweta Hanmant; Kumar, Golden
Laser powder bed fusion (L-PBF) can be used to produce near-net-shaped functional metal components. Despite offering high flexibility in producing components with intricate geometries, L-PBF has many constraints in terms of controllability and repeatability because of the large number of processing parameters. There is a need for a robust computational model that can predict the properties of L-PBF parts across a wide range of processing parameters. In this work, several machine-learning algorithms, including Random Forest, k-Nearest Neighbors, XGBoost, Support Vector Machine (SVM), and deep neural networks, are used to model the relation between properties and processing parameters for SS 316L samples prepared by L-PBF. Laser power, scan speed, hatch spacing, scan strategy, volumetric energy density, and density are used as inputs to these models. The developed models are then used to predict and analyze the surface roughness of as-fabricated SS 316L specimens. Predictions and experimental results are compared across the above-mentioned models to evaluate the capabilities and accuracy of each.

Item: Predicting and Controlling the Thermal Part History in Powder Bed Fusion Using Neural Networks (University of Texas at Austin, 2019)
Authors: Merschroth, Holger; Kniepkamp, Michael; Weigold, Matthias
Laser-based powder bed fusion of metallic parts is widely used across different branches of industry.
Although there have been many investigations into improving process stability, thermal history is rarely taken into account. The thermal history describes the part's thermal state throughout the build process as a result of the successive heating and cooling of each layer. Differing thermal boundary conditions can lead to different microstructures. In this paper, a methodology based on neural networks is developed to predict and control the parts' temperature by adjusting the laser power. A thermal imaging system is used to monitor the thermal history and to generate a training data set for the neural network. The trained network is then used to predict and control the parts' temperature. Finally, tensile testing is conducted to investigate the influence of the adjusted process on the mechanical properties of the parts.

Item: Using Supervised Learning Techniques to Predict Television Ratings (2020-05)
Author: Hassell, Jackson
How well a given TV show does is scored by a metric called “rating,” which denotes the percentage of households watching live TV at the time that are tuned into that particular show. Maximizing ratings requires being able to predict them reliably. For my thesis, in collaboration with Austin's public-television station KLRU-TV, I tested a variety of techniques to discern the most accurate model for predicting the rating of a television-show airing. To accomplish this, I created nine regression models, each using a different algorithm proven to work across many kinds of problems: a linear regression model, a k-nearest-neighbors model, an SVM model, a decision-tree model, a bagging ensemble model, a gradient-boosting ensemble model, two kinds of fully connected neural networks (MLPs), and a recurrent neural network. I also created several feature sets, which included Nielsen, IMDb, and engineered features.
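An exhaustive search over models, feature sets, and hyperparameter grids like the one described can be enumerated with the standard library; this is a hypothetical sketch (the model names and grids are illustrative, not the thesis's actual search space):

```python
import itertools

# Hypothetical search space: model families with per-model hyperparameter
# grids, plus a list of candidate feature sets.
models = {
    "knn":     {"n_neighbors": [3, 5, 9]},
    "bagging": {"n_estimators": [10, 50]},
}
feature_sets = ["nielsen", "imdb", "engineered", "all"]

def all_configurations(models, feature_sets):
    # yield every (model, feature set, hyperparameter setting) combination
    for name, grid in models.items():
        keys = sorted(grid)
        for features in feature_sets:
            for values in itertools.product(*(grid[k] for k in keys)):
                yield name, features, dict(zip(keys, values))

configs = list(all_configurations(models, feature_sets))
# 3 knn settings x 4 feature sets + 2 bagging settings x 4 feature sets = 20
```

Each yielded configuration would then be trained and scored, keeping the best result per model family.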
Each model was tested across every combination of feature sets and exhaustively tuned over its hyperparameters to find which method produced the best results. Most models did similarly well under at least one combination of hyperparameters and feature set, the only exception being the linear regression model, which performed poorly across the board. The best model was a tie between the k-nearest-neighbors model and the bagging ensemble model, which both received an R² score of 0.64 when run on all features. Though this is not a perfect score, it corresponds to a mean absolute error of just 0.2, which is small enough to be useful when optimizing program schedules and selling ad space.
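The k-nearest-neighbors model and the R² metric reported above can be illustrated with a minimal NumPy sketch; this is the underlying computation in its simplest form, not the thesis's implementation:

```python
import numpy as np

def knn_predict(x_train, y_train, x_test, k=3):
    # predict each test point as the mean target of its k nearest neighbors
    preds = []
    for x in x_test:
        dist = np.linalg.norm(x_train - x, axis=1)  # Euclidean distances
        nearest = np.argsort(dist)[:k]              # indices of k closest points
        preds.append(y_train[nearest].mean())
    return np.array(preds)

def r2_score(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot;
    # 1.0 is a perfect fit, 0.0 is no better than predicting the mean
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

A production version would also normalize features so that no single feature dominates the distance computation.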