Browsing by Subject "Neural network"
Now showing 1 - 20 of 22
Item: A study of instrumental method for suiting fabric hand evaluation and classification (2014-08)
Wang, Keqing; Chen, Jonathan Yan; Craig, Jane
In the textile and apparel industry, fabric end-use preference and selection criteria are largely based on fabric hand because it relates to both the mechanical properties and aesthetic appearance of fabrics. This paper examines a method to grade fabric hand based on Kawabata's measurements and neural network modeling. The proposed method is verified by comparing the hand graded by the neural network model to Kawabata's total hand value. Ninety-five commercial fabrics from different manufacturers were tested using the Kawabata evaluation system (KES-FB). Cluster analysis using SAS classified the suiting fabric samples into four groups in this study. The test results of fabric mechanical properties show similarities and dissimilarities between woven and knitted suiting fabrics. In comparison, woven suiting fabrics are less subject to shear and bending deformation. Knitted fabrics have a higher total hand value than woven fabrics, with a smoother surface. Cluster analysis successfully divided the suiting fabric samples into four groups describing different fabric performance. The training dataset for the neural network model was selected based on information from the clustering results. The trained model proved to be accurate, with a low MSE of 4 × 10⁻⁸. The model successfully graded the test samples with values ranging from 0 to 1. Additionally, the validity of grading fabric hand using the neural network technique was examined by analyzing the correlation between the hand graded by the neural network model and by Kawabata's equations. The regression analysis shows a relatively strong correlation (p < 0.0001, R² = 0.6363) between neural network grades and Kawabata's grades.

Item: Adaptation in a deep network (2011-05)
Ruiz, Vito Manuel; Pillow, Jonathan W.; Miikkulainen, Risto; Fiete, Ila; Geisler, Wilson; Seidemann, Eyal
Though adaptational effects are found throughout the visual system, the underlying mechanisms and benefits of this phenomenon are not yet known. In this work, the visual system is modeled as a Deep Belief Network (DBN), with a novel "post-training" paradigm (i.e., training the network further on certain stimuli) used to simulate adaptation in vivo. An optional sparse variant of the DBN is used to help bring about meaningful and biologically relevant receptive fields, and to examine the effects of sparsification on adaptation in their own right. While results are inconclusive, there is some evidence of an attractive bias effect in the adapting network, whereby the network's representations are drawn closer to the adapting stimulus. As a similar attractive bias is documented in human perception as a result of adaptation, there is thus evidence that the statistical properties underlying the adapting DBN also play a role in the adapting visual system, including efficient coding and optimal information transfer given limited resources. These results hold irrespective of sparsification. As adaptation has never, to the author's knowledge, been tested directly in a neural network, this work sets a precedent for future experiments.
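The "post-training" paradigm above can be imitated in a few lines: pre-train a restricted Boltzmann machine (the building block of a DBN) on a broad stimulus set, continue training on a single adapting stimulus, and watch a probe representation move. This minimal sketch uses scikit-learn's BernoulliRBM and random stand-in stimuli, not the thesis's DBN or data:

import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
# Broad "pre-adaptation" environment: random binary patches (stand-ins).
X = (rng.random((500, 64)) > 0.5).astype(float)

rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(X)

probe = (rng.random((1, 64)) > 0.5).astype(float)    # fixed probe stimulus
adapter = (rng.random((1, 64)) > 0.5).astype(float)  # adapting stimulus

d_before = np.linalg.norm(rbm.transform(probe) - rbm.transform(adapter))

# "Post-training": continue training on the adapting stimulus alone.
adapter_batch = np.repeat(adapter, 10, axis=0)
for _ in range(50):
    rbm.partial_fit(adapter_batch)

d_after = np.linalg.norm(rbm.transform(probe) - rbm.transform(adapter))

# An attractive bias would appear as the probe's hidden representation
# moving toward the adapting stimulus's representation.
print(f"representation distance before: {d_before:.3f}, after: {d_after:.3f}")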
Item: Assisted history matching workflow for unconventional reservoirs (2019-05-13)
Tripoppoom, Sutthaporn; Sepehrnoori, Kamy, 1951-
Information about fracture geometry and reservoir properties can be retrieved from production data, which is always available at no additional cost. However, in unconventional reservoirs, obtaining only one realization is insufficient because the non-uniqueness of history matching and subsurface uncertainties cannot be captured. Therefore, the objective of this study is to obtain multiple realizations in shale reservoirs by adopting Assisted History Matching (AHM). We used a multiple proxy-based Markov Chain Monte Carlo (MCMC) algorithm and the Embedded Discrete Fracture Model (EDFM) to perform AHM. MCMC has the benefit of quantifying uncertainty without bias or becoming trapped in local minima, and using MCMC with a proxy model removes the limitation of the infeasible number of simulations required by a traditional MCMC algorithm. For fracture modeling, EDFM can mimic fracture flow behavior with higher computational efficiency than the traditional local grid refinement (LGR) method and more accuracy than the continuum approach. We applied the AHM workflow to actual shale gas wells. We found that the algorithm can find multiple history-matching solutions and quantify the posterior distributions of fracture and reservoir properties. We then predicted production probabilistically. Moreover, we investigated the performance of a neural network (NN) and k-nearest neighbors (KNN) as the proxy model in the proxy-based MCMC algorithm. We found that the NN performed better in terms of accuracy than KNN, but required twice the running time. Lastly, we studied the effect of an enhanced permeability area (EPA) and the existence of natural fractures on the history-matching solutions and production forecast. We concluded that we would over-predict fracture geometries and properties and estimated ultimate recovery (EUR) if we assumed no EPA or no natural fractures even though they actually existed. The degree of over-prediction depends on fracture and reservoir properties and on EPA and natural fracture properties, which can only be quantified after performing AHM. The benefit of this study is that we can characterize fracture geometry, reservoir properties, and natural fractures in a probabilistic manner. These multiple realizations can be further used for probabilistic production forecasting, future fracturing design improvement, and infill well placement decisions.
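A heavily simplified sketch of the proxy-based MCMC idea (one toy parameter, an analytic "simulator" stand-in, and an MLP proxy in the likelihood; not the thesis workflow itself):

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def simulator(x):
    # Stand-in for an expensive reservoir simulation run.
    return np.sin(3 * x) + 0.5 * x

# Train a cheap proxy on a handful of expensive runs.
X_train = np.linspace(0, 2, 25).reshape(-1, 1)
proxy = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=1)
proxy.fit(X_train, simulator(X_train).ravel())

obs, sigma = simulator(np.array([[1.3]]))[0, 0], 0.1  # "observed production"

def log_post(x):
    if not 0.0 <= x <= 2.0:          # uniform prior on [0, 2]
        return -np.inf
    pred = proxy.predict([[x]])[0]   # proxy replaces the simulator here
    return -0.5 * ((pred - obs) / sigma) ** 2

# Metropolis-Hastings over the single toy parameter.
x, lp, samples = 1.0, log_post(1.0), []
for _ in range(5000):
    x_new = x + 0.1 * rng.standard_normal()
    lp_new = log_post(x_new)
    if np.log(rng.random()) < lp_new - lp:
        x, lp = x_new, lp_new
    samples.append(x)

print("posterior mean/std:", np.mean(samples[1000:]), np.std(samples[1000:]))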
Item: Automatic channel detection using deep learning (2019-06-20)
Pham, Nam Phuong; Fomel, Sergey B.
Picking 3D channel geobodies in seismic volumes is an important objective in seismic interpretation for hydrocarbon exploration. Manual detection of channel geobodies is a time-consuming and subjective process. The interpreter can calculate different seismic attributes, such as coherence, to aid in manual detection of channel geobodies in seismic volumes; however, these attributes still do not directly identify 3D channel geobodies. Machine learning and deep learning are data-driven techniques that have recently been getting more attention in different fields, such as medical imaging and computer vision. With large volumes of data of different types available and the development of powerful computational resources, geophysics is a promising field for applying machine learning and deep learning. Many seismic interpretation steps are analogous to problems in computer vision that have been solved successfully using deep learning. Channel detection in seismic volumes is analogous to image segmentation. Applying deep learning to seismic interpretation, specifically to automatic channel detection in 3D seismic volumes, can make the process faster and the workflow less subjective. Decision-making based on interpretation is uncertain, so uncertainties in interpretation results are very important; deep learning with different algorithms can also help interpreters quantify this uncertainty.

Item: Building effective representations for domain adaptation in coreference resolution (2018-05-04)
Lestari, Victoria Anugrah; Durrett, Greg
Over the past few years, research in coreference resolution, one of the core tasks in Natural Language Processing, has displayed significant improvement. However, the field of domain adaptation in coreference resolution is yet to be explored: Moosavi and Strube [2017] have shown that the performance of state-of-the-art coreference resolution systems drops when the systems are tested on datasets from different domains. We modify e2e-coref [Lee et al., 2017], a state-of-the-art coreference resolution system, to perform well on new domains by adding sparse linguistic features, incorporating information from Wikipedia, and adding a domain adversarial network to the system. Our experiments show that each modification improves the precision of the system. We train the model on the CoNLL-2012 datasets and test it on several datasets: WikiCoref, the pt documents, and the wb documents from CoNLL-2012. Our best results gain 0.50, 0.52, and 1.14 F1 improvement over the baselines of the respective test sets.
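The domain adversarial component mentioned above is commonly built around a gradient reversal layer; a minimal PyTorch sketch follows, with placeholder layer sizes standing in for the actual e2e-coref span representations:

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU())  # stand-in span encoder
domain_clf = nn.Linear(128, 2)                           # source vs. target domain

x = torch.randn(8, 300)                  # a batch of span representations
h = encoder(x)
domain_logits = domain_clf(GradReverse.apply(h, 1.0))

# Minimizing this loss *maximizes* domain confusion in the encoder, because
# the reversed gradient pushes encoder features to be domain-invariant.
loss = nn.CrossEntropyLoss()(domain_logits, torch.randint(0, 2, (8,)))
loss.backward()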
Item: Carbonate factory response and recovery after Ocean Anoxic Event 1a, Pearsall Formation, Central Texas (2020-08-13)
Pedersen, Esben Skjold; Kerans, C. (Charles), 1954-; Larson, Toti Erik
Ocean Anoxic Events (OAEs) are major carbon cycle perturbations that occurred several times in the Mesozoic. OAEs are commonly found to have been caused by a combination of climatic warming and increased surface weathering delivering surface nutrients to the oceans. This feedback loop leads to the expansion of the oxygen minimum zone of the waterbody and increased influx of terrigenous material. The resultant dysoxic to euxinic conditions are thought to have played a prominent role in the suppression of the benthic carbonate factory and the deposition of organic-rich mudstones. The establishment of these oceanographic conditions is postulated to have imparted a lasting effect on the deposition of stressed-carbonate facies during the recovery phase of OAEs. Major questions regarding OAE events remain, including the degree of variability in the impact that OAEs have on carbonate factories and the drivers for this variability, on both global and regional scales. This study builds upon previous work and further investigates the regional Early Cretaceous (Aptian) OAE-1a signal recorded in the Pearsall Formation in Central Texas, with a particular focus on the record of carbonate factory recovery observed in transects from the San Marcos Arch to the Pearsall Arch. Shoreline-proximal data include outcrops and 8 cores with 1,530 ft of coverage. Distal cores include 7 subsurface exploration wells (1,745 ft of core in total) from the San Marcos Arch to the Pearsall Arch, a strike-parallel distance of 210 km. Physical characterization of stratigraphic data was paired with multivariate statistical analysis of 10 pXRF datasets, involving Principal Component Analysis (PCA) segmentation, which led to the establishment of five end-member chemofacies. These chemofacies allow for high-resolution identification of mineralogic variability across OAE-1a, including the documentation of pulses of terrigenous input as well as cycles of dysoxic to euxinic oceanographic conditions at a sub-lithofacies scale. When paired with the development and application of a deep learning neural network trained on a type-pXRF training dataset, this study outlines a new methodology that allows for the direct comparison of pXRF data across core control through a unified chemofacies schema. The oceanographic conditions identified with this workflow are then used to delineate oceanographic variability and pulses of terrigenous enrichment associated with the recovery from OAE-1a. The characterization of these geochemical processes is particularly relevant in the mudrock component of depositional systems, where biologic productivity, bottom-water redox conditions, and any subsequent diagenesis are critical determinants of the ultimate preservation of TOC in organic-rich shales. TOC-rich shale intervals in turn create the potential for an economical petroleum source rock and successive charge of either conventional or unconventional reservoirs. The incidence of OAE-1a is found to be a fundamental driver of facies evolution and faunal composition in the three composite sequences studied: the James (Aptian) composite sequence, the Bexar (Aptian-Albian) composite sequence, and the Glen Rose (Albian) composite sequence (cf. Phelps et al., 2014). OAE-1a is coincident with the drowning of the antecedent Sligo reef margin and deposition of the Pine Island Shale. This drowning event was a result of environmental stressors posed by the OAE and the resultant suppression of sedimentation rates on the platform as the carbonate factory was substantially weakened. Partial recovery of the carbonate factory from OAE-1a is expressed in the deposition of the Cow Creek Member before deposition was punctuated by the subaerial exposure event at the top-James composite sequence boundary. A second phase of recovery is documented in the Bexar and Glen Rose composite sequences, including reef systems in the platform interior that are coeval with transgression and deposition of the Hensel Formation, as well as the progradation of Lower Glen Rose carbonates and the aggradation of microbial-coral-rudist bioherms in highstand depositional sequences of the Glen Rose Formation. Recovery of the carbonate factory was fundamentally different between the San Marcos Arch and Pearsall Arch areas. The earliest recovery fauna in the Cow Creek Member comprises monospecific echinoid-mollusk packstones-grainstones in shoreline-proximal settings and oyster-oncoid rudstones distally. Combined observations from pXRF data and the heightened prevalence of pyrite in oncoid cortices on the San Marcos Arch compared to the Pearsall area are interpreted to represent a higher degree of dysoxic and/or euxinic conditions on the San Marcos Arch. During later stages of recovery, the Cow Creek in the Pearsall Arch area is shown to have maintained healthier carbonate deposition than the San Marcos Arch, including the sustained deposition of reefal assemblages, such as the sequence of stromatoporoid boundstone present in the Tenneco Sirianni well. This combined core-outcrop framework demonstrates the superimposed regional variability inherent even in global carbon cycle perturbations such as OAE-1a, driven by the degree of shelf restriction, oceanographic circulation patterns, basin geometry, and the degree of terrigenous influx. The documented differences in oceanographic conditions and carbonate factory recovery on the regional scale of OAE-1a will aid in better understanding the multi-scaled geochemical and environmental evolution associated with these events, and ultimately push toward the development of predictive concepts for future studies.
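A compact sketch of the PCA-plus-clustering style of chemofacies segmentation described above, using scikit-learn in place of whatever software the study used; the element list and synthetic pXRF table are stand-ins for the study's data:

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Stand-in pXRF table: rows = measurement depths, columns = element concentrations.
elements = ["Ca", "Si", "Al", "Fe", "Ti", "S", "Mo", "V"]
X = rng.lognormal(mean=0.0, sigma=1.0, size=(400, len(elements)))

Z = StandardScaler().fit_transform(np.log(X))  # log-transform, then standardize
pcs = PCA(n_components=3).fit_transform(Z)     # keep the leading components

# Five end-member chemofacies, as in the study.
labels = KMeans(n_clusters=5, n_init=10, random_state=2).fit_predict(pcs)
for k in range(5):
    print(f"chemofacies {k}: {np.sum(labels == k)} samples")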
Item: Deep domain adaptation for label-efficient and generalizable bearing fault diagnosis (2022-05-06)
Liu, Chenkuan; Ward, Rachel, 1983-; Bajaj, Chandrajit
This study presents the application of deep domain adaptation techniques to bearing fault diagnosis. Deep domain adaptation is a data-centric, label-efficient transfer learning approach built on deep neural networks. Bearing fault diagnosis aims to analyze the vibration signals generated by bearing apparatus. Different model structures and experiments are implemented on selected datasets, with associated ablation studies to analyze the effectiveness of domain adaptation compared to vanilla approaches. Our experiments show the strong performance and potential of combining bearing fault diagnosis with advanced machine learning techniques. In addition, detailed analysis suggests that how the bearing data are collected influences the resulting performance, which points to future directions for more accurate and specific approaches to bearing fault diagnosis tasks.

Item: Deep learning and representation: translating deep learning to medicine (2020-05-07)
Snyder, Christopher George; Vishwanath, Sriram; Markey, Mia K; Valvano, Jonathan W; Caramanis, Constantine
Deep learning is a powerful method that uses neural networks to learn functional representations relating variables of interest. This paper examines the manner of representation of those variables by neural networks, and of neural networks by humans. The first section examines causal relations among variables with CausalGAN. The second section explores a theoretical connection between neural networks and support vector machines (SVMs), representing neural network functions through a sample compression scheme. The third section reparameterizes neural networks using Min and Max combinations of linear functions and examines the connection with generalization and interpretation. The final section explores applications of this method to ECG model interpretation.

Item: Design and computational optimization of a flexure-based XY nano-positioning stage (2019-07-09)
Thirumalai Vasu, Sridharan; Cullinan, Michael
This thesis presents the design and computational optimization of a two-axis nano-positioning stage. The devised stage relies on double parallelogram flexure bearings with under-constraint-eliminating linkages to enable motion in the primary degrees of freedom. The structural parameters of the underlying flexures were optimized to provide a large range and high bandwidth with sub-micron resolution while maintaining a compact size. A finite element model was created to establish a functional relationship between the geometry of the flexure elements and the stiffness behavior. A neural network was then trained from the simulation results to explore the design space at low computational expense. The neural net was integrated with a genetic algorithm to optimize the design of the flexures for compactness and dynamic performance. The optimal solutions reduced the stage footprint by 14% and increased the first natural frequency by 75% relative to a baseline design, all while preserving the same 50 mm range in each axis with a factor of safety of 2. This confirms the efficacy of the proposed approach in improving stage performance through optimization of its constituent flexures.
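A toy sketch of the surrogate-plus-genetic-algorithm loop from the previous item, with a simple analytic function standing in for the finite-element-derived stiffness model; the parameter names, bounds, and GA operators are illustrative assumptions:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
lo, hi = np.array([0.2, 0.1, 0.05]), np.array([1.0, 0.5, 0.2])

def fea_stand_in(params):
    # Placeholder for the finite element model: maps flexure geometry
    # (length, width, thickness) to a scalar objective to minimize.
    L, w, t = params.T
    return (L - 0.6) ** 2 + (w - 0.3) ** 2 + (t - 0.1) ** 2

# Train the neural network surrogate on a modest design-of-experiments sample.
X = rng.uniform(lo, hi, size=(200, 3))
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=3).fit(X, fea_stand_in(X))

# Simple genetic algorithm driven by the cheap surrogate instead of FEA.
pop = rng.uniform(lo, hi, size=(40, 3))
for gen in range(60):
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[:20]]            # truncation selection
    children = parents[rng.integers(0, 20, 40)] \
             + 0.02 * rng.standard_normal((40, 3))     # clone + mutate
    pop = np.clip(children, lo, hi)

best = pop[np.argmin(surrogate.predict(pop))]
print("surrogate optimum:", best, "true objective:", fea_stand_in(best[None, :])[0])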
Item: Development of algorithms for service robots in domestic environments (2020-05-11)
Mulder, Dominick Anthony; Sentis, Luis
This thesis focuses on developing software for mobile robots that enables these platforms to perform household tasks. A set of tasks was proposed by the RoboCup Federation for the 2019 RoboCup@Home competition. The primary goal of this competition is to develop a unified robotic system with the capability to assist humans in their daily lives. Efforts toward this competition were made through collaboration with the Austin Villa team at The University of Texas at Austin. Toyota's Human Support Robot served as the robotic platform for the competition. This thesis primarily focuses on a task in which the robot autonomously collects trash bags and transports them to a designated location. Cooking is another household task addressed in this thesis. In particular, this work examines methods of enabling a robotic system to understand the cooking process and use this understanding to make informed decisions. The case study analyzed involves using a Convolutional Neural Network to process image data and estimate when a pancake has been sufficiently cooked. The work in this thesis contributes toward the development of tools for mobile service robots in domestic environments. These tools are implemented and shown to be effective in enabling robots to improve quality of life by assisting in household chores.

Item: Dialog for natural language to code (2017-05-04)
Chaurasia, Shobhit; Mooney, Raymond J. (Raymond Joseph); Gligoric, Milos
Generating computer code from natural language descriptions has been a long-standing problem in computational linguistics. Prior work in this domain has restricted itself to generating code in one shot from a single description. To overcome this limitation, we propose a system that can engage users in a dialog to clarify their intent until it is confident that it has all the information needed to produce correct and complete code. Further, we demonstrate how the dialog conversations can be leveraged for continuous improvement of the dialog system. To evaluate the efficacy of dialog in code generation, we focus on synthesizing conditional statements in the form of IFTTT recipes. IFTTT (if-this-then-that) is a web service that provides event-driven automation, enabling control of smart devices and web applications based on user-defined events.

Item: Extending capability of formal tools: applying semiformal verification on large design (2019-05-10)
Wang, Yuxin, M.S. in Engineering; Abraham, Jacob A.
Simulation and formal verification are the two most commonly used techniques for verifying a digital design described at the Register-Transfer Level (RTL). Compared to simulation, formal verification has the advantage of exhaustive design coverage. However, due to state-space explosion, it is limited in the size of designs it can analyze, and this capacity problem remains a major issue for application to large designs, such as processors. In this thesis, a waypoint-based semiformal verification (SFV) method is proposed to extend formal tool capacity to large designs. Our algorithm uses formal engines to explore traces that hit waypoints, reducing the computation time and memory required to reach a desired state. In addition, an automatic waypoint generation tool is developed. Criteria are developed to identify important flip-flops in the design from which to generate waypoints, based on information from the synthesized netlist. A neural network is trained to score all the flip-flops in the target design. Based on the predicted scores, we set a threshold to select the critical flip-flops and then generate waypoint guides for RTL verification. The process is first studied on a small FIFO example. Then an expandable end-to-end ISA verification framework designed around a RISC-V core is evaluated with the proposed SFV techniques. The results show that waypoint-based SFV and the automatic waypoint generation algorithm have great potential in RTL verification: SFV can save a substantial amount of the time and memory required to cover all important scenarios, compared to direct application of formal verification.
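The scoring-and-thresholding step above might look roughly like the sketch below; the per-flip-flop features, synthetic labels, and 90th-percentile threshold are all invented for illustration and are not the criteria developed in the thesis:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
# Hypothetical per-flip-flop features from the synthesized netlist:
# [fan-in, fan-out, logic depth, connectivity to control signals]
features = rng.random((300, 4))
# Hypothetical importance labels (in practice, derived from netlist criteria).
labels = 0.5 * features[:, 0] + 0.3 * features[:, 2] + 0.1 * rng.random(300)

scorer = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=4)
scorer.fit(features[:200], labels[:200])

scores = scorer.predict(features[200:])
threshold = np.quantile(scores, 0.9)       # keep the top 10% as critical
critical_ffs = np.where(scores >= threshold)[0]
print(f"{len(critical_ffs)} flip-flops selected as waypoint candidates")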
Item: Grid cell attractor networks: development and implications (2015-12)
Widloski, John Eric; Fiete, Ila; Marder, Michael P., 1960-; Gordon, Vernita; Pillow, Jonathan; Swinney, Harry
At the foundation of our ability to plan trajectories in complex terrain is a basic need to establish one's positional bearings in the environment, i.e., to self-localize. How does the brain perform self-localization? How does a network of neurons conspire to solve this task? How does it self-organize? Given that there might be multiple solutions to this problem, with what certainty can we say that any such model faithfully captures the neural structure and dynamics as it exists in the brain? This thesis presents a collection of three theoretical works aimed at addressing these problems, with a particular focus on biological plausibility and amenability to experimental testing. I first introduce the context within which the work in the thesis is situated. Chapter 1 provides a framework for understanding algorithmically how the brain might solve the problem of self-localization and how a neural circuit could be organized to perform self-localization based on the integration of self-motion cues, an operation known as path integration. We also introduce the neurobiology that underlies self-localization, with special emphasis on the cell types found in and around the hippocampus. We discuss the case that a particular class of cells – grid cells – subserves path integration, because of their peculiar spatial response properties and their anatomical positioning as recipients of self-motion information. Continuous attractor models are introduced as the favored description of the grid cell circuit. Key open questions are introduced as motivation for the subsequently described work. I next focus on the question of how the grid cell circuit may have organized. In Chapter 2, it is demonstrated that an unstructured immature neural network, when subjected to biologically plausible inputs and learning rules, can learn to produce grid-like spatial responses and perform path integration. This model makes a number of predictions for experiment, which are described at length. In Chapter 3, I describe a theoretically motivated experimental probe of the organization and dynamics of the grid cell circuit. The proposed experiment relies on sparse neural recordings of grid cells together with global perturbations of the circuit (and is thus experimentally feasible). It promises to yield special insight into the hidden structure of the grid cell circuit. Finally, in Chapter 4, I provide an analytical treatment of pattern formation dynamics in the grid cell circuit, focusing on nonlinear effects.
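The continuous attractor models referred to above are commonly written as rate dynamics of the following generic textbook form (a standard formulation, not the specific model developed in the thesis):

\tau \frac{dr_i}{dt} = -r_i + f\Big(\sum_j W_{ij}\, r_j + b + \epsilon\, v(t)\, A_i\Big)

Here r_i is the firing rate of neuron i, W_{ij} is a center-surround ("Mexican hat") recurrent weight matrix, f is a rectifying nonlinearity, b is a constant drive, and the velocity input v(t), weighted by each neuron's direction preference A_i, asymmetrically biases the recurrent drive so that the stable activity bump translates across the network, implementing path integration.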
Item: hIPPYLearn: an inexact stochastic Newton-CG method for training neural networks (2017-06-23)
Liu, Di, active 21st century; Ghattas, Omar N.; Dawson, Clint N
In recent years, neural networks, as part of deep learning, became popular because of their ability to extract information from data and generalize it to new input. More and more classic problems are getting better solutions with the help of neural networks; one example is Google's AlphaGo, built with neural networks, which beat Lee Sedol, a Go champion. In this paper, we study the use of Newton methods to train neural networks. Our algorithms are implemented in hIPPYLearn, a new package based on TensorFlow, the Google machine learning software. Newton-CG demonstrates improvements in speed and accuracy over steepest descent for training neural networks. In this report, we also compare the stochastic Newton-CG method with the batch Newton-CG method; stochastic Newton-CG shows a great improvement in speed at the cost of a small loss in accuracy. The choice of the optimal amount of regularization is also discussed: we find that a good selection of the β value speeds up training, avoids overfitting, and therefore leads to good accuracy.
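The batch Newton-CG ingredients above (loss, gradient, Hessian-vector product) can be sketched with scipy rather than hIPPYLearn itself; a tiny L2-regularized logistic regression, in effect a one-neuron network, stands in for the networks in the report, with beta as the regularization weight:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(float)
beta = 1e-2  # regularization weight

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w):
    p = sigmoid(X @ w)
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)) + beta * w @ w

def grad(w):
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y) + 2 * beta * w

def hessp(w, v):
    # Hessian-vector product: never forms the Hessian, which is what makes
    # CG-based Newton methods practical for large parameter counts.
    p = sigmoid(X @ w)
    return X.T @ ((p * (1 - p)) * (X @ v)) / len(y) + 2 * beta * v

res = minimize(loss, np.zeros(5), jac=grad, hessp=hessp, method="Newton-CG")
print("converged:", res.success, "final loss:", res.fun)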
Item: Integration of spatial data context into machine learning models (2022-10-03)
Liu, Wendi; Pyrcz, Michael; Foster, John T; Lake, Larry W; Prodanović, Maša; Prochnow, Shane J
The oil and gas industry, over its long history, has accumulated a large volume of spatial data from sources such as seismic surveys, well logs, and production records, offering huge potential for data analytics and machine learning to assist physics-based models. The ongoing digital transformation of the oil and gas industry underscores this opportunity. In addition, the nature of unconventional resources poses the challenges of high uncertainty in data measurements and less well-understood production mechanisms. These challenges bring data-driven solutions such as machine learning to our attention to support reservoir modeling and decision-making. However, data-driven tools tend to ignore essential spatial context when applied directly to subsurface data, which may cause unrealistic results and high-cost risks. The spatial context referred to here includes spatial continuity, heterogeneity, sampling bias, and data sparsity relative to the reservoir scale. Also, in traditional workflows, machine learning, and especially deep learning, is used as a black-box tool that is hard for domain experts to interrogate, and data sparsity renders models with large numbers of parameters prone to overfitting. Given the high value of subsurface development decisions, models must be understandable to support decision-making. Therefore, it is critical to customize current machine learning and data analytics techniques to account for the spatial, multivariate, and multiscale information in the data, and to bring domain knowledge and physics constraints into the workflow, providing model interpretability. This dissertation bridges the gap between widely applied data-driven workflows and the complex spatial context of subsurface data. It presents interpretable, data-driven workflows specifically designed for spatial, multivariate, multiscale data, with domain expertise and physics constraints, for more accurate subsurface predictions with greater confidence for decision support. The proposed workflows improve classical data-driven predictions in subsurface applications by integrating spatial context, physics constraints, and domain expertise through innovative use of geostatistics and graph neural networks. The novel, comprehensive workflows cover spatial data analytics steps from data pre-processing, including sampling bias mitigation, spatial feature engineering, and anomaly segmentation, to production forecasting to support decision-making.

Item: Investigatory Brain-Computer Interface utilizing a single EEG sensor (2013-05)
Shamlian, Daniel G.; Abraham, Jacob A.
A Human-Machine Interface is a device that allows humans to interact with and use machines. One such device is a Brain-Computer Interface (BCI), which allows the user to communicate with a computer system through thought patterns. A commonly used technique, electroencephalography (EEG), uses multiple sensors positioned on the subject's cranium to extract electrical changes as a representation of thought patterns. This report investigates the use of a single EEG sensor as a user-friendly BCI implementation. The primary goal of this report is to determine if specific mental tasks can be reliably detected with such a system.
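Single-sensor detection of the kind investigated above often reduces to band-power thresholding; a minimal sketch on synthetic data follows (sampling rate, frequency bands, and the decision rule are illustrative assumptions, not the report's design):

import numpy as np

rng = np.random.default_rng(6)
fs = 256                        # sampling rate (Hz), illustrative
t = np.arange(fs * 2) / fs      # a 2-second window from one sensor

# Synthetic single-channel EEG: noise plus an alpha-band (10 Hz) component
# whose strength stands in for the mental state to be detected.
signal = 0.5 * rng.standard_normal(len(t)) + 1.5 * np.sin(2 * np.pi * 10 * t)

def band_power(x, fs, lo, hi):
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

alpha = band_power(signal, fs, 8, 12)
reference = band_power(signal, fs, 20, 40)   # a comparison band

# Crude detector: declare the mental task "present" when alpha power
# dominates the reference band by some margin.
print("task detected:", alpha > 3 * reference)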
Item: Kubernetes provenance (2020-09-14)
Lin, William, M.S. in Computer Sciences; Chidambaram, Vijay; Rossbach, Christopher J.
The field of machine learning (ML) has experienced a renaissance since the 2000s. First, exponential increases in computational power and improvements in hardware have finally allowed machine learning algorithms to process data in minutes and hours rather than hundreds of years. Second, the cloud computing model made large-scale clusters inexpensive and available to anyone at the click of a button, allowing users to scale their algorithms without having to personally maintain hundreds or even thousands of machines. However, despite the huge rise in popularity of machine learning in both research and industry, the ML community is facing a reproducibility crisis. Although existing machine learning frameworks can all re-execute the same piece of code saved by a researcher, a typical workflow may involve different frameworks and accesses to data on remote machines. These cross-framework workflows cannot be replicated by a single framework's provenance system, and often contain customized scripts and processes that further obscure future replication and repeatability. I argue in this thesis that, because of machine learning's need for scale and frequent training on large clusters, Kubernetes serves as a good common layer for the systems community to interpose provenance collection, aiding the ML community in reproducing results that make use of multiple machines, frameworks, and hardware platforms. In addition, I propose two new mechanisms for collecting fine-grained provenance information from Kubernetes without modifying the application or host operating system.

Item: Pre-injection reservoir evaluation at Dickman Field, Kansas (2011-08)
Phan, Son Dang Thai; Sen, Mrinal K.; Srinivasan, Sanjay; Grand, Stephen
I present results from a quantitative evaluation of the capability of a carbonate brine reservoir at Dickman Field, Kansas, to host and trap CO₂. The analysis includes estimation of reservoir parameters such as porosity and permeability of the formation using pre-stack seismic inversion, followed by simulation of the flow of injected CO₂ using a simple injection technique. Liner et al. (2009) carried out a feasibility study to seismically monitor CO₂ sequestration at Dickman Field. Their approach is based on examining changes in seismic amplitudes at different production time intervals to show the effects of injected gas within the host formation; they employ Gassmann's fluid substitution model to calculate the parameters required for the seismic amplitude estimation. In contrast, I employ pre-stack seismic inversion to estimate several important reservoir parameters (P-impedance, S-impedance, and density), which can be related to changes in subsurface rocks due to injected gas. These are then used to estimate reservoir porosity via multi-attribute analysis. The estimated porosity falls within the reported range of 8-25%, with an average of 19%. Permeability is obtained from porosity by assuming a simple mathematical relationship between the two and classifying the rocks into classes using the Winland R35 rock classification method. I finally perform flow simulation for a simple injection technique that involves direct injection of CO₂ gas into the target formation within a small region of Dickman Field. The simulator accounts for three trapping mechanisms: residual trapping, solubility trapping, and mineral trapping. The flow simulation predicts unnoticeable changes in the porosity and permeability of the target formation. The injected gas is predicted to migrate upward quickly while migrating slowly in lateral directions, and a large amount of gas concentrates around the injection well bore. My flow simulation results thus suggest low trapping capability for the original target formation unless a more advanced injection technique is employed. They further suggest that a formation below the original target reservoir, with high and continuously distributed porosity, is perhaps a better candidate for CO₂ storage.
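The porosity-to-permeability step above is typically done by inverting the Winland R35 relation, log10(R35) = 0.732 + 0.588 log10(k) - 0.864 log10(phi), with k in mD, phi in percent, and R35 (pore throat radius at 35% mercury saturation) in microns. A short sketch follows, with illustrative R35 class values rather than those used in the thesis:

import numpy as np

def winland_permeability(porosity_pct, r35_um):
    """Invert the Winland R35 relation for air permeability (mD).

    Winland/Kolodzie form: log10(R35) = 0.732 + 0.588*log10(k) - 0.864*log10(phi),
    with phi in percent, k in millidarcies, and R35 in microns.
    """
    return 10 ** ((np.log10(r35_um) - 0.732
                   + 0.864 * np.log10(porosity_pct)) / 0.588)

# Illustrative R35 values (microns) for a few rock classes.
for rock_class, r35 in {"megaporous": 10.0, "macroporous": 4.0,
                        "mesoporous": 1.5, "microporous": 0.4}.items():
    k = winland_permeability(porosity_pct=19.0, r35_um=r35)
    print(f"{rock_class:12s} R35 = {r35:5.1f} um -> k ~ {k:8.2f} mD")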
Item: Quadcopter stabilization with neural network (2016-12)
Burman, Prateek; Julien, Christine, D. Sc.
UAVs (Unmanned Aerial Vehicles), also known as drones, are becoming attractive in the consumer space due to their relatively low cost and their ability to operate autonomously with minimal human intervention. A user could program a drone with GPS coordinates, and the drone would comply with utmost precision. For a drone to fly a preprogrammed flight path, it requires a host of sensors to gather data and operate on that data in real time. For instance, a consumer drone typically has obstacle-avoidance sensors, a GPS sensor for routing and navigation, and an IMU (Inertial Measurement Unit) for tracking position and orientation. These sensors play a crucial role in both stabilization and navigation of the drone. This report aims to investigate, analyze, and understand the complexity involved in designing and implementing an autonomous quadcopter, specifically the stabilization algorithms. In general, stabilization is achieved using some form of control algorithm. The report covers a popular approach to stabilization (PID control) found in many open-source libraries and contrasts it with an alternative machine learning approach (neural networks); a minimal PID sketch follows the final item below. Finally, a machine-learning-based algorithm is implemented and evaluated on a prototype quadcopter, and its results are presented.

Item: Standardization for intelligent detection and autonomous operation of non-structured hardware, and its application on railcar brake release operation (2015-05)
Hammel, Christopher Scott; Tesar, Delbert; Ashok, Pradeepkumar
This thesis introduces a standard framework for evaluating and planning desired autonomous (or semi-autonomous) operations, then applies the framework in detail to the task of automating emergency brake release before rail-car decoupling. A significant hurdle to be accounted for is the lack of standardization of much of the hardware of interest in industry. Non-standardized rail car components must be formally structured as fully as possible to improve the reliability of the robotic automation. The brake release task requires either pushing or pulling a "bleed rod" that protrudes from the side of each rail car. The requirements for each step of the evaluation and planning process are laid out in this thesis as an example of how it should be applied to future automation tasks.
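As referenced in the quadcopter stabilization item above, the PID baseline contrasted with the neural network approach fits in a few lines; a minimal single-axis sketch follows, with the toy plant, gains, and timestep all illustrative choices rather than the report's implementation:

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy single-axis attitude loop: drive the roll angle to 0 from a disturbance.
dt = 0.01
pid = PID(kp=2.0, ki=0.5, kd=0.8, dt=dt)
angle, rate = 0.3, 0.0            # initial roll angle (rad) and angular rate
for _ in range(500):
    torque = pid.update(0.0, angle)
    rate += torque * dt           # unit-inertia rigid body, no drag (toy plant)
    angle += rate * dt
print("final roll angle:", round(angle, 4))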