Browsing by Subject "3D"
Now showing 1 - 13 of 13
Item 3D motion : encoding and perception (2018-05)
Bonnen, Kathryn L.; Huk, Alexander C.; Cormack, Lawrence K.; Carvalho, Carlos; Fiete, Ila; Geisler, Wilson; Hayhoe, Mary
The visual system supports perception and inferences about events in a dynamic, three-dimensional (3D) world. While remarkable progress has been made in the study of visual information processing, the existing paradigms for examining visual perception and its relation to neural activity often fail to generalize to perception in the real world, which has complex dynamics and 3D spatial structure. This thesis focuses on the case of 3D motion, developing dynamic tasks for studying visual perception and constructing a neural coding framework to relate neural activity to perception in a 3D environment. First, I introduce target-tracking as a psychophysical method and develop an analysis framework based on state space models and the Kalman filter. I demonstrate that target-tracking in conjunction with a Kalman filter analysis framework produces estimates of visual sensitivity that are comparable to those obtained with a traditional forced-choice task and a signal detection theory analysis. Next, I use the target-tracking paradigm in a series of experiments examining 3D motion perception, specifically comparing the perception of frontoparallel motion with the perception of motion-through-depth. I find that continuous tracking of motion-through-depth is selectively impaired due to the relatively small retinal projections resulting from motion-through-depth and the slower processing of binocular disparities. The thesis then turns to the neural representation of 3D motion and how it underlies perception. First, I introduce a theoretical framework that extends the standard neural coding approach by incorporating the environment-to-retina transformation. Neural coding typically treats the visual stimulus as a direct proxy for the pattern of stimulation that falls on the retina. Incorporating the environment-to-retina transformation results in a neural representation fundamentally shaped by the projective geometry of the world onto the retina. This model explains substantial anomalies in existing neurophysiological recordings of primate visual cortical neurons during presentations of 3D motion and in psychophysical studies of human perception. In a series of psychophysical experiments, I systematically examine the predictions of the model for human perception by observing how perceptual performance changes as a function of viewing distance and eccentricity. Performance in these experiments suggests a reliance on a neural representation similar to the one described by the model. Taken together, the experimental and theoretical findings reported here advance the understanding of the neural representation and perception of the dynamic 3D world, and add to the behavioral tools available to vision scientists.
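As an illustration of the tracking-plus-Kalman-filter analysis described above, the sketch below runs a scalar Kalman filter on noisy observations of a one-dimensional random-walk target; the observation-noise parameter stands in for sensory uncertainty, and larger values produce the sluggish, low-gain tracking associated with lower sensitivity. The model, parameter values, and function names are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def kalman_track(observations, q=1.0, r=4.0):
    """Scalar Kalman filter for tracking a random-walk target.
    q: process-noise variance; r: observation-noise variance
    (r stands in for sensory uncertainty; larger r -> sluggish tracking)."""
    x_hat, p = 0.0, 1.0           # state estimate and its variance
    estimates = []
    for z in observations:
        p += q                    # predict: the random walk grows uncertainty
        k = p / (p + r)           # Kalman gain
        x_hat += k * (z - x_hat)  # update toward the noisy observation
        p *= (1.0 - k)
        estimates.append(x_hat)
    return np.array(estimates)

# Hypothetical usage: simulate a random-walk target and compare tracking
# error under low vs. high sensory noise.
rng = np.random.default_rng(0)
target = np.cumsum(rng.normal(0.0, 1.0, 500))
for r in (1.0, 16.0):
    response = kalman_track(target + rng.normal(0.0, np.sqrt(r), 500), r=r)
    err = np.sqrt(np.mean((response - target) ** 2))
    print(f"observation noise {r:5.1f} -> rms tracking error {err:.2f}")
```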
Item Electromagnetic and Structural Analyses of an Integrated Launch Package (IEEE, 1999-01)
Liu, H-P.; Ting, Y.L.; Zowarka, R.C.; Alexander, A.
In this paper, detailed three-dimensional (3D) transient electromagnetic (EM) analyses with temperature-dependent material properties were performed using a state-of-the-art analysis tool to calculate current densities, body force densities, and temperature distribution in the launch package and rail conductors. The body force densities, temperature distribution, and package accelerations generated by the EM model were then provided to a 3D multiple-step nonlinear static structural model for detailed mechanical analyses. The combined 3D EM and structural analyses can be used to accurately predict the EM launching performance and launch package structural integrity. Furthermore, armature optimization and package survivability enhancement can also be achieved with the help of these analyses.

Item Fluvial-deltaic and shallow-marine seismic geomorphology of the West Natuna Basin : implications for reservoir architecture and basin evolution (2006)
Flint, Jed McKay; Wood, Lesli J.
Quantitative seismic geomorphology and morphometric analysis of modern and paleo-landforms are powerful tools for extracting data about the geologic history, processes, and fill architecture of terrestrial and marine basins. This study of the hydrocarbon-rich West Natuna Basin of Indonesia combines 3,154 km² of high-quality 3D seismic data and analysis of 15 well logs to explain how 3D reservoir shape, size, and distribution evolve and are preserved within fluvial-deltaic and shallow-marine environments. The study also addresses how these characteristics vary in response to changes in tectonic regime and base level. We made statistically robust morphometric measurements of channel form and character using PC software, and where well logs have penetrated channels, we determined lithology and calculated sand percentages. The area has a variety of channel morphologies, ranging in sinuosity from 1 to 4.67 and varying in width from 30 m to 4,458 m. Geometries range from single-thread to wide, multithread. Crevasse splays and interdistributary creeks and lakes lie adjacent to channels. Sand percentages vary greatly between element types, and many multikilometer-scale elements show complex accretionary architectural composition. The result of combining morphometric measurements with lithologic characterization is a predictive model that strengthens the relationship between channel form and sediment content. Architectural and sedimentologic changes have been assessed in the context of the basin's tectonic, climatic, and sea-level history. Results of this study apply not only to development of the study area, but also to explaining how reservoir shape, size, and distribution evolve and are preserved within fluvial-deltaic and shallow-marine environments.
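As a small illustration of the simplest morphometric measurement mentioned above, channel sinuosity is conventionally the ratio of along-channel length to straight-line distance between the same two points. The sketch below assumes a channel centerline digitized as map coordinates; the coordinates and names are hypothetical, not data from the study.

```python
import numpy as np

def sinuosity(xy):
    """Sinuosity of a mapped channel centerline.
    xy: (N, 2) array of map coordinates along the channel, in meters."""
    xy = np.asarray(xy, dtype=float)
    seg = np.diff(xy, axis=0)
    along_channel = np.sum(np.hypot(seg[:, 0], seg[:, 1]))  # curvilinear length
    straight_line = np.hypot(*(xy[-1] - xy[0]))             # end-to-end distance
    return along_channel / straight_line

# Hypothetical centerline picked from a horizon slice
centerline = [(0, 0), (500, 400), (900, 100), (1400, 600), (2000, 500)]
print(f"sinuosity = {sinuosity(centerline):.2f}")  # 1 = straight, >1 = sinuous
```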
Item Human detection and action recognition using depth information by Kinect (2012-05)
Xia, Lu, active 21st century; Grauman, Kristen Lorraine, 1979-
Traditional computer vision algorithms depend on information captured by visible-light cameras, but this data source has inherent limitations: it is sensitive to illumination changes, occlusions, and background clutter. Range sensors provide 3D structural information about the scene and are robust to changes in color and illumination. In this thesis, we present a series of approaches developed using depth information from the Kinect to address issues in human detection and action recognition. Given the depth information, the basic problem we consider is detecting humans in the scene. We propose a model-based approach comprising a 2D head contour detector and a 3D head surface detector. We propose a segmentation scheme to segment the human from the surroundings based on the detection point and extract the whole body of the subject. We also explore a tracking algorithm based on our detection results. The methods are tested on a dataset we collected and produce superior results compared to existing algorithms. With the detection result, we further study the recognition of their actions. We present a novel approach for human action recognition with histograms of 3D joint locations (HOJ3D) as a compact representation of postures. We extract the 3D skeletal joint locations from Kinect depth maps using Shotton et al.'s method. The HOJ3D computed from the action depth sequences are reprojected using LDA and then clustered into k posture visual words, which represent the prototypical poses of actions. The temporal evolutions of those visual words are modeled by discrete hidden Markov models (HMMs). In addition, due to the design of our spherical coordinate system and the robust 3D skeleton estimation from Kinect, our method demonstrates significant view invariance on our 3D action dataset. Our dataset is composed of 200 3D sequences of 10 indoor activities performed by 10 individuals in varied views. Our method runs in real time and achieves superior results on this challenging 3D action dataset. We also tested our algorithm on the MSR Action3D dataset, where it outperforms existing algorithms in most cases.
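A minimal sketch of a histogram-of-3D-joint-locations descriptor in the spirit of the HOJ3D representation above; the hip-centered spherical binning, bin counts, and joint count are illustrative placeholders rather than the thesis's actual design.

```python
import numpy as np

def hoj3d(joints, hip_index=0, n_azimuth=12, n_elevation=6):
    """Histogram of 3D joint locations for one skeleton frame.
    joints: (J, 3) array of joint positions from a depth sensor, in meters."""
    joints = np.asarray(joints, dtype=float)
    rel = np.delete(joints - joints[hip_index], hip_index, axis=0)  # hip-centered
    azimuth = np.arctan2(rel[:, 1], rel[:, 0])
    elevation = np.arcsin(rel[:, 2] / (np.linalg.norm(rel, axis=1) + 1e-9))
    hist, _, _ = np.histogram2d(
        azimuth, elevation,
        bins=[n_azimuth, n_elevation],
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]],
    )
    h = hist.ravel()
    return h / (h.sum() + 1e-9)  # normalized posture descriptor

# Hypothetical usage on one random 20-joint skeleton frame
frame = np.random.default_rng(1).normal(size=(20, 3))
print(hoj3d(frame).shape)  # (72,) with the default binning
```

In the full method, the per-frame histograms are further reprojected with LDA, vector-quantized into posture visual words, and the word sequences are modeled with discrete HMMs; this sketch stops at the per-frame descriptor.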
Item Multiazimuth velocity analysis using velocity-independent seismic imaging (2011-05)
Burnett, William Andrew, 1983-; Fomel, Sergey B.; Stoffa, Paul L., 1948-
Multiazimuth seismic data contain information about how the Earth's seismic response changes with azimuthal direction. Directional dependence of the seismic response can be caused by anisotropy or heterogeneity, associated with subsurface features such as fractures, stresses, or structure. Characterizing azimuthal variations is done through velocity analysis, which provides a link between an acquired data set and its image, as well as between the image and subsurface geology. At the stage at which conventional velocity analysis is applied, it is difficult to distinguish the geologic cause of observed azimuthal velocity variations. The inability to distinguish the similar effects of anisotropy and heterogeneity leads to positioning errors in the final image and velocity estimates. Regardless of the cause, azimuthally variable velocities require at least three parameters to characterize, as opposed to the conventional single-parameter isotropic velocity. The semblance scan is the conventional tool for seismic velocity analysis, but it was designed for the isotropic case. For multiple parameters, the semblance scan becomes computationally impractical. To help address the issues of geologic ambiguity and computational efficiency, I develop three methods for multiazimuth seismic velocity analysis based on "velocity-independent" imaging techniques. I call this approach velocity analysis by velocity-independent imaging, in which I reverse the conventional order of velocity estimation followed by image estimation. All three methods measure time-domain effective-velocity parameters. The first method, 3D azimuthally anisotropic velocity-independent NMO, replaces the explicit measurement of velocity with local slope detection. The second method, time-warping, uses local slope information to predict traveltime surfaces without any moveout assumption beforehand, and then fits them with a multiparameter velocity model. The third method, azimuthal velocity continuation, uses diffraction image focusing as a velocity analysis criterion, thereby performing imaging and velocity analysis simultaneously. The first two methods are superior to the semblance scan in terms of computational efficiency and their ability to handle multiparameter models. The third method is similar to a single multiparameter semblance scan in computational cost, but it helps resolve the ambiguity between structural heterogeneity and anisotropy, which leads to better-positioned images and velocity estimates.
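As context for the "at least three parameters" remark above, the widely used azimuthally anisotropic NMO approximation (in the style of the Grechka–Tsvankin NMO-ellipse formulation, not necessarily the notation of this dissertation) replaces the single isotropic moveout velocity with a symmetric 2×2 effective slowness matrix \(W\):

\[
t^2(x,\alpha) \;\approx\; t_0^2 + x^2\left(W_{11}\cos^2\alpha + 2W_{12}\sin\alpha\cos\alpha + W_{22}\sin^2\alpha\right),
\]

where \(t_0\) is the zero-offset traveltime, \(x\) the source–receiver offset, and \(\alpha\) the source–receiver azimuth; the three independent entries of \(W\) are the parameters an azimuthal velocity analysis must estimate, in place of the single isotropic \(1/V_{\mathrm{nmo}}^2\).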
Item Oil and Plankton: DROPPS Takes a Closer Look Using Laser Technology (2015-07-16)
University of Texas Marine Science Institute

Item The Physics of Digital Microfabrication with Molten Microdrops (1993)
Gao, Fuquan; Sonin, Ain A.
Precise deposition of molten microdrops under controlled thermal conditions provides a means of 3D "digital microfabrication", microdrop by microdrop, under complete computer control, much in the same way as 2D hard copy is obtained by ink-jet printing. This paper describes some results from a study of the basic modes of microdrop deposition and solidification (Gao & Sonin, 1993). The conditions required for controlled deposition are discussed, and some experimental results and theoretical analyses are given for various basic deposition modes. These include columnar (i.e., drop-on-drop) deposition at low and high frequencies, sweep deposition of continuous beads on flat surfaces, and repeated sweep deposition for the buildup of larger objects or materials.

Item Recognizing human activity using RGBD data (2014-05)
Xia, Lu, active 21st century; Aggarwal, J. K. (Jagdishkumar Keshoram), 1936-
Traditional computer vision algorithms try to understand the world using visible-light cameras. However, there are inherent limitations of this type of data source. First, visible-light images are sensitive to illumination changes and background clutter. Second, the 3D structural information of the scene is lost when projecting the 3D world to 2D images. Recovering the 3D information from 2D images is a challenging problem. Range sensors, which capture 3D characteristics of the scene, have existed for over thirty years. However, earlier range sensors were too expensive, difficult to use in human environments, slow at acquiring data, or provided poor estimates of distance. Recently, easy access to RGBD data at real-time frame rates has led to a revolution in perception and inspired much new research using RGBD data. I propose algorithms to detect persons and understand their activities using RGBD data. I demonstrate that solutions to many computer vision problems may be improved with the added depth channel. The 3D structural information may give rise to algorithms with real-time and view-invariant properties in a faster and easier fashion. When both data sources are available, the features extracted from the depth channel may be combined with traditional features computed from RGB channels to generate more robust systems with enhanced recognition abilities, which may be able to deal with more challenging scenarios. As a starting point, the first problem is to find persons in various poses in the scene, whether moving or static. Localizing humans from RGB images is limited by lighting conditions and background clutter. Depth images give alternative ways to find humans in the scene. In the past, detection of humans from range data was usually achieved by tracking, which does not work for indoor person detection. In this thesis, I propose a model-based approach to detect persons using the structural information embedded in the depth image. I propose a 2D head contour model and a 3D head surface model to look for the head-shoulder part of the person. Then, a segmentation scheme is proposed to segment the full human body from the background and extract the contour. I also give a tracking algorithm based on the detection result. I further investigate recognizing human actions and activities. I propose two features for recognizing human activities. The first feature is drawn from the skeletal joint locations estimated from a depth image. It is a compact representation of the human posture called histograms of 3D joint locations (HOJ3D). This representation is view-invariant and the whole algorithm runs in real time. This feature may benefit many applications that need a fast estimate of the posture and action of the human subject. The second feature is a spatio-temporal feature for depth video called the Depth Cuboid Similarity Feature (DCSF). The interest points are extracted using an algorithm that effectively suppresses noise and finds salient human motions. A DCSF is extracted centered on each interest point, forming the description of the video contents. This descriptor can be used to recognize activities with no dependence on skeleton information or pre-processing steps such as motion segmentation, tracking, or even image de-noising or hole-filling. It is more flexible and widely applicable to many scenarios. Finally, all the features developed herein are combined to solve a novel problem: first-person human activity recognition using RGBD data. Traditional activity recognition algorithms focus on recognizing activities from a third-person perspective. I propose to recognize activities from a first-person perspective with RGBD data. This task is novel and extremely challenging due to the large amount of camera motion, arising either from self-exploration or from the response to the interaction. I extract 3D optical flow features as motion descriptors, 3D skeletal joint features as posture descriptors, and spatio-temporal features as local appearance descriptors to describe the first-person videos. To address the ego-motion of the camera, I propose an attention mask to guide the recognition procedure and separate the features in the ego-motion and independent-motion regions. The 3D features are very useful for summarizing the discriminative information of the activities. In addition, the combination of the 3D features with existing 2D features brings more robust recognition results and makes the algorithm capable of dealing with more challenging cases.

Item Three transdimensional factors for the conversion of 2D acoustic rough surface scattering model results for comparison with 3D scattering (2013-12)
Tran, Bryant Minh; Wilson, Preston S.; Isakson, Marcia J.
Rough surface scattering is a problem of interest in underwater acoustic remote sensing applications. To model this problem, a fully three-dimensional (3D) finite element model has been developed, but it requires an abundance of time and computational resources. Two-dimensional (2D) models that are much easier to compute are often employed, though they do not natively represent the physical environment. Three quantities have been developed that, when applied, allow 2D rough surface scattering models to be used to predict 3D scattering. The first factor, referred to as the spreading factor and adopted from the work of Sumedh Joshi [1], accounts for geometrical differences between equivalent 2D and 3D model environments. A second factor, referred to as the perturbative factor, is developed through the use of small perturbation theory. This factor is well suited to account for differences in the scattered field between a 2D model and scattering from an isotropically rough 2D surface in 3D. Lastly, a third, composite factor, referred to as the combined factor, is developed by taking the minimum of the previous two. This work deals only with scattering within the plane of the incident wave perpendicular to the scatterer. The applicability of these factors is tested by comparing a 2D scattering model with a fully three-dimensional Monte Carlo finite element method model for a variety of von Karman and Gaussian power spectra. The combined factor shows promise as a robust method to adequately characterize isotropic 3D rough surfaces using 2D numerical simulations.
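Written out, the combined factor described above is simply the pointwise minimum of the other two (the angle argument and symbols here are illustrative, not the thesis's notation):

\[
F_{\mathrm{comb}}(\theta) \;=\; \min\bigl\{\, F_{\mathrm{spread}}(\theta),\; F_{\mathrm{pert}}(\theta) \,\bigr\},
\]

where \(F_{\mathrm{spread}}\) is the geometrical spreading correction and \(F_{\mathrm{pert}}\) the small-perturbation correction applied to the 2D model result at each scattering angle.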
Item Three-dimensional (3D) Simulation of Micro-Void Development within Large Scale Polymer Composite Deposition Beads (2022)
Awenlimobor, Aigbe; Smith, Douglas E.; Wang, Zhaogui; Luo, Chenjun
The mechanical properties of parts produced by Big Area Additive Manufacturing (BAAM) are largely affected by inherent bead microstructural features such as voids. Unfortunately, our understanding of void nucleation and evolution during the polymer deposition process is lacking. Flow modeling focused on the associated microstructural formation provides a means for better understanding the process-structure-property relations in large area extrusion deposition additive manufacturing of fiber-reinforced composites. Our prior computational effort investigating mechanisms that may promote micro-void formation was based on 2-dimensional planar models of a single ellipsoidal fiber's motion in purely viscous polymer extrusion/deposition flow through a BAAM nozzle. Here we present a 3D finite element modeling approach to simulate single-fiber out-of-plane rotations, utilizing velocity and velocity gradient values computed along streamlines obtained from a 3D extrusion/deposition simulation of the BAAM polymer deposition process. The pressure distribution on the fiber's surface along the flow path provides new insight into potential micro-void nucleation mechanisms. Results show that low-pressure regions occur near the fiber's surface, varying across the printed bead and through its thickness.

Item Three-dimensional geoacoustic perturbative inverse technique for the shallow ocean water column (2012-12)
Bender, Christopher Matthew; Wilson, Preston S.; Ballard, Megan S.
This work focuses on developing an inversion scheme to estimate water-column sound-speed fields in three dimensions. The inversion scheme is based on a linearized perturbative technique that utilizes estimates of modal travel times. The technique is appropriate in the littoral ocean, where measurements are made across range and cross-range distances greater than 10 km to ensure sufficient modal dispersion. Previous applications of the inversion technique have been limited to one or two dimensions and/or focused primarily on the seabed. Compared to past applications, the accuracy and uncertainty of the solution are improved by employing approximate equality constraints within the context of a priori estimates of model and data covariances. The effectiveness of the constrained technique is explored through a one-dimensional example. The robustness of the technique is illustrated by introducing different types of errors into the inversion and examining the resulting accuracy. A further examination of the technique is given by exploring a three-dimensional example. Several case studies are presented to investigate the effects of different levels of environmental variability and spatial sampling.
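The constrained linear inversion described above can be illustrated, under standard assumptions, as a Gauss-Markov (covariance-weighted least-squares) update: perturbations in modal travel times are related to sound-speed perturbations through a sensitivity matrix, and a priori model and data covariances weight the solution. The sketch below is a generic illustration of that machinery, not the author's code; the matrix G, the covariances, and the dimensions are placeholders. Approximate equality constraints of the kind mentioned in the abstract can be folded into the same formulation by appending constraint rows to G and assigning them small variances in Cd.

```python
import numpy as np

def constrained_perturbative_inverse(G, d, Cm, Cd):
    """Linear Gauss-Markov estimate of model perturbations.
    G  : (n_data, n_model) sensitivity of modal travel times to sound speed
    d  : (n_data,) observed minus background modal travel times
    Cm : (n_model, n_model) a priori model covariance
    Cd : (n_data, n_data) data (travel-time) covariance
    Returns the maximum a posteriori sound-speed perturbation."""
    # m = Cm G^T (G Cm G^T + Cd)^{-1} d   (standard Gauss-Markov form)
    K = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cd)
    return K @ d

# Hypothetical toy problem: 8 modal travel-time perturbations, 20 depth points
rng = np.random.default_rng(2)
G = rng.normal(size=(8, 20)) * 1e-4          # s per (m/s), placeholder kernels
m_true = np.sin(np.linspace(0, np.pi, 20))   # smooth sound-speed perturbation (m/s)
d = G @ m_true + rng.normal(0, 1e-5, 8)      # noisy travel-time perturbations (s)
m_est = constrained_perturbative_inverse(
    G, d, Cm=np.eye(20), Cd=1e-10 * np.eye(8))
print(np.round(m_est, 2))
```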
Item Three-Dimensional Rail Cooling Analysis for a Repetitively Fired Railgun (IEEE, 1991-01)
Liu, H-P.
A three-dimensional rail cooling analysis for the fabrication and demonstration of a stand-alone, repetitive-fire, compulsator-driven 9 MJ gun system has been performed to ensure that the entire rail can be maintained below its thermal limit for multiple shots. The three-dimensional rail thermal model can predict the temperature, pressure, and convective heat transfer coefficient variations of the coolant along the 10 m long copper rail. The 9 MJ projectiles are fired every 20 s for 3 min. Water cooling was used in the model for its high cooling capacity. Single-phase liquid heat transfer was assumed in the cooling analysis. For multiple shots, the temperature difference between the rail and the water increased due to heat accumulated in the rail. As a result, the heat removal by the water increased from shot to shot. The rail temperature initially increased and finally stabilized after a number of shots.

Item Using 3D/4D CAD modeling for traffic management : development, review, and communication (2012-05)
Goyat, Jean; O'Brien, William J.; Machemehl, Randy B.
Developing a traffic management plan for a transportation infrastructure project is a complex activity that requires input from many different fields. Ideally, design, construction, and traffic engineers all work towards devising an integrated traffic management plan that considers a variety of aspects. The current literature on traffic management modeling mostly focuses either on high-level tools that are not adequate for the required phasing and constructability analysis of construction activities, or on studies of pre- and post-construction conditions. On the other hand, the literature on computer-aided design (CAD) modeling of infrastructure projects mentions potential benefits for traffic management plan review. However, it does not delve into using 3-dimensional (3D) and 4-dimensional (3D plus time) CAD to study (1) the impact of construction activities on traffic, (2) during construction, (3) at a level that allows detailed phasing and constructability reviews for traffic management plan elaboration and communication. Using 3D and 4D CAD modeling can alleviate the complexity of devising traffic management plans for transportation projects by providing an integrated framework from which developers can elaborate, review, and communicate their strategies. The many visualization benefits of 3D/4D modeling, used in conjunction with its advanced analysis capabilities, can help tremendously compared to more traditional 2D methods. The three transportation projects described in this study illustrate the benefits and added value these models provide in more efficiently and rapidly visualizing and analyzing intended traffic management strategies at different stages of the construction process.
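To make the "4D" (3D plus time) idea above concrete, a 4D traffic-management model essentially links 3D model elements to schedule activities and their associated traffic-control phases, so that the model can be queried for any review date. The sketch below is a purely hypothetical, minimal data structure of that kind; the class names, fields, and example schedule are illustrative assumptions, not part of the study.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Activity:
    """A construction schedule activity linked to 3D model elements."""
    name: str
    start: date
    finish: date
    element_ids: list[str] = field(default_factory=list)  # 3D CAD object IDs
    traffic_phase: str = ""                                # lane closures, detours, etc.

def active_on(schedule: list[Activity], day: date) -> list[Activity]:
    """Return the activities (and hence model elements and traffic phases)
    a 4D review would display for a given date."""
    return [a for a in schedule if a.start <= day <= a.finish]

# Hypothetical two-activity schedule for a roadway reconstruction
schedule = [
    Activity("Demolish median", date(2012, 3, 1), date(2012, 3, 20),
             ["median_barrier_01"], "close inside lane, shift traffic right"),
    Activity("Pour new deck", date(2012, 3, 15), date(2012, 4, 30),
             ["deck_segment_A"], "night-time full closure with detour"),
]
for a in active_on(schedule, date(2012, 3, 18)):
    print(a.name, "->", a.traffic_phase)
```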