Browsing by Subject "Spatial statistics"
Now showing 1 - 4 of 4
Item: Reservoir description via statistical and machine-learning approaches (2022-07-22)
Pan, Wen, Ph.D.; Torres-Verdín, Carlos; Pyrcz, Michael; Duncan, Ian J.; Lake, Larry W; Heidari, Zoya

Description of subsurface reservoirs is important for decision-making in the development of hydrocarbon resources. Reservoir description concerns (1) geophysical interpretation: prediction of rock properties from geophysical measurements such as borehole and seismic amplitude data, and (2) reservoir modeling: modeling the spatial distribution of rock properties conditioned by geophysical interpretations (geological modeling), and simulating the fluid-transport, elastic, mechanical, and electromagnetic phenomena, among others, taking place in a geological model (reservoir simulation). Reservoir description based on stochastic reservoir modeling and conditioned by fluid-production history enables uncertainty estimation for hydrocarbon reserves and fluid-production forecasts. Accurate reservoir description assists the management of risk and profit during the exploration and development of hydrocarbon resources. As one of the most important components of reservoir description, the interpretation of well logs provides high-resolution estimates of in situ rock properties around the wellbore, such as lithology, porosity, fluid saturation, permeability, and elastic moduli. However, conventional petrophysical models are often too simplistic to reproduce the complex relationship between well logs and rock properties, especially permeability. Therefore, data-driven inferential methods, such as machine-learning models, are needed for more accurate permeability prediction in spatially complex rocks. Accurate prediction of permeability across multiple wells is even more challenging because of variable borehole environmental conditions (e.g., drilling fluid and borehole size), different logging instruments (e.g., induction vs.
lateral resistivity logs), and their vintage (e.g., logging-while-drilling vs. wireline logs). To mitigate biases introduced by both variable borehole environmental conditions and borehole instruments, well-log normalization is commonly performed before multi-well interpretation projects. However, conventional well-log normalization methods ignore the correlation among different well logs and require considerable effort and expertise from the interpreter. The first objective of this dissertation is to develop a data-driven interpretation workflow that uses machine-learning methods to perform automatic well-log normalization, taking the correlation among different well logs into account, and to accurately estimate permeability from the normalized well logs. The workflow consists of four steps: (1) identify well-calibrated wells (type wells) for the wells that need correction (test wells), based on the statistical distance between the associated well logs; (2) obtain training data from the type wells to train a machine-learning model that minimizes the mean-squared error (MSE) of permeability prediction; (3) normalize the test-well logs by minimizing their divergence from the type-well logs; (4) predict the permeability of the test wells from the normalized logs. The new interpretation workflow is applied to predict the permeability of 30 wells in the Seminole San Andres Unit (SSAU). Compared to a permeability prediction model without well-log normalization, the new workflow decreases the MSE of permeability prediction by 20-50% and, through the automatic normalization step, greatly accelerates well-log preprocessing. Stochastic reservoir models conditioned by petrophysical and geophysical interpretations are important for uncertainty management during reservoir exploration and development.
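Steps (1) and (3) of the workflow can be illustrated with a minimal sketch. The abstract does not specify which statistical distance or divergence is used, so the total-variation distance between log histograms and a simple mean/standard-deviation match stand in here as assumptions:

```python
import numpy as np

def statistical_distance(log_a, log_b, bins=32):
    """Total-variation distance between two well-log histograms -- a
    simplified stand-in for the statistical distance in the workflow."""
    lo = min(log_a.min(), log_b.min())
    hi = max(log_a.max(), log_b.max())
    pa, _ = np.histogram(log_a, bins=bins, range=(lo, hi))
    pb, _ = np.histogram(log_b, bins=bins, range=(lo, hi))
    pa = pa / pa.sum()
    pb = pb / pb.sum()
    return 0.5 * np.abs(pa - pb).sum()

def pick_type_well(test_log, candidate_logs):
    """Step (1): choose the candidate well whose log distribution is
    closest to the test well's."""
    dists = [statistical_distance(test_log, c) for c in candidate_logs]
    return int(np.argmin(dists))

def normalize_log(test_log, type_log):
    """Step (3), simplified to an affine shift/scale so the test log
    matches the type-well log's mean and standard deviation."""
    z = (test_log - test_log.mean()) / test_log.std()
    return z * type_log.std() + type_log.mean()
```

The actual method learns a joint normalization across correlated logs; this sketch treats a single log in isolation purely to show the shape of the computation.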
Conventional geostatistical methods, such as kriging and multiple-point simulation, are commonly used for conditional reservoir modeling. However, it is difficult to use these methods to construct reservoir models that reproduce the long-range geological patterns important for fluid-transport prediction, such as the continuity of channels in a turbidite channel sedimentary system. The second objective of this dissertation is to develop a new machine-learning method to construct stochastic reservoir models that reproduce important long-range patterns and are conditioned by the interpretation of well logs and seismic amplitude data. The method consists of three steps: (1) calculate training images of a depositional system, such as a turbidite channel or a deepwater lobe system, with rule-based modeling methods; (2) train a new conditional generative adversarial model, referred to as the stochastic pix2pix model, to generate reservoir model realizations that reproduce patterns in the training images and are conditioned by well logs and seismic amplitude data; (3) use the trained model to generate conditional reservoir model realizations. However, limitations on computer memory make it difficult for the new method to generate reservoir model realizations with millions of voxels or more, such as models with multi-scale architectural elements. To improve the computational efficiency of generating large and detailed reservoir models, a hierarchical modeling workflow is developed that uses the stochastic pix2pix model to simulate architectural elements from the largest to the smallest scale. The stochastic pix2pix method is verified by comparing the generated lobe and fluvial channel model realizations to reservoir models constructed with the rule-based modeling method.
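Step (1), rule-based generation of training images, can be sketched in two dimensions. The rules below (a single sinuous channel with a random phase and centerline) are illustrative assumptions, not the dissertation's actual rule set, but they show how rule-based modeling produces facies grids with the long-range channel continuity the generative model is trained to reproduce:

```python
import numpy as np

def channel_training_image(nx=64, ny=64, amplitude=10.0, period=48.0,
                           width=4.0, seed=0):
    """Toy rule-based model: carve one sinuous channel (facies 1) through
    a background (facies 0) grid. Each realization varies the channel's
    centerline and phase, mimicking a training-image ensemble."""
    rng = np.random.default_rng(seed)
    center = ny / 2 + rng.uniform(-5, 5)
    x = np.arange(nx)
    path = center + amplitude * np.sin(2 * np.pi * x / period
                                       + rng.uniform(0, 2 * np.pi))
    yy = np.arange(ny)[:, None]            # column of row indices, (ny, 1)
    facies = (np.abs(yy - path[None, :]) <= width / 2).astype(int)
    return facies                          # shape (ny, nx)
```

Because the channel is carved as a connected path, every column of the grid intersects it, which is exactly the long-range continuity that variogram-based simulation tends to break up.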
Comparisons indicate that conditioning data, such as rock facies interpreted from well logs and depositional surfaces identified from seismic amplitude data, are well reproduced in model realizations generated with the new method. Statistical metrics, such as the semi-variogram, multiple-point histogram (MPH), compensational stacking index, geometrical probability map, and rock-facies histogram, were calculated to confirm that model realizations accurately reproduce the patterns observed in the training images. Performance metrics indicate good pattern reproduction: for example, the mean-absolute error of geometrical probability is below 2%, while the MPH difference is below 5%. Combining the well-log normalization and interpretation workflow with machine-learning-based stochastic reservoir modeling enables more accurate formation evaluation and better estimates of the uncertainties associated with rock-property distributions than is possible with standard modeling approaches.

Item: Spatial applications of Markov random fields and neural networks for spatio-temporal denoising, causal inference and reinforcement learning (2022-08-16)
García Tec, Mauricio Benjamín; Scott, James (Statistician); Zigler, Corwin Matthew, 1983-; Zhou, Mingyuan; Walker, Stephen G; Stone, Peter H

Discrete spatial structures are ubiquitous in statistical analysis. They can take the form of images, grids, and, more generally, graphs. This work develops novel methodology, leading to broadly applicable algorithms for graph smoothing and neural networks, that improves statistical learning in a variety of tasks and spatially structured domains, including temporal and sequential decision-making processes. Each chapter corresponds to a case study with applications in spatio-temporal denoising, causal inference, or reinforcement learning. Graph smoothing methods are used in all of them, and their effectiveness is evaluated.
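Graph smoothing on an image can be illustrated with a minimal denoising sketch. The dissertation's specific smoothers are not given in the abstract, so this sketch solves the classic Laplacian-regularized least-squares problem (I + λL)x = y on a 4-neighbor grid graph by Jacobi iteration, with periodic boundaries for simplicity:

```python
import numpy as np

def grid_laplacian_smooth(img, lam=2.0, iters=300):
    """Jacobi iterations toward the solution of (I + lam*L) x = y on a
    4-neighbor grid graph, where L is the graph Laplacian. Periodic
    (wrap-around) boundaries are used so np.roll supplies the neighbors."""
    y = img.astype(float)
    x = y.copy()
    for _ in range(iters):
        # sum of the four grid neighbors of every pixel
        nbr = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1))
        # Jacobi update: diagonal of (I + lam*L) is 1 + 4*lam
        x = (y + lam * nbr) / (1.0 + 4.0 * lam)
    return x
```

Larger λ trades fidelity to the noisy observation y for smoothness across graph edges; the same template applies to any graph once L is defined.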
In addition, some chapters develop more specialized methods that further exploit the spatial and statistical structure of the data. An objective sustained throughout the work is developing scalable algorithms that handle high-resolution spatial data and other computationally demanding scenarios.

Item: Spatial interpolation with Gaussian processes and spatially varying regression coefficients (2015-08)
Mitchell, Daniel Lewis; Keitt, Timothy H.; Scott, James G

Linear regression is undoubtedly one of the most widely used statistical techniques; however, because it assumes independent observations, it can miss important features of a dataset when observations are spatially dependent. This report presents the spatially varying coefficients model, which augments a linear regression with a multivariate Gaussian spatial process to allow regression coefficients to vary over the spatial domain of interest. We develop the mathematics of Gaussian processes and illustrate their use, and we demonstrate the spatially varying coefficients model on simulated data. We show that it achieves lower prediction error and a better fit to the data than a standard linear regression.

Item: Transfer learning for classification of spatially varying data (2010-08)
Jun, Goo; Ghosh, Joydeep; Aggarwal, J. K.; Crawford, Melba M.; Caramanis, Constantine; Sanghavi, Sujay; Grauman, Kristen

Many real-world datasets have spatial components that provide valuable information about characteristics of the data. This dissertation proposes a novel framework of adaptive models that exploit spatial information in data. The framework is mainly based on the development and application of Gaussian processes. First, a supervised learning method is proposed for the classification of hyperspectral data with spatially adaptive model parameters. The proposed algorithm models the spatially varying mean of each spectral band of a given class using a Gaussian process regression model.
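Gaussian process regression is the interpolation machinery shared by these abstracts, whether for spatially varying coefficients or spatially varying class means. A minimal sketch of the standard GP posterior, assuming a squared-exponential covariance (the abstracts do not specify their kernels):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between coordinate sets A (n,d), B (m,d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-6, length_scale=1.0):
    """Posterior mean and pointwise variance of GP regression:
    mean = K* K^-1 y,  var = diag(K** - K* K^-1 K*^T)."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train, length_scale)
    Kss = rbf_kernel(X_test, X_test, length_scale)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)
```

The predictive variance grows with distance from the training locations, which is precisely the interpolation-vs-extrapolation behavior discussed next.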
For a given location, the predictive distribution of a given class is modeled by a multivariate Gaussian distribution with spatially adjusted parameters obtained from the proposed algorithm. The Gaussian process model is generally regarded as a good tool for interpolation but not for extrapolation; moreover, the uncertainty of the predictive distribution increases with distance from the training instances. To overcome this problem, a semi-supervised learning algorithm is presented for the classification of hyperspectral data with spatially adaptive model parameters. This algorithm fits the test data with a spatially adaptive mixture-of-Gaussians model, where the spatially varying parameters of each component are obtained by Gaussian process regressions with soft memberships using the mixture-of-Gaussian-processes model. The proposed semi-supervised algorithm assumes a transductive setting, where the unlabeled data are considered similar to the training data. This is not true in general, however, since one may not know how many classes may exist in the unexplored regions. A spatially adaptive nonparametric Bayesian framework is therefore proposed by applying spatially adaptive mechanisms to a mixture model with infinitely many components. In this method, each component in the mixture has spatially adapted parameters estimated by Gaussian process regressions, and spatial correlations between indicator variables are also considered. In addition to land-cover and land-use classification applications based on hyperspectral imagery, the Gaussian process-based spatio-temporal model is also applied to predict ground-based aerosol optical depth measurements from satellite multispectral images and to select the most informative ground-based sites by active learning.
In this application, heterogeneous features with spatial and temporal information are incorporated by employing a set of covariance functions, and it is shown that exploiting spatio-temporal information in this manner substantially improves the regression model. The conventional meaning of spatial information usually refers to actual spatio-temporal locations in the physical world. In the final chapter of this dissertation, the meaning of spatial information is generalized to a parametrized low-dimensional representation of the data in feature space, and a corresponding spatial modeling technique is exploited to develop a nearest-manifold classification algorithm.
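The idea of combining a set of covariance functions over heterogeneous spatial and temporal features can be sketched as follows. The dissertation's actual kernels are not given in the abstract; this sketch simply sums two squared-exponential kernels with separate spatial and temporal length scales, which remains a valid covariance because a sum of positive semi-definite kernels is positive semi-definite:

```python
import numpy as np

def rbf(d2, length_scale):
    """Squared-exponential similarity from squared distances d2."""
    return np.exp(-0.5 * d2 / length_scale ** 2)

def spatio_temporal_kernel(P, Q, ls_space=1.0, ls_time=1.0,
                           w_space=1.0, w_time=1.0):
    """Composite covariance over heterogeneous features: rows of P and Q
    are (x, y, t). Spatial and temporal similarities are computed with
    separate length scales and summed."""
    ds2 = ((P[:, None, :2] - Q[None, :, :2]) ** 2).sum(-1)  # spatial part
    dt2 = (P[:, None, 2] - Q[None, :, 2]) ** 2              # temporal part
    return w_space * rbf(ds2, ls_space) + w_time * rbf(dt2, ls_time)
```

The weights and length scales act as hyperparameters controlling how much spatial versus temporal proximity contributes to the covariance, which is one simple way "a set of covariance functions" can be employed jointly.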