Browsing by Department "Civil, Architectural, and Environmental Engineering"
Now showing 1 - 20 of 1823
Item 3-D modeling and graphical simulation of mobile cranes to assist planning of heavy lifts (1991)
Dharwadkar, Parmanand, 1964-
Abstract not available.

Item 3D Printing and Structural Testing of Quick Connection (2018-05)
Jung, Kee Young; Clayton, Patricia
3D printing technology allows three-dimensional solid objects to be created from a digital file. It is an additive manufacturing process that creates objects by laying down successive layers of material in one continuous process. The use of 3D printing has been explored in many different industries due to its potential to revolutionize the manufacturing process. The construction industry is no exception to this trend, as it is undergoing a major change associated with the automation of the construction process. 3D printing offers the construction industry the possibility of creating construction elements of unique, complex geometry that can be custom-made and mass-produced. This study aims to exploit the benefits of the 3D printer by printing connection members of complex geometry inspired by Japanese woodworking joinery and the modern proprietary ConXTech connection. The connection models will be tested for strength and serviceability, and their performance will be analyzed to assess the connections' potential for structural and non-structural applications.
By combining the “fastener-free” and “quick and easy to connect” qualities of wooden joinery and ConXTech connections with the benefits of 3D printing technology, the study will explore the process of innovation that can result from interdisciplinary research.

Item 3D strut-and-tie design recommendations for column and drilled shaft anchorages in drilled shaft footings (2022-08-05)
Yi, Yousun; Bayrak, Oguzhan, 1969-; Williamson, Eric B.; Ferche, Anca C.; Murcia-Delso, Juan; Hrynyk, Trevor D.
A drilled shaft footing is a deep structural member subjected to nonlinear strain distribution, known as a D-region, and is therefore recommended to be designed using the strut-and-tie method (STM). Drilled shaft footings supported on four drilled shafts require a three-dimensional (3D) configuration of struts and ties to reproduce the internal force flow of the footings. However, a lack of clear design guidelines and experimental verification for the anchorages of the column and drilled shaft reinforcement in drilled shaft footings hinders the practical use of the 3D STM in the design of these types of components. This dissertation covers part of a research project on drilled shaft footings subjected to various loading scenarios (Phase I: uniaxial compression only; Phase II: axial compression combined with moderate uniaxial flexure; and Phase III: axial compression combined with large uniaxial flexure). The comprehensive study presented here, comprising large-scale tests and finite element analyses, is specifically focused on drilled shaft footings subjected to combined axial compression and two levels of uniaxial bending moment: moderate (Phase II) and large (Phase III). The anchorage behavior of column and drilled shaft reinforcement subjected to tension in drilled shaft footings loaded under combined axial compression and uniaxial bending moments was investigated experimentally and numerically.
Phase II large-scale tests were conducted on four footing specimens designed with different column bar anchorage details: straight bars, hooked bars with two different hook orientations, and headed bars. All column reinforcement in the Phase II specimens was able to yield during the tests, regardless of anchorage type. Furthermore, all anchorage types developed reinforcing bar stresses in the vicinity of the anchorage region, except for the hooked bars oriented outward from the column. The properly oriented hooked bars (oriented considering the internal force flow of the strut-and-tie model) and the headed bars developed a more uniform reinforcing bar stress distribution along their length than the straight bars. Based on experimentally measured stress distributions for the column reinforcement, a critical section was proposed to establish the anchorage requirement for the column reinforcement in a 3D strut-and-tie model. Additionally, four large-scale tests were conducted on drilled shaft footing specimens employing an equivalent loading condition for the Phase III loading scenario by introducing tension in the drilled shaft reinforcement. Three different anchorage details were tested: straight bars, hooked bars, and headed bars. The drilled shaft reinforcement was capable of developing its full yield strength in tension in all the tests, regardless of the anchorage detail. The tensile stresses in drilled shaft bars were primarily developed in the region of the embedment length closest to the interface between the drilled shaft and the footing, while negligible stress and slip were measured in the vicinity of the unloaded end of the bars. Based on the findings of the experimental program, a critical section was also proposed to establish the anchorage requirement for the drilled shaft reinforcement in a 3D strut-and-tie model.
Numerical parametric studies were also conducted to examine additional design parameters that can affect the position of the proposed critical sections for the column and drilled shaft reinforcement, thereby solidifying the proposed critical sections. The analysis results verified the conservatism of the proposed critical sections for the column and drilled shaft reinforcement. Lastly, a set of 3D STM design guidelines was proposed by refining the current two-dimensional (2D) STM design guidelines on the basis of the test data and insights obtained from the experimental and numerical parametric studies. To the author's knowledge, this study is the first to establish a database of drilled shaft footings subjected to uniaxial flexural compression loading scenarios; therefore, only the nine test specimens of this study could be evaluated using the refined guidelines. As a result, the accuracy of the ultimate strength predictions was improved without any unconservative or overly conservative predictions. A design example of a drilled shaft footing subjected to various uniaxial loading scenarios is also provided.

Item 4-dimensional process-aware site-specific construction safety planning (2015-12)
Choe, Sooyoung; Leite, Fernanda L.; Caldas, Carlos; Thomas, Stephen; Zhang, Zhanmin; Alves, Thais
The construction industry has one of the worst occupational health and safety records of all industries. In spite of stringent regulations and much attention toward reducing risks in the physical environment, the construction industry continues to be associated with high levels of accidents, injuries, and illnesses. Construction safety management activities are typically categorized into safety planning and execution processes. Despite the interdependent relationship between safety planning and execution, current safety planning processes lack a systematic approach because of the limited safety tools and site-specific information available.
As a result, safety planning and execution processes are generally segregated and, consequently, most safety execution processes rely on ad-hoc safety activities during construction. The objective of this research is to systematically formalize the construction safety planning process in a 4-dimensional (4D) environment that addresses site-specific temporal and spatial safety information, leveraging project schedules and information technology to improve current construction safety management practices. Prior to developing a specific framework, this research presents a safety risk generation and control model to describe the phenomenon of dynamic safety risk, incorporating construction domain knowledge. The proposed model addresses how the inherent risk of a worker can be transformed by different measurable contexts of activities. Based on the theoretical model, this research quantitatively assessed the safety risk of different construction trades. By integrating multiple national injury databases, safety risks of different construction occupations were analyzed to explain common risk types, sources of injury, and risk scenarios associated with each occupation type. With the results of the safety risk analysis as a reference, a formalized safety planning framework to aid in developing a long-term safety risk prediction plan was proposed. The proposed framework analyzed activity, work period, and work zone safety by integrating a project schedule and a 3D model. The proposed safety planning process was tested in a real-world project. This research advances safety knowledge by integrating site-specific temporal and spatial information into the construction safety planning process. The proposed safety planning approach can provide safety personnel with a site-specific proactive safety planning tool that can be used to better manage jobsite safety by predicting activity risk, work period risk, and work zone risk in advance.
In addition, visual safety materials can also aid in training workers on safety and, consequently, help them identify site-specific hazards and respond to them effectively.

Item A data-driven methodology for prioritizing traffic signal retiming operations (2018-12)
Dunn, Michael Robert; Machemehl, Randy B.
Signal retiming is one of the chief responsibilities of municipal transportation agencies and is an important means of reducing congestion and improving transportation quality and reliability. Many agencies conduct signal retiming and adjustment in a schedule-based manner. However, a data-driven, need-based approach to prioritizing signal retiming operations could make better use of agency resources. Additionally, the growing availability of probe vehicle data has made it an increasingly popular tool for roadway performance measurement. This thesis presents a methodology for using segment-level probe-based speed data to rank the performance of traffic signal corridors for retiming purposes. The methodology is then demonstrated in an analysis of 79 traffic signal corridors maintained by the City of Austin, Texas. The analysis considers 15-minute speed records for all weekdays in September 2016 and September 2017 to compute metrics and rank corridors based on their relative performance across time periods. The results show that the ranking methodology compares corridors equitably despite differences in road length, functional class, and traffic signal density. Additionally, the results indicate that the corridors prioritized by the ranking methodology represent a much greater potential for improving travel time than the corridors selected under the schedule-based approach. This methodology is then packaged into a web-based tool for integration into agency decision-making.
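The abstract above does not give the thesis's actual metrics, but its central idea, comparing corridors of different lengths equitably through speed ratios rather than raw speeds or travel times, can be sketched roughly as follows. The corridor names, free-flow speeds, and the travel-time-index formulation here are illustrative assumptions, not the thesis's formulation:

```python
def rank_corridors(speed_records, free_flow):
    """Rank signal corridors by average travel-time index (TTI), worst first.

    speed_records: {corridor: list of 15-minute speed observations, mph}
    free_flow:     {corridor: free-flow speed, mph}
    Using a speed *ratio* (free-flow / observed) keeps the comparison
    independent of corridor length and functional class.
    """
    scores = {}
    for corridor, speeds in speed_records.items():
        tti = [free_flow[corridor] / s for s in speeds if s > 0]
        scores[corridor] = sum(tti) / len(tti)
    # Higher average TTI = worse performance = higher retiming priority
    return sorted(scores, key=scores.get, reverse=True)
```

A corridor running at half its free-flow speed would thus outrank one running near free flow, regardless of their lengths.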
Finally, consideration is given to how this methodology might be used to identify candidate corridors for implementing adaptive signal control techniques.

Item A decision-based approach to establish non-informative prior sample space for decision-making in geotechnical applications (2022-12-21)
Feng, Kai (Ph.D. in civil engineering); Gilbert, Robert B. (Robert Bruce), 1965-; Lake, Larry W.; Rathje, Ellen M.; Nadim, Farrokh; Boyles, Stephen
Bayes' theorem is widely adopted for risk-informed decision-making in natural hazards (which often have limited data), but prior sample spaces based on existing methods may lead to inconsistent, irrational, and indefensible results. Therefore, Decision Entropy Theory (DET) is under development to improve the assessment of small probabilities when limited information is available, by providing a non-informative prior sample space to support Bayesian decision-making. The key idea in establishing a non-informative prior sample space with DET is that the value of new information should be as uncertain as possible, i.e., the entropy of the new information is maximized. The mathematical formulation includes prior decision analysis, which maximizes the relative entropy of the value of perfect information, and pre-posterior decision analysis, which maximizes the relative entropy of the value of imperfect information given each value of perfect information. The goals of this research are to (1) apply the theory to simple problems to demonstrate and study its rigorous implementation, evaluate possible approximations to reduce the computational effort required to implement it rigorously, and develop insight into the results; (2) propose and characterize likelihood functions to represent subjective judgment for small-probability events in the decision analysis; and (3) demonstrate the application of the theory to real-world case histories.
From this research, the following conclusions are drawn: (1) results of illustrative decision analysis examples show that the non-informative prior probabilities obtained from DET are sensible and address concerns that have been raised about other approaches to establishing non-informative prior probabilities that do not consider their impact on decision-making; moreover, the DET-based non-informative prior is invariant to transformations of uncertain variables, as it depends on the decisions rather than on how the states of nature are defined; (2) an approximation to the rigorous DET reduces the computational effort considerably (by many orders of magnitude) and provides reasonable results for the prior decision and the value of perfect information, but is less able to approximate the value of imperfect information; (3) the likelihood functions proposed for fractional occurrence models with the Binomial, Poisson, and Multinomial distributions have a maximum at the estimated fraction of occurrences and a Fisher information quantity that is inversely proportional to the estimated fraction and proportional to the length of the record used to estimate it; and (4) the non-informative prior probabilities obtained with DET for the dam case history provide useful insight into the potential impacts of not making assumptions beyond what is actually known. When uncertainty in the frequency of overtopping and in the chance of dam failure given overtopping (fragility) is included, the decision to rehabilitate the dam is justified at a cost of dam breach more than 100 times smaller than when this uncertainty is neglected, and more than 10 times smaller than when uncertainty in the hazard but not the fragility is neglected. In addition, the maximum value of obtaining additional information about the hazard frequency and fragility is 35% of the cost of rehabilitation.
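For the Binomial occurrence model, the stated proportionalities can be checked against the standard Fisher-information result (a textbook identity, not a formula quoted from the dissertation). For a record of $n$ periods containing $k$ occurrences:

```latex
L(p) = \binom{n}{k}\, p^{k} (1-p)^{n-k},
\qquad
\hat{p} = \arg\max_{p} L(p) = \frac{k}{n},
\qquad
I(\hat{p}) = -\,\mathbb{E}\!\left[\frac{\partial^{2} \ln L}{\partial p^{2}}\right]
           = \frac{n}{\hat{p}\,(1-\hat{p})}
           \;\approx\; \frac{n}{\hat{p}}
\quad \text{for small } \hat{p}.
```

The likelihood peaks at the estimated fraction $\hat{p}$, and the information is proportional to the record length $n$ and, for small estimated fractions, inversely proportional to $\hat{p}$, consistent with conclusion (3) above.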
The theory will be advanced in the future by developing more efficient algorithms that reduce the time and space complexity of the numerical implementation of DET, and by applying it to more complicated and realistic problems.

Item A framework for evaluating energy embedded in the United States' food system, including trade-offs between refrigeration and food waste (2019-10-10)
Birney, Catherine Irene; Webber, Michael E., 1971-; Allen, David T.; Faust, Kasey M.; Lieberknecht, Katherine E.
Recent legislative proposals and publications focus on combating climate change, calling for a reduction of greenhouse gas (GHG) emissions in the United States (US) and worldwide (1-3). Research examining the relationship between energy use and the food system has found that for the last 40 years, the food system has accounted for 10-14% of all US energy use, contributing 14% of national CO₂ emissions (4-6). In 2002, the US food system required the same amount of energy as India's annual consumption (6). Because the food system is resource intensive, the environmental impact of our consumption habits is worth exploring. In particular, the role refrigeration plays in the food system merits attention because refrigeration is energy intensive but helps avoid food waste. This body of work assesses the environmental impacts of the average American's diet and food loss and waste (FLW) habits through an analysis of energy, water, land, and fertilizer requirements (inputs) and GHG emissions (outputs). Existing datasets were synthesized to determine the ramifications of the typical American adult's food habits, as well as the environmental impact of shifting diets to meet US Department of Agriculture (USDA) dietary guideline recommendations.
Results for 2010 indicate that FLW accounted for 35% of energy use, 34% of blue water use (groundwater and surface water), 34% of GHG emissions, 31% of land use, and 35% of fertilizer use related to an individual's food-related resource consumption, i.e., their foodprint. A shift in consumption toward a healthier diet, combined with meeting the USDA and Environmental Protection Agency's (EPA) 2030 food loss and waste reduction goal, could increase per capita food-related energy use by 12%, decrease blue water consumption by 4%, decrease green water (water stored in soil) use by 23%, decrease GHG emissions from food production by 11%, decrease GHG emissions from landfills by 20%, decrease land use by 32%, and increase fertilizer use by 12%. Food-related energy use is expected to increase, even with the reduction in FLW, because fruits and vegetables are less calorie dense than meat products. Thus, although produce has a lower energy intensity per gram, consumers must eat larger quantities of produce than meat to reach the same caloric intake. Refrigeration infrastructure can help prevent food waste, but few studies holistically examine the energy embedded in the US food system. The food cold chain is the temperature-controlled supply chain for refrigerated and frozen foods, beginning after harvesting crops and slaughtering animals and ending at consumption. The cold chain is integral to extending the shelf life of food and preventing food waste. This study estimates the energy consumption and carbon footprint of refrigeration along the US food cold chain using an environmental input-output (EIO) analysis from the USDA. Cold chain energy consumption is calculated by food group, food chain stage, industry, and fuel type. In 2007, food refrigeration accounted for 30% of the 13.17 exajoules (EJ) that the US used to produce and move food from farm to plate. This energy use equates to 3.4% of total US energy use and 2.6% of total US GHG emissions.
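The cold-chain figures quoted above can be cross-checked with a few lines of arithmetic; the implied US total below is derived from the abstract's percentages, not a number the abstract itself reports:

```python
# Figures quoted in the abstract (US food system, 2007)
food_system_ej = 13.17        # total food-system energy use, EJ
refrigeration_share = 0.30    # cold-chain share of food-system energy
us_energy_share = 0.034       # cold-chain share of total US energy use

cold_chain_ej = refrigeration_share * food_system_ej      # ~3.95 EJ
# Back out the US total implied by the two quoted percentages (derived)
implied_us_total_ej = cold_chain_ej / us_energy_share     # ~116 EJ
print(round(cold_chain_ej, 2), round(implied_us_total_ej))
```

The two quoted shares are mutually consistent with a US primary energy use on the order of 116 EJ for 2007.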
Results from this study are extrapolated to assess energy use from 1993 to 2012; over these 20 years, refrigeration's share was consistent, accounting for 27-30% of annual food-related energy. This steady energy demand, despite significant improvements in refrigeration efficiency over the same period (7), reflects an increasing reliance on the food cold chain. The third component of this dissertation is a framework for assessing the energy intensities of food and FLW at different stages of the supply chain. This project builds on existing analysis to provide an updated assessment of the energy embedded in FLW, capturing both direct and indirect energy inputs through use of an EIO analysis. In 2007, approximately 4.38 EJ of energy were embedded in US food waste, totaling 33% of food system energy use, or 4% of US energy use. Of the energy embedded in FLW, 27% is attributable to the household stage and 20% to the processing stage. Energy embedded in packaging waste contributes only 8%, indicating that, despite recent media attention on packaging, FLW interventions might prove more impactful at the household or processing stages. Taken together, these analyses can be used as a framework for assessing the environmental impact of consumer habits and the technological trade-offs of increased energy use for refrigeration compared with the energy savings of avoided food waste.

Item A framework for optimized and automated tower crane planning in preconstruction phase (2018-05)
Ji, Yuanshen; Leite, Fernanda L.; Alves, Thais; Borcherding, John; Caldas, Carlos; Machemehl, Randy
The construction industry has been using tower cranes to assist lift tasks for less than eight decades. In this time, various types of tower cranes have been developed to suit the specific requirements of different construction job sites. However, tower cranes are less broadly used in North America, especially in the United States, than in Europe and Asia.
This is partially attributed to ineffective tower crane planning in the preconstruction phase, which lacks a formalized planning process and a commonly agreed-upon planning target. In the current state of practice, engineers conduct iterative planning manually, using mostly 2D representations of project data and equipment specifications. The chance of identifying an optimal solution is small, and site-specific constraints, such as spatial-temporal conflicts and the ever-changing clearances required when collaborating with other machinery, are easily overlooked. A formalized planning process, along with advanced modeling, simulation, and optimization techniques, provides intuitive observation of the multidimensional solution space of the tower crane planning problem. It has the potential to automate and advance the planning of machinery used on building construction projects. This dissertation research enhances tower crane planning in the preconstruction phase using parametric modeling, visualization, four-dimensional simulation, rule-based checking, and optimization algorithms. The overall goal of this study is to create a framework that allows engineers and researchers to formalize site-specific constraints on tower crane plans and to automate the analysis, evaluation, and visualization of multiple alternative plans. To identify the optimal solution, an optimization formulation using mixed-integer programming was introduced. An application of the framework is presented in real-world scenarios, and the results demonstrate that the effectiveness and efficiency of tower crane planning in the preconstruction phase can be improved and that a broad range of alternative plans can be quantitatively assessed for communication, training, or continuous improvement. The main contribution of this study is the introduction of a machinery equipment plan assessment framework.
This framework incorporates project data (e.g., schedule, site layout, lift demands) and a specialized tower crane planning model to visualize, identify, and analyze the obstructions in alternative plans with respect to spatial, capacity, and safety constraints.

Item A framework for processing connected vehicle data in transportation planning applications (2016-12)
Deering, Amanda Marie; Bhat, Chandra R. (Chandrasekhar R.), 1964-
This thesis presents a framework for processing connected vehicle data into a format that is practical for implementation in the transportation planning field. Whereas prior research on connected vehicles has used theoretical models or small data samples for analysis, this study uses the largest public connected vehicle dataset currently available: the Sample Data Environment from the Safety Pilot Model Deployment project in Ann Arbor, Michigan. This dataset includes basic safety messages and driving data for 2,800 vehicles over two months. An algorithm for processing basic safety message data into a trip-level dataset is presented. The thesis also includes a process for spatially aggregating trips into origin and destination zones using a hexagonal grid. These processes are implemented through a combination of open-source tools, including Hadoop and PostgreSQL. Excerpts from the processed data are provided, along with sample analysis applications for the trip and spatial data. Recommendations and guidance are provided on handling the details of such an immense dataset.
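The hexagonal-grid aggregation mentioned in the abstract can be approximated with standard axial hex-grid arithmetic. The cell size, pointy-top orientation, and projected x/y coordinates below are illustrative assumptions, not the thesis's actual parameters or implementation:

```python
import math
from collections import Counter

def point_to_hex(x, y, size):
    """Map a projected (x, y) point to an axial hex-cell index (pointy-top grid)."""
    q = (math.sqrt(3) / 3 * x - y / 3) / size
    r = (2 * y / 3) / size
    # Round fractional axial coordinates to the nearest hex,
    # preserving the cube-coordinate constraint q + r + s = 0
    s = -q - r
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    if dq > dr and dq > ds:
        rq = -rr - rs
    elif dr > ds:
        rr = -rq - rs
    return (rq, rr)

def aggregate_origins(points, size):
    """Count trip origins per hexagonal zone."""
    return Counter(point_to_hex(x, y, size) for x, y in points)
```

Binning both trip ends this way yields origin-destination zone pairs that can be tabulated like any conventional O-D matrix.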
Since similar vehicle-to-vehicle communications datasets are likely in the future, it is imperative to develop methods to process and analyze this rich data effectively.

Item A generalized procedure for determining when street maintenance activities should be allowed (2019-12-09)
Ahsan, Ahmed Samiel; Machemehl, Randy B.
Street lane closures are necessary for the completion of transportation infrastructure maintenance and rehabilitation projects, especially in a city growing as fast as Austin. Closures can be preplanned, allowing lanes to be closed when traffic demand is less than the remaining capacity of the street, to minimize travel delay. The objective of this study is to consider all available 24-hour traffic volume counts for Austin streets, compute lane and street capacities, and determine the daily hours when traffic demand could be served with a lane closed. A further objective is to use these potential closure hours to identify geographic areas where streets have the same closure times. Traffic Analysis Zones (TAZs) were chosen to depict the geographic areas, and six overlays were constructed using travel direction and street type to form sub-classes. The overlays were built in ESRI ArcGIS and later exported to ArcGIS Online to add search and interactive capabilities. It was found that the available traffic volume counts were sufficient to delineate some zones with streets of similar lane closure times; however, many streets had no count data available. This methodology could be used to expand the overlays should traffic volume counts for more streets become available.

Item A GIS-based early warning tool for pavement deterioration due to unusually heavy truck loads (2018-01-25)
Li, Tianxin, M.S. in Engineering; Machemehl, Randy B.
Transportation agency budgets cannot meet the increasing requirements of pavement maintenance.
Preventative pavement maintenance is accepted as a more cost-effective way to keep pavement at an acceptable level with less investment, compared to reactive pavement maintenance. Traffic information, especially about heavy truck trips, can help agencies determine when and how to conduct preventative pavement maintenance. Two types of unusually heavy truck trips, associated with energy extraction (oil and gas) and urban development activities, were analyzed in this study. Their equivalent single axle loads (ESALs) were then mapped to the network. In addition, existing pavement condition, measured by ride score (a common index of pavement serviceability), was incorporated into the model. A new index, ESALs divided by ride score, was introduced to reflect the pavement maintenance priority of network segments influenced by the unusually heavy truck trips. A GIS-based model was developed that automates the analysis using a Python toolbox and an online ArcGIS add-in. Results include maps and Excel tables to help agencies understand the pavement maintenance priorities of the network in terms of ESALs and current pavement condition.

Item A joint behavioral choice model of carpool formation and frequency (2023-05-04)
Verma, Vivek (M.S. in Engineering); Bhat, Chandra R. (Chandrasekhar R.), 1964-
The future of transportation is often characterized by a vision of shared mobility in which multiple individuals ride in the same vehicle together. The most prevalent form of such shared personal mobility is carpooling. Despite decades of efforts to increase carpool mode shares, the share of carpooling for most travel, and especially work travel, has decreased. There is a need for a deeper understanding of the phenomenon and the factors that influence carpool choice behavior.
To this end, this paper examines carpool choice behavior with a focus on three critical dimensions of interest: the frequency of carpooling, the choice of companion for carpooling, and the choice of platform or method for making the carpool arrangement. Using a novel dataset derived from a survey of commuters, the paper presents a simultaneous equations model of carpool frequency, companion, and formation method, with a view to investigating the carpool choice phenomenon in a more holistic behavioral framework that incorporates a multitude of critical carpool choice dimensions. Results show that individuals do not embrace carpooling with strangers and do not use formal carpool programs to make their carpool arrangements. Model results show that a host of socio-demographic, built-environment, and workplace characteristics affect all three dimensions of carpool behavior. Insights from this study can help identify policies and technological platforms that would promote carpooling among disparate population subgroups.

Item A mechanistic exploration of oil recovery via selective oil permeation (2023-04-21)
Cooper, Carolyn M.; Katz, Lynn Ellen; Kinney, Kerry A.; Seibert, Frank; Lawler, Desmond F.; Freeman, Benny D.
Oil-water separations are necessary for the reuse of oil-laden wastewater. For example, oil and gas produced water may have influent oil concentrations of up to 2,000 mg/L that must be reduced to <10–35 mg/L to meet regulatory requirements for non-industrial reuse. However, many conventional oil-water separation processes are unable to achieve these effluent concentrations. Selective oil permeation is a promising membrane-based oil-water separation approach that may be able to meet these treatment goals. The process differs from traditional membrane-based oil-water separations by permeating oil (instead of water) through a hydrophobic membrane.
Exploiting the preferential oil wetting of the membrane surface minimizes viscous fouling and generates an oil permeate stream. Previous investigations of selective oil permeation have demonstrated its ability to recover oil over extended durations. Researchers have hypothesized that mechanistic competition between coalescence and permeation controls oil recovery, results in the development of an oil film at the membrane surface, and leads to transport phenomena that deviate from traditional pore-flow models. However, further work is necessary to validate the existence of the hypothesized mechanisms within the process and to verify its applicability to produced water treatment. Few studies have investigated mechanistic interactions or process performance (i.e., oil flux, oil recovery, permeate quality) for oil concentrations less than 1%. Even fewer have probed the relationships between process performance, operating conditions, and water quality characteristics. Answering these outstanding questions is crucial to defining the opportunity space for selective oil permeation. This dissertation is the first set of studies to present results that (1) characterize and provide guidance for enhancing the membrane conditioning process, (2) identify how the operative mechanisms are impacted by system characteristics, operating conditions, and water quality characteristics within this lower oil concentration range, and (3) apply selective oil permeation to produced water.
Achieving the outlined objectives will both expand our understanding of the two key mechanisms underlying selective oil permeation (coalescence and permeation) and begin to define the opportunity space for oil recovery via selective oil permeation.

Item A method for developing the true stress-strain relationship for structural steels based on tension coupon tests (2019-12-02)
Jones, Cliff Andrew; Engelhardt, Michael D.; Williamson, Eric B., 1968-; Helwig, Todd; Clayton, Patricia; Taleff, Eric
Predicting the uniaxial stress-strain response of ductile metals like structural steel can provide valuable insight into a broad range of engineering problems. Despite a wide body of research covering more than a century, the approach and guidance related to developing the true stress-strain relationship for ductile metals, specifically structural steels, continue to change and evolve. In particular, guidance related to accurate prediction of the onset of necking and the post-necking response remains a topic of ongoing research, and capturing these effects remains a challenge for researchers and engineers. The research presented in this dissertation was undertaken to extend the body of knowledge in this area. Particular emphasis is placed on developing a true stress-strain relationship for structural steels that is capable of capturing the onset of necking and post-necking behavior up to fracture. In addition, because standard tension coupon load-deformation data are often the only available information from which to develop such a model, the processes and guidance presented in this dissertation require only that input. Advanced experimental approaches and measurement techniques are therefore not required to leverage the guidance presented herein. This path was chosen in the hope of providing guidance broadly applicable to a wide range of problems, industries, researchers, and practicing professionals.
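For context, the standard textbook conversion from engineering to true stress and strain (valid only up to the onset of necking, and thus only a starting point for the post-necking modeling this dissertation addresses) can be sketched as follows:

```python
import math

def true_from_engineering(eng_strain, eng_stress):
    """Standard pre-necking conversion assuming constant volume and uniform strain.

    eps_true = ln(1 + eps_eng)
    sig_true = sig_eng * (1 + eps_eng)
    Valid only up to the onset of necking (maximum engineering stress);
    beyond that point the strain field localizes and these formulas break down.
    """
    eps_true = [math.log(1.0 + e) for e in eng_strain]
    sig_true = [s * (1.0 + e) for e, s in zip(eng_strain, eng_stress)]
    return eps_true, sig_true
```

Extending the curve beyond necking, which is the dissertation's focus, requires additional assumptions or rules rather than this closed-form conversion.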
This dissertation proposes a method for developing a true stress-strain relationship for structural steels that can be directly used in predictive finite element analysis (FEA) models using three-dimensional (3D) solid elements. The results of this research indicate that such a model should be able to reproduce the experimental results of the tension test quite accurately, providing validation and verification of the assumed material definition. Additionally, three derivative rules are presented. These rules were distilled from existing research and provide simple guidelines for capturing necking, maintaining computational stability and uniqueness, and prohibiting post-necking cold-drawing behavior. The rules are incorporated into the recommended process for developing the true stress-strain relationship for structural steels; however, they are also presented separately so they can easily be incorporated into alternate methods for defining such a constitutive relationship. Finally, while this research has furthered the understanding of the true stress-strain relationship of structural steels, particularly in predicting necking and post-necking behavior, there is still considerable room for additional research on this topic. For example, automation, incorporating error-minimizing techniques, and adding local, material-level, and microstructural phenomena (e.g., void formation, growth, and coalescence) each offer great potential for extending and improving the recommendations presented in this dissertation.
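The engineering-to-true conversion that underlies tension-coupon work of this kind, together with the Considere condition that marks the onset of necking, can be sketched as follows. The power-law hardening curve is a hypothetical material, not data or the specific method from the dissertation:

```python
import numpy as np

# Engineering-to-true conversion for a tension coupon (valid only up to
# the onset of necking) plus the Considere criterion locating that onset.
# The power-law hardening curve below is a hypothetical material.

def to_true(eng_strain, eng_stress):
    """Return (true strain, true stress) from engineering values:
    eps_true = ln(1 + eps_eng), sigma_true = sigma_eng * (1 + eps_eng)."""
    eng_strain = np.asarray(eng_strain, dtype=float)
    eng_stress = np.asarray(eng_stress, dtype=float)
    return np.log1p(eng_strain), eng_stress * (1.0 + eng_strain)

# Hypothetical true stress-strain curve: sigma = K * eps**n (MPa).
K, n = 800.0, 0.2
eps = np.linspace(1e-4, 0.5, 2000)
sig = K * eps**n

# Considere criterion: necking begins where d(sigma)/d(eps) falls to
# sigma. For a power-law material this occurs exactly at eps = n.
i_neck = np.argmin(np.abs(np.gradient(sig, eps) - sig))
eps_neck = eps[i_neck]
```

Beyond `eps_neck` the uniaxial conversion formulas no longer hold, which is precisely why the post-necking branch of the curve requires the kind of iterative FEA-based treatment the dissertation develops.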
Thus, while this effort has intentionally maintained a limited focus, it is the author's hope that it serves others as one more small step toward accurate prediction of the load-deformation behavior of structural steels and other ductile metals.

Item A method for estimating the inputs necessary to construct a microsimulation model using only publicly available data(2016-12) Van Hout, Alexander Joseph; Machemehl, Randy B.
Standard traffic engineering methodologies rely heavily on traffic data collected in the field for the design and planning of roadways and intersections. These data can be used to build microsimulation models, which are versatile and realistic tools for analyzing traffic scenarios. Sometimes, however, time and budget do not allow for the collection of high-quality data in the field, but answers to questions about traffic scenarios are still needed. This thesis provides a review of data typically available to the public online, as well as existing traffic engineering methodologies useful in manipulating those data. It presents an empirically derived method for estimating left-turn, thru, and right-turn counts at intersections based on tube counts on surrounding roadways and the characteristics of the intersection. It then presents an exploration of the distribution of the directionality of traffic throughout the day. Finally, it presents a method for converting tube counts on an approach to an intersection into equivalent lane volumes so that signal timings can be estimated.

Item A methodological framework for cross-asset resource allocations to support infrastructure management(2016-08) Porras-Alvarado, Juan Diego; Zhang, Zhanmin, 1962-; Machemehl, Randy; Walton, Michael; Bhasin, Amit; Gao, Lu
Resource allocation mechanisms have become a major issue for transportation agencies in the United States and around the world.
For this reason, transportation agencies are exploring alternatives to modify traditional allocation mechanisms due to budgetary challenges generated by decreasing funding and the increasing cost of preserving and operating transportation systems. Transportation asset management (TAM) practices enable agencies to change the operation and management of transportation infrastructure from the traditional concept of "public-owned" systems to more business-oriented processes. One of the main concerns with the TAM framework and its implementation is the absence of an organized process for cross-asset resource allocations. Additionally, most alternative methods for funding allocations focus on maximizing infrastructure performance under budget constraints but ignore considerations of equity or fairness. The objective of this study is to develop an innovative methodological framework for cross-asset resource allocations, yielding a data-oriented approach to enhancing infrastructure management. The allocation module comprises three resource allocation mechanisms following a top-down approach: a fair division approach based on asset performance, a performance-based multi-objective optimization, and an asset value-based multi-objective optimization. In the first mechanism, the fair division method is used to allocate resources in such a way that all parties involved believe they are receiving a fair share of the available resources based on established utility functions. Then, Collective Utility Functions (CUFs) are employed to perform the resource allocation, which results in total utility and total envy values. These values are used to conduct trade-off analyses of the different allocations based on the CUFs.
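A CUF-driven fair division of a budget between two asset groups can be sketched as below. The Nash CUF (product of party utilities), the square-root utility curves, and all numbers are illustrative stand-ins, not the study's actual formulations:

```python
import numpy as np

# Fair-division sketch: split a budget between two asset groups
# (pavements, bridges) by maximizing a Collective Utility Function,
# here the Nash CUF u1(x) * u2(budget - x), then compute envy.
# Utility shapes and the budget are illustrative assumptions.

def nash_cuf_allocation(budget, utils, steps=1000):
    """Return the split (x, budget - x) maximizing u1(x) * u2(budget - x)
    by grid search over candidate splits."""
    u1, u2 = utils
    xs = np.linspace(0.0, budget, steps + 1)
    cuf = u1(xs) * u2(budget - xs)
    x = xs[np.argmax(cuf)]
    return x, budget - x

# Diminishing-returns utilities (performance gained per dollar spent).
u_pave = lambda x: np.sqrt(x)          # pavements
u_bridge = lambda x: np.sqrt(0.5 * x)  # bridges (lower marginal return)

pave, bridge = nash_cuf_allocation(100.0, (u_pave, u_bridge))

# Envy: utility a party would gain by swapping for the other's share.
envy_pave = max(0.0, float(u_pave(bridge) - u_pave(pave)))
```

With these symmetric-in-shape utilities the Nash product is maximized at an even split and the pavement group's envy is zero; different utility curves would shift the split and generate nonzero envy values for the trade-off analysis the abstract describes.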
Under the second procedure, a multi-objective optimization formulation is employed to integrate efficiency and equity, where equity is taken into consideration by using utility and envy concepts, while efficiency is incorporated by maximizing performance. In the third mechanism, an innovative asset value methodology is integrated into the cross-asset resource allocation process, serving as a common comparative measure between assets. To demonstrate the applicability of the proposed methodological framework, a case study was conducted using two asset groups, pavements and bridges, from the roadway network of the Austin District in Texas. Results from the case study show that the proposed methodological framework has great potential as a tool to support highway agencies in performing cross-asset resource allocations at the network level.

Item A methodology to incorporate load tests into the reliability-based design of deep foundations for the serviceability limit state(2022-05-09) Alotaibi, Faisal Mohammed; Gilbert, Robert B. (Robert Bruce), 1965-
The evaluation of the serviceability performance of foundations is important in geotechnical design. This thesis provides a framework for performing reliability analysis for the serviceability design of foundations and implements it in a case study for the design of tall towers in Saudi Arabia. First, different prediction models for axial displacement, such as t-z analysis and finite element models, were examined. Then, a reliability model is proposed that captures both the epistemic and aleatory uncertainties in the predicted settlement. Furthermore, since there are few if any formulations for reliability analysis for the serviceability limit state in current LRFD design codes, the accepted level of risk of a serviceability failure is evaluated on the basis of risk and decision analysis.
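The serviceability reliability check described above, the probability that settlement exceeds a tolerable limit given uncertainty in the prediction, can be sketched with a simple Monte Carlo estimate. All numbers are illustrative assumptions, not values from the thesis:

```python
import numpy as np

# Monte Carlo sketch of a serviceability-limit-state check: probability
# that settlement exceeds a tolerable limit, with a lognormal
# multiplicative model error applied to a deterministic prediction.
# All numerical values below are illustrative assumptions.

rng = np.random.default_rng(42)

predicted_mm = 20.0   # deterministic settlement prediction
limit_mm = 40.0       # tolerable settlement for the structure
cov = 0.35            # coefficient of variation of the model error

# Lognormal multiplicative error with median 1.0 (mean of the underlying
# normal is 0), sigma chosen to match the target COV.
sigma_ln = np.sqrt(np.log(1.0 + cov**2))
settlement = predicted_mm * rng.lognormal(mean=0.0, sigma=sigma_ln,
                                          size=1_000_000)
p_failure = float(np.mean(settlement > limit_mm))
```

Narrowing `sigma_ln`, which is what a successful field proof load test accomplishes by reducing epistemic uncertainty, directly lowers `p_failure`; that is the mechanism behind the Bayesian updating step the thesis proposes.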
In addition, a methodology is proposed to use proof load tests in the field to update the probability of failure via Bayesian updating. Finally, a framework for planning a proof load test program is provided using Monte Carlo simulations. The results show the importance of the underlying assumptions in the settlement prediction models. Also, the case study shows that the probability of serviceability failure is reduced as soil stiffness increases. Moreover, the results show the significance of obtaining a load-displacement curve from the field and how it contributes to reducing the epistemic uncertainty. The importance of defining the amount of deformation that is problematic to the structure is stressed.

Item A molecular biological model describing silver nanoparticle mechanism of toxicity and associated antibiotic resistance(2018-05-04) Chambers, Bryant Allson; Kirisits, Mary Jo; Katz, Lynn E; Saleh, Navid; Hofmann, Hans; Parsek, Matthew
Control of microbial growth is key to the proper function of engineered systems and to human health. Combating biological contamination in engineered processes is complicated by the limited number of materials that both impede microbial growth and are benign with respect to human and environmental health. Silver nanoparticles (AgNPs) have emerged as a novel biocide, reducing biological fouling in consumer goods and health care materials. Their nearly ubiquitous usage is primarily due to their microbial cytotoxicity, limited human toxicity, and ability to be incorporated into a wide variety of materials. The use of AgNPs is not without challenges; microbial toxicity varies by exposure methodology, and studies have shown that AgNPs have the potential to disrupt engineered biological processes either as nanoparticles or through dissolution to aqueous silver (Ag(aq)).
The use of AgNPs is further complicated by their mechanisms of action; their biological targets overlap significantly with the targets of antibiotics. Thus, antibiotic resistance might result from AgNP exposure through the processes of co- and cross-resistance, in which one chemical selects for microbial resistance to a second (unrelated) chemical. In this work, the impact of AgNP aggregation and dissolution on toxicity to Escherichia coli was examined. Data indicate that conditions promoting a high fractal dimension produce greater toxicity and induce an oxidative stress response. Subsequent studies on the opportunistic human pathogen Pseudomonas aeruginosa were directed at elucidating the mechanisms of action of AgNPs and the microbial response. Transcriptomic and proteomic studies focused on defining a model of bacterial-AgNP interaction and isolated mechanisms of AgNP toxicity. Further, these data provided the first evidence of AgNP exposure resulting in antibiotic resistance through the expression of multidrug efflux pumps. Transcriptomic data indicated that the stress response systems activated by AgNP exposure were localized to the periplasm, while those activated by Ag(aq) exposure were localized to the cytoplasm, supporting a surface-attachment model of bacterial-AgNP interaction distinct from that of Ag(aq). Transcriptomic studies revealed that key antibiotic resistance systems, including mexGHI and mexPQ, were stimulated by AgNP exposure. P. aeruginosa cells that were pre-exposed to a sublethal concentration of AgNPs demonstrated increased resistance in subsequent antibiotic challenges, demonstrating that antibiotic resistance can be induced by AgNPs.
The findings of this study are an important contribution to our understanding of the impacts of co- and cross-resistance induced by AgNP exposure and will ultimately help inform decisions related to human and environmental health.

Item A multiplex network approach to road flooding prediction(2019-07-08) Deo, Isha Padmakar; Passalacqua, Paola
Urban flooding poses risks to life, property, and health every year in the United States. Although accurate models of road, channel, and storm sewer dynamics exist, they are often not deployable at a time scale short enough for prediction and emergency response. Using a multiplex network model of the road, channel, and storm sewer networks and the Height Above Nearest Drainage (HAND) method, urban flood prediction can be addressed from a network interaction perspective. By redefining nodal activity during a storm event, critical nodes of the network can be identified at larger scales using network betweenness centrality. Here, the multiplex network is constructed for the University of Texas campus and modeled through the severe Memorial Day 2015 storms. Critical areas of roadway flooding are identified throughout the UT campus, corresponding to hotspots of high active betweenness centrality throughout the storm. The multiplex network approach serves as an emergency-response-oriented prediction tool for urban flooding.

Item A new approach to measuring pozzolanicity of Supplementary Cementitious Materials using existing ASTM standards(2020-05-08) Jang, Jae Kyeong; Juenger, Maria C.G.
Supplementary cementitious materials (SCMs) improve the long-term strength and durability of concrete systems through pozzolanic and/or hydraulic reactions that form additional calcium silicate hydrate (C-S-H) phases. SCMs come in various shapes and forms, and ASTM C618 provides a standard specification that covers coal fly ash and raw or calcined natural pozzolans.
However, the two main criteria outlined by the standard, the sum of oxides and the strength activity index (SAI), are not sufficient indicators of the pozzolanicity of materials, and existing test methods for measuring reactivity or pozzolanicity have yet to be standardized. Because of these problems, the accelerated mortar bar test (AMBT) outlined by ASTM C1567 and modified SAI testing were implemented in tandem to assess the pozzolanicity of materials. Known inert materials and pozzolanic materials that qualify as Class N pozzolans and Class F fly ash were tested per ASTM C1567 to find replacement levels that suppress ASR expansion below 0.10%. The same materials were then tested at the same replacement levels for modified SAI with a fixed water-to-cementitious materials ratio (w/cm) using cylindrical specimens. The data from the two test methods were compared and compiled to assess pozzolanicity. Materials that suppressed ASR expansion below 0.10% and passed modified SAI testing at over 75% of the control were classified as pozzolanic. The proposed method successfully screened out inert materials that would otherwise qualify as Class N pozzolans and successfully identified pozzolanic materials.
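The two-part screen described above reduces to a simple pass/fail check. In the sketch below, only the 0.10% expansion limit and the 75%-of-control SAI threshold come from the abstract; the numerical test results are hypothetical:

```python
# Two-part pozzolanicity screen: a candidate SCM passes if (1) its AMBT
# expansion at the chosen replacement level is below 0.10% and (2) its
# modified strength activity index is at least 75% of the control.
# The thresholds follow the abstract; the test results are hypothetical.

def is_pozzolanic(ambt_expansion_pct, mix_strength_mpa,
                  control_strength_mpa):
    """Return True when both screening criteria are satisfied."""
    sai_pct = 100.0 * mix_strength_mpa / control_strength_mpa
    return ambt_expansion_pct < 0.10 and sai_pct >= 75.0

# Hypothetical Class F fly ash at its replacement level vs. an inert
# filler, both compared against the same control mixture (42 MPa).
fly_ash = is_pozzolanic(0.06, 38.0, 42.0)  # low expansion, SAI ~90%
inert = is_pozzolanic(0.25, 27.0, 42.0)    # high expansion, SAI ~64%
```

Requiring both criteria is the point of the tandem approach: an inert filler can occasionally pass one test through dilution effects, but rarely both.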