Copyright By Chad Maclay Phelan 2009     The Thesis committee for Chad Maclay Phelan Certifies that this is the approved version of the following thesis: City and Regional Planning Software in Context: A Rating Framework for Planning Support Systems APPROVED BY SUPERVISING COMMITTEE: Supervisor: ________________________________________ Steven Moore __________________________________________ Robert Paterson     City and Regional Planning Software in Context: A Rating Framework for Planning Support Systems by Chad Maclay Phelan, B.A. Thesis Presented to the Faculty of the Graduate School of the University of Texas at Austin in Partial Fulfillment of the Requirements for the Degrees of Master of Science in Community and Regional Planning and Master of Science in Sustainable Design The University of Texas at Austin December 2009     To Dana, who made it possible. v    Acknowledgements I would like to thank my readers, Steven Moore and Robert Paterson, as well as the other University of Texas School of Architecture faculty who have provided guidance in this effort. I would also like to thank Robbie Botto at the City of Austin, Amy Anderson at Placeways LLC, and Kiersten Madden at the Mission- Aransas National Estuarine Research Reserve, who provided invaluable knowledge about the design and implementation of planning support systems. vi    City and Regional Planning Software in Context: A Rating Framework for Planning Support Systems by Chad Maclay Phelan, MSCRP; MSSD The University of Texas at Austin, 2009 SUPERVISOR: Steven Moore The difficulty of projecting ecological impacts, as well as the increasing familiarity of planners with Geographical Information Systems and other software technology has led to an increase in the use of Planning Support Systems (PSS) by city and regional planners. Due to their newness and rapid development, there is, of yet, a lack of a comprehensive peer-reviewed literature on the design and implementation of these systems. This thesis proposes and applies a rating framework for PSS in order to facilitate accessibility to and critical investigation of PSS. The rating framework’s criteria are based on the “seven sins” of comprehensive land use models identified by Douglass Lee’s 1973 article “Requiem for Large-Scale Models.” vii    Table of Contents Chapter 1- Introduction……………………………………………………1 Chapter 2- Methodology…………………………………………………...7 Chapter 3- Literature Review……………………………………………14 Chapter 4- Rating Framework Design…………………………………..62 Chapter 5- Rating Framework…………………………………………..72 Chapter 6- Findings/ Conclusion……………………………………….108 Bibliography……………………………………………………………..114 Vita……………………………………………………………………….118 viii    List of Figures Figure 1: Research Design .................................................................................................. 8 Figure 2: 3 major urban models influenced by the Chicago School ................................. 21 Figure 3: Structure of the Lowry model ........................................................................... 25 Figure 4: Structure of the California Urban Futures model .............................................. 29 Figure 5: Rating framework diagram ................................................................................ 64 Figure 6: Numerical Rating Matrix ................................................................................... 66 Figure 7: Applied Planning Support Systems mentioned in chosen sources .................... 
67 Figure 8: Numerical Rating Matrix ................................................................................... 72 Figure 9: Map of What If? Build-Out Analysis Output .................................................... 89 Figure 10: Land-Sea Toolkit Diagram .............................................................................. 99 Figure 11: Land-Sea Toolkit Process Diagram ............................................................... 100 1    Chapter 1- Introduction The dominant definition of sustainable development is the one offered in the Brundtland Commision’s Our Common Future report of 1987, that it is development that "…meets the needs of the present without compromising the ability of future generations to meet their own needs.”1 The report goes on to discuss the definition of these needs and the environmental limitations on achieving those needs. Urban expansion contributes to environmental degradation, but thought is not always given to what those impacts will be before they happen. Our Common Future notes that “Cleaning up after the event [of environmental degradation] is an expensive solution. Hence all countries need to anticipate and prevent these pollution problems.”2 Land use policies have a large role to play in achieving this prevention. The commission argues that “altering economic and land use patterns seems to be the best long-term approach to ensuring the survival of wild species and their ecosystems.”3 Our Common Future was very influential and the concept of sustainable development has become widespread, particularly among land use planners. By way of example, www.planning.org, the homepage of the American Planning Association, visited on 11/7/09, had prominent references on its welcoming page to sustainable urbanism, sustainable communities, and sustainable urban districts. However, there have been practical challenges in applying the concept of sustainable development to planning                                                              1 World Commission on Environment and Development, Our Common Future (United Nations, June 1987), chap. 2, http://www.un-documents.net/wced-ocf.htm. 2 Ibid., chap. 2.III.5 #64. 3 Ibid., chap. 6.V #39. 2    practice. Urban planning concerns itself with the actions of large amounts of people over long periods of time. The nature of planning problems has been characterized by Rittel and Webber as “ill and variously defined; often feature a lack of consensus regarding their causes; lack obvious solutions- or even agreement on criteria for determining when a solution has been achieved; and have numerous and unfathomable links to other problems.”4 This description surely applies to applied sustainable development, as the relationship between development and environmental impact can be complex. In order to get a firmer grasp on the connections between development and environmental limitations, there has been increased interest in software planning tools that can “project urban futures and/or estimate impacts.”5 These tools are known by a number of different names, but the most common is planning support systems, or PSS. While they are growing in popularity, however, there is not an extensive literature on how they should be evaluated by an organization that is interested in using them. 
This is unfortunate, because while these tools can be effectively used by organizations, unfamiliarity can lead an organization to either refrain from using them, or rush into using them without fully anticipating the time and resources required. This thesis will present a rating framework for evaluating planning support systems. In order to formulate the rating system I relied heavily on a paper written by Douglass Lee in 1973 titled “Requiem for Large Scale Models” which was published in the Journal of the American Planning Association. Lee’s article addressed the prevalence                                                              4 Michael P. Brooks, Planning Theory for Practitioners (Chicago, Illinois: Planners Press, American Planning Association, 2002), 12. 5 Richard K. Brail, Planning Support Systems for Cities and Regions (Cambridge, Massachusetts: Lincoln Institute of Land Policy, 2008), i. 3    of computer software models used to forecast urban growth and condemned their use based largely on seven sins that he felt they committed: hypercomprehensiveness, complicatedness, hungriness, grossness, mechanicalness, expensiveness, and wrongheadedness. Douglass Lee set the tone of his May 1973 article with a quote and image from Alice in Wonderland, in which the caterpillar states that “It is wrong from beginning to end…” The it that Lee is using the caterpillar to condemn in his introduction are “large scale urban models,” mathematical models of cities that had become popular in the planning profession beginning in the early 1960s as computers became available for use in carrying out analyses of planning problems. Cities, states, federal agencies, and academic institutions had all become intrigued by the prospect of computer programs that could predict the future of urban development. Planners, computer scientists, physicists, engineers and researchers all began attempting to design models that would do just that. Throughout the 1960s major research projects were begun in Philadelphia, San Francisco, Boston and many other cities to apply these models to urban planning. Model researchers sought to distill aggregate human spatial activity into equations that the new computers could process. However, by the end of the decade, few of the models were achieving meaningful results. While the researchers pressed on to improve the models, policy makers and planners were becoming restless. This was the atmosphere into which Lee’s article “Requiem for Large Scale Models” landed in 1973. In a metaphor-laden article, Lee condemns the models in no uncertain terms. He says that “none of the goals held out for large-scale models have been achieved, and there 4    is little reason to expect anything different in the future.” Planners and policy makers largely agreed with him. This was a historical period in which planners were rethinking the notion of “rational planning,” which said that planning “solutions” could be obtained through logical reasoning. Pivotal events, such as the defeat of Robert Moses’ lower Manhattan expressway, and the demolition of the Pruitt-Igoe housing project, in 1971 and 1972 respectively, had sent shockwaves through the profession. Planners, and the public at large, was having second thoughts about the methods as well as how much power was given to visionary plans for the city. 
Sir Peter Hall characterized the changes in planning occurring at this time thusly: …in 1955, the typical newly graduated planner was at the drawing board, producing a diagram of desired land uses; in 1965, she or he was analyzing computer output of traffic patterns; in 1975, the same person was talking late into the night with community groups, in the attempt to organize against hostile forces in the world outside.6 Planners were moving towards a more participatory and political model of planning, and it was not clear how the large scale urban models fit into that model. As a result, while research on land use models has continued, there have been few practical applications developed since the 1970s. As Wegener notes: “nowhere in the world have large-scale urban models become a routine ingredient of metropolitan plan making.”7 Modeling and planning have been evolving independently, but theoretical developments in other fields such as geography, business and computer science are making a new integration seem more likely. The emphasis on sustainable development                                                              6 Peter Hall, Cities of Tomorrow, 3rd ed. (Malden, MA: Blackwell Publishing, 2007), 366 7 Michael Wegener, “Operational Urban Models: State of the Art.,” Journal of the American Planning Association 60, no. 1 (1994): 17. 5    and minimizing environmental impacts mentioned above also adds more urgency to the effort. A new form of computer analytical tools appeared in the 1990s known as Planning Support Systems, or PSS. These tools are descendents of Large Scale Urban Models, (LSUMs), but have been designed in response to the changes that have occurred within the planning profession since the 1970s. The designers of these tools specifically respond to the criticisms that Douglass Lee leveled at LSUMs. They are not solely land use models, but seek to respond to the perceived needs of planners in a post-positivist context. As opposed to LSUMs, PSS focus on communication, transparency and context as well as analytical prowess. They seek to do what Lee said that LSUMs could not do: respond to a changed planning context. Do they succeed? This is a difficult question to answer. Due to their novelty and variety, there have not yet been criteria advanced for how to address this question. My hypothesis is that Lee’s 1973 critiques can be of value in answering this question. Although PSS serve many needs for planners, as Terry Moore states, “forecasting - predicting the future - is at the core of PSS.”8 If PSS are still serving this function for planners, then they should still be able to meet Lee’s criteria. These criteria were acknowledged by planners, if not all modelers, as being crucial points. Even if they do not always agree with them as being valid points, it is telling that many articles                                                              8 Terry Moore, “Planning Support Systems: What Are Practicing Planners Looking For?,” in Planning Support Systems for Cities and Regions, ed. R. K. Brail (Cambridge, Massachusetts: Lincoln Institute of Land Policy, 2008), 235. 6    concerning modeling within the discipline of planning begin with a reference to Lee and his impact on the development of urban modeling.9 However, PSS are also different than LSUMs, and there are surely aspects of the new software that Lee was not able to address in 1973. 
If Lee’s critiques are to be used, how should they be adapted to the changes that have occurred within urban cultures and planning software? I have used a literature review to interpret Lee’s criticisms in light of subsequent developments in planning theory and practice. Using this literature review as a foundation, I have developed a rating system to be used in evaluating PSS. After developing the rating framework, I applied it to two different PSS, both individually and in the context of applied case studies. These applied cases studies are included to give a context for the PSS, so that organizations can evaluate how well the software is able to adapt to applied planning uses. If potential users are better able to gauge the usefulness of the software, they will be able to make better decisions about which software to purchase, and in the long term, their buying preferences will help to shape the development of PSS. There are certainly opportunities for more direct involvement by planners in the development of PSS and other planning software, and I see this rating framework as one step towards making that possible by making the software easier to evaluate and understand.                                                              9 Michael Batty, “A Chronicle of Scientific Planning: The Anglo-American Modeling Experience,” Journal of the American Planning Association 60, no. 1 (1994); Richard K Brail and Richard Klosterman, eds., Planning Support Systems: Integrating geographic information systems, models, and visualization tools (ESRI Press, 2001). 7    Chapter 2- Methodology Ontological/Epistemological Assumptions: I am using a naturalistic frame of inquiry to guide my research. Naturalist inquiry, sometimes known as qualitative, phenomenological, hermeneutic or interpretive/constructivist inquiry, states that there are “multiple, socially constructed realities.”10 The epistemological perspective of naturalism is that “it is neither possible nor necessarily desirable for research to establish a value-free objectivity.”11 In my research, my goal is not objectivity, but utility. This framework is one that fits particularly well with the field of planning. As mentioned in the introduction, the problems that the field of planning investigates are often designated as “wicked problems” that are “ill and variously defined; often feature a lack of consensus regarding their causes; lack obvious solutions or even agreement on criteria for determining when a solution has been achieved; and have numerous and unfathomable links to other problems.”12 With these sorts of problems, the pursuit of objectivity becomes not only difficult to achieve, but misleading as well. Rittel and Webber, among others, argue that “Planning is a component of politics. There is no escaping that truism.”13 In this paper I confront the problem of applying quantitative methods within a naturalistic framework.                                                              10 Linda Groat and David Wang, Architectural Research Methods (Wiley, 2002), 33. 11 Ibid. 12 Brooks, Planning Theory for Practitioners, 12. 13 H. W.J Rittel and M. M Webber, “Dilemmas in a General Theory of Planning,” Policy sciences 4, no. 2 (1973): 169. 8    Methods In this paper I will use a mixed methods approach to propose and test a model rating system for the evaluation of planning support systems. I will use the interpretive- historical method to inform the creation of the rating system. 
This rating system will be applied to two current PSS as well as two case studies where these PSS have been applied. Based on my experience of applying the rating system in this way, I will offer conclusions and recommendations for further research. This research process is illustrated in figure 1, and will be explained in more detail below. Figure 1: Research Design The numbers in the diagram illustrate the steps in my research design. In the section below I will discuss the major stages of the research design. 9    1. Literature Review The literature review uses the method of interpretive-historical research, which Groat and Wang describe as “investigations into social-physical phenomena within complex contexts, with a view toward explaining those phenomena in narrative form and in a holistic fashion.”14 This method is a good fit because the development of PSS definitely has occurred within a complex context. However, while I do attempt to portray a holistic description of its history, the method I use is to separate the general history into seven histories which each tell a facet of the history. My reasons for this are twofold: first, this method helps break down the complex context of urban modeling and PSS, and second, this method serves my purpose of exploring the relevance of Douglass Lee’s seven sins to formulating a rating system for PSS. By breaking the history into seven parts, each subhistory or narrative is directly relevant to that sin. Through these seven histories, I hope to portray a holistic picture of the complex context of the development of PSS. These seven histories are represented by the seven rectangles to the left of the representation of myself in figure 1. This illustrates their role as a filter through which the flow of history is interpreted. Lee’s 1973 paper is portrayed in the diagram as a circle that is separate from the arrows representing history because it stands apart in terms of the histories that I am describing. The seven rectangles within the circle represent the seven sins that Lee identified.                                                              14 Groat and Wang, Architectural Research Methods, 136. 10    2. Rating Framework Development and Testing Using the foundation that was established by the literature review, I create a rating system utilizing Lee’s seven sins. I will describe the rationale and schematic design of the rating system here, but a more detailed description and the rating matrix can be found in Chapter 4. The seven sins become criteria that the rating system uses in three different ways: numerical ratings, short descriptions, and case study descriptions. These three different approaches are used as a form of triangulation. Todd Jick identifies triangulation as “a vehicle for cross validation when two or more distinct methods are found to be congruent and yield comparable data.”15 Triangulation can improve the range of findings as well as their validity, in both quantitative and qualitative methods. In designing the multiple methods applied in the rating system, I sought to find a combination of methods where “the weaknesses in each single method will be compensated by the counter-balancing strengths of another” as Jick suggests. Numerical ratings are effective because they provide a quick method of communicating a rating, and also can be weighted through mathematical operations. Their weakness is that they obscure the context in which the rating was formulated. 
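To make concrete what is meant by weighting the numerical ratings through mathematical operations, the sketch below shows one possible way an organization could combine per-criterion scores into a single weighted figure. The criterion names are Lee's seven sins as used throughout this thesis, but the example scores, weights, and the simple weighted-average formula are hypothetical illustrations only; they are not the rating matrix presented in Chapter 4.

# A minimal sketch of weighting numerical ratings on Lee's seven criteria.
# The criterion names come from the thesis; the example scores, weights, and
# the weighted-average formula are hypothetical, not the Chapter 4 matrix.

CRITERIA = [
    "hypercomprehensiveness", "complicatedness", "hungriness", "grossness",
    "mechanicalness", "expensiveness", "wrongheadedness",
]

def weighted_score(ratings, weights):
    """Combine per-criterion ratings (for example, 1 to 5) into one weighted average."""
    total_weight = sum(weights[c] for c in CRITERIA)
    return sum(ratings[c] * weights[c] for c in CRITERIA) / total_weight

# Example: an organization most concerned with cost and transparency might
# up-weight expensiveness and complicatedness relative to the other criteria.
ratings = {c: 3 for c in CRITERIA}   # placeholder scores for one PSS
ratings["expensiveness"] = 4
weights = {c: 1.0 for c in CRITERIA}
weights["expensiveness"] = 2.0
weights["complicatedness"] = 2.0

print(round(weighted_score(ratings, weights), 2))

The point of the sketch is only that the numerical form of the ratings makes operations of this kind possible; how an organization chooses to weight the criteria remains a judgment call shaped by its own context.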
Flyvbjerg, among others, argues for the importance of “context-dependent” research. He says that “[c]ontext is central to understanding what social science is and can be.”16 In the proposed rating system, this context is provided, in part, by the short                                                              15 Todd D. Jick, “Mixing Qualitative and Quantitative Methods: Triangulation in Action,” Administrative Science Quarterly 24, no. 4 (December 1979): 602. 16 Bent Flyvbjerg, Making Social Science Matter: Why Social Inquiry Fails and How it Can Succeed Again (Cambridge University Press, 2001), 9. 11    descriptions, which elaborate on the rationale behind the numerical ranking that was generated while exploring and researching the software design. While these descriptions offer contextual explanations for the basis used in the numerical rankings, PSS can only be truly understood in the context of their applied use, or praxis. Flyvbjerg argues that [c]ases exist in context. What has been called the ‘primacy of context’ follows from the empirical fact that in the history of science, human action has shown itself to be irreducible to predefined elements and rules unconnected to interpretation. Therefore, it has been impossible to derive praxis from first principles and theory. Praxis has always been contingent on context-dependent judgment…17 While any individual or organization’s use of PSS will be affected by their own unique context, the case study portion of the rating framework provides a better sense of the software’s ability to respond to the complications of application and how it affected the outcomes of the project that it was used in. The design goal of the ratings framework is that the combination of these three approaches to applying Lee’s criticisms creates a review process that is accessible, holistic, and useful to potential users of PSS. In figure 1, the rectangles to the right of the representation of myself represent the ratings system being applied to the two PSS I investigate in this paper: What If? and CommunityViz, at the short descriptions phase (the boxes) and the applied case study phase (boxes with rough circle). The order in which I carried out these investigations is described via the order of the letters, a-d. The short descriptions were carried out before the case studies because it is useful to have a solid understanding of the software design in order to understand and appreciate the case study. I am using the term “case study”                                                              17 Ibid., 136. 12    based on the definition offered by Groat and Wang: an “empirical inquiry that investigates a contemporary phenomenon within its real-life context, especially when the boundaries between phenomenon and context are not clearly evident.” (p. 346). The phenomenon in this case would be the applied software program, but the point of the case study is to evaluate its interaction with its context. PSS are highly configurable, and can be used in different ways, so it is imperative to evaluate them in this way. Using a classification defined by Robert Stake, my case studies are instrumental case studies, which he defines as cases that are “examined mainly to provide insight into an issue or to redraw a generalization.”18 The issue or generalization here are the ratings criteria. The case studies are used to refine the evaluations done in the numerical and short descriptions phases of rating. 
The application of the rating framework to the PSS and case studies is meant as an example of the use of the rating system. It has been designed to be flexible enough to be used in slightly different ways based on the needs of the organization or individual that is applying it. I will describe this in more detail in Chapter 4. 3. Conclusion/ Recommendations for further study After completing the test run of the rating system on the software and case studies I reflect on the strengths and weaknesses of the proposed rating system. My conclusions and recommendations are based on my own experience with applying the rating framework and are thus subjective and should be taken as an example of the kind of                                                              18 Robert Stake, “Qualitative Case Studies,” in The SAGE Handbook of Qualitative Research, ed. Norman K. Denzin and Dr. Yvonna Lincoln, 3rd ed. (Sage Publications, Inc, 2005), 445. 13    analysis that is possible with the proposed rating system. Further testing would be required in order to derive any sort of assessment of the effectiveness of the rating framework. My hope is that in the future, myself or others will do such a study using a sufficiently large sample of potential PSS users and different software programs to get more definitive results concerning its effectiveness. My proposal of the rating system and experience with it in this thesis is depicted by the seven rectangles underneath the representation of myself. I envision the audience of this rating system as being academics, decision makers, planners, as well as the general public. As stated above, it is my hope that readers will conduct further research and/or apply the rating framework in useful ways, and in doing so, propose alterations to the ratings framework. The goal is also that through use or critical analysis of the ratings framework, readers will be able to make more productive additions to the nexus between planning practice and theory. While theory and practice are often characterized as separate areas of expertise, each productively informs the other in the best research and applications. These hoped-for feedback responses are depicted as arrows from the audience back to the rating framework as well as to theory and practice. The arrows between theory and practice indicate the important connection between the two. 14    Chapter 3- Literature Review Using an interpretive-historical approach, this Chapter evaluates the history of the theoretical trends that contribute to the development of planning support systems. The Chapter starts with a description of the content and historical context of “Requiem for Large Scale Models”, written by Douglass Lee. Following this are seven histories that are seen through the perspective of one of Lee’s seven “sins of large scale urban models”: hypercomprehensiveness, wrongheadedness, complicatedness, mechanicalness, hungriness, grossness, and expensiveness. Each sub history begins with a quote taken from “Requiem” that summarizes Lee’s description of that sin. Lee’s Article In choosing to use “Requiem” as a template for exploring the history of the development of PSS, I acknowledge that it is not a neutral framework. The article, while almost universally recognized as being influential, is not viewed as neutral. Many have criticized it for contributing to a bitter division between modelers and land use planners that has not been productive for either field. 
Lee himself, writing in 1994, would apologize for the “argumentative, confrontational, and flamboyant” tone of the article. While this negative aspect of its legacy must be acknowledged, the criticisms it raised are still important ones and should be considered. “Requiem,” as all articles are, was an article of its time. The early 1970s were a time when the weaknesses of Large Scale Urban Models (LSUMs) were coming to be realized, but not yet in a comprehensive way. Many cities were discovering that the models were not producing the results that had been promised. Public discontent was flaring up over misguided planning efforts and legitimate urban problems that were not being addressed by the models. Lee was convinced that the models had become a waste of money and time, when there were many pressing planning problems that needed to be addressed. As Lee said in a later article, writing “Requiem” was a way to give his students at Berkeley cover for rejecting the current direction of the models, as at the time there was little written in academic journals strongly condemning the models.19 Being a part of the planning profession, Lee felt he had no choice but to engage in a debate that was confrontational in nature. He recognized that some would misinterpret the message of the article. He states: Some planners never accepted models as legitimate activity of the field, and they will claim this paper vindicates their position. This is incorrect… It is not our intent to discourage those who would apply quantitative methods to urban problems, but, rather, to redirect their talents into more valuable pursuits than repeating the mistakes of the last decade.20 However, due in part to the tone of the article, as well as the time in which it was written, the article and Lee became associated with the movement against rational planning methods. Rather than respond to the criticisms that Lee offered, many in the urban modeling community retrenched their positions, which contributed to the break between urban modeling and planning in the ensuing years.
19 Douglass B Lee, “Retrospective on Large-Scale Urban Models,” Journal of the American Planning Association 60, no. 1 (1994): 36.
20 Douglass B Lee, “Requiem for Large-Scale Models,” Journal of the American Planning Association 39, no. 3 (1973): 163.
Britton Harris, who has been active in developing quantitative planning methods, is a good example of one of these modelers who felt attacked by Lee’s article. He discusses the article in a 1994 issue of JAPA that considered its impact twenty years later. He felt that “Requiem” “had extensive adverse effects, partly on modeling but largely on planning” and that the “planning profession…used Lee’s advice to shoot itself in the foot.”21 Harris overstates the influence that Lee had; as I remarked above, there were many other contributing factors to the change of heart that planning had about modeling. However, “Requiem” certainly didn’t help the relationship between modelers and planners, and the break would prove long-lasting. In the end, the developments that led to PSS happened in fields outside of planning. Developments in the fields of geography, business, landscape architecture and others would eventually change the discussion within planning and provide the opportunity for reconciliation.
Although many of these developments occurred in other disciplines, the eventual design and goals of PSS have largely vindicated and reflected the criticisms that Lee had of LSUMs. This makes them a useful lens with which to conduct a literature review leading to the development of PSS. The criticisms are listed below along with a key quote in which Lee explains the criticism and a portion of the history of the development of PSS.                                                              21 Britton Harris, “The Real Issues Concerning Lee's "Requiem",” Journal of the American Planning Association 60, no. 1 (Winter 1994): 31 17    Critical History: Hypercomprehensiveness “(1) the models were designed to replicate too complex a system in a single shot, and (2) they were expected to serve too many purposes at the same time.” (p. 164) Hypercomprehensiveness emphasizes the difficulty of modeling cities. The complexity of cities is the reason for making models of them, and also makes them difficult to model. There are almost endless methods of modelmaking as well as elements to be modeled. In this critical history I will address primarily what elements should be modeled. There has always been a tradeoff here: If a model doesn’t represent enough of the city, then that model does not have useful explanatory power, but if it includes too much, the difficulty of creating the model increases astronomically. Lee felt that the temptation in urban modeling to include too many variables in LSUMs comes primarily from pursuing too wide of a scope for the models. The task of predicting future urban growth is a daunting, if not impossible one. Lee felt that the danger in pursuing this goal was that model-makers began to add more variables to the model than were prudent. He said that “including more components in a model generates the illusion that refinements are being added and uncertainty eliminated, but, in practice, every additional component introduces less that is known than is not known.”22 He felt that in addition, these additional variables were not based upon sound theory. To understand better how he came to this perspective, it is helpful to look more closely at the development of urban modeling theory.                                                              22 Lee, “Requiem for Large-Scale Models,” 164. 18    Model-making is a convenient method of simplifying a complex problem. The Oxford English Dictionary has a number of definitions for the word, but the definition which most closely fits in this context is the following: “A simplified or idealized description or conception of a particular system, situation, or process, often in mathematical terms, that is put forward as a basis for theoretical or empirical understanding, or for calculations, predictions, etc.; a conceptual or mental representation of something.” So how should the “particular system, situation, or process” mentioned here be defined for the model? This usually depends on the interest or assumptions of the modeler, which, in turn, is often determined by his or her disciplinary training, source of funding , institutional context and a range of contextual factors. Before the creation of computers, economists tended to be interested in the economic activity in a city, transportation engineers in transit demand and networks, planners in land use, urban designers in the physical form of the buildings, etc. 
Once computers were introduced, modelers had the ambition to attempt to simulate many more of the systems simultaneously, which ultimately led to Lee’s criticism that they had attempted to simulate too much. The urban models that were developed prior to the use of computers have been classified into two categories by Juval Portugali: economic and ecologic models. The economic models relied primarily on location theory, the ideas for which were originally formulated by Johann Heinrich Von Thünen, then developed further by Christaller, Losch, Isard, and Alonso, among others. They are referred to as economic models 19    because their creators are mostly economists, and the models are based in economic theory. Cities had long been identified as centers of economic activity, yet classic economics has often neglected the consideration of location. The recognized pioneer in this area was Johann Heinrich Von Thünen. In his book The Isolated State, published in 1826, he developed a conceptual model of space and economic activity. Von Thünen was concerned with the spatial relation of types of agriculture in the region around an idealized central city in an isolated plain. He noted that agricultural goods that were harder to transport would tend to locate close to the central city. Since land close to the town is scarce, land would be more expensive there than land farther away. There was a limit at which point the transportation cost would equal the value of the land, where the value would become zero.23 While Thunen’s work described agricultural dynamics, later scholars would find many similarities in general urban economic activity. Thünen’s ideas were ahead of his time, and were not expanded until the work of two Germans, Walter Christaller and August Lösch. Both of these men considered how different cities would interact as they grew. In Christaller’s Central Place Theory, which he described in Places in Southern Germany in 1933, he described the relationship between a central city and surrounding cities based on three principles: markets, transportation, and administrative boundaries. In Economics of Location, published in 1945, Lösch developed a more complex system in which he describes an ideal state in which various “isolated” cities grow, eventually interacting and achieving spatial                                                              23 Juval Portugali, Self-Organization and the City (New York: Springer, 1999), 18. 20    equilibrium. In 1964, the development of these theories was summarized in Alonso’s Location and Land Use, which helped to establish the field of Urban Economics as a more accepted one. Establishing the importance of location in economic theory has been an important cornerstone of the development of urban modeling. These theories embrace the concept of equilibrium that is the basis of economic theory, in which the system is assumed to be static, and will attempt to reach equilibrium again if changes are made to it. While the economic foundations of urban modeling were being laid, a group of sociologists based at the University of Chicago in the first half of the 20th century were also working to tie theory closer to a particular place: an ideal city that often looked suspiciously like Chicago. 
They were known as the Chicago School, and their “novel approach to investigating Chicago was to get out of the office, explore the neighborhoods and communities, then return to campus to discuss what they saw and invent concepts explaining what it meant.”24 The concepts that explained what they found were codified into a theory called human ecology (or sometimes called urban ecology) in which human development was explained through an analogy to the science of ecology. Robert Park defined it as …a study of the spatial and temporal relations of human beings as affected by the selective, distributive, and accommodative forces of the environment. Human ecology is fundamentally interested in the effects of position, in both time and space, upon human institutions and human behavior.25                                                              24 John S. Adams, “Textbooks that moved generations: Hoyt, H. 1939: The structure and growth of residential neighborhoods in American cities. Washington, DC: Federal Housing Administration.,” Progress in Human Geography 29, no. 3 (2005): 322. 25 Robert Ezra Park, The City (Chicago, Ill: Univ. of Chicago Press, 1925), 63-64. 21    The three primary models developed during this time: Burgess’ Concentric Zone Model (1925), Hoyt’s Sector model (1936), and Ullman and Harris’ multiple nuclei model (1945), are shown in figure 2. Figure 2: 3 major urban models influenced by the Chicago School The concentric zone model remains very similar to Von Thunen’s concept in the Isolated State. Agriculture has been replaced with more general land use, but the city is monocentric and perfectly radial. He located typical land uses in annular sectors that surround the central business district, a term that has survived to this day. 22    Based on his studies of the location of residential areas, Homer Hoyt broke the symmetry of Burgess’ model and located different classes of residential development based on “sectors” of the circular idealized city that would spread out in a non uniform way. He acknowledged that people segregated themselves based on geographic features such as roads and rivers, and this led to wedge shaped sectors of differently priced housing that extended from the central business district. These were caused by the tendency for similar uses to expand outward from more central location while remaining adjacent to the previously developed areas. As sprawling growth began to be seen in the cities of the United States, Harris and Ullman noticed the phenomena of multiple nuclei cities, where outlying business districts began to emerge as competitors of the central business district. They conceptualized this in their multiple nuclei model, created in 1945. As opposed to the economic models, Burgess, as well as his fellow Chicagoans, viewed the city as being in a state of flux, rather than equilibrium. They were concerned with the manner in which sectors would “invade” through a process they called “succession” after the same concept in ecology, by which one ecosystem will progress into a series of succeeding ecosystems. Thus, the way that the models were drawn did not represent the final form of the city, but represented a snapshot in time of an ideal city in which various sectors and the trends of their development could be pointed out. 
For example, in Burgess’ model, he says that “all…of these zones were in its early history included in the circumference of the inner zone, the present business district.”26                                                              26 Ibid., 50. 23    Another difference between the ecological models and economic models is that the economic models tended to be more strictly verifiable, and there was effort made to restrict the variables to strictly economic ones. The ecological models on the other hand, were more loosely related to the concepts of ecology. There was less concern about being able to reduce the models to equations, and description and visual effect were part of the value of the models. While there would be many theories advanced later on that were applied to urban modeling, the differences between these two early theoretical bases were to prove long lasting. Namely, whether time should be represented statically or continuously, and how strictly models should be tied to numerical variables that can be displayed in equations. The development of the computer enabled the models to focus on incorporating real data into the modeling process as a means of projecting the future of an urban area. The computer was an enabling technology by allowing more complicated modeling as well as enabling the development of modern statistical methods and linear programming, both of which would develop separately and also be useful in modeling. When computers began to take off in the late 1950s, there was almost unbounded optimism about what they could accomplish.27 Theories from other fields such as physics that previously would have been too computationally tedious to apply began to be applied and tested on the city. Models based on gravity became particularly prevalent. These models borrowed the physical law of gravity and applied it to geographical proximity to city centers and                                                              27 Britton Harris, “Urban Simulation Models in Regional Science,” Journal of Regional Science 25, no. 4 (1985): 546. 24    other destinations. As Harris notes, the gravity model is one for which “no complete theoretical basis had been laid.”28 However, it has been empirically observed and accepted, and continues to be used to describe urban systems. As seen in human ecology, the borrowing of analogous theories from other disciplines to describe urban processes was not a new practice. In practice, the gravity theory was used to represent the relationship of different variables based on distance and the strength of each variable. The strength of attraction could be altered depending on the variables being considered. In various forms, the gravity concept is still widely used in forecasting and regional analysis. One of the most influential models that utilized the gravity function as part of its modeling was the model developed by Ira Lowry at the RAND institute in 1964. While not widely applied itself, it inspired a number of models to follow its basic approach. In figure 3 is a diagram of the causal structure of the Lowry model:29                                                              28 Ibid., 550. 29 William Goldner, “The Lowry Model Heritage,” Journal of the American Planning Association 37, no. 2 (March 1971): 101. 25    Figure 3: Structure of the Lowry model Using exogenous employment data that was broken into basic and retail sectors, his model distributed growth into a series of one square mile grid cells. 
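The gravity function at the core of this structure can be stated generically. The notation below is the standard textbook form of the gravity model rather than Lowry's own calibrated equations, and the symbols are chosen here only for illustration:

T_{ij} = k \, \frac{M_i \, M_j}{d_{ij}^{\,\beta}}

where T_{ij} is the interaction between zones i and j (for example, work trips), M_i and M_j are measures of activity in each zone such as employment or population, d_{ij} is the distance or travel cost separating them, \beta is a distance-decay exponent estimated from data, and k is a scaling constant. In a Lowry-type model, a function of this general form is what distributes workers' residences around their places of employment.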
The choice of place of employment as the primary input was largely due to the availability of data, with the assumption that workers would live in close proximity to their workplace. The distance between workplace and home for workers is where he utilized the gravity function. The model iteratively placed increases in population based on accepted theories of economic impacts. The problem was that it didn’t work very well for predictions. It used households as a proxy for population, and this in turn was based on employment data, so there were a number of assumptions layered on top of each other. Lowry was attempting to model the link between economic activity and population growth and 26    placement of that growth based on the commute distance to work based on a limited amount of data. It is arguable that he was being overly comprehensive in his modeling goals. This becomes more evident upon looking at the models that the Lowry model inspired. These included the Time Oriented Metropolitan Model (TOMM) in 1964, Projective Land Use Model (PLUM) in 1971, and IRPUD in 1982 among others.30 They became more effective, in part by being more sophisticated about each of the aspects of the model. For example, the PLUM model, which was much more successful in becoming operational, made several alterations to the Lowry model. The ways that it was different were: it used census tracts as opposed to arbitrary grid squares, was more specific in its use of population data rather than extrapolating that information from the number of households, and the model was more disaggregated in the way it located populations: Different densities could be applied to different areas of the city. It also ditched the gravity model method for ascertaining the distance to work, instead using a more detailed method to analyze travel times and distances.31 However, as PLUM’s developers refined different aspects of the model, the model itself became much more complicated. Lee would say of the PLUM model: “PLUM contains enough adjustments, plausible fudge factors, constraints, and empirical descriptive relationships so that it produces ‘reasonable’ results. It also is almost totally opaque.”32 The problems of                                                              30 Harry Timmermans, “Disseminating Spatial Decision Support Systems in Urban Planning,” in Planning Support Systems for Cities and Regions, ed. Richard K. Brail (Cambridge, Massachusetts: Lincoln Institute of Land Policy, 2008), 38. 31 Goldner, “The Lowry Model Heritage,” 104. 32 Lee, “Requiem for Large-Scale Models,” 175. 27    complexity and opacity will be addressed in the complicatedness critical history in Chapter 3. After Lee’s article was published, the development of modeling became increasingly fragmented. This fragmentation occurred mostly between the land use and the transportation aspects of urban environments. The pre-computer models that I described above incorporated transportation in their considerations, but not in a specific way. As the level of detail became more fine-grained, transportation issues became too complicated to be considered within general urban models. There continues to be investigation into transportation modeling. Due to administrative buy-in and legislative requirements, transportation models have been much better funded than land use models, and continue to be well respected in transportation planning. 
The focus of much of this modeling occurs in Metropolitan Planning Organizations (MPOs) that are federally mandated to carry out long range transportation planning in order to receive federal highway funding. Federal environmental laws such as the Clean Air Act, as modified in 1970 and 1977, have also emphasized the role of modeling in transportation planning. Land use models have largely continued to deal with the effects of transportation indirectly, although more recently it is becoming increasingly easier for land use models and transportation models to communicate their results between each other. Some models are taking advantage of this communication, while others continue to avoid transportation. Two recent models are examples of two prevalent approaches. The California Urban Futures Model, developed by John Landis, doesn’t explicitly deal with 28    transportation, while UrbanSim, developed by Paul Waddell, is meant to interface with various types of transportation models. The California Urban Futures model was created in 1994 and drew upon the Lowry heritage as well as introducing several new approaches. Landis references Lee as being very influential in developing the principles behind the model. Figure 4 illustrates the functions of the California Urban Futures Model33.                                                              33 John Landis, “Imagining Land Use Futures: Applying the California Urban Futures Model,” Journal of the American Planning Association 61, no. 4 (Autumn 1995): 440. 29    Figure 4: Structure of the California Urban Futures model In many ways the California Urban Futures model follows the general process of the Lowry models. It identifies the population growth that is expected, and then allocates growth spatially. Landis uses different methods to identify the population and the best way to determine where it will go, but he is largely attempting to predict the same aspects of the urban system as the Lowry model did. He does a much better job of acknowledging the context in which the model will be used, and his methods are 30    compelling, but the general structure of the approach is very similar to Lowry. He is able to avoid consideration of transportation by using population data that doesn’t have to rely on transportation, as Lowry’s did, since he was extrapolating population from employment. However, it has been widely recognized transportation expansion induces development.34 Thus, a compromise has been devised to incorporate transportation into land use models without creating models that are too complex. This is to use the inputs of transportation models rather than model both simultaneously. Also, transportation models are becoming more adept at handling the output of land use models. UrbanSim, developed by Paul Waddell at the University of Washington is a good example of this approach. This model seeks to integrate many more facets of the urban system than have previously been attempted by leveraging an open source, object- oriented design. 35 Rather than design the entire model, the design philosophy is to provide a structure within which different modeling components can be added to the software. Each component, whether it be transportation, land use, economics, etc. can be developed separately and then added to the model. Several large cities are currently utilizing UrbanSim, including Seattle and Houston. It remains to be seen whether this approach will overcome the dangers that Lee saw in hypercomprehensiveness. 
He worried that “too broad a scope usually means too many variables and too much detail are included in the model structure.” (p.164) UrbanSim’s methodology still increases the number of variables involved, relying on each model component to manage its own variables, but whether this makes the model too complex is another worry, one that will be considered in the complicatedness critical history.
34 Keith Bartholomew and Reid Ewing, “Land Use–Transportation Scenarios and Future Vehicle Travel and Land Consumption,” Journal of the American Planning Association 75, no. 1 (2009): 344.
35 The definition of open source, as used by the open source initiative, can be found at http://opensource.org/docs/osd. Object Oriented Programming: “A programming model in which related tasks, properties and data structures are enclosed inside an object, and work is done when objects make requests and receive results from other objects.” Wade and Sommer 149.
Summary
Computers enabled urban models to make an exponential leap in the number of variables that were simulated. Model developers continue to wrestle with which variables to include in urban models. This is a decision that is not at all obvious, and some recent models are introducing frameworks that enable users to decide which variables are the most important ones for them. This acknowledges the political nature of these decisions: who is included in deciding on the variables is as important as the variables chosen.
Critical History: Wrongheadedness: “the deviation between claimed model behavior and the equations or statements that actually govern model behavior… The deeper problem is that relationships between variables other than the specified ones are implicit in the model and often difficult to perceive.” (p. 165)
Given the frequency with which Lee’s article is referenced, it is easy to overemphasize its importance. The article was so influential partly because of the time in which it was written. Its publication coincided with a general frustration with the direction of the planning profession and a broader cultural frustration with the empty promises of the positivist social sciences. In the 1970s planners were not only turning away from modeling, but from the rational model of planning that it represented. The rational model viewed planning questions as problems that could have “optimum solutions.” There was increasing recognition that this was not the case, and that many of the issues that planning faced were contested and complex, dependent on a host of other issues, and often political in nature. Of all of the variables that large scale urban models had sought to use as inputs, the political context of planning and models was not something that had yet been addressed directly. This is not to say that modelers were naïve about politics, as is often suggested by those who are critical of rational planning methods. Modelers saw the use of scientific, rational authority as a means of achieving political power. The assumption was that if the model was built correctly, then the rightness of the model would carry political power, much as science does.
According to Richard Klosterman, “the literature on model specification can be seen as a continuing effort to move from the ‘art’ of large- scale modeling of the 1960s to a ‘science’ of modeling.”36 This reflected the planning trends that the profession had followed throughout the 20th century, but were now being challenged.                                                              36 Richard Klosterman, “An Introduction to the Literature on Large-Scale Urban Models,” Journal of the American Planning Association 60, no. 1 (Winter 1994): 41-42. 33    As an interdisciplinary profession, the theoretical base on which planning relies has always been a contested one. Historically, the planning profession was often considered the realm of architects and designers. Plans for cities mostly focused on urban design, and were justified on their aesthetic qualities and ability to glorify the state. A prominent example is the Burnham plan for Chicago in 1909. Beautiful renderings of the future of Chicago helped persuade powerful figures to back the plan. Increasingly, however, planning has become based on additional criteria. Since the industrial revolution, there has been increasing concern for planning’s ability to affect the health, safety and welfare of urban citizens. 37 This change was directly related to the successful transformation of urban planning from a side duty carried out by architects to a professional career within a governmental bureaucracy, starting with the development of the first zoning law in New York City in 1916. This law, and the ensuing Supreme Court case of Village of Euclid, Ohio v. Ambler Realty Co. established planning’s legal authority. 38 Local governments were given the power to plan for the future and enforce these plans based on the presumption that effective planning contributed to public health and safety. While not stated legally until later, another, perhaps more important rationale for the planning power was to protect citizens’ quality of life (and property values). By the end of World War II, the rapid growth of cities as well as the changing technologies (especially in transportation) of modern society were challenging planners to attempt to live up to these                                                              37 For a discussion of the conflict involved in this transition and the political factors that contributed, see: Eduardo Aibar and Wiebe E. Bijker, “Constructing a City: The Cerda Plan for the Extension of Barcelona,” Science, Technology and Human Values 22, no. 3 (Winter 1997): 3-29 38 David L. Callies, Robert H. Freilich, and Thomas E. Roberts, Cases and Materials on Land Use, 4th ed. (West, 2004), 20-28. 34    responsibilities. To meet this challenge, they turned in large part to the urban models discussed in the previous critical history. According to Michael Batty, “from 1950 to around 1970, the predominant style of theorizing in economics, sociology, and political science was essentially positivist.”39 Models were used to determine the optimum path for urban cities based on supposedly scientific theories. Lee’s critique of LSUMs were those of “a rationalist and a modeler” and he rejected the current state of modeling rather than the practice altogether. 40 However, his critiques were interpreted by many planners and policy makers as a vote of no confidence in modeling, which many were all too ready to agree with. Many planning and social science theorists no longer embraced the view of planning as a science. 
This was partly due to the fact that many planners felt threatened by rational models, as Lee indicated in his 1994 reprise of his earlier article. But an additional influence that convinced planners to move away from rationality was the general rise of postmodernism in the arts and social sciences. The Oxford English Dictionary defines postmodernism as “points of view involving a conscious departure from modernism, especially when characterized by a rejection of ideology and theory in favour of a plurality of values and techniques.” As modeling was heavily based on ideology and theory, it faced growing skepticism. Geographers such as David Harvey would express the opinion that planners relying on LSUMs had become “incapable of                                                              39 Michael Batty, Cities and Complexity: Understanding Cities with Cellular Automata, Agent-Based Models, and Fractals (The MIT press, 2007), 3. 40 Wegener, “Operational Urban Models,” 36. 35    saying anything of depth and profundity about [the real problems of society, and] …when we do say something, it appears trite and rather ludicrous.”41 Widely reported planning failures such as the Cabrini-Green public housing project and the defeat of the Lower Manhattan Expressway by residents of Greenwich Village continued to reinforce the perception that planners needed to re-evaluate their professional roles. Jane Jacobs published her influential book The Death and Life of Great American Cities in 1961 as a direct challenge to rational, centralized planning projects. The relevance and importance of including citizens in urban planning decisions continued to grow. A key issue that planners still struggle with is how to interpret the public interest. The public interest is not a concept that is directly addressed by planning as a science; the goals of planning are assumed to be obvious and shared by all citizens. Postmodernism introduced the concept of multiple publics who might have different ideas about the future from a professional planner. This matters for urban modeling because it affects the assumptions that go into the model. Planners (or modelers) are not objective as they plan. They have responsibilities to their department and public officials that influence their perception of the public interest. If nothing else, they would like to keep their jobs and might make certain assumptions based on what they think (or know) that their boss wants to hear. Wachs quotes William Ascher, who found that “The core assumptions underlying a forecast, which represent the forecaster's basic outlook on the context within which the                                                              41 Portugali, Self-Organization and the City, 32. 36    specific forecasted trend develops, are the major determinants of forecast accuracy. Methodologies are basically the vehicles for determining the consequences or implications of core assumptions that have been chosen more or less independently of the specific methodologies.”42 His conclusion is that forecasts are usually made “on the basis of political rather than economic or technical criteria.”43 If the political factors are so important, then the research underlying urban models is undermined. No matter how accurate the models are, what matters more is how they are implemented and who implements them. As modelers became out of step with what the public, planners and decision makers cared about, their models became increasingly irrelevant as well. 
The challenge of creating relevant models has been one that urban modelers have had a hard time addressing, and it was largely their lack of answers that led to a break between land use planning and urban modeling. As will be discussed in the next section, answers to this dilemma largely came from fields other than large-scale urban modeling. Perhaps LSUMs’ poor track record in this regard had something to do with Britton Harris’ claim that Lee’s article “left the theoretical world largely unperturbed.”44                                                              42 Martin Wachs, “Forecasts in Urban Transportation Planning: Uses, Methods, and Dilemmas,” Climatic Change 11, no. 1 (1987): 65, doi:10.1007/BF00138795. 43 Ibid., 63. 44 Harris, “Urban Simulation Models in Regional Science,” 548. 37    Summary Planning decisions are influenced by politics.45 While models can help shape political will, urban modelers were unable to effectively deal with the political implications of the rise of postmodernist critiques of their assumptions. The context in which a model will be used is one of the most important factors in its use, if not the most important. Critical History: Complicatedness: “models contain large but unknown amounts of error and they are too complex” (p. 167) Before the invention of computers, models tended to be more self-regulating in their level of complexity. Because all of the equations had to be calculated by hand and understood, there was an upper limit of complexity based on human capabilities. This is not to say that the early models did not get complex, but they were always explainable because they had to be. This transparency has not been as easy to achieve in computer-based models. Because the computer, once programmed, can run a model without the operator entering all of the steps of the process, the programmer is separated from the operator. It is not necessary for an operator to be conscious of all of the steps that are taken in running the model. This lack of knowledge can cause the model to be used incorrectly and introduce error into the calculations. As the models increase in complexity, these errors become harder to find and quantify.                                                              45 While there are many definitions of politics, the one that most closely fits the usage here is definition 4a from the Oxford English Dictionary: “Actions concerned with the acquisition or exercise of power, status, or authority.” 38    One of Lee’s recommendations is to “Build only very simple models.” This recommendation is based on the notion that a simple model is a more transparent model. Transparency is a goal cited in many post-Lee models, but it is a hard term to evaluate. Transparency is defined by the Oxford English Dictionary as “Easily seen through, recognized, understood, or detected; manifest, evident, obvious, clear.” While this definition is clear in the abstract, or as applied to a window, it is not always clear how it applies to planning support systems or models. It is often defined by what it is not. It is defined as the opposite of opaque, and opacity is often expressed by describing a technology pejoratively as a “black box.” “Black box” is a term frequently used by engineers as well as social scientists who study the development of technology. 
Langdon Winner says that a “black box in both technical and social science parlance is a device or system that, for convenience, is described solely in terms of its inputs and outputs.”46 47 The more complicated a technological system becomes, the more it effectively becomes a black box to outside observers. Since one cannot see what is occurring inside the box, it is difficult to understand or critique the technology other than doing so based on its outputs. This is an important point in the case of LSUMs or PSS because of the number of assumptions and interests that are involved in their formulation. Black boxes effectively hide the assumptions and interests present in the model. Lee was concerned about this, and in his view “’Black-box’ models will never have an impact on policy                                                              46 Langdon Winner, “Upon Opening the Black Box and Finding it Empty: Social Constructivism and the Philosophy of Technology ,” Science, Technology and Human Values 18, no. 3: 365. 47For a more in- depth discussion of the impact of black boxes on technology and society, see: Bruno Latour, Science in Action (Cambridge, Massachusetts: Harvard University Press, 1987) 39    other than possibly through mystique, and this will be short lived and self-defeating.”48 This forecast proved insightful, because especially once many of the LSUMs failed to produce useful results, policy makers and the public, who could not see the workings of the models, became distrustful of them. Urban modelers have not been quick to embrace the idea of making models transparent to decision makers and the general public. Throughout the history of urban modeling, the target audience has been primarily academics and other modelers.49 With this as the target audience, information about models has been distributed primarily in scholarly journals. This has been an effective means of distributing information to researchers, but planning practitioners and decision makers have been largely left out of the chain of information, and are not frequently offered the chance to offer peer criticism. These practitioners are important to reach to ensure the greater acceptance of modeling and computer support. The groundwork for this transparency happened largely in other fields. A blindness of large scale urban modelers was that they did not do a good job of considering the context in which the models would be used, and since transparency is based on knowing your audience, they were unable to make large strides in this area. A field that was much more concerned with this issue was Decision Support Systems, or DSS. Decision Support Systems are a direct forefather of PSS. Britton Harris coined the term PSS at a conference … to call for DSS that more specifically addressed planning                                                              48 Lee, “Requiem for Large-Scale Models,” 175 49 That is, unless they are private models created by consultants, in which case, sometimes they are purposely designed to be opaque. 40    concerns. This was an extension of the effort to develop a spatial DSS (SDSS) initiated by the National Center for Geographic Information. But what are DSS? DSS emerged from management research, which built upon the earlier precedent of operations and systems research. 
A good general history of their development can be found in Copas 1993.50 The article and book written by Gorry and Scott-Morton in 1971 are often attributed as the beginning of the field, and they coined the term DSS.51 They were concerned with identifying how computers could be used in management decisions. They were primarily interested in supporting “nonprogrammed decisions…Decisions are nonprogrammed to the extent that they are novel, unstructured, and consequential.”52 These were the types of decisions they felt could benefit from the use of DSS. Programmed (or structured, operational) decisions were ones that they felt were well served by current operations research, and systems known as Management Information Systems (MIS). They felt, however, that this sort of support was useful for routine and operational problems, but was not helpful for nonprogrammed decisions, which were generally the decisions that executives or managers (or planners) have to make. These sorts of decisions are very similar to the types of decisions involved in urban planning as well. The publication date of their article and book suggest that the frustration shown in Lee’s article with current models was not limited to the planning                                                              50 Conn V. Copas, “Spatial Information Systems for Decision Support,” in Human Factors in Geographical Information Systems, ed. David Medyckyj-Scott and Hilary M Hearnshaw (London: Belhaven Press, 1993), 158-167. 51 G. Anthony Gorry and Michael S. Morton, “A Framework for Management Information Systems,” Sloan Management Review 13, no. 1 (1971): 55–70; Michael S Scott Morton, Management Decision Systems; Computer-Based Support for Decision Making (Boston: Division of Research, Graduate School of BusinessAdministration, Harvard University, 1971). 52 Gorry and Morton, “A Framework for Management Information Systems,” 4. 41    field. DSS became very influential, especially thanks to the advances in information technology and the “end user revolution” which moved the predominant locus of computing from the mainframe to the desktop.53 Software programs such as spreadsheet models came about as a result of this new movement in software. In mirroring some of the critiques of urban models, it became a logical inspiration for offering new directions for spatial analysis as well. The development of DSS began migrating closer to the realm of urban planning with an initiative undertaken by NCGIA. The National Center for Geographic Information and Analysis, an independent research consortium dedicated to basic research and education in geographic information science, shortly after being founded in 1988, started an initiative to apply DSS to spatial analysis by developing Spatial DSS (SDSS).54 At the simplest level, SDSS are merely DSS that utilize the capabilities of GIS (which will be discussed more in the next critical history) to be applied to spatial problems. They remained focused on decisions within a business context, however, and tended to focus on short term problems. In addition, they don’t address group decision making.55 To those interested in planning problems, these shortcomings illustrated the problem with relying on software that was not developed specifically for planning. 
It was within this context that Britton Harris is credited with calling, at the 1989 annual conference of the Urban and Regional Information Systems Association (URISA), for the development of DSS that were specifically suited to planning                                                              53 Copas, “Spatial Information Systems for Decision Support,” 161. 54 Ibid. 55 S. Geertman and J. C. H. Stillwell, Planning support systems in practice (Springer Verlag, 2003), 7. 42    problems, or PSS.56 Since then, development has moved forward in a number of directions. For a definition, Brail has said that “the term planning support systems has been defined both broadly, to encompass a range of technology-based solutions useful to planners, and more narrowly as GIS-based models that project urban futures and/or estimate impacts. In the broadest sense, PSS encompasses analysis, design, participatory planning, communication, and visualization.”57 The term has been embraced by the planning profession. The 2006 edition of Urban Land Use Planning, one of the central planning introductory texts, uses the term as an organizing framework for the tools that planners use to carry out planning. The variety of definitions can lead to more confusion than clarification. I will use the more limited definition of PSS that Brail identifies. One of the criteria I used in selecting examples of PSS to evaluate is that they attempt to achieve transparency by limiting the complexity of the software, heeding Lee’s recommendation to “Build only very simple models.” Summary Models that become too complicated can become black boxes that conceal the assumptions and interests of expert users. In order to achieve transparency, models should be as simple as possible. Decision support systems and planning support systems have adopted the goal of simplicity and have tailored their models specifically to non-expert decision makers and the general public in order to provide more useful information.                                                              56 Michael Batty, “Planning Support Systems: Progress, Predictions, and Speculations on the Shape of Things to Come,” in Planning Support Systems for Cities and Regions, ed. Richard K. Brail (Cambridge, Massachusetts: Lincoln Institute of Land Policy, 2008), 4. 57 Brail, Planning Support Systems for Cities and Regions, xv. 43    Critical History: Mechanicalness: “(1) a solution which is “exact” is effectively impossible, because there is always some amount of rounding error; and (2) all solutions are achieved iteratively, i.e., one step at a time.” (p. 167) As has been stated before, context matters a great deal for understanding something, and the immediate context of the operation and design of urban models is the computer, not the city itself. Of all of the facets of urban modeling, the nature of the computer is what has changed most. The vast increase in computer power, the rise of the internet, the increase in familiarity with the computer, and the improvements in interface design and software proficiency all have altered the computing context. And yet there are facets of computers that do not change. Computers have conventions that have to be followed when writing software to run on them. The most basic rule is that every instruction has to be converted to a set of binary commands. The rules that govern computer software have impacts on the models that are made on them that have nothing to do with the cities that are being modeled. 
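Lee’s first point, that an “exact” solution is effectively impossible because of rounding error, follows directly from the binary representation just described and can be illustrated with a few lines of code. The short fragment below is a minimal sketch in Python, used here only for illustration; nothing about the PSS discussed in this thesis depends on a particular language. It shows that most decimal fractions have no exact binary representation, and that the small representation errors compound over exactly the kind of iterative calculation Lee’s second point describes.

# Lee's rounding-error point in miniature: binary machines cannot store
# most decimal fractions exactly, and iterative calculation compounds the error.
print(0.1 + 0.2 == 0.3)      # False: 0.1 and 0.2 are stored only approximately
print(0.1 + 0.2)             # 0.30000000000000004

total = 0.0
for _ in range(10000):       # an "iterative" solution in Lee's sense
    total += 0.1
print(total)                 # approximately 1000.0000000001588, not exactly 1000
print(abs(total - 1000.0))   # the accumulated error: small, but never zero

The discrepancy here is tiny, but the point generalizes: a model composed of millions of such operations never produces an “exact” answer, only an acceptably close one, and such representational limits are a property of the machine rather than of the city being modeled.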
This is true no matter how much computers advance. Acknowledging the ways that they influence the programs that are written on them can help avoid some of the negative effects of computer aided planning. Lessons can be learned from the experience of Geographic Information Systems (GIS), which is similar to the history of PSS, and is also essential since most PSS are designed to run 44    within the GIS environment. As seen above, definitions of PSS frequently reference GIS as part of their definition. According to Wade and Sommer, GIS are defined as “an integrated collection of computer software and data used to view and manage information about geographic places, analyze spatial relationships, and model spatial processes. A GIS provides a framework for gathering and organizing spatial data and related information so that it can be displayed and analyzed.”58 GIS were created as a technological system that was useful in studies of the environment. This context shaped their early development. The term GIS was coined by Roger Tomlinson in the mid- 1960s as part of his work in designing the CGIS, or Canada GIS to map Canada’s natural resources.59 Further development of the basic data structure was done at the Harvard Laboratory of Computer Graphics and Spatial Analysis. GIS were initially developed to serve environmental needs, and land suitability analysis was a primary driver for the design of GIS. While dating back to the 19th century, this technique was popularized by Ian McHarg, a landscape architect, in his Design with Nature, published in 1969. In the non-computer version, this technique entailed overlaying a series of transparent acetate layers with different locational data upon them. These layers were laid over one another to give a visual display of the geographical interaction of different variables. This process was used as an organizing metaphor for the creation of GIS, in which data is often represented as different layers that can be used together for multivariate analysis.                                                              58 Tasha Wade and Shelly Sommer, eds., A to Z GIS (Redlands, California: ESRI Press, 2001), 90. 59 William J. Craig, Trevor M. Harris, and Daniel Weiner, Community participation and geographic information systems (CRC, 2002), i. 45    While the 1970s were disastrous for the use of LSUMs in land use planning, they benefitted GIS because of its association with the environmental movement, which was in the ascendant throughout the 1970s. This is important to note as a counterbalance to Douglass Lee’s criticisms; that computing environments and methods as a whole are not insensitive to acknowledging a variety of needs, but rather LSUMs, at that time, in particular, were. GIS benefitted from having a clear set of goals and interests associated with the environmental movement, rather than an outdated mandate of the planning profession. Where LSUMs were out of touch with the interests and trends within the planning profession, GIS responded to and reflected a growing environmental mindset. The large userbase that GIS attracted, in turn, allowed it to adapt and continue to reflect the needs of its users. In this way, it began to move from the environmental field to be of use by planners and other fields as well. The direction of software development does not always reflect the initial goals of the developers. Successful software programs are able to adapt to input from its users, and this is difficult if the software is a black box to its users. 
The more that the black box is made transparent, the more that users and other professionals are able to contribute to the success of a technology. While much of the research into GIS happened in academia, the spreading of the software was facilitated by its association with landscape architects, who “had the business acumen to form successful software companies.”60 Jack Dangermond, the founder, president and CEO of the Environmental Systems Research Institute (ESRI), the current worldwide leader in GIS software, had his educational training in landscape                                                              60 William J. Drummond and Steven P. French, “The Future of GIS in Planning: Converging Technologies and Diverging Interests,” Journal of the American Planning Association 74, no. 2 (Spring 2008): 164. 46    architecture at Harvard and was involved in the work at the Laboratory of Computer Graphics and Spatial Analysis. While developed within the environmental movement, the “first really large-scale market for GIS software was provided during the 1980s by the country’s 3,000 counties and 19,000 municipalities, in particular the tax assessors, city planners, and engineers within those governments.”61 This market was the same market that was the target audience for LSUMs, and the success achieved by GIS in this market have led designers of PSS to work within and learn from GIS. Both of the PSS that I will investigate, and most others, work as extensions of GIS, or are designed to take advantage of the data formats and tools within GIS. GIS is not a steady state technology. It continues to develop in ways that have to be considered by any software programs that operate within or in conjunction with it. This development has in turn been influenced by the general trends within software and computers in general. Originally, computers and the software that ran on them were characterized by large mainframes. During the “end-user revolution” in the 1980s, the use of the majority of computing moved to personal desktop (and later laptop) computers. The growth of the internet, especially since 2000, has been leading a trend toward “cloud computing,” where much of the computational power is moving back to mainframes (now called data centers) while users connect with these data centers via the internet. GIS has followed this trend as well. The current cloud computing trend is in the process of developing, and it is still unclear how it will ultimately affect GIS, and by extension PSS. However, one trend that is becoming clear is the divergence of mass market and                                                              61 Ibid. 47    more sophisticated applications of GIS, which will have a profound effect on planners and PSS. The shift to cloud computing has enabled a number of mass market GIS applications such as Google Earth, Google maps, and a host of other online geographic tools that replicate portions of GIS functionality. However, not all of the functions of GIS are surviving this switch. 
These mass market approaches have emphasized “geocoding, routing, simple user interfaces, and visualization of data, but… [have ignored] overlay analysis, buffer analysis, and other functions that were important to earlier environmental applications.”62 Drummond and French argue that previously, the functions of GIS and the interests of planners have overlapped well, but with the expansion of GIS functionality to a mass market, they will make up a smaller piece of the overall GIS audience and thus have a smaller impact on the direction of GIS development. Many of the environmental functions that are not being addressed in the move to internet, third party software are useful to planners, and it is unclear how these will develop in the future. A debate currently going on is how much planning (and planning education curricula) should emphasize learning the details of GIS and PSS, including computer programming languages such as Python, XML, and C++. Drummond and French argue that learning these skills are important in order for planners to continue to shape the direction of the development of GIS. Longley and Batty, in their 1996 article, agree. They argue that the criticism of quantitative methods has reduced students’ general                                                              62 Ibid., 166. 48    competency in those methods, especially when teaching of technical skills is replaced with other courses. This creates problems for the implementation of new methods in a thoughtful way. There are certainly risks and opportunities for abuse in the application of quantitative methods, but these increase if those who are using (or critiquing them) do not understand the details of the methods. There are many calls for more “transparent” computer systems, but there is not often recognition that achieving transparency is a two way process that depends upon the knowledge of the user as well as the design of the software. Determining the knowledge level of the target user is a major decision to be made before designing any software interface. There is a lot of effort being applied to refining the interfaces and graphical representation of spatial analysis, but Longley and Batty argue that it is also important to continue to research the spatial patterns and models of urban areas. They feel that more of this research should be done by those with planning knowledge in order to have better underlying systems and representations. This will not be feasible without a greater software and modeling knowledge base among planners. In the same issue of JAPA, however, Richard Klosterman argues that judging from the past history of computer aids to planning such as spreadsheets, the mass market- ization of GIS likely won’t hurt the planning profession. It is unrealistic to expect an already crowded list of planning classes to find room for computer programming or other quantitative methods. He argues that there is a lot of public and nonprofit money that is being devoted to developing tools for planners that are yielding results, and that this will 49    continue.63 His emphasis is more on teaching the tools within the current pedagogical context in order to understand how they can be applied to current methods of planning. Whichever decisions are made about the inclusion of computing skills within planning, it is important to acknowledge that computer planning tools cannot be fully understood without looking at the context of their development and application. 
While modelers such as Britton Harris have argued that the technological problems with LSUMs have been “largely overcome,” this is a misleading statement. As Bent Flyvbjerg argues, “context and judgment are irreducibly central to understanding human action.”64 Thus, the development of software will always remain influenced by its mechanical as well as its cultural context. Summary Urban models should be considered within their computing context. It is important for PSS developers and users to understand the history of GIS, because of its success where LSUMs have failed, and the fact that GIS is often part of the computing context for PSS. While there is disagreement about the proper place of increased technical education, I think that it is a necessary precondition to ensure effective use of PSS in planning. Hungriness: “Data requirements of any model that purports to realistically replicate a specific city are enormous.” (p. 165)                                                              63 Richard Klosterman, “Comment on Drummond and French: Another View of the Future of GIS.,” Journal of the American Planning Association 74, no. 2 (Spring 2008): 174-176, doi:10.1080/01944360801982203. 64 Flyvbjerg, Making Social Science Matter, 4 50    When Lee wrote “Requiem,” the issue with data that he identified was the difficulty of obtaining enough data of sufficient quality and quantity to obtain meaningful results with LSUMs. As he puts it: “Data constitute the window through which the model views a city…. It is sometimes helpful to consider what a model ought to know in order to produce a given output, before asking for the output.”65 Reformulated in a more negative light, the phrase “garbage in garbage out” is a common criticism of the outputs of models that I heard during my case study informal interviews. Obtaining high quality digital data has been a constant problem for urban models. While the ability and ease to obtain data has increased markedly over the years, core issues of data quality and application remain and are unlikely to be solved simply through technological advances. Initially, the lack of standardization meant that data had to be adapted to each model in a different way than for any other model. Spatial data sets were not common, and so gathering data became one of the biggest challenges of creating a model. Over time, digital data has continued to improve. It is more ubiquitous, more compatible with other programs, and more spatially oriented. The popularity of GIS has made the situation much better, since many of the data sets that are used in GIS are also data sets that can be adapted to urban models. One strength of digital data is that while it can take a long time to collect, once it has been gathered, it is easy to share and implement in later                                                              65 Lee, “Requiem for Large-Scale Models,” 165. 51    projects. The development of PSS and the decision to largely work within a GIS framework has made this data interoperability even easier. The internet has also facilitated the sharing of data. Online repositories and clearinghouses of GIS data continue to be developed and expanded and will substantially lower the boundaries to digital approaches of all sorts. Data more specifically designed for urban modeling can also be found on the web. 
The Berkeley/Penn Urban & Environmental Modeler's Datakit, developed in part by John Landis, is one example of this.66 Drummond and French say that we have entered an era of “ubiquitous data…today’s planners no more expect to key-in attribute data and digitize maps than they expect to draw and code them with colored pencils.”67 While this is somewhat of an overstatement, as I will explore, data availability has certainly improved. Today, the main issues are data quality and how data is used within the planning context. In most modeling or PSS efforts, a variety of data inputs are used in the project. These data will come from a variety of sources, and represent different levels of accuracy, timeliness, bias and relevance. There is a deep GIS and geography literature devoted to the perils of geospatial data, including an annual conference called Accuracy that “focuses on how to measure, model and manage uncertainty in spatial data.”68 Authors such as Couclelis address the range of implications that arise from the uncertainties of data.69 Examining all of the impacts of the uncertainty of data is outside                                                              66 “Berkeley/Penn Urban & Environmental Modeler's Datakit - Home,” http://dcrp.ced.berkeley.edu/research/footprint/. 67 Drummond and French, “The Future of GIS in Planning,” 161. 68 “International Spatial Accuracy Research Association,” http://www.spatial-accuracy.org/. 69 Helen Couclelis, “The Certainty of Uncertainty: GIS and the Limits of Geographic Knowledge,” Transactions in GIS 7, no. 2 (2003): 165-175, doi:10.1111/1467-9671.00138. 52    the scope of this paper, but it is certainly something that must be taken into account when designing or implementing an urban model or PSS. Data validity is important, but so is the type of data that is collected. LSUMs, GIS and other quantitative approaches have been criticized for missing the qualitative perceptions that are often more valid to community members than the “hard” data from the census or other sources. LSUMs, like rational planning, had a history of ignoring the input of community members in decisions that affected them. In rational planning, this led to poor planning decisions and a backlash against rational planning methods. In LSUMs, it increased the distrust in the models. Participatory planning and community visioning approaches have presented methods that directly seek input from community members. These approaches have begun to impact computer approaches to planning as well. One approach uses GIS to facilitate gathering community data and is called “bottom-up GIS” (BUGIS). This approach, pioneered by Emily Talen, in her words, “involves using GIS as a spatial language tool for acquiring local knowledge and communicating residents’ perceptions, rather than conveying only objective facts.”70 She uses GIS as an aid to community meetings in which perceptions and “mental mapping” can be enhanced through the visual representation and analytical tools contained in GIS. Her approach is a promising one, and shows how planners are adapting technologies to fit their needs. There are some drawbacks to her approach as well, however. There are aspects of collecting community data that are cumbersome to carry out in GIS. In Talen’s case studies, BUGIS was “not intended to be done ‘on the fly.’…                                                              70 Emily Talen, “Bottom-up GIS,” Journal of the American Planning Association 66, no. 3 (2000): 280. 
53    GIS currently does not effectively support real-time descriptions in collaborative settings.”71 As a result, BUGIS requires substantial mediation and interpretation by experts to translate the local knowledge into data that is usable by GIS. This difficulty is, in part, due to the fact that GIS was not designed to work in a community setting. Breaking down the barriers between expert users and community members, or enabling community members to directly interact with software has the potential to increase the quality and comprehensiveness of the data collected. One of the rationales for the development of PSS is to extend the functions of GIS so that they more closely adhere to the needs that planners have of software. Collecting community input has been identified as important to the planning profession, and is a need that is currently not being served as well by GIS as it could. In designing and implementing PSS, the sources, availability and type of data are all important considerations that should be considered. Summary While data sources are more plentiful today than they were, planners should think carefully about what kinds of data are necessary to have for making planning decisions. PSS should be designed to accommodate the types of data that planners require and that citizens value.                                                              71 Ibid., 284. 54    Critical History: Grossness: “the actual level of detail was much too coarse to be of use to most policymakers” (p. 165) At the time that Lee was commenting on the models, computers had much more limited processing power than they have now. Any computing problem had to consider the computing limitations of the computers that were available. The way that this was generally done in the 50s and 60s was to divide the spatial data into regular grids based on the limit that the computer could handle and then aggregate data to the level of that grid. As computer capabilities improved, it became possible to make these grids smaller and smaller. The grid used by the Lowry model was one mile square. A more recent model, UrbanSim, routinely uses grids that are 150 meters, and capabilities have only increased since then.72 The rise of GIS also enabled a different kind of accounting of the spatial variables. Data could be represented as points and polygons as well as grids. The continued advances in this area have led modelmakers down to the level of the individual. In what is known as Agent-based modeling (ABM), individuals, rather than parcels and other larger structures can be modeled. As most other metrics are generally proxies for individuals, this offers exciting prospects for the future. The Center for Advanced Spatial Analysis (CASA) at University College, London is focusing significant                                                              72 U.S. EPA, Projecting Land-Use Change: A Summary of Models for Assessing the Effects of Community Growth and Change on Land-Use Patterns (Cincinnati, OH: U.S. Environmental Protection Agency, Office of Research and Development, September 1, 2000), 182. 55    research on ABM. Much of their research involves smaller scale modeling, such as how people will respond to fire alarms, but the potential for much larger scale implementation is already being realized in transportation modeling. Much of the theoretical backing of agent based models lies in complexity theory, which posits that many complex systems can be understood through relatively simple rules of interaction. 
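To make the idea of simple interaction rules concrete, the sketch below implements a bare-bones, Schelling-style relocation model, a standard textbook illustration of agent-based modeling rather than a description of any particular PSS or of the CASA research mentioned above. Two types of households occupy cells on a small grid; any household with too few like neighbors moves to a randomly chosen empty cell. The grid size, empty share, and tolerance threshold used here are arbitrary illustration values.

import random

SIZE, EMPTY_SHARE, THRESHOLD, STEPS = 20, 0.2, 0.4, 30

# Populate a grid with two types of agents ("A" and "B") and some empty cells (None).
cells = ["A", "B", None]
weights = [(1 - EMPTY_SHARE) / 2, (1 - EMPTY_SHARE) / 2, EMPTY_SHARE]
grid = [[random.choices(cells, weights)[0] for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(r, c):
    # An agent is unhappy if fewer than THRESHOLD of its occupied neighbors share its type.
    me = grid[r][c]
    neighbors = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    occupied = [n for n in neighbors if n is not None]
    return bool(occupied) and sum(n == me for n in occupied) / len(occupied) < THRESHOLD

for _ in range(STEPS):
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unhappy(r, c)]
    random.shuffle(movers)
    for r, c in movers:
        if not empties:
            break
        nr, nc = empties.pop(random.randrange(len(empties)))
        grid[nr][nc], grid[r][c] = grid[r][c], None   # move the agent to an empty cell
        empties.append((r, c))                        # its old cell is now available

# Print the final pattern: clusters of A's and B's emerge from this very simple rule.
for row in grid:
    print("".join(cell if cell else "." for cell in row))

Even in this toy example, pronounced clustering emerges that no individual agent “intends,” which is precisely the kind of aggregate behavior that complexity theory emphasizes.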
This theoretical background has led to research and development in ABM, which is being used in a variety of ways. StarLogo is elementary ABM design software that has been successfully used in education and community planning environments to facilitate learning about modeling and its potential uses.73 These models continue to be developed and represent a promising direction for urban modeling research. The scale of urban models continues to get smaller, but the grossness of the relationship between models and reality has proved stubborn. In what has become known as “Box’s Law,” after the statistician George Box, “any model is, by definition, a simplification of reality and thus inevitably is wrong in the sense that it leaves out some aspects of reality.”74 What this means is that no matter how detailed the scale of the model becomes, it will be only a gross approximation of reality. While in earlier LSUMs this issue was treated as a problem to be overcome, increasingly it is seen as the nature of modeling that has to be accounted for.                                                              73 V. S. Colella, E. Klopfer, and M. Resnick, Adventures in Modeling: Exploring Complex, Dynamic Systems with StarLogo (Teachers College Press, 2001). 74 Richard Klosterman, “A New Tool for a New Planning: The What If? Planning Support System,” in Planning Support Systems for Cities and Regions, ed. R. K. Brail (Cambridge, Massachusetts: Lincoln Institute of Land Policy, 2008), 89. 56    One method of dealing with this dilemma is to explicitly acknowledge that the model is not predicting the future, but rather is only showing a potential scenario in relation to other scenarios that are made from the same data. By creating different scenarios based on the same data, comparability can be achieved between the different scenarios. Klosterman argues that it is imperative that planners recognize that these scenarios only represent “what would happen if clearly defined policy choices are made and assumptions concerning the future prove to be correct.”75 This is an important epistemological shift that emphasizes the value of models in heuristic terms, whereas during the initial development of LSUMs it was believed that the models could predict the future. It was beginning to be realized that current and future events are contingent upon, not determined by, asocial variables. The emphasis on the creation of scenarios rather than predictions or forecasts has been explored in the literature on scenario planning. Scenario planning began in the economics and business sectors. Herman Kahn developed what would become known as scenario planning while working at the RAND Corporation in the 1950s and 1960s. Kahn did research for the Pentagon on the potential outcomes of a nuclear war. His goal was to find “serious alternatives to annihilation and surrender, and his work had a major impact on the Pentagon’s thinking in the 1950s and 1960s.”76 His methods were eventually adopted by the business world, notably by Royal Dutch Shell beginning in 1967. Pierre Wack, a planner at Royal Dutch Shell, would write about their use of the methodology,                                                              75 Richard Klosterman, “Deliberating About the Future,” in Engaging the Future: Forecasts, Scenarios, Plans and Projects, ed. Lewis Hopkins and Marisa A. Zapata (Cambridge, Massachusetts: Lincoln Institute of Land Policy, 2007), 201. 76 R. 
Bradfield et al., “The origins and evolution of scenario techniques in long range business planning,” Futures 37, no. 8 (2005): 798. 57    and credited the methodology with enabling the company to anticipate the oil scarcity of the 1970s. In comparing scenario planning to traditional forecasting, Wack said that forecasts are “wrong when it hurts most,”77 during periods of rapid change. Wack’s article was influential, and he laid out a number of elements of the approach that would continue to be applied. Wack argued that scenarios have to involve decision makers in constructing the scenarios (p.74). The real goal of scenarios is to change the “microcosms of…decision makers” (p. 84) so that they are able to make effective decisions. Analysis is certainly important for the creation of scenarios, as he saw it, but it was only a portion of the technique. The analysis had to be connected to a range of plausible scenarios that were both relevant and understandable to decision makers. The importance of effectively communicating scenarios was also taken up by Myers and Kitsuse, who wrote about bringing scenario planning to urban planning in their 2000 article. They characterize scenarios this way: “At root, scenarios are simply stories about events that would have an impact on planning decisions if they occurred. In the scenario-building process, planners invent a number of stories about equally plausible futures, study the implications of each future for their organization, then strategize their organization's response as though each of these scenarios had in fact come to pass.”78 Planners do not have to be the only ones who work to create these scenarios;                                                              77 Pierre Wack, “Scenarios: shooting the rapids,” Harvard Business Review 63, no. 6 (1985): 75. 78 D. Myers and A. Kitsuse, “Constructing the future in planning: A survey of theories and tools,” Journal of Planning Education and Research 19, no. 3 (2000): 228. 58    the public and decision makers can and should also be involved, with planners using their expertise to help keep the scenarios plausible. The scenario approach acknowledges the fallibility of any projections about the future. As Hopkins and Zapata say in the introduction to their recent book on scenario planning: “…scenarios in a set demonstrate the multiplicity, complexity, and unpredictability of forces shaping the futures of companies, cities or nations.”79 Embracing the uncertainty of the future enables planners and communities to be better prepared for it. Positioning the scenario within a storytelling framework connects the future to the context of the past and the options of the present. Summary Shifting from the idea of predicting the future to describing multiple possible future scenarios acknowledges the importance of communication and the context in which models are used. This new stance provides a stronger theoretical foundation that responds to the postmodern context that LSUMs were never able to adapt to. Critical History: Expensiveness: “The cost of most of the modeling efforts ran into the millions of dollars.” (p. 168) The quote above was probably more impressive when Lee initially wrote his article. Due to inflation, the impact of a million dollars is not what it once was. However, the fact remains that developing models was, and remains, expensive. Yet                                                              79 Lewis Hopkins and Marisa A. 
Zapata, eds., Engaging the Future: Forecasts, Scenarios, Plans and Projects (Cambridge, Massachusetts: Lincoln Institute of Land Policy, 2007), 8. 59    modeling is not as expensive as it used to be. The cost of computer processing has been reduced considerably since the time of “Requiem.” “Moore’s law,” Gordon Moore’s observation that computer processing power increases exponentially over time, doubling approximately every two years, has had a huge impact on the cost of computing.80 But computing does not represent the only cost involved in modeling or PSS. Some of the alterations that have affected the best practices of what is becoming known as PSS have made the process more expensive. Involving the public in creating forecasts or gathering data, teaching planners to be more computer literate, creating more accurate and timely data sets…all of these methods are costly. And cost can be measured in other ways than money. What are the opportunity costs of pursuing PSS? If more time, money and effort are being spent on computer approaches, then surely something else is being shortchanged? Lee himself was, and remains, worried that planners and developers are wasting their time with modeling while cities continue to face major problems. However, if utilizing a PSS turns out to be a crucial tool in enabling an effective planning solution that the community is happy with, then it is probably worth the cost. There are certainly ways in which utilizing computer methods has reduced the costs of planning, which is part of the reason for the rapid adoption of GIS. Cost is an important consideration for communities, and decisions to invest in PSS must be based on the local situation in which they will be applied. There is not an effective                                                              80 Gordon E. Moore, “Cramming More Components onto Integrated Circuits,” Electronics 38, no. 8 (April 19, 1965): 4. 60    metric that I know of that will reveal conclusively whether PSS pass the cost-benefit test. Guides like the one produced by the EPA in 200081 are useful ways of comparing costs and features of the software, but in order to be able to judge the ease and cost of utilizing the software in practice, a judgment has to be made based on the nature of the problem at hand, the capabilities available to the organization, as well as the capabilities of the model. Case studies of the sort that I will evaluate in the second portion of this paper can be an effective method of gathering enough information to make the decision. Summary Cost will continue to be an important consideration in deciding whether or not to adopt a PSS. The cost of application should be considered as well as the purchase cost, including the opportunity cost of forgoing other methods. Conclusion Based on my research, Lee’s article still offers cogent criticism of the criteria that should be considered in the development and evaluation of urban models, PSS or other scenario-generating software or models. While applying Lee’s criticisms to the present time requires some extrapolation and interpretation, his criticisms remain an effective way to set the context for interpreting current developments in PSS. The field is new enough and diverse enough that there are currently few guidelines for evaluating what these software programs do and how they should be used. 
While the tone of Lee’s article was flamboyant and rhetorically harsh, the breadth of his criticism is rare, particularly in the                                                              81 U.S. EPA, Projecting Land-Use Change: A Summary of Models for Assessing the Effects of Community Growth and Change on Land-Use Patterns. 61    field of urban modeling. The rapid development of PSS and their increasing application in the planning field require an evaluation of them that considers both their theoretical and contextual nature. Lee’s critiques still offer the most comprehensive approach to achieving this. In the next section, I will test the truth of this statement by using Lee’s seven sins as the basis for a rating system of current PSS. While my primary purpose in conducting the literature review was to generate the foundation for a rating system, I feel that the framework for the review of history and theory brought out interesting connections between the many different disciplines involved in similar research directions. Further research on how these disciplines interact using similar or different methodologies would appear to be fertile ground for developing new ideas for models that help planners, policy makers, the public and academics anticipate the future results of current actions. 62    Chapter 4- Rating Framework Design In order to convert Lee’s sins of large-scale urban models into criteria for a rating system, some alterations are in order. “Requiem for Large-Scale Models” was presented in antagonistic terms, which was effective at the time for bringing attention to the failings of the models, but such a tone would be distracting and counterproductive in a rating framework. Lee is a polarizing figure in the planning and modeling literature, and my goal in creating the rating framework is to retain the criteria with which he evaluated large-scale urban models while avoiding the pejorative overtones with which he imbued his terms. In order to create a better fit, I have renamed Lee’s sins, attempting to choose terms that reflect the same concerns but are more neutral in tone. The new terms are:
Hypercomprehensiveness = Comprehensiveness
Complicatedness = Complexity
Hungriness = Data Requirements
Grossness = Scale
Mechanicalness = Software Architecture
Expensiveness = Cost
Wrongheadedness = Robustness
In addition, in referring to these terms, I will use the word “criteria” rather than “sins” to reflect their new role as well as to avoid the negative connotations in Lee’s word choice. Format For the format of the rating system, I looked to the guide produced by the EPA in 2000, which surveyed a number of land use models. While my criteria are different, I found 63    EPA’s approach helpful, and its guide is one of the most widely circulated reviews of land use software that I could find. In the course of the report the authors utilize a number of matrices and measures to distinguish the characteristics of different land use change models. While the number of measures investigated leads to quite a bit of detail about the software, some of the measures are redundant, and at 271 pages, the details of the models get lost in the report’s complexity. In creating my rating framework, I took three basic approaches used in the EPA report, expanded them, and linked them together so that they work more as a coherent system. These approaches were to create a matrix with which to assess land use models, to describe the models verbally, and to offer examples of places where each model had been used. 
The process diagram in figure 5 illustrates the general rating framework that I am proposing. Below, I will address different aspects of the framework in more detail. 64    Figure 5: Rating framework diagram As mentioned in the methods section, my rating framework will contain both evaluations of the PSS themselves and a case study analysis of an applied use of the software. These two methods of review will be summarized in both numerical and descriptive format. The intention of having multiple review methods is to provide triangulation of the results, while communicating these results in multiple formats presents that information in different ways. 65    Numerical Ratings Numerical scores will be assigned by the reviewer for each of the seven criteria for both the PSS evaluation and the case study evaluation. The range of numbers that can be assigned is 0 to 3. These correspond with the following subjective valuations:
0: Does not meet expectations
1: Insufficiently meets expectations
2: Meets expectations/No opinion
3: Exceeds expectations
I have suggested this range of scores because I feel that it is a useful range for evaluating these software programs. With many more levels of distinction, the numbers could give a false sense of precision. However, the range of numbers can be altered by the reviewer if they desire further distinction between scores, as long as the range is kept the same between the PSS evaluation and the case study evaluation. The scores from the PSS and case study evaluations are added together to get a combined score. If more than one case study was carried out for a PSS, then the average of the case study scores should be taken before combining the scores. This score can then be multiplied by a weighting factor based on the importance of each criterion to the reviewer. These values should be set somewhere between 0 and 1. If the reviewer is operating under the assumption that all of the criteria carry equal weight, then the weights should all be left set to one, as I have done in this example. Once each combined score has 66    been multiplied by its weight, the seven weighted scores can be added to get a final score. The matrix on which the numerical scores are marked and combined is shown in figure 6. Figure 6: Numerical Rating Matrix In this example, the two PSS (What If? and CommunityViz) that I am rating are represented. Additional columns could be added if more than two PSS were being evaluated. 
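To make the arithmetic just described explicit, the following short sketch works through the procedure: average the case study scores for each criterion, add the average to the PSS evaluation score, multiply by the reviewer’s weight, and sum across the seven criteria. Python is used here only for illustration; the criterion names follow the renamed terms from Chapter 4, and the scores shown are placeholders rather than the values reported in figure 8.

CRITERIA = ["Comprehensiveness", "Robustness", "Complexity", "Software Architecture",
            "Data Requirements", "Scale", "Cost"]

def final_score(pss_scores, case_study_scores, weights):
    # pss_scores:        {criterion: 0-3 score from the PSS evaluation}
    # case_study_scores: {criterion: list of 0-3 scores, one per case study}
    # weights:           {criterion: importance factor between 0 and 1}
    total = 0.0
    for criterion in CRITERIA:
        case_avg = sum(case_study_scores[criterion]) / len(case_study_scores[criterion])
        combined = pss_scores[criterion] + case_avg   # PSS score plus averaged case score
        total += combined * weights[criterion]        # weighted by the reviewer's priorities
    return total

# Placeholder example: every criterion scores 2, one case study each, equal weights of 1.
pss = {c: 2 for c in CRITERIA}
cases = {c: [2] for c in CRITERIA}
weights = {c: 1.0 for c in CRITERIA}
print(final_score(pss, cases, weights))   # 7 criteria x (2 + 2) x 1.0 = 28.0

With equal weights of one, the highest possible total under this scheme is 7 × (3 + 3) = 42, which gives a rough sense of where any given combined score falls.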
PSS Selection In choosing PSS to evaluate, I was looking for software that had been widely applied, had the goal of being easy to use, and had an application within Texas that I could look at as a case study. I was looking for two to three examples of software programs to provide comparison and contrast. While there are a number of software programs that 67    are known as PSS, relatively few of them have been widely used in practice. Based on a survey of the PSS literature, I selected four recently published sources that evaluated widely applied PSS. These sources, as well as the software that they discuss, are shown in figure 7. Figure 7: Applied Planning Support Systems mentioned in chosen sources Based on the results from these sources, I narrowed the list down to INDEX, CommunityViz, UrbanSim and What If?. I was able to find suitable case studies in Texas for both CommunityViz and What If?. I was not able to find a Texas case study of INDEX, and decided not to evaluate it for that reason. While I did find a case study for UrbanSim, the complexity of the software was such that it did not fit my criterion of being easy to use. It requires substantial programming knowledge, a large quantity of data, and detailed statistical knowledge to use. While it looks promising, these requirements put it out of the reach of most community planners. While it would be beneficial to look at more than two PSS, two is a sufficient number for the purpose of this paper: a test run of the rating system that was developed based on Lee’s criticisms. What If? 68    and CommunityViz are good comparisons because they were designed with some of the same goals, but attempt to achieve them in different ways. Furthermore, the organizations that design them are also structured differently. Text Descriptions – PSS Description The seven text descriptions of the criteria offer a way to give context to the numerical score that was given on the matrix. These descriptions are intended to illustrate the thought process involved in coming to the score that was given. The first portion of this section provides a general overview of the PSS. This covers the history of its development and developers, its goals and general software design. This overview will give the reader a quick sense of the model before getting into the detail of the seven criteria. In carrying out this description of each criterion, I found it most effective to write the text descriptions for both PSS that I was evaluating simultaneously in one paragraph (or series of paragraphs). This allowed comparison between the ways that the two models operated. Comparing the two programs made it much easier to come up with a numerical score for each program, because between the two, one usually stood out as performing better in each criterion. As the end result of the rating framework is to select which software program is better suited to the reviewer’s needs, this seems like an effective way to approach the text descriptions. 69    It is difficult to offer a specific framework as to how to approach each criterion. I tried to evaluate the major strengths and weaknesses that each program showed in approaching the criterion, and then considered how the two PSS compared to each other. The needs of each reviewer will differ, as will the nature of the PSS, so I feel that it is best to leave this process open to interpretation by the reviewer. Each description should end with the numeric score that was given so that it is immediately apparent to the reader which score corresponds with this description. Text Descriptions- Case Studies The case study text descriptions follow basically the same format as the PSS descriptions. However, one major difference in the way that I carried it out was that I found it impractical to compare the case studies of the different PSS in one paragraph in the way that I did for the PSS evaluation. The case studies were different enough that it would have made direct comparison cumbersome and, in my view, unhelpful. The format of this section will depend somewhat on the type and number of case studies that are carried out. I only carried out one case study for each PSS. If there had been more than one case study for each PSS, then direct comparison within one paragraph between the case studies for each PSS might have been effective, as the paragraphs could be structured around the ways that the context influences the software application. 
In describing the case studies, the important thing is to make clear how they have added to 70    context-dependent knowledge, as discussed by Flyvbjerg in Making Social Science Matter. I evaluated one case study for each PSS. What If? was used in the Watershed Protection and Development Review department of the City of Austin to evaluate the water quality impacts of projected future development within selected watersheds in the city. CommunityViz was used as part of a collaborative land-sea planning exercise carried out at the University of Texas Marine Science Institute in Port Aransas, Texas. My analysis of these case studies began with research and familiarizing myself with the software through the process of the PSS evaluation rating process. Finding the case studies was accomplished through contacting the software developers. They helped me identify cases that were within Texas and might prove useful. While this probably limited the range of case studies to ones that had relatively positive outcomes, there were few other options that I could identify to find case studies, since few have been published in articles or books. As more organizations use PSS, case studies will become easier to find. Once a case study was chosen, I did additional background research on the context of the case: its history, major actors, and the role that the software played. After some preliminary research, I began informal interviews, which was how I received a large portion of the useful information about each case. Both of my case study interviews began with a single contact . A “snowball” technique was used to identify additional 71    people to talk to that were involved in the case study.82 The research process of each case study will be further described in each case description below. Based on the time and resources available to the reviewer, there are different ways to approach the case studies. While I only looked at one case for each PSS, it would be helpful to carry out multiple case studies to begin to see differences in application. In some cases interviews might not be possible, in which case research could be done based on written sources. However, I recommend interviews, as they are an effective method of obtaining information that might not be published in an official report. Each description ends with the numeric score that was given so that it is immediately apparent to the reader which score corresponds with this description. Chapter Summary This Chapter has laid out the proposed PSS rating framework that I have developed. I have modified Douglass Lee’s terms so that they will function in a more neutral manner as the basis for the rating framework. These modifications were made based on the understanding gleaned from the literature review about the initial context for the sins and subsequent historical developments. The 2000 EPA report on land use models has served as a guide for some of the basic elements of the rating framework, but they have been modified to link to and complement each other. Chapter 5 will discuss the framework as applied to the two PSS that I evaluated.                                                              82 B. L Berg, Qualitative research methods for the social sciences (Allyn and Bacon Boston, 1989), 33. 72    Chapter 5- Rating Framework The following Chapter illustrates the application of the rating framework described in Chapter 4. Figure 8 displays the completed numerical rating matrix with values from my investigation inserted. 
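Stated compactly, each final score in the matrix follows the combination rule described in Chapter 4. The notation below is my own shorthand, introduced only to make the arithmetic explicit:

Final score = \sum_{c=1}^{7} w_c \left( s_c^{\mathrm{PSS}} + \frac{1}{n} \sum_{j=1}^{n} s_{c,j}^{\mathrm{case}} \right)

where s_c^{PSS} is the PSS evaluation score for criterion c, s_{c,j}^{case} is the score from the jth of the n case studies carried out for that PSS, and w_c is the weighting factor chosen by the reviewer (set to one for every criterion in this thesis).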
The sections that follow will describe how these values were derived. Figure 8: Numerical Rating Matrix PSS Evaluation: What If? What If? is “a scenario-based, policy-oriented planning support system (PSS) that uses increasingly available geographic information system (GIS) data to support 73    community-based processes of collaborative planning and collective decision making.”83 It was developed by Richard Klosterman, a professor emeritus of the University of Akron, and the president and CEO of What If? Incorporated, which develops the What If? PSS. What If Inc. was founded in 1996, and has continued to develop the software since that time. The software has been used in 28 countries and 29 States in the US, according to the What If? Inc. website.84 Richard Klosterman has been involved in the debate over the use of analytical tools in planning since he received his PhD in city planning from Cornell in 1976. He finished his dissertation during the period of ferment in a planning profession that was attempting to find a new direction. His dissertation was titled “Toward a Normative Theory of Planning”85 and focused on the “intellectual foundation for normative planning as the rational evaluation of both the means and ends of public policy.” In his dissertation, he sought to find a way to overturn the failings of the “logical-positivist” position while still retaining a place for rational methods within planning. As he noted in an article from that time, “none of the calls for normative planning has outlined the procedures by which planners are to rationally evaluate public policy ends.”86 The design of What If? is one answer to the question of how planners should use rational methods in our current context.                                                              83 Richard Klosterman, “The What if? Collaborative Support System,” Environment and Planning, B: Planning and Design 26 (1999): 396 84 “What If? Homepage,” http://www.whatifinc.biz/. 85 Richard Klosterman, “Toward a Normative Theory of Planning” (Dissertation, Cornell University, 1976). 86 Richard Klosterman, “Foundations for Normative Planning,” Journal of the American Planning Association 44, no. 1 (1978): 38. 74    PSS 2: CommunityViz Rather than being understood in the context of the philosophies of one man, CommunityViz has to be understood within the context of the Orton Family Foundation, which financed the development of the software. The Orton Family foundation grew out of the success of the Vermont Country Store, a national mail order and web retail business. Lyman Orton started the foundation in 1995 along with Noel Fritzinger as a “resource for small cities and towns grappling with change and groping for solutions.”87 The foundation has focused on utilizing technology to achieve these ends, and much of their work has been on software development. CommunityViz has been their flagship software product and has been continually developed since the inception of the foundation. In 2004, in order to focus more on applied planning problems, and to achieve “long term financial sustainability” the foundation transferred the sales, development and support of CommunityViz to Placeways LLC, a for-profit company owned by the foundation. The foundation continues to apply the software in its planning applications, but addresses broader issues, while Placeways works to develop the software and provides technical consulting for users of the software. 
The design of the original version of CommunityViz occurred at the Environmental Simulation Center (ESC), under the leadership of Michael Kwartler, FAIA. The ESC managed a software development team consisting of Foresite Consulting, which developed the Scenario ConstructorTM module; Multigen Paradigm, which developed the Site BuilderTM 3D visualization component; and                                                              87 http://www.orton.org/index.cfm?fuseaction=Page.viewPage&pageID=477&parentID=472&nodeID=1 75    PricewaterhouseCoopers’s Emergent Solutions Group, which developed the Policy SimulatorTM, an agent-based modeling program that simulated the reaction of citizens to government policies.88 Together these modules made up CommunityViz, and the combination of different modules represented a new approach that built upon the strengths of GIS as a platform that can accommodate different programs working together. Due to the flexibility inherent in the object-oriented approach, the modules have been altered over time to reflect changes in technology and intent, but the PSS as a whole has remained viable. The Policy Simulator module was dropped from the software in 2003 due to the extensive data requirements and complexity involved in using it. Although still popular, the 3D visualization component has become somewhat obsolete with the prevalence of Google Earth and Google SketchUp. According to Amy Anderson at Placeways, version 4.0 of CommunityViz will introduce a new 3D rendering component known as Scenario 3D. PSS Criterion: Software Architecture Both of these programs build on the capabilities of ArcGIS. This widens the appeal and capabilities of the software, while also limiting the user base to those who are familiar with and willing to pay for ArcGIS. However, the programs approach interoperability with ArcGIS in different ways. What If? is a standalone program but requires ArcGIS to process the data inputs that it uses in its analysis. Data is                                                              88 George M. Janes and Michael Kwartler, “Communities in Control: Developing Local Models Using CommunityViz,” in Planning Support Systems for Cities and Regions, ed. R. K. Brail (Cambridge, Massachusetts: Lincoln Institute of Land Policy, 2008), 168. 76    input into the program in shapefile format, and What If? then generates additional shapefiles with the results of its calculations.89 CommunityViz, on the other hand, functions as a plugin within ArcGIS. It is not a standalone program, and all of its functionality is operable from within ArcGIS.90 The integrated format is preferable because it enables much tighter integration between the two programs. Any GIS operations that are part of an analysis involving What If? have to be carried out beforehand and applied to the data before it is converted into a single shapefile.91 By working within ArcGIS, CommunityViz does not require data to be formatted as specifically in order to serve as input for the PSS, but rather builds on the strengths of the GIS environment, which include a graphic user interface (GUI) that has undergone much development, as well as a variety of analytical tools. Users can develop creative combinations of the capabilities of the different tools, whereas the separate approach of What If? limits the GIS capabilities to specific pathways. What If?
Ranking: 2 (meets expectations) CommunityViz Ranking: 3 (exceeds expectations)                                                              89 Richard Klosterman, User's Guide to What If? 2.0 (What If? Inc.), 10.2, http://www.whatifinc.biz/documentation.php. 90 Janes and Kwartler, “Communities in Control: Developing Local Models Using CommunityViz,” 169 91 Klosterman, User's Guide to What If? 2.0, 10.1 77    PSS Criterion: Cost What If? 2.0 sells one version of their software, but offers it at different prices based on whether the purchaser is using it in an academic or business setting. The Professional price is $1,000 per user. There is a $200 per person per year maintenance fee for the software, which includes upgrades and technical assistance that applies after the first year. The Academic version of the software costs $100 per user and there is no maintenance fee. Additionally, What If? Inc. sells sets of data that are preprocessed and ready to use with the software for $1,000 for national level data, and $2,000 for state level data.92 CommunityViz 4.0 offers different price options based on whether you buy technical support, but does not have an academic version. The standalone software is $350, and with one year of tech support and upgrades, costs $850. Additional analytical options are offered with the higher priced version as well. For either version of the software, the purchase is good for either three installs of the program or three seats on a network. Additional years of technical support cost $650.93 Since there are a number of price options for both software packages, the price will vary based on the type of organization that is interested in buying the software, and how many copies of the software is needed. With the exception of the academic license, CommunityViz is cheaper on the whole than What If?. However, the up front costs of the PSS are not the whole story. There are additional costs that need to be considered as well.                                                              92 “Purchase What if?,” http://www.whatifinc.biz/purchase.php. 93 “Placeways, LLC - Purchase CommunityViz,” http://shop.placeways.com/purchase.aspx. 78    Since both of these software packages rely on ArcGIS, it also must be purchased, if not already owned. The cheapest and most limited version of ArcGIS costs $1,500 for one seat, and more complete versions are considerably more expensive.94 Another cost that is more important in the long run is how much staff time the new computer systems will either save or create a need for. This will depend on how knowledgeable the staff is in GIS and computers in general, the ambitiousness of the analysis, how much data is available, and many other factors. Organizations should consider carefully the entire costs of choosing to use a PSS before committing to it. Since I don’t have a specific organization that I am ranking the software for, I will base my ranking on the upfront costs alone. 
What If?: 2 (meets expectations) CommunityViz: 2 (meets expectations) PSS Criterion: Comprehensiveness According to Richard Klosterman, What If?’s core functionality is to project “future land use, population, and employment patterns by allocating the projected land use demands from a Demand scenario to different locations based on their relative suitability as defined by a Suitability scenario, subject to the allocation controls specified                                                              94 “ArcView : Pricing Information,” http://www.esri.com/software/arcgis/arcview/pricing.html. 79    in an Allocation scenario.”95 The Demand, Suitability, and Allocation scenarios are the heart of the software:  The Suitability scenario is created based on GIS layers that the user provides. The user inputs all of the geographic factors that will affect the suitability of development of different types of land use. Different weights are provided by the user to determine whether certain factors influence the outcome more than others. The suitability map that is produced can be the end result, or it can become the basis for projecting future development.96  Demand scenarios must be developed if the user wishes to project future growth. The scenarios are based on economic and demographic data that is largely available publicly from the census or other government agencies. Future demand can either be provided if it has been projected elsewhere, or What If? will do a trend projection based on past values.97  The Allocation scenario is where the software spatially places the land uses and development based on the suitability map and the demand scenarios.98 What If? defines the comprehensiveness of its analysis, and then tailors its available analysis to that level of analysis. The software has flexibility in that it allows users to input demand numbers calculated elsewhere, which prevents it from being overly rigid. If it were only relying on its trend capabilities for future                                                              95 Klosterman, “A New Tool for a New Planning: The What If? Planning Support System,” 96. 96 Klosterman, User's Guide to What If? 2.0, 7.1 97 Ibid., 8.3. 98 Ibid., 9.1 80    projections, I would say that it was being overly comprehensive for the analytical capabilities that it possesses. CommunityViz has a wider range of functionalities, and thus can carry out more comprehensive analyses. CommunityViz has capabilities similar to What If?’s (the suitability and allocator wizards) but these only make up a portion of its capabilities. Many of these capabilities are not only concerned with modeling and projection. It has tools that aid in visualization, design, and generating indicators describing the impacts of various planning outcomes.99 Its open ended nature allows it to be comprehensive without falling into the trap of hypercomprehensiveness that Lee feared. Because of its design as a plug in within the ArcGIS framework, CommunityViz’s designers have enabled it to interact with other programs that expand its comprehensiveness, without having to program and account for every aspect of the analysis. This enables the level of comprehensiveness to fit the desired goals for the software. What If?: 2 (meets expectations) CommunityViz: 3 (exceeds expectations) PSS Criterion: Data Requirements The data requirements of each PSS relate to their comprehensiveness. What If? makes an attempt to provide for different levels of data. 
GIS land use and geographical                                                              99 “Common uses of Scenario 360,” http://placeways.com/communityviz/productinfo/scenario360/more.php 81    data is needed for any of the analyses carried out, however. If the suitability option is chosen, this is all that is needed.100 The Demand and allocation functions require population data, and optionally, employment data.101 However, population and economic data are arguably the easiest type of data to obtain. They are published by the census and other public agencies, and are also offered by What If? Inc. The GIS data are less common and can often be the most formidable obstacle in PSS implementation.102 So although one of What If?’s selling points is that it accommodates users with different levels of data, in practice its data requirements are relatively fixed. Users can determine what GIS layers they require to do an analysis, but if it is to be meaningful, they likely will require extensive GIS information. CommunityViz requires similar types of information as What If? does, but due to its flexibility, is more able to rely on minimal input data. Without extensive GIS data it is not possible to do extensive analyses. However, some of the tools, such as the build out wizard, can reliably function with only a land use layer.103 However, CommunityViz requires extensive GIS information to achieve many of its capabilities. Unless an organization has extensive GIS data, it will likely be necessary to digitize spatial information to achieve all that is desired. What If?: 1 (Insufficiently meets expectations)                                                              100 Klosterman, User's Guide to What If? 2.0, 2.3 101 Ibid., 2.4-2.5 102 James N. Levitt, Conservation in the Digital Age: Threats and Opportunities (Washington, D.C.: Island Press, 2002), 231. 103 “Scenario 360 Help: Data required to set up a build-out analysis,” http://placeways.com/support/s360webhelp/CV4_0webhelp.html#Welcome_to_Scenario_360.htm 82    CommunityViz: 2 (meets expectations) PSS Criterion: Complexity The main problem that complexity brings up is the issue of transparency. When a software program becomes too complex, it’s hard to follow and understand the operations that are carried out. Yet complexity in software architecture can lead to simpler and more straightforward operation if managed well. This is the dynamic that plays out between What If? and CommunityViz. What If? is the simpler software. It doesn’t attempt as much, its operations are simpler, and it doesn’t interoperate with as many other pieces of software. Yet, I would say that CommunityViz is the more simple software to use. It builds on usability conventions that have been built into ArcGIS and other mainstream software over time, and its larger user base and support staff have led to a product that is more thoroughly developed and mature. Help messages are plentiful, and all of the relevant information about calculations and assumptions are made plain. What If? on the other hand makes the attempt at transparency, and succeeds in some ways, but misses on other aspects of the software. The necessity of having to type in all of the data that will be used in the software for each new analysis is tedious, and increases the possibility of error. Help screens are not as plentiful or targeted, and it is not always obvious what processes are running in the background. 
The design of CommunityViz is more complicated, but its user interface is relatively simple, whereas What If? is more simple in its operation, but its graphic user interface and interoperability are clumsy and don’t meet current standards. 83    What If?: 1 (Insufficiently meets expectations) CommunityViz: 3 (Exceeds expectations) PSS Criterion: Scale What If? is aimed at city and regional scale planning questions. The toolset it contains and the data that it uses for analysis is specifically aimed at this scale. While it could conceivably be used at the neighborhood scale, or at the national scale, that is not what it has been designed for.104 Given the scale that is being evaluated, the results are not as precise as they could be. The analysis area is broken up by the software into Uniform Analysis Zones (UAZs), which are polygons containing similar ratios of attributes as all of the other UAZs.105 These UAZs are then used as the unit over which analysis results are distributed. While non uniform in area, the result is much like a grid system over which the results can be generated. Since the UAZs are not tied to any property or geographical arrangement recognized by the city, the results are very clearly approximations. This fits with What If’s goal of being an advisory tool that does not reach a very precise level of detail, but is a particularly limiting feature of the software. Since the process of UAZ is automated, users have no control over the units of output. Thus, comparability between different locations, or even multiple analyses is severely limited.                                                              104 Klosterman, “The What if? Collaborative Support System,” 393 105 Ibid., 396 84    CommunityViz doesn’t have the limitation of UAZs built into the software. The unit of analysis can be based on whatever units the user wants to adopt. Typically they are structures or property parcels. 106 The flexibility of tailoring the analyses to appropriate units enables the software to run analyses and produce outputs of a greater variety of scales, and with greater precision. The toolsets that are included with CommunityViz are still largely aimed at the same city and regional scale as What If?, but flexibility in this aspect and other parts of the software gives the user more options. If it was necessary for political reasons to produce results that were not at the parcel scale, then there are many operations within GIS that would enable that to be achieved.107 Some of the tools that are incorporated into CommunityViz, such as the Common Impacts Wizard, which calculates the environmental, demographic and fiscal impacts of structures, are not limited to the coarser scale.108 The lack of a prescriptive unit of analysis enables flexibility for the user, and also is a more stable platform for future additions and improvements to the software. 
What If?: 1 (Insufficiently meets expectations) CommunityViz: 2 (Meets expectations)                                                              106 “Scenario 360 Help: About Scenario 360 decision tools,” http://placeways.com/support/s360webhelp/CV4_0webhelp.html#Welcome_to_Scenario_360.htm 107 Such as the ArcToolbox Dissolve tool 108 “Scenario 360 Help: About the Common Impacts Wizard,” http://placeways.com/support/s360webhelp/CV4_0webhelp.html#Welcome_to_Scenario_360.htm 85    PSS Criterion: Robustness Both of these programs are intended as planning support tools that are simple and straightforward to use, but the developers approach this goal with different strategies. What If? is intended to accomplish a relatively narrow set of goals: performing suitability and build-out analyses with varying levels of complexity based on easily accessible data in a transparent manner. These goals are shared with Klosterman’s academic work in planning based on his assessment of the development of analytical methods in planning.109 These goals are all the right ones, and to a large degree he achieves them, but the software is held back by the lack of smooth integration with other software, its failure to include many best practice software innovations, and its rigidity. CommunityViz attempts to achieve many of the same goals and basic functionality of What If?, and it manages to achieve them within a flexible framework that also accommodates additional functionality. As it has been refined over the 15 years since it was introduced, it has become very user friendly and intuitive, despite the complexity of its functionality. Utilizing many of the strengths of the ArcGIS platform while extending its capabilities, it feels like another module within ArcGIS. It remains to be seen whether planners will find it useful enough to adopt it in substantially higher numbers than they are currently, but it seems well placed to help with current planning problems as well as improve over time. What If?: 1(Insufficiently meets expectations)                                                              109 Klosterman, “The What if? Collaborative Support System.” 86    CommunityViz: 2 (Meets expectations) PSS Evaluation Summary PSS Evaluation Total Score What If?: 10 CommunityViz: 17 Based on my evaluation, I found CommunityViz to be the better software by a large margin. In general, I found it to be more flexible and easy to use than What If?. The software architecture meets the standards of contemporary commercial software, and it integrates much more tightly with ArcGIS than What If?. In addition the visual output of CommunityViz is much more polished visually as well as being able to be output to other common software programs. While both developers share admirable goals, What If? Inc. has not fully translated those goals into a software program that would be as effective in integrating into planning practice. This conclusion is based solely on my experience with the software, and might not be the opinion that others would come to. Case Study Evaluation: What If? used at the City of Austin Watershed Protection Department Introduction Water quality has been an important part of Austin politics for quite some time. 
The southwestern portion of the city lies over a portion of the Edwards aquifer recharge 87    zone, where “highly faulted and fractured Edwards limestone outcrop at the land surface, allowing large quantities of water to flow into the Aquifer.”110 Due to the speed at which the water flows into the aquifer, there is little filtering of the water before it reaches the aquifer. There are a number of sources of pollutants, including erosion, toxic spills, and runoff from roads and gutters. The presence of pollutants can be visibly observed in the impacts on Barton Springs pool, a popular spring-fed swimming pool just south of downtown Austin. Barton Springs has served as an iconic symbol of environmental stewardship and degradation in the Austin area. Many of the city’s conflicts have focused on the impacts to the springs and the wildlife that lives within them. Conflicting community values regarding development and its impacts on the natural environment came to a head in the late 1980s and 1990s. This conflict spawned influential political groups such as the Save Our Springs Alliance111 and others. These groups continue to influence political decisions in Austin, and the city’s Department of Watershed Protection and Development Review (WPDR) continues to be influenced by that political climate. But water quality is not their only concern. The department’s primary goal in relation to water is to “Protect lives, property, and the environment from the impact of flooding, erosion, and water pollution.”112 Flood control is the other main component of their mission. Austin is located at the southern end of the “Highland Lakes,” man-made lakes created by a series of dams built on the Colorado river. While these dams help to control the large, damaging floods that have plagued Austin’s history, large intense rains                                                              110 “Introduction to the Edwards Aquifer,” http://www.edwardsaquifer.net/intro.html. 111 “Save Our Springs Alliance Home Page,” http://www.sosalliance.org/. 112 “Watershed Protection: Mission & Goals,” http://www.ci.austin.tx.us/watershed/mission.htm. 88    can still cause flooding on the many creeks that run through Austin. As recently as 2001, there was a flood that damaged 860 buildings.113 While water quality and flood protection are separate issues, they share a common aggravating factor: impervious cover. The greater the percentage of area in a watershed that is covered in impervious cover, the larger impact there will be on the water body that collects runoff from the watershed. After a rain event, runoff will travel to the water body faster and bring a more concentrated load of pollutants. This leads to increased risk of both flooding and dangerous levels of pollutants in the water. Understanding the maximum potential area in a watershed that will be covered in impervious cover in the future is an important indicator in developing a worst case scenario for both flood and water quality risk for the watershed. In the City of Austin, both the water quality and flood control divisions of Watershed Protection and Development Review utilize software models to attempt to calculate what the impacts are and will be on all of the watersheds that are within or contribute to the City of Austin’s jurisdiction and extra-territorial jurisdiction. The piece of information that they are lacking are the forecasts of the amount of development that can be expected within the watershed in the future. 
To get a better handle on this information, the department has been working to project the maximum buildout for each of the watersheds in Austin, and since 2004 it has attempted to use What If? in conjunction with ArcGIS to project future land uses in city watersheds.                                                              113 “City of Austin - A Brief History of Austin Floods,” http://www.ci.austin.tx.us/watershed/floodhistory.htm. 89    Shown in figure 9 is a map illustrating the output generated for the Gilleland Creek watershed, made from data provided by Robbie Botto. Figure 9: Map of What If? Build-Out Analysis Output I conducted a number of informal interviews with employees in the watershed protection department, including three interviews with Robbie Botto, Environmental Planner; one interview with Roger Glick, Water Quality Section Manager, and Fang Yu, Flood Control Engineer; and one interview with Matt Holland, Senior Planner. The information contained below has been gleaned from these interviews. 90    What If? Case Study Criterion: Cost The affordability of the software was an important factor in the decision to use What If?. Robbie Botto was given the authority to decide which software to use, and he was mindful of the fact that the department did not have a lot to spend on this purchase. What If? was considerably cheaper than the other software options available. Since the department already had ArcGIS installed, the additional cost of the software was not considered a burden. Through discussions with Richard Klosterman, Botto was convinced that the technical support offered by What If?, Inc. at no extra cost would be sufficient to get the software working with the city’s data and needs. Since he was already familiar with GIS, there was no need for additional classes or instruction to understand the GIS portion of the software. Botto has been satisfied with the value of the software delivered for the price he paid, even though it has taken longer to implement than anticipated. Score: 2 (Meets expectations) What If? Case Study Criterion: Software Architecture The fact that the software was set up to work with ArcGIS was definitely a strength in Botto’s eyes. The WPDR department, as well as the City of Austin as a whole, already used ArcGIS extensively and had collected a significant amount of GIS data that would be compatible with What If? and useful in the desired analyses. 91    Botto was happy with how What If? and ArcGIS interacted, and he was able to move between the two software packages easily. However, What If? did not always perform as well as he expected. Because of the specific outcomes he was looking for, and the need for results based on solid assumptions, several modifications to What If? were necessary to accomplish what he was trying to do. To obtain these, he notified Richard Klosterman of the problems he was having, and Klosterman and his engineers worked on solutions. This process could take a long time, and it is one of the reasons that work has progressed slowly since 2004. Botto said that one of the weaknesses of the software is that it does not have a large user base; as a result, it has not been adapted to the variety of needs users bring to it, and users are more likely to encounter problems that have not been dealt with before. Score: 1 (Insufficiently meets expectations) What If?
Case Study Criterion: Complexity Botto chose the software because it seemed to be within his comprehension and ability as a planner with GIS experience. One of the goals of the department was to use software that would allow projections to be made entirely within the department, rather than relying on an outside contractor. Also, Botto felt confident about understanding the workings of the model so that he could have confidence in the results. He felt that the focus of What If? on creating a model that was based on simple, rough level analysis was a good fit with what he was trying to do with the software. As Botto later discovered, in 92    some ways, the software was not complicated enough to respond to the department’s specific needs. They had a problem in working with large areas of unzoned and undeveloped land and had to develop a technique where they divided it into a one acre grid in order to achieve more accurate results. As will be discussed in the data needs section, there were other ways that the department had to tailor its data and analysis to the abilities and limitations of What If?, which illustrates the necessity of understanding in detail the operations of the software that you are working with. Botto was able to understand the workings of the PSS sufficiently to tailor the software in these cases, so in that way it worked well. Score: 2 (Meets expectations) What If? Case Study Criterion: Data Requirements The data requirements of What If? seemed to fit well with the type of data that the city already had, and was one of the reasons that the decision was made to buy the software. However, although the City of Austin had a large base of GIS data, it turned out they still didn’t have all of the layers that would have been useful in implementing the buildout analysis. The lack of projected future water/wastewater infrastructure limited the accuracy of the buildout predictions. In most cases, however, Botto felt that he had the data that was required for the software. 93    Formatting the data to work in the analysis proved to be the biggest challenge in working with What If? to forecast growth in watersheds, a problem that was ultimately not resolved. At the regional scale, economic and population data is produced and distributed based on political delineations such as counties and census tracts, not geographical or ecological boundaries such as watersheds. The information that pertains to these divisions is not divided evenly across the landscape and it is not clear how to transfer the information into different divisions. This is not a problem that is unique to What If? but is rather a vexing problem in many investigations of this type. However, it becomes a problem related to What If? because the software requires this information in order to run. Matt Holland eventually made the decision earlier in early 2009 to work on an alternate process for projecting land uses, primarily to avoid having to deal with the problem of adjusting economic data to the watershed level. Botto still would like to use What If? but it is not the priority right now. Score: 1 (Insufficiently meets expectations) What If? Case Study Criterion: Scale The prediction scale that Botto was interested in was the watersheds of Austin. This is a little bit smaller than the scale that is usually used for projects with What If?. All of the examples that are shown on the What If? website show more of a regional type scale, while a watershed is at the sub-city scale. 
I have already mentioned above the ways in which this caused problems for data collection, which is one of the primary 94    limitations of scale. Botto felt that the watershed scale was effectively close to the finest scale analysis that could be achieved, due to the data and the assumptions that are made in the model. If the scale was any finer, then the results would become unreliable. There are no firm limits on the scale that What If? can operate, so it is essential for the user to have a good sense of the proper scale to approach the analysis. Score: 2 (Meets expectations) What If? Case Study Criterion: Comprehensiveness The most important element addressed by What If? that influenced the Watershed Protection Department to purchase What If? was its ability to project future land use spatially. This could then be used to determine relative levels of impervious cover, and the level of runoff that could be expected to come off of each land use. Previous methods used by the department determined future land use predictions based on Traffic Analysis Zones (TAZs), a common unit of measurement for transportation modeling, but which are substantially coarser than the flexible UAZs used by What If?. Information within the TAZs was assumed to be evenly distributed throughout the zone. While much of the demand that led to the use of What If? had come from Roger Glick and Fang Yu who wanted a spatially explicit result to use in their modeling, they left it up to Matt Holland and Robbie Botto to determine the best way to achieve those results. 95    One aspect of comprehensiveness that was important to the watershed protection department was the flexibility to adjust the variables that are considered in the analysis. In some ways What If? met those requirements. Its coupling with ArcGIS enabled some of the aspects that were not specifically covered within What If? to be accomplished in ArcGIS. Robbie used this capability to break up large unzoned areas, and he applied differential land suitability values along roads and other important geographic features. On the other hand, the lack of flexibility in the amount of information that was covered led to difficulties when it became hard to utilize economic data in a meaningful way. In this case, the model was too comprehensive, which caused problems. Score: 1 (Insufficiently meets expectations) What If? Case Study Criterion: Robustness Fang Yu, Roger Glick, and Matt Holland brought up the question of the validity of the model predictions. Fang and Roger were content to rely on the judgment of Holland and Botto to determine the validity of the model results. While Botto was convinced that the uncertainty was within reasonable levels and could be accounted for, Holland remained skeptical. He expressed the opinion that the results of the software were not necessarily anything better than could have been produced by looking at other examples of similar watersheds and extrapolating based on similarities with these watersheds. This is the method that was eventually used, although more work might still be done in What If? in the future. 96    Also, while Richard Klosterman claims that the purpose of the software is to use it in a public setting, in this case it was used entirely in house, for a purpose that it was arguably not designed for. Asked about this, Botto thought that What If? 
could be used effectively in a citywide visioning process associated with something like the comprehensive plan, but most of the community interaction the water quality department did was at a smaller scale, such as the neighborhood, for which What If? would not be as useful. Score: 1 (Insufficiently meets expectations) What If? Case Study Summary Case Study Total Score: 10 My final score was the same for both the PSS evaluation and the case study evaluation, although the scores were distributed differently. Scoring the case study was difficult because Botto’s conclusion was that What If? had fulfilled his desires for it, and yet it was no longer being actively implemented due to problems in its application. In scoring the case study, I found that the inability of the city to achieve the desired results over the five years of effort was a key factor in giving the software relatively low scores. What If? has helped the department to develop a methodology for projecting possible future impervious cover, but from the perspective of an organization thinking about buying the software, that does not seem like a strong enough reason to use What If?. 97    Case Study Evaluation: CommunityViz Used at the Mission-Aransas NERR Introduction The Mission-Aransas National Estuarine Research Reserve (NERR) is located on the Texas coast northeast of Corpus Christi. Seventy-five percent of the reserve is in Aransas County. Nearby cities are Rockport and Port Aransas. The area around the NERR is popular with vacationers and has experienced rapid growth. Since 1990, the population of Aransas County has increased by 26%.114 This continued population growth threatens the sensitive NERR and surrounding ecosystems. The ecological damage is likely to continue, as there are few land use controls available to unincorporated areas in Texas. While the circumstances are always slightly different, many areas of the country face the uncertain effects of rapid growth in environmentally sensitive areas. Funding was granted by the David and Lucile Packard Foundation to study the effectiveness of developing an “integrated land-sea planning toolkit” that would facilitate understanding the environmental consequences of future development and land use planning. The toolkit development was overseen by Dr. Kiersten Madden at the University of Texas Marine Science Institute, which is the lead state agency for the NERR in                                                              114 Integrated Land - Sea Toolkit and its Use in Aransas County, https://transfer.natureserve.org/download/longterm/EBM%20Tools/Integrated%20Land_Sea%20Planning %20Toolkit%20Presentation%20by%20Kiersten%20Madden,%20Amy%20Anderson,%20and%20Ian%20 Varley%20092209.wmv. 98    Texas.115 Collaborating with Dr. Madden were representatives from three different tool providers: Doug Walker and Amy Anderson from Placeways, LLC, the developer of CommunityViz; Patrick Crist and Ian Varley from NatureServe, the developer of Vista; and Dave Eslinger from the NOAA Coastal Services Center, the developer of N-SPECT.
The final output from the grant was a toolset that would “[apply] and [link] decision support tools from the land use planning, conservation planning, and ecological modeling sectors… [to] enable coastal communities to develop land use strategies that promote coastal environmental quality and community quality of life.”116                                                              115 “Mission-Aransas National Estuarine Research Reserve,” http://www.utmsi.utexas.edu/about-the- institute/mission-aransas-nerr.html. 116 Grant proposal executive summary: http://www.utmsi.utexas.edu/attachments/256_EBM1%20proposal.pdf 99    The general process of the software interaction is presented in figure 10. Figure 10: Land-Sea Toolkit Diagram117 In the toolkit, CommunityViz is used for land use projection, then the impacts of the future land use on animal and plant species is examined in Vista, and the runoff effects are modeled in NSPECT. An additional marine water quality model is needed to evaluate the spread of runoff into aqueous environments. Once the effects are understood, then mitigation or changes in plans can be applied, and the models run through again. This process is shown in a little more detail in figure 11, which also                                                              117 Patrick Crist et al., Integrated Land-Sea Planning: Technical Guide to the Integrated Land-Sea Planning Toolkit (EBM Tools Network, August 14, 2009), 28, http://www.utmsi.utexas.edu/images/stories/Land%20sea%20tech%20guide%20_reduced%20size.pdf. 100    illustrates the role of stakeholder participation in the analysis process. In this project, the stakeholder participation was achieved through community meetings in which priorities for conservation and future land use were gathered, and a three day workshop illustrating toolkit functionality and getting feedback on the toolkit design. Figure 11: Land-Sea Toolkit Process Diagram118 This process was developed over the course of the length of the grant, which was about two years. A technical report and a webinar explaining the toolkit have been produced and are published on the Mission-Aransas NERR website, as well as the                                                              118 Ibid., 30. 101    Environment Based Management Tools Network website.119 In addition to these results, the research team presented the results to local stakeholders. While final results from the Aransas county analysis were obtained, the focus of the results were on creating a replicable toolkit process rather than specific policy recommendations. CommunityViz played an important role in the project. The technical report describes its role this way: CommunityViz is the primary tool used to depict land use scenarios and summarize indicators across all tools. It is used to model trends in urban and rural growth and quantify outcomes in terms of numerous socioeconomic indicators. It also can incorporate hazard information such as inundation from storm surges and sea level rise.120 In the framework of the seven rating criteria below, I will look more specifically at the role of CommunityViz in the project. The information contained below was gathered from participant observation in the three day workshop that was held in Port Aransas on April 14-16, 2008, informal interviews with all of the researchers over the course of the 3 day workshop, phone interviews with Dr. Madden and Amy Anderson, and the digital and written materials produced by the researchers. 
One caveat for this case study is that it was somewhat difficult to get an objective perspective of the role of the software since the software developers were so heavily involved. However it is an                                                              119 “Mission-Aransas National Estuarine Research Reserve”; “Ecosystem Based Management Tools: Promoting awareness, development, and effective use of tools for ecosystem-based management in coastal and marine environments and their watersheds.,” http://www.ebmtools.org/. 120 Crist et al., Integrated Land-Sea Planning: Technical Guide to the Integrated Land-Sea Planning Toolkit, 23. 102    interesting example of what can be accomplished with the involvement of participants who know the software well and believe in its capabilities. CommunityViz Case Study Criterion: Cost In the context of this case study, the cost of the software program was not a factor since Placeways, LLC was involved in the research and provided all software free of charge, as well as consulting services of the software designers. The value of the consulting was also much higher than it would be normally since Doug Walker, the president of Placeways was personally involved in the research. Score: 2 (No opinion) CommunityViz Case Study Criterion: Data Requirements Data availability was definitely an issue in developing the forecasted future land use map. There are few land use controls in unincorporated county areas, which much of the area in Aransas County is. Due to this fact, it is hard to predict where growth will occur in the future. Since CommunityViz does not have strict rules about what data is needed to run buildout analyses, the researchers used data that they had available, and which they found sufficient for the rough estimates that they were producing. In terms of current data, the researchers relied largely on National Oceanic and Atmospheric Administration (NOAA) Coastal Change Analysis Program (CCAP) data, which is 30-meter resolution, remote sensing data gathered and coded for all of the coastal areas of the United States.121 The                                                              121 “Land Cover Analysis - Coastal Change Analysis Program,” http://www.csc.noaa.gov/crs/lca/ccap.html. 103    flexibility of what data is required for CommunityViz proved an asset in this case study because it enabled the researchers to tailor their analysis to the data that was available. However, as in other aspects of CommunityViz, this flexibility requires that the users have a good understanding of the software as well as the data available for the project. Score: 2 (Met Expectations) CommunityViz Case Study Criterion: Software Architecture The flexibility of CommunityViz’s software architecture served it well in this project. CommunityViz served as the land use and the presentation portion of the toolkit. Its ability to export to other software, such as Google Earth, display numerous indicators, and adjust its assumptions quickly were all important strengths for the project. Interestingly, although one of its design capabilities is to function well as a facilitation aid in community meetings, it was not used in this capability. CommunityViz’s design as an extension to ArcGIS also served it well in this project, as the different software programs used GIS file formats to transfer information between them. Without the common reliance on GIS functionality, interaction between the software packages would have been much more difficult, if not insurmountable. 
Score: 3 (Exceeded Expectations) 104    CommunityViz Case Study Criterion: Complexity This issue was also sidestepped a bit by having the software developers involved in the project, who obviously knew the software quite well. However, this advantage only applied to setting up the toolkit. The original goal of the three day workshop that was held on the toolkit was to “train local institutions in the use of the tools so they can continue to use the tools in a dynamic adaptive management process into the future.”122 The surveys that were passed out at the end of the session, however, indicated that “100% of respondents also indicated that they still need further training in the tools.”123 Since participants in the seminar were presented all of the software packages together, it is hard to separate the complexity of CommunityViz from the others. The processes that were used in CommunityViz in order to arrive at the results generated were not fully explained in the presentations or technical reports generated by the researchers. This is partially due to the complexity involved. Even though Placeways attempted to create a relatively simple piece of software, it still becomes too complicated to describe every action taken in the analysis, even to an audience that is familiar with GIS. However, this must be understood in context. The feedback processes involved in land-sea planning interactions are incredibly complex. The relative simplicity of CommunityViz is part of what made this toolkit even possible to assemble. Previously, this process would not have even been feasible. Score: 1 (Did not meet expectations)                                                              122 Patrick Crist et al., Grant Proposal for EBM Tools Demonstration Project The David and Lucile Packard Foundation’s Science for Oceans and Coasts (EBM Tools Network, May 30, 2007), 1, http://www.utmsi.utexas.edu/attachments/256_EBM1%20proposal.pdf. 123 Patrick Crist et al., Integrated Land-Sea Planning Toolkit Training Recommendations (EBM Tools Network, 2009), 2, http://www.utmsi.utexas.edu/images/stories/toolkit%20trng%20recs%20_final.pdf. 105    CommunityViz Case Study Criterion: Comprehensiveness Users of CommunityViz are given more flexibility in determining the comprehensiveness of the analysis undertaken than are users of What If?. The wizards that guide the user through many of the decision tools that are offered in CommunityViz are simplistic in what they require in order to function. For example, all that is needed in the buildout wizard is a polygon layer that contains some sort of land use differentiation. Density and other information then is input by the user, but can be as speculative as is desired. Greater comprehensiveness comes from combining the tools together, or in more thoughtful development of the land use layer or of suitability layers using GIS tools. The approach of CommunityViz seems to be to provide a series of simple tools that expand what is possible with GIS, but to leave it up to the users to use them in ways that make sense. In this case, this approach worked well since there were people implementing the software that understood the implications of the decisions that they were making. Because the tools are relatively simple but not necessarily obvious in how they should be used, CommunityViz seems more useful as an extension of a planners’ abilities, rather than being something an average person could use. This is different from What If? 
which guides the user more clearly and firmly through its processes. Score: 2 (Met Expectations) CommunityViz Case Study Criterion: Scale There was a voluntary scale limitation in this case study to the Live Oak Peninsula, a portion of Aransas County that is predicted to experience the heaviest development 106    pressure. This was done in an attempt to limit the complexity of the analysis. There was no specific limitation in CommunityViz that prevented a larger or smaller scale, other than the judgment of the researchers and the purpose of the project. The resolution of the buildout results is at the building level, which might suggest a level of precision that is not actually present. While the buildout wizard projects individual buildings, these are meant as examples of what might happen, not firm predictions of where development will go. In order to fit the scale of the predictions more closely to the level of uncertainty involved, the researchers degraded the future land use polygon layer to a coarser level than actual current parcels. They converted the polygon layer into a vector grid of land use types, agglomerated into larger units than would actually occur, to provide a rough guide of what types of development would happen where without giving a false sense of accuracy. All of these decisions were driven not by limitations of the software, but by the intentions of the analysts. Score: 3 (Exceeded Expectations) CommunityViz Case Study Criterion: Robustness The model that CommunityViz uses to project development is simpler than the one used in What If?. Economics and demographics are not directly involved in the buildout wizard. It relies on the outputs of the suitability wizard to forecast the location of development. Unlike What If?, it does not project land uses, but projects at the building level. The projections are predicated upon having some sort of polygon layer to project buildings onto. The purpose of the projections is to produce scenarios of the densities 107    that are likely to develop within large areas that are zoned a certain way. The future land use layer must be constructed separately by the user, however they wish. In this example, it was developed based on current CCAP data, the future land use plans of the City of Rockport, and the advice and judgment of stakeholders and the researchers themselves. The way that the researchers used this process made it much more similar to the process that was adopted by the City of Austin after it stopped using What If?. It increases the role of the expert and decreases the amount that is being predicted by the software. This brings the role of CommunityViz in this case much closer to being a planning support system rather than a land use model. Score: 2 (Met Expectations) CommunityViz Case Study Summary Case Study Evaluation Total Score: 15 This case study was hard to score due to the involvement of the developers, who have a higher level of competence with the software than any organization that would ordinarily attempt to apply CommunityViz. However, it was still possible to evaluate the role that CommunityViz played in the research project, and there is no doubt that the software was being used to the full extent of its abilities. It is possible that the scores I gave for this software are a little higher than they might otherwise have been, as there would likely have been more problems in implementation for a more average case.
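Because all of the individual criterion scores have now been reported, it may be useful to show explicitly how they recombine into the totals presented in the conclusion that follows. The short sketch below is purely illustrative and is not part of either software package or of the rating framework itself; the scores are the ones reported above, arranged in a fixed criterion order, and the weights are assumed to be one throughout, as elsewhere in this thesis.

```python
# Illustrative sketch only: recombining the criterion scores reported in this
# chapter into the combined totals, assuming equal weights of 1.0 per criterion.

CRITERIA = ["software architecture", "cost", "comprehensiveness",
            "data requirements", "complexity", "scale", "robustness"]

pss_scores = {                                   # PSS evaluation scores (0-3)
    "What If?":     [2, 2, 2, 1, 1, 1, 1],
    "CommunityViz": [3, 2, 3, 2, 3, 2, 2],
}
case_scores = {                                  # one case study per PSS here
    "What If?":     [[1, 2, 1, 1, 2, 2, 1]],     # City of Austin WPDR
    "CommunityViz": [[3, 2, 2, 2, 1, 3, 2]],     # Mission-Aransas NERR
}
weights = [1.0] * len(CRITERIA)                  # reviewer-defined, 0-1 each

def final_score(pss, cases, w):
    """Per criterion: PSS score plus the average case study score, multiplied
    by the criterion weight; the seven weighted values are then summed."""
    total = 0.0
    for i, weight in enumerate(w):
        case_avg = sum(case[i] for case in cases) / len(cases)
        total += weight * (pss[i] + case_avg)
    return total

for name in pss_scores:
    print(name, final_score(pss_scores[name], case_scores[name], weights))
```

Run as written, the sketch reproduces the totals of 20 for What If? and 32 for CommunityViz.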
Rating Framework Conclusion

As a result of applying the rating framework to these two PSS, I ended up favoring CommunityViz over What If? by a large margin, and that is reflected in the scores:

CommunityViz: 32
What If?: 20

While both software programs performed well, I felt that CommunityViz was the more flexible and useful product overall. While it would depend somewhat upon what I was undertaking, I feel certain that I would choose it over What If?. While the ratings for the case studies might have come out differently with other cases, my impressions of both the software alone and of the cases were fairly similar, so I feel fairly confident in preferring CommunityViz over What If?.

Chapter 6- Findings/Conclusion

Findings

Based on my experience in applying the rating framework to the two PSS that I examined, I am satisfied with the comprehensiveness with which it allowed me to investigate the software programs. However, as I carried out the rating process, there were aspects of applying the rating framework that I had not anticipated. Following are some of the reflections that I had during this process.

• The criteria would often overlap, and sometimes it could be hard to know which criterion to apply an observation to. However, because the criteria categories overlap, the process encourages the reviewer to think through the slightly different gradations of that aspect of functionality. I found that filling in the categories became an iterative process, where analysis of one section led to thoughts about another criterion. As I worked on each of them, they differentiated more from one another.

• Since the criteria were taken from a paper focused on land use models, I was concerned that they might encourage reviewers to overlook some of the functions of a PSS that are not specifically focused on land use projection. While I cannot say definitively that this is not the case, I found that the framework allowed a suitably comprehensive evaluation of the software. This was in part due to the relatively broad nature of the criteria used in the rating framework. I found that for each PSS, the criteria took on slightly different shades of meaning based on the context of the software and the case studies. While carrying out the rating, I realized that different people would likely approach the framework differently and interpret the terms in different ways. While this suggests that there should perhaps be more explanatory text about the meaning of the criteria distributed with the framework, I also think that it is a strength, in that it allows people to interpret the framework in the way that is most useful to them.

• One difficulty that I faced in applying the rating framework to the PSS and case studies was that I did not have a specific application for which I was evaluating the software. The use required of the PSS affects all aspects of the rating process, and without one it could sometimes be hard to have a context for a rating. In retrospect, it might have been more effective to create a scenario within which I was reviewing the software, to give a more plausible basis for assigning scores. This is a problem that most reviewers who use the rating framework will not have, since they should have a stronger idea of what they are attempting to use it for.

• As discussed in earlier sections, the premise for the inclusion of case studies was that context-based knowledge is incredibly important for determining the worth of a PSS.
The specific context within which this rating framework would be used will always vary, yet I anticipate that the most common users would have a significantly different context from mine. Where most users would be primarily concerned with the evaluation of the PSS, within the context of this thesis I had equal concern for the performance of the rating framework and for the performance of the PSS. This is not a problem per se, but it suggests that it might be useful to have a section at the outset of the rating system where a formal statement of the goals for the ratings is made, along with details about who is doing the rating. This would help make the context clear to others reading the ratings.

Summary

While the rating framework contained in this thesis is substantially complete, it would benefit from further formatting depending on the context in which it would be used. Just as there is a difference between formulating the theory of a PSS and implementing it, the same is true of a rating framework. It is unlikely that many people would be willing to read this entire thesis in order to understand the rating framework and apply it. While chapter four explains the rating framework and chapter five contains an example of its application, shorter summaries of the essential portions of the earlier chapters would likely need to be distributed along with the rating framework in order to provide a more streamlined overview of how to apply and interpret it.

Conclusion

So the question remains: have PSS overcome the criticisms that Lee leveled at LSUMs? No. I don’t think that they ever will. Lee leveled insightful criticisms at the use of software models that forecast, predict, or imagine possible future scenarios. The criticisms remain largely true today, and I think that anyone who is considering using software in this manner would be well served by reading his article.

However, that being said, there is still a place for PSS in planning. Planning for the future is hard: it is uncertain, and it will be affected by our biases, our desires, and our training. This will be true whether we plan using computer tools or not. In the end, that’s all that PSS are: tools. What matters more than the tool is who is using it, and how. I think that PSS are on the right track. The two software programs I investigated in this thesis made a sincere attempt to adopt an appropriate level of comprehensiveness in their analysis, limit the complexity of the software in order to achieve transparency, streamline the data requirements of the software, adopt an appropriate scale, use effective software architecture, limit the cost of their application, and achieve results that are robust. In some ways they failed, but the history of city and regional planning is littered with examples of failures and partial successes. Lee was not entirely successful with his article either. His dramatic and combative tone contributed to a feeling of mistrust between those who believed in the promise of urban models and the rest of the planning profession. Britton Harris, long a champion of modeling, said that “‘Requiem’ and its sequels have poisoned the discourse about planning and models.”124 I think that it is important to re-open those lines of communication and discourse. While there are legitimate concerns about the use of models and software in planning, there are opportunity costs in not engaging with them as well.
124 Harris, “The Real Issues Concerning Lee's ‘Requiem’,” 31.

Both of the case studies in this thesis offer a compelling vision of what it is possible to achieve by incorporating PSS into planning. In each of these cases, the PSS facilitated the incorporation of likely environmental and social impacts into the planning process. They also enabled communication between planners and other disciplines. Both of these abilities are things that planners should strive for in order to achieve more sustainable urban outcomes. The software, and the processes for including it in planning, can both be improved. This rating system can help achieve that, but there is a wider need to increase research into the interaction between software and planning as well.

One of the lessons learned from the case studies was the importance of having a thorough understanding of the software used in order to apply it effectively to a given situation. Applying PSS in planning brings up interesting challenges, and the better the user of a PSS knows the software, the better he or she is able to understand the range of decisions that can be made, as well as the implications of those decisions. The ability to do this is partially a function of the design of the software, which is largely what Lee focused on in his paper, and what I investigated in this thesis. Another important aspect, though, is the degree of knowledge that the user has about the software, or about software methods in general. If the user base has a generally higher familiarity with the techniques being applied, then there is a wider range of functionality that the developers of software can include. The grafting of PSS onto GIS is a good example. Since GIS is an inherently spatial technology that fits well with PSS, and many planners are familiar with GIS, the two PSS that I investigated were able to take the functionality of GIS as a base and expand from there, rather than having to design the whole software package from scratch.

What this points to, I think, is the need to expand the integration of technology into university planning programs. By teaching GIS and PSS within planning programs, students can learn not only the software, but also how the software relates to planning: what the advantages are, as well as the dangers involved in using software in practice. There is already an expanding literature on the strategies and challenges associated with incorporating GIS into planning.125 Familiarity with PSS of the type that I investigated in this thesis should be an important component of that discussion. The uncertainty and transparency issues that arise when producing future scenarios would be particularly important aspects to cover in the academy, and “Requiem for Large-Scale Models” would not be a bad starting point for that exploration.

125 For example, see: D. R. Godschalk and G. McMahon, “Staffing the revolution: GIS education for planners,” Journal of Planning Education and Research 11, no. 3 (1992): 216; G. Urey, “A critical look at the use of computing technologies in planning education: The case of the spreadsheet in introductory methods,” Journal of Planning Education and Research 21, no. 4 (2002): 406.

Bibliography
Adams, John S. “Textbooks that moved generations: Hoyt, H. 1939: The structure and growth of residential neighborhoods in American cities. Washington, DC: Federal Housing Administration.” Progress in Human Geography 29, no. 3 (2005): 321-325.
Aibar, Eduardo, and Wiebe E. Bijker. “Constructing a City: The Cerda Plan for the Extension of Barcelona.” Science, Technology and Human Values 22, no. 3 (Winter 1997): 3-29.
“ArcView: Pricing Information.” http://www.esri.com/software/arcgis/arcview/pricing.html.
Bartholomew, Keith, and Reid Ewing. “Land Use–Transportation Scenarios and Future Vehicle Travel and Land Consumption.” Journal of the American Planning Association 75, no. 1 (2009).
Batty, Michael. “A Chronicle of Scientific Planning: The Anglo-American Modeling Experience.” Journal of the American Planning Association 60, no. 1 (1994).
---. Cities and Complexity: Understanding Cities with Cellular Automata, Agent-Based Models, and Fractals. The MIT Press, 2007.
---. “Planning Support Systems: Progress, Predictions, and Speculations on the Shape of Things to Come.” In Planning Support Systems for Cities and Regions, edited by Richard K. Brail, 3-30. Cambridge, Massachusetts: Lincoln Institute of Land Policy, 2008.
Berg, B. L. Qualitative research methods for the social sciences. Boston: Allyn and Bacon, 1989.
“Berkeley/Penn Urban & Environmental Modeler's Datakit - Home.” http://dcrp.ced.berkeley.edu/research/footprint/.
Bradfield, R., G. Wright, G. Burt, G. Cairns, and K. Van Der Heijden. “The origins and evolution of scenario techniques in long range business planning.” Futures 37, no. 8 (2005): 795–812.
Brail, Richard K., and Richard Klosterman, eds. Planning Support Systems: Integrating geographic information systems, models, and visualization tools. ESRI Press, 2001.
Brail, Richard K. Planning Support Systems for Cities and Regions. Cambridge, Massachusetts: Lincoln Institute of Land Policy, 2008.
Brooks, Michael P. Planning Theory for Practitioners. Chicago, Illinois: Planners Press, American Planning Association, 2002.
Callies, David L., Robert H. Freilich, and Thomas E. Roberts. Cases and Materials on Land Use. 4th ed. West, 2004.
“City of Austin - A Brief History of Austin Floods.” http://www.ci.austin.tx.us/watershed/floodhistory.htm.
Colella, V. S., E. Klopfer, and M. Resnick. Adventures in Modeling: Exploring Complex, Dynamic Systems with StarLogo. Teachers College Press, 2001.
“Common uses of Scenario 360.” http://placeways.com/communityviz/productinfo/scenario360/more.php.
Copas, Conn V. “Spatial Information Systems for Decision Support.” In Human Factors in Geographical Information Systems, edited by David Medyckyj-Scott and Hilary M. Hearnshaw, 158-167. London: Belhaven Press, 1993.
Couclelis, Helen. “The Certainty of Uncertainty: GIS and the Limits of Geographic Knowledge.” Transactions in GIS 7, no. 2 (2003): 165-175.
Craig, William J., Trevor M. Harris, and Daniel Weiner. Community participation and geographic information systems. CRC, 2002.
Crist, Patrick, Kiersten Madden, Ian Varley, David Eslinger, Dave Walker, Amy Anderson, S. Morehead, and K. Dunton. Grant Proposal for EBM Tools Demonstration Project: The David and Lucile Packard Foundation’s Science for Oceans and Coasts. EBM Tools Network, May 30, 2007. http://www.utmsi.utexas.edu/attachments/256_EBM1%20proposal.pdf.
---. Integrated Land-Sea Planning Toolkit Training Recommendations. EBM Tools Network, 2009. http://www.utmsi.utexas.edu/images/stories/toolkit%20trng%20recs%20_final.pdf.
---. Integrated Land-Sea Planning: Technical Guide to the Integrated Land-Sea Planning Toolkit. EBM Tools Network, August 14, 2009. http://www.utmsi.utexas.edu/images/stories/Land%20sea%20tech%20guide%20_reduced%20size.pdf.
Drummond, William J., and Steven P. French. “The Future of GIS in Planning: Converging Technologies and Diverging Interests.” Journal of the American Planning Association 74, no. 2 (Spring 2008): 161-174.
“Ecosystem Based Management Tools: Promoting awareness, development, and effective use of tools for ecosystem-based management in coastal and marine environments and their watersheds.” http://www.ebmtools.org/.
Flyvbjerg, Bent. Making Social Science Matter: Why Social Inquiry Fails and How it Can Succeed Again. Cambridge University Press, 2001.
Geertman, S., and J. C. H. Stillwell. Planning support systems in practice. Springer Verlag, 2003.
Godschalk, D. R., and G. McMahon. “Staffing the revolution: GIS education for planners.” Journal of Planning Education and Research 11, no. 3 (1992): 216.
Goldner, William. “The Lowry Model Heritage.” Journal of the American Planning Association 37, no. 2 (March 1971): 100-110.
Gorry, G. Anthony, and Michael S. Morton. “A Framework for Management Information Systems.” Sloan Management Review 13, no. 1 (1971): 55–70.
Groat, Linda, and David Wang. Architectural Research Methods. Wiley, 2002.
Hall, Peter. Cities of Tomorrow. 3rd ed. Malden, MA: Blackwell Publishing, 2007.
Harris, Britton. “The Real Issues Concerning Lee's ‘Requiem’.” Journal of the American Planning Association 60, no. 1 (Winter 1994): 31-34.
---. “Urban Simulation Models in Regional Science.” Journal of Regional Science 25, no. 4 (1985): 545–567.
Hopkins, Lewis, and Marisa A. Zapata, eds. Engaging the Future: Forecasts, Scenarios, Plans and Projects. Cambridge, Massachusetts: Lincoln Institute of Land Policy, 2007.
Integrated Land-Sea Toolkit and its Use in Aransas County. https://transfer.natureserve.org/download/longterm/EBM%20Tools/Integrated%20Land_Sea%20Planning%20Toolkit%20Presentation%20by%20Kiersten%20Madden,%20Amy%20Anderson,%20and%20Ian%20Varley%20092209.wmv.
“International Spatial Accuracy Research Association.” http://www.spatial-accuracy.org/.
“Introduction to the Edwards Aquifer.” http://www.edwardsaquifer.net/intro.html.
Janes, George M., and Michael Kwartler. “Communities in Control: Developing Local Models Using CommunityViz.” In Planning Support Systems for Cities and Regions, edited by R. K. Brail, 167-183. Cambridge, Massachusetts: Lincoln Institute of Land Policy, 2008.
Jick, Todd D. “Mixing Qualitative and Quantitative Methods: Triangulation in Action.” Administrative Science Quarterly 24, no. 4 (December 1979): 602-611.
Klosterman, Richard. “A New Tool for a New Planning: The What If? Planning Support System.” In Planning Support Systems for Cities and Regions, edited by R. K. Brail, 85-99. Cambridge, Massachusetts: Lincoln Institute of Land Policy, 2008.
---. “An Introduction to the Literature on Large-Scale Urban Models.” Journal of the American Planning Association 60, no. 1 (Winter 1994): 41-44.
---. “Comment on Drummond and French: Another View of the Future of GIS.” Journal of the American Planning Association 74, no. 2 (Spring 2008): 174-176.
---. “Deliberating About the Future.” In Engaging the Future: Forecasts, Scenarios, Plans and Projects, edited by Lewis Hopkins and Marisa A. Zapata, 199-220. Cambridge, Massachusetts: Lincoln Institute of Land Policy, 2007.
---. “Foundations for Normative Planning.” Journal of the American Planning Association 44, no. 1 (1978).
---. “The What if? Collaborative Support System.” Environment and Planning B: Planning and Design 26 (1999): 393-408.
---. “Toward a Normative Theory of Planning.” Dissertation, Cornell University, 1976.
---. User's Guide to What If? 2.0. What If? Inc. http://www.whatifinc.biz/documentation.php.
“Land Cover Analysis - Coastal Change Analysis Program.” http://www.csc.noaa.gov/crs/lca/ccap.html.
Landis, John. “Imagining Land Use Futures: Applying the California Urban Futures Model.” Journal of the American Planning Association 61, no. 4 (Autumn 1995): 438-457.
Latour, Bruno. Science in Action. Cambridge, Massachusetts: Harvard University Press, 1987.
Lee, Douglass B. “Requiem for Large-Scale Models.” Journal of the American Planning Association 39, no. 3 (1973): 163–178.
---. “Retrospective on Large-Scale Urban Models.” Journal of the American Planning Association 60, no. 1 (1994): 35-40.
Levitt, James N. Conservation in the Digital Age: Threats and Opportunities. Washington, D.C.: Island Press, 2002.
“Mission-Aransas National Estuarine Research Reserve.” http://www.utmsi.utexas.edu/about-the-institute/mission-aransas-nerr.html.
Moore, Gordon E. “Cramming More Components onto Integrated Circuits.” Electronics 38, no. 8 (April 19, 1965): 4.
Moore, Terry. “Planning Support Systems: What Are Practicing Planners Looking For?” In Planning Support Systems for Cities and Regions, edited by R. K. Brail, 231-256. Cambridge, Massachusetts: Lincoln Institute of Land Policy, 2008.
Myers, D., and A. Kitsuse. “Constructing the future in planning: A survey of theories and tools.” Journal of Planning Education and Research 19, no. 3 (2000): 221.
Park, Robert Ezra. The City. Chicago, Ill: Univ. of Chicago Press, 1925.
“Placeways, LLC - Purchase CommunityViz.” http://shop.placeways.com/purchase.aspx.
Portugali, Juval. Self-Organization and the City. New York: Springer, 1999.
“Purchase What if?” http://www.whatifinc.biz/purchase.php.
Rittel, H. W. J., and M. M. Webber. “Dilemmas in a General Theory of Planning.” Policy Sciences 4, no. 2 (1973): 155–169.
“Save Our Springs Alliance Home Page.” http://www.sosalliance.org/.
“Scenario 360 Help: About Scenario 360 decision tools.” http://placeways.com/support/s360webhelp/CV4_0webhelp.html#Welcome_to_Scenario_360.htm.
“Scenario 360 Help: About the Common Impacts Wizard.” http://placeways.com/support/s360webhelp/CV4_0webhelp.html#Welcome_to_Scenario_360.htm.
“Scenario 360 Help: Data required to set up a build-out analysis.” http://placeways.com/support/s360webhelp/CV4_0webhelp.html#Welcome_to_Scenario_360.htm.
Scott Morton, Michael S. Management Decision Systems; Computer-Based Support for Decision Making. Boston: Division of Research, Graduate School of Business Administration, Harvard University, 1971.
Stake, Robert. “Qualitative Case Studies.” In The SAGE Handbook of Qualitative Research, edited by Norman K. Denzin and Yvonna Lincoln, 443-466. Sage Publications, 2005.
Talen, Emily. “Bottom-up GIS.” Journal of the American Planning Association 66, no. 3 (2000): 279–294.
Timmermans, Harry. “Disseminating Spatial Decision Support Systems in Urban Planning.” In Planning Support Systems for Cities and Regions, edited by Richard K. Brail, 31-43. Cambridge, Massachusetts: Lincoln Institute of Land Policy, 2008.
U.S. EPA. Projecting Land-Use Change: A Summary of Models for Assessing the Effects of Community Growth and Change on Land-Use Patterns. Cincinnati, OH: U.S. Environmental Protection Agency, Office of Research and Development, September 1, 2000.
Urey, G. “A critical look at the use of computing technologies in planning education: The case of the spreadsheet in introductory methods.” Journal of Planning Education and Research 21, no. 4 (2002): 406.
Wachs, Martin. “Forecasts in Urban Transportation Planning: Uses, Methods, and Dilemmas.” Climatic Change 11, no. 1 (1987): 61-80.
Wack, Pierre. “Scenarios: shooting the rapids.” Harvard Business Review 63, no. 6 (1985): 139–150.
Wade, Tasha, and Shelly Sommer, eds. A to Z GIS. Redlands, California: ESRI Press, 2001.
“Watershed Protection: Mission & Goals.” http://www.ci.austin.tx.us/watershed/mission.htm.
Wegener, Michael. “Operational Urban Models: State of the Art.” Journal of the American Planning Association 60, no. 1 (1994).
“What If? Homepage.” http://www.whatifinc.biz/.
Winner, Langdon. “Upon Opening the Black Box and Finding it Empty: Social Constructivism and the Philosophy of Technology.” Science, Technology and Human Values 18, no. 3: 362-378.
World Commission on Environment and Development. Our Common Future. United Nations, June 1987. http://www.un-documents.net/wced-ocf.htm.

Vita

Chad Maclay Phelan was born in San Diego, California. After completing his work at Torrey Pines High School, San Diego, California, he entered the University of California, Berkeley in Berkeley, California. He received the degree of Bachelor of Arts from the University of California, Berkeley in December 2002. In the year following graduation, he worked in AmeriCorps for the National Community Reinvestment Coalition in Washington, D.C. In August 2005, he entered the Graduate School at The University of Texas at Austin. From 2007 to 2008, he worked as a GIS Technician for the Department of Watershed Protection and Development Review at the City of Austin.

Permanent Address: 6180 Rancho Diegueno, Del Mar, CA 92014
Email: chadphelan@gmail.com

This thesis was typed by the author.