No. 3813: April 1, 1938

THE TEXAS MATHEMATICS TEACHERS' BULLETIN
Volume XXII

PUBLISHED BY THE UNIVERSITY OF TEXAS
AUSTIN

Publications of The University of Texas

PUBLICATIONS COMMITTEE
E. J. MATHEWS, J. T. PATTERSON, D. CONEY, A. SCHAFFER, B. M. HENDRIX, B. SMITH, A. C. WRIGHT

General Publications
J. T. PATTERSON, R. H. GRIFFITH, LOUISE BAREKMAN, A. SCHAFFER, FREDERIC DUNCALF, G. W. STUMBERG, FREDERICK EBY, A. P. WINSTON

Administrative Publications
E. J. MATHEWS, L. L. CLICK, C. F. ARROWOOD, C. D. SIMMONS, E. C. H. BANTEL, B. SMITH

The University publishes bulletins four times a month, so numbered that the first two digits of the number show the year of issue and the last two the position in the yearly series. (For example, No. 3801 is the first publication of the year 1938.) These bulletins comprise the official publications of the University, publications on humanistic and scientific subjects, and bulletins issued from time to time by various divisions of the University. The following bureaus and divisions distribute publications issued by them; communications concerning publications in these fields should be addressed to The University of Texas, Austin, Texas, care of the bureau or division issuing the publication: Bureau of Business Research, Bureau of Economic Geology, Bureau of Engineering Research, Bureau of Industrial Chemistry, Bureau of Public School Extracurricular Activities, and Division of Extension. Communications concerning all other publications of the University should be addressed to University Publications, The University of Texas, Austin.

Additional copies of this publication may be procured from University Publications, The University of Texas, Austin, Texas.

THE UNIVERSITY OF TEXAS PRESS

PUBLISHED BY THE UNIVERSITY FOUR TIMES A MONTH AND ENTERED AS SECOND-CLASS MATTER AT THE POST OFFICE AT AUSTIN, TEXAS, UNDER THE ACT OF AUGUST 24, 1912

The benefits of education and of useful knowledge, generally diffused through a community, are essential to the preservation of a free government.
Sam Houston

Cultivated mind is the guardian genius of Democracy, and while guided and controlled by virtue, the noblest attribute of man. It is the only dictator that freemen acknowledge, and the only security which freemen desire.
Mirabeau B. Lamar

Edited by
P. M. BATCHELDER, Associate Professor of Pure Mathematics
and
MARY E. DECHERD, Assistant Professor of Pure Mathematics

MATHEMATICS STAFF OF THE UNIVERSITY OF TEXAS
P. M. Batchelder, E. G. Keller, J. W. Calhoun, R. G. Lubben, C. M. Cleveland, M. E. Martindale, N. Coburn, Harlan C. Miller, A. E. Cooper, A. M. Mood, H. V. Craig, R. L. Moore, Mary E. Decherd, B. Nance, E. L. Dodd, Mrs. G. H. Porter, H. J. Ettlinger, M. B. Porter, E. L. Godfrey, R. H. Sorgenfrey, R. N. Haskell, R. L. Swain, W. M. Jackson, W. P. Udinski, F. B. Jones, H. S. Vandiver

This bulletin is open to the teachers of mathematics in Texas for the expression of their views. The editors assume no responsibility for statements of facts or opinions in articles not written by them.

CONTENTS
Recent Progress in Probability Theory, with Special Reference to Significance Tests .... Edward L. Dodd
Mathematical Training for Physics .... C. P. Boner
Mathematics, the Basis for Quantitative Economic Analysis .... C. A. Wiley
Mathematics in Sociology .... C. M. Rosenquist
Galactic Rotation .... E. G. Keller
The Attained and the Unattained in the Teaching of Mathematics .... F. S. Beers
On Elementary Vector Algebra and the Rules of Sign .... H. V. Craig
The Brown University Prize Examination
RECENT PROGRESS IN PROBABILITY THEORY, WITH SPECIAL REFERENCE TO SIGNIFICANCE TESTS [1]

BY EDWARD L. DODD
The University of Texas

In mathematics and the sciences, progress has been so rapid in recent years that a brief survey can cover hardly more than a very restricted field. In this paper on probability I shall be concerned primarily with the theory, the definitions and the fundamental concepts, noting various differences in the concepts of investigators, some of them treating probability as a branch of pure mathematics, and others interested principally in the applications of probability to the natural and social sciences.

[1] Presented before the Texas Chapter of the Society of the Sigma Xi, Austin, April 12, 1938.

At the International Congress on the Theory of Probability held in Geneva, Switzerland, October 11-15, 1937, in which I had the privilege of participating, there were representatives from nearly all the leading countries of Europe. While many topics were discussed, the main interest lay in what may be called the foundations of probability. Bruno de Finetti,[2] of Trieste, Italy, has written an excellent abstract of the papers presented, with discussion and comments. A very brief semi-popular account of the Congress by Dodd and Neyman appears in a recent issue of Nature.[3]

[2] "Resoconto critico del colloquio de Ginevra intorno alla teoria della probabilita." Giornale dell'Istituto Italiano degli Attuari, Vol. 9, No. 1 (1938-XVI), pp. 1-42.
[3] "An international conference on the theory of probability." Nature, Vol. 140, p. 938, November 27, 1937.

In what follows, no attempt is made to "cover" these meetings. But in some measure, this paper may reflect the nature or spirit of the conferences.

The subject that I take up first may seem somewhat abstract; but it is to serve a double purpose. I hope that it will show how probability is in a special sense a bridge between pure mathematics and general science. Moreover, for my final topic of significance tests, it is a natural introduction.

The reasoning used in pure mathematics is deductive, so texts on logic tell us, whereas in science there is used the inductive method as well as the deductive. The inductive method is closely associated with probability. Indeed, through probability the inductive method is also closely connected with the method of indirect proof, a strictly deductive process used constantly in pure mathematics. To see this, let us recall that associated with any proposition: If A, then B; there is a strictly equivalent proposition: If not-B, then not-A. To say: "If a triangle is equiangular, it is equilateral" is to say no more and no less than: "If a triangle is not equilateral, it is not equiangular."
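This equivalence can be checked mechanically. The following short Python sketch (a modern editorial illustration, not part of Dodd's article) tabulates the truth values of a proposition, its contrapositive, and its converse:

```python
# Illustrative check: "if A, then B" and its contrapositive
# "if not-B, then not-A" agree for every assignment of truth
# values, while the converse "if B, then A" does not.
def implies(p, q):
    return (not p) or q  # material implication

for A in (True, False):
    for B in (True, False):
        original       = implies(A, B)
        contrapositive = implies(not B, not A)
        converse       = implies(B, A)
        print(A, B, original, contrapositive, converse)
# The original and contrapositive columns always match; the
# converse differs when A is false and B is true.
```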
As another illustration, think of a circle C lying inside a larger circle D. If a point P is in the inner circle C, it is certainly in the larger circle D also. Or, we may equally well say that if P is not in D, it is not in the inner circle C.

Note that to pass from the proposition: If A, then B; to the equivalent proposition: If not-B, then not-A, we interchange hypothesis and conclusion, and then negate both. This equivalent proposition should be carefully distinguished from a converse proposition, which may be obtained from: If A, then B, either (1) by interchanging hypothesis and conclusion to obtain: If B, then A; or (2) by negating A and B to obtain: If not-A, then not-B. But for a converse, we do not both interchange and negate. A proposition may be true while its converse is false. But a proposition and its equivalent are either both true or they are both false. Thus, to prove that an equiangular triangle is equilateral, we may examine the equivalent proposition that a triangle not equilateral is not equiangular. But this has already been established. Hence, the proof is complete. This is an illustration of the Indirect Method of Proof. Here, starting with not-B, which is a denial of our conclusion B, we are led, as a result of propositions already accepted in our system of axioms and proved theorems, to a denial of our hypothesis A. This denial of our conclusion B we then reject as impossible. That is, we accept the conclusion B.

The inductive method of general science proceeds along similar lines. A hypothesis H is set up, and by a chain of propositions representing commonly accepted physical principles or mathematical deductions this hypothesis H is connected with a statement S which may be tested by experiment. If the experiment seems to disprove the statement S, a probability is created that the hypothesis H is false, at least to some extent. In a case like this, we would seldom be justified in asserting that the hypothesis H is surely false, although in certain cases a very high probability may be established that the hypothesis is in some measure incorrect or imperfect. An uncertainty arises from two sources. In the first place, the chain of propositions which connect H with S may contain some weak links in "physical laws" imperfectly stated. In the second place, the observations made constitute a sample. Other investigators using a different technique may get other samples decidedly discordant. The main difference between the indirect proof as used in deductive reasoning and the rejection of a hypothesis appearing to lead to propositions which are at variance with experiment lies in the fact that in the latter case the hypothesis becomes improbable, instead of impossible.

Mention was just made of the sample. In the modern theory as developed by R. A. Fisher, of London University, and by many others, great emphasis is laid upon the notion of the sample. A set of measurements constitutes a sample. With few measurements, we have a small sample; with many measurements in the set, a large sample. The older theory of probability deals primarily with large samples. In recent years the small sample has been studied intensively, and it has been found that formulas for probable error need substantial revision when the sample is small. In certain cases, only small samples are available; and sometimes it is possible to get a good deal of information from a small sample. However, in planning large investigations, small samples should be avoided when possible.
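A small simulation (a modern illustration with invented population figures, not from the paper) suggests why small samples are distrusted: estimates drawn from them scatter far more widely than estimates from large samples.

```python
# Draw repeated samples of 25 and of 300 from the same hypothetical
# population and compare how widely the sample means scatter.
import random, statistics

random.seed(1)
population = [random.gauss(100, 20) for _ in range(100_000)]

def spread_of_means(n, trials=1000):
    means = [statistics.fmean(random.sample(population, n))
             for _ in range(trials)]
    return statistics.stdev(means)

print("n=25: ", spread_of_means(25))   # roughly 20/sqrt(25)  = 4
print("n=300:", spread_of_means(300))  # roughly 20/sqrt(300) ~ 1.15
```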
This may be illustrated by a question put to Dr. J. Neyman of London University when he was lecturing[4] before the Graduate School of the United States Department of Agriculture, April 8, 1937. He was asked whether, to study conditions of employment, etc., in 300 cities, it would be well to take a sample of 25 of these cities for exhaustive study. Dr. Neyman said "No" emphatically. In spite of detailed information in the cities selected, we would have only 25 in the sample; and this would be a small sample. Dr. Neyman mentioned a certain unsuccessful attempt made by Italian statisticians to get reliable information from a small sample.

[4] The lectures of Dr. Neyman have been published by this graduate school, Washington, D.C. Price $1.25.

In the modern study of samples, an attempt is made to ascertain just how much information or how little information can be obtained from samples of given sizes. This is done in connection with significance tests to be discussed later.

We come now to a fundamental question: "After all, just what is probability?" Here, authors do not agree. Different theories have been presented, and there has been much discussion. It would be difficult to describe adequately all the attempts made to answer this question. Perhaps one of the most important classifications that can be made would put in one class those theories that regard a probability much as a physical constant, like the force of gravitation. In the other class would be the theories that regard a probability as a mental phenomenon depending upon the amount of ignorance and the amount of information possessed by some individual mind. Sometimes the same experiment can be looked upon from either of these divergent viewpoints.

Suppose a pack of playing cards has been dealt out face down on a table. A man has touched one of the cards. What is the probability that he has touched a black ace? Regarding a probability as mental, the answer is 2/52. Let the man now be given the additional information that he has touched some black card. For this man, the probability would then become 2/26. But these figures for the probabilities would not satisfy some mathematicians who discredit psychic probability. They would note that the event refers to the past rather than to the future. The man either touched a black card or he did not. There is no real probability concerned here. If numerical probabilities are called for, these mathematicians would use only the limiting values one or zero, respectively. On the other hand, some mathematicians would reply as follows: By abstraction, we may regard the man's performance of touching a card as a single experiment or sample in a long series of such experiments that he might make. If, then, in each experiment, he touches a card and declares it to be a black ace, he will usually succeed in about 2 cases out of 52, assuming that he knows only that the cards are a regular set of playing cards, and assuming the absence of such powers of clairvoyance as are claimed for certain individuals at Duke University. Under the abstraction of repeated trials, we can then conceive of a relative frequency of successes, something like a physical constant, at least something more substantial than a "degree of rational belief."
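The repeated-trials reading of the card example can be imitated directly. A modern Python sketch (an editorial illustration, not Dodd's):

```python
# Over many simulated deals, the touched card is a black ace in
# about 2/52 of all cases, and in about 2/26 of those cases in
# which the card is known to be black.
import random

random.seed(2)
suits = "spades clubs hearts diamonds".split()
ranks = ["ace"] + [str(n) for n in range(2, 11)] + ["jack", "queen", "king"]
deck = [(r, s) for s in suits for r in ranks]

trials = 100_000
black_ace = black = 0
for _ in range(trials):
    rank, suit = random.choice(deck)
    if suit in ("spades", "clubs"):
        black += 1
        if rank == "ace":
            black_ace += 1

print("P(black ace)         =", black_ace / trials, "; exact value 2/52 =", 2/52)
print("P(black ace | black) =", black_ace / black,  "; exact value 2/26 =", 2/26)
```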
Here it may be well to mention the viewpoint of R. A. Fisher of London University. Fisher distinguishes sharply between a sample and the so-called "parent population" from which the sample is drawn. If the characteristics of the parent population are known, then on the basis of relative frequencies there will be probabilities concerning the appearance of certain characteristics in the samples. But Fisher refuses to reverse this. Knowing the sample, he does not recognize the validity of any probability regarding the parent population. To him any inference about the parent population is to be designated as a likelihood. This likelihood would seem to be a form of psychic probability. However that may be, Fisher has elaborated a powerful method called the Method of Maximum Likelihood for statistical purposes. This method is a rival of the Method of Moments that was vigorously supported by Karl Pearson, whose contributions to statistical theory are stupendous.

The Method of Maximum Likelihood differs at least technically from any method based upon Bayes' Theorem. Now this theorem of Bayes has been the focus of more violent controversies than any other topic in the field of probability. Bayes' Theorem in the form commonly used leads to something like the following assertion: As an explanation of an observed event, that hypothesis is the most probable from which this observed event would flow with the greatest probability. At first thought, this principle may appear plausible. In certain cases where no sound basis for deductive reasoning seems available, the acceptance of some such principle may appear almost unavoidable. Indeed, with a sufficiently large number of observations, the use of Bayes' Theorem may be defended. But the investigator who uses it ought to know what he is doing.

Among the non-psychic conceptions of probability is that founded upon "equally likely events" or "equally likely cases." A beginner naturally studies this formulation of probability first, in connection with problems in throwing dice and drawing cards from a pack. The concept of "equally likely cases" has been subject to severe attack by many authors, rightly so in many cases where authors have not realized that "equally likely cases" is a mathematical abstraction and should be taken as an undefined concept. This concept plays the same role as a point or straight line in pure geometry. A geometer who takes his "point" as an undefined notion is not criticised. It is only when a geometer tries to define a point as an object without length, breadth, or thickness that he may become ridiculous. Some authors imagine that everything should be defined, without realizing the impossibility involved. There still remains the question of the expediency of building a structure upon the idea of "equally likely cases," especially when difficulty arises in specifying these cases. This approach to probability has been called combinatorial. For the solution of problems which naturally arise, extensive use is made of the theory of combinations. With integrals replacing sums, an easy transition is made to continuous or geometric probability.

Of most interest to general science is the conception of probability as a relative frequency. For example, when a radioactive substance is disintegrating, electrons are being ejected from the nuclei of the atoms. The frequency with which this happens has the nature of a physical constant. That is, over equal intervals of time of sufficient length, just about the same number of electrons are ejected. On the other hand, there is a certain irregularity or unpredictableness about the time at which any ejection will take place.
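A rough modern sketch of this behavior, modeling the ejections as a Poisson process (an idealization assumed here for illustration, not stated in the article):

```python
# Counts over equal, sufficiently long intervals come out nearly
# constant, while the individual ejection times remain irregular.
import random

random.seed(3)
rate = 100.0           # mean ejections per unit time (hypothetical)
t, times = 0.0, []
while t < 10.0:        # simulate 10 units of time
    t += random.expovariate(rate)  # waiting time to the next ejection
    times.append(t)

counts = [sum(1 for x in times if i <= x < i + 1) for i in range(10)]
print(counts)          # e.g. roughly [100, 95, 104, ...] per interval
print(times[:5])       # the individual times show no simple pattern
```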
Such irregularity in occurrence will be mentioned again presently in connection with the Mises theory of probability.

This Mises theory is the last theory that I shall mention in particular. It was the theory that was discussed in most detail at the Congress in Geneva. Before taking this up, however, it should be noted that with the aid of Lebesgue measure and the ideas associated therewith, a great deal of refinement has been added to the various conceptions of probability by numerous writers, as indicated in a "Traite."[5]

[5] Traite du Calcul des Probabilites et de ses Applications, Emile Borel, editor, Paris, Gauthier-Villars. See especially the sections written by E. Borel and M. Frechet.

The theory of R. von Mises postulates an infinite succession of "events," each event yielding some mark or number or set of numbers. This is a mathematical abstraction; but a simple illustration of what it attempts to describe approximately is an infinite set of tosses of a coin, which with a head designated by H, and a tail designated by T, might start something like this: HTHTTHTHHHTHTTH ...

In such a sequence we would expect to find some regularity and some irregularity. Roughly speaking, the regularity consists in heads appearing about half the time. To describe the irregularity is not so easy. Of course, a sequence THTHTH ..., in which the T's and H's alternate, we would not regard as a typical chance sequence. And we can go still further. We expect a head appearing singly, as in THT, to appear about twice as often as a double H, as in THHT. And we expect to find about twice as many sets THHT as sets THHHT, and so on. That is, we expect certain runs of heads and runs of tails, the frequency diminishing with the length of the run in definite ratio. Indeed, we may test whether a sequence of numbers can be used effectively as a chance sequence by investigating the runs. However, the foregoing hardly expresses completely what we expect to find in a chance sequence. Now Axiom II of R. von Mises is to this effect: If from a chance sequence with its several marks or numbers appearing in the limit in fixed proportions, a subsequence is taken out by some method based solely upon the position of the "event" in the sequence, then in this subsequence the marks or numbers will appear in the same proportions.
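The run test described above can be tried directly. A minimal modern sketch (an editorial illustration, not Dodd's): generate a long random head-and-tail sequence and count the maximal runs of heads of each length; each count should be roughly half the one before it.

```python
# Count maximal runs of heads in a long random H/T sequence.
import random
from collections import Counter
from itertools import groupby

random.seed(4)
seq = "".join(random.choice("HT") for _ in range(1_000_000))

runs = Counter(len(list(g)) for k, g in groupby(seq) if k == "H")
for length in range(1, 6):
    print(length, runs[length])
# Runs of length 1 are about twice as common as runs of length 2,
# which are about twice as common as runs of length 3, and so on.
```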
GALACTIC ROTATION

BY E. G. KELLER
The University of Texas

... of the distance from the center to the circumference of the galaxy. It is true that some spiral nebulae rotate approximately as solids.[6] It follows from celestial mechanics[7] that if a body describes a circular or elliptical orbit under a central force, then the force is either directly proportional to the first power or inversely proportional to the square of the distance between the two bodies. The attraction on a particle (star) interior to a homogeneous oblate spheroid is directly proportional to the distance of the particle from the center of the spheroid. Thus, in those nebulae which approximate this configuration, the stars describe circular orbits, and it is an additional result of celestial mechanics that the periods of revolution of all particles revolving under a first-power law are equal. Thus some nebulae should, according to mathematics, rotate as solids.

The rotation of the galaxy is confirmed by an almost independent method based on the cosmic cloud. There exists throughout galactic space an extremely tenuous cloud of calcium atoms. Such a cloud is of low viscosity and little affected in its motion by stellar matter. When the spectrum of a distant star is obtained, there frequently appear faint calcium lines in addition to those belonging to the star. These lines are shifted less than the lines in the stellar spectrum. This raises the question of the distance of the intervening cloud. If the distance of the cloud is taken to be the average distance of all its separate molecules and atoms, then the spectroscope reveals that the central portions of the cloud rotate in the same way as a star located at that point would rotate.

In early June at 9 P.M. observe, with unaided eye, upon the eastern horizon the great star cloud in Sagittarius, the nucleus of our system. At that time your motion northward, relative to the center of the system, is about 150 miles per second.

The list of books following the footnotes and relating to the subject of this note may be of interest to the reader.

[a] Or, expressed in ratios, the diameter and mass of the solar system bear respectively the same ratios to the diameter and mass of the galaxy as one-half inch and the weight of a large loaf of bread bear respectively to the diameter of Texas and the weight of an American wheat crop.

[b] Each year in the galaxy approximately ten stars suddenly and temporarily increase in brightness many thousandfold. These stars, called novae, are thought to be exploding old stars. Frequently the distances of these can be determined. If so, their luminosity, which is a measure of their total light emitted, can be determined, because the apparent brightness of a star varies directly as its intrinsic brightness and inversely as the square of its distance. The proportionality constant is known. Let Q denote the average intrinsic brightness of galactic novae. About fifty novae appear annually in the Andromeda nebula. The assumption is made that novae are the same throughout the universe. By this assumption the average intrinsic brightness of spiral novae is also Q. Since the average apparent brightness is a measurable quantity, the average distance of the spiral novae can be determined.

[c] There exist in the galaxy thousands of stars which vary periodically in brightness, their periods being from two days to two months. These stars are called Cepheid variables because the most representative star of this class is delta Cephei. In 1912 Miss Henrietta S. Leavitt of the Harvard Observatory, in making a study of Cepheids in the small Magellanic Cloud, discovered an empirical relation between the period of light variation and the apparent (observed) brightness of these stars. Since the dimensions of the cloud are small compared to its distance, these stars are all at approximately the same distance from the earth. Thus their apparent brightness is a measure of their intrinsic brightness. The period-luminosity relation discovered by Miss Leavitt is confirmed by the study of many galactic Cepheids whose distances are known. The assumption is then made that the relation is a universal one, holding throughout all space. In making use of this relation in the study of extra-galactic distances, the period of a Cepheid in a spiral is noted. From the period-luminosity curve its absolute brightness Q is read. Then from the relation Light Received = kQ/d^2, where k is a known constant, the distance d is computed.
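Note (c) ends with the relation Light Received = kQ/d^2, so d = sqrt(kQ / Light Received). A small worked form of the arithmetic in Python (all numbers below are invented purely to show the algebra; they are not astronomical data):

```python
import math

def distance(k, Q, light_received):
    """Distance of a Cepheid from its absolute and apparent brightness,
    using Light Received = k*Q/d^2 solved for d."""
    return math.sqrt(k * Q / light_received)

k = 1.0          # proportionality constant (assumed known)
Q = 400.0        # absolute brightness read from the period-luminosity curve
light = 1.0e-4   # measured apparent brightness
print(distance(k, Q, light))  # 2000.0 distance units
```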
[d] A spectroscope is an optical instrument for forming and analyzing the spectra of light emitted by luminous bodies. Most stellar spectra are continuous spectra interrupted by transverse dark lines. A system of dark lines on the spectrum indicates the presence of a definite chemical element or compound in the atmosphere of the star. The same system of lines can be produced in the laboratory from a source of light at rest relative to the observer. When the system of lines in the stellar spectrum is compared with the system of laboratory origin, it is noted that the stellar lines are, in general, shifted either toward the violet or red end of the spectrum. If the shift is toward the red (violet), the distance between the luminous body and the observer is increasing (decreasing). A velocity as low as one-fourth mile per second can be detected by the shift. If a spectrogram is taken of a rotating spiral which is edgewise toward the earth, the spectrum from one end of the edge shows the red shift, from the other end the violet. These two shifts prove the rotation of the spiral.

REFERENCES
1. Duncan: Astronomy (a text), p. 419. Harper and Brothers, New York (1935).
2. Stebbins and Whitford: "Diameter of the Andromeda Nebula," Proceedings of the National Academy of Sciences, 20, p. 93, 1934.
3. Curtis: Handbuch der Astrophysik, Band VI, Zweite Halfte, Erster Teil, p. 860.
4. Hubble: (Ref. 3 above, p. 862). Also "A Spiral Nebula as a Stellar System, Messier 31," Mt. Wilson Contr., No. 376.
5. Slipher and Pease: (Ref. 3 above, p. 852).
6. Pease: (Ref. 3 above, p. 852).
7. Moulton: Celestial Mechanics.

BOOKS
I. The Realm of the Nebulae, Edwin Hubble. Yale University Press (1936).
II. New Pathways in Science, A. S. Eddington. The Macmillan Company, New York (1935).
III. Problems of Cosmogony and Stellar Dynamics, J. H. Jeans. Cambridge University Press (1917).
IV. Kosmos, W. de Sitter. Harvard University Press (1932).
V. Astronomical Society of the Pacific Leaflets, Vol. I. Astronomical Society of the Pacific, San Francisco, California (1934).

THE ATTAINED AND THE UNATTAINED IN THE TEACHING OF MATHEMATICS [1]

BY F. S. BEERS
Examiner and Executive Secretary, The University System of Georgia

[1] Read by invitation before the Southeastern Section of the Mathematical Association of America, at Decatur, Georgia, March 23, 1935.

At the meeting of the Mathematical Association of America at Williamstown, 1934, Mr. C. N. Moore presented one of the most stimulating papers I have heard in recent years. I recall neither its title nor much of the specific content; but the point which Mr. Moore made was unmistakably sound and convincing, namely, that there are for the student of mathematics values over and above those described by the terms "mechanical" and "intellectual." The particular values to which Mr. Moore called attention were closely associated with the aesthetic and imaginative qualities commonly ascribed to various forms of poetry. If I recall correctly, Mr. Moore drew an analogy between the niceties of Euclidean geometry and the sensuous richness of the poetic writing of John Keats. One who understands space as only a geometrician does is rewarded not only by an intellectual mastery of relationships which remain immutable, but also by an experience which transcends the mere facts and finds refuge in a warmth of feeling ordinarily associated only with art in its more conventionally accepted modalities. Thus, to one whose range of emotional understanding is adequate to the task, the sensuous beauties of the Ode on a Grecian Urn and the overtones arising from understanding of space relations yield comparable if not identical aesthetic values.
This unusual penetration into the affective qualities of mathematics, too often discounted, discredited, or ignored by teachers, suggests that the place of mathematics in education may be justified from a point of view hitherto untouched, or at least largely ignored.

Everyone is familiar with the more commonly employed arguments for including mathematics among those materials which are taught as a part of a college education. To a very apparent extent, science finds its roots embedded in mathematical calculation. Lord Kelvin's statement, "Science depends on measurement; nay, science is measurement," is more than an epigram; it is very nearly the expression of a universal truth. Verification through repetition is predicated upon quantitative identity. The rich stores of knowledge which have made possible our civilization, which have contributed in innumerable ways to physical comfort, and which have brought present-day life up from that early stage in which it was fumbling, nasty, brutish, and short, are all based upon the processes of refinement and verification which constitute the heart of mathematical calculation. In a word, there are also to be found in mathematics many of those values normally attributed to the social sciences.

The extent of the applications of mathematics is so great that a teacher may be pardoned a modicum of bewilderment if he is sometimes at a loss to know how most effectively to justify his field to students, or how to select from it those divisions which will prove of greatest value either from a cultural or from a utilitarian point of view; and occasionally a teacher becomes distressed over the fact that few of the fundamentals or the applications of mathematics penetrate deeply enough into the nervous systems of his students to make more than the vaguest trace. I recall the unhappy youngster who got the terminologies of his biology and mathematics somewhat confused. Asked for an example of deductive reasoning, he propounded the following incredible syllogism: "All four-legged animals are quadratics; the horse is a four-legged animal; therefore, the horse is a quadratic."

Perhaps the oldest and most hoary justification for teaching mathematics finds its expression in the doctrine of formal discipline. Roughly, this doctrine would support the teaching of foreign languages on the grounds that such study exercises and makes muscular the memory; and it fortifies the study of mathematics on the ground that practice in the manipulation of symbols and their interplay in irrefragable relations develops, nay even creates, the power of reasoning.

But however much we may idealize mathematics and the values which arise from its study, there is little doubt that the last-named apology, formal discipline, is probably the least sound, although not entirely without merit. It has long been recognized that there are no distinct and separate entities such as memory or reasoning or honesty, which contribute to the total character of the individual in any mathematically determinable quantity. One may have an excellent memory for names and a distinctly poor one for numbers, as has been demonstrated in a well-known clerical aptitude test. Or, again, one may have great skill in manipulating the functions of formal logic and at the same time be quite inept in the lesser but more practical fields of common sense.
One of the loudest wails I have heard came from an academician whose chief duty was to teach logic. His bitter cry was that, despite his skill in formal statement and deduction, he was inevitably worsted in any bargaining with the junk dealers or life insurance salesmen.

While we are touching upon this topic of character traits it may be well, by way of illustration, to point out that honesty, like reason and memory, is specific rather than general. Ingenious tests have been devised which will measure honesty in a given situation under given conditions. The results show that an individual may be honest under one set of conditions and dishonest under another. The teller of a bank may be entirely above reproach in the discharge of his duties as an official but thoroughly dishonest in accounting to his wife for a sum of money which he lost in a clandestine game of poker.

The argument that any kind of study is important because it trains a certain character trait is a weak one, not only because the so-called traits of memory, reasoning, honesty and the like do not exist in anywhere near the isolation commonly supposed, but also because no given discipline is the only route to the training of such traits, if they do exist.

Whatever the value which inheres in the subject for the student of mathematics, whether it be aesthetic, social, disciplinary, or some combination of these, there has been in recent years a tendency to ignore these, at least theoretically, and to attempt to organize courses which will have an immediate and practical value. For the present academic year such a course, pitched at the freshman level, has been devised by the Department of Mathematics of the University of Georgia and is being required as part of the general education program sponsored by the University System. This course essays to canvass the facts and processes necessary for an intelligent understanding of elementary courses or for an equally understanding grasp of materials presented in newspapers and magazines.

The course begins with statistics, which is a rather easily justified point of departure in view of the fact that government reports, school reports and the like are couched in statistical terms and call for some understanding of statistical measures for their interpretation. It is continued with a section on the mathematical theory of finance, which includes interest, discount, annuities, sinking funds, bonds, life insurance, and the like. These are all matters which concern the individual citizen or the community at large. Trigonometry is included in a brief chapter because of its use in other fields and because of the background it affords for continuation work. The final sections include a fairly extensive review of algebra and mensuration. This review covers most of the fundamentals of secondary school arithmetic and algebra which are necessary for later work. In short, there are really two chief objectives: one to afford the student planning to study no more mathematics an acquaintance with the situations and problems which he will face as a citizen, and the other to prepare those who will continue their study for more advanced work.

For the purpose of determining upon some measure which would yield an index of how effectively the materials of the course were being comprehended by the students, an examination was devised and used uniformly throughout the University System.
It was thought best that the test include a rather large number of items and that it sample rather carefully all of the materials included in the course. Such a test would tend to compensate for differences in emphasis used by the various teachers scattered rather widely in the University System. Moreover, a fairly long examination, carefully constructed so that answers to questions would be unequivocal, or at least not allowing for more than two or three options, and calling for relatively little calculation, seemed the most economical and sure way of controlling variation in scoring, which often is a considerable factor in tests of lesser length. Finally, in a long test chance factors could also be better controlled. Eight or ten questions do not and cannot constitute an adequate sampling of the content or the mental processes belonging to a course comprising statistics, finance, trigonometry, algebra, and mensuration. Eight or ten questions do not give a fair sample of a student's performance. It is a fact upon which there is no disagreement among competent thinkers that pure chance may and frequently does account for success or failure upon a given question. Now if there were 100 questions, the maximum possible influence of chance based on a single question would be only 1/100 of the total possible score. If there were 50 questions the maximum influence would be 1/50 of the possible score; where there are only 10 questions, the possible influence of chance rises to 1/10 of the total possible range.

In general, the smaller the number of questions or elements, the more are all chance factors exaggerated. This is especially true where there is no exact knowledge of the relative difficulties of the elements, and where no effort is made to weight them according to their difficulty or significance. Such inequalities and inexact weightings are sure to occur in any examination of whatever type; but their influence is greatest when the number of questions is very small, and conversely their influence diminishes rapidly as the number of separately scored elements increases.
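The chance argument can be made vivid with a small simulation (a modern editorial sketch; the parameters are hypothetical): a student who guesses blindly on five-option questions scores erratically on a 10-item test but stays close to the expected 20 per cent on a 100-item test.

```python
import random, statistics

random.seed(5)

def guess_scores(items, trials=10_000, options=5):
    """Proportion-correct scores for a pure guesser, repeated many times."""
    return [sum(random.randrange(options) == 0 for _ in range(items)) / items
            for _ in range(trials)]

for n in (10, 50, 100):
    scores = guess_scores(n)
    print(n, "items: mean", round(statistics.fmean(scores), 3),
          "std dev", round(statistics.stdev(scores), 3))
# The spread of the chance score shrinks roughly as 1/sqrt(n),
# matching the 1/10, 1/50, 1/100 comparison in the text.
```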
The ideal examination is one which uses the whole of the examination period for pure cerebral activity in the field of thought in which the examinee is being measured; at the opposite pole of efficiency are those tests which take a large part of the time, energy, and attention of the examinee for purely recording purposes and other less relevant activities. A notable case in point is the conventional attempt to measure ability or achievement in every branch of science or of the liberal arts through the sole medium of English prose. From the confidence with which many teachers believe that an essay reveals special achievement, literary appreciation, and logical ability of students, one might conclude that such prose laid bare the student's very soul with tangible and objective Euclidean proof.

Ordinarily the traditional examination in mathematics is not subject to the dangers arising from too great faith in the efficacy of exposition as an analyst; but it is open to question on the grounds that its brevity allows too much room for the operation of chance factors and at the same time requires too much of the student in the mere business of recording responses. Admittedly, for a thorough understanding of any branch of mathematics a student must be able to carry on a rather long and involved process of reasoning and calculation. This is one of the fundamental characteristics of the subject, and historically the one which has given rise to the disciplinary theory. But a thoroughgoing examination which would carefully test the student on his ability to carry through sustained and complicated processes of reasoning and manipulation would require a very much longer period of time than can ordinarily be given for the regular quarterly or semester tests.

By and large, the short-answer type of question, for the purpose of testing approximately 800 students taught in small groups by 20 different teachers, offers fewer difficulties and has more to recommend it than a test for a similar group based on fewer problems but calling for sustained and neatly integrated calculation.

The examination used for the 797 freshmen taking the survey course in mathematics in the University System of Georgia was 150 items in length. The questions themselves were of two general types: those in which the correct response was given along with several incorrect ones, and those in which only the problem was stated, which the examinee was asked to solve, entering his answer in parentheses provided for the purpose. To the best of the ability of those preparing the test, these questions were distributed evenly over the whole content of the course, and the two types of question were so allocated that they overlapped extensively; that is, each of the various sections (statistics, finance, trigonometry, algebra, and mensuration) was represented both by multiple choice items and by problems.

Of all short-answer questions, the multiple choice or multiple situation type is probably the most flexible. It draws essentially upon recognition, however. And of course it is frequently true that the examinee can recognize laws, processes, or principles when he is confronted with them, although he might not be able to recall them in the absence of a guide. The important qualification of the wrong options or decoys in the multiple choice question is that they be plausible, that is, sufficiently near the correct response in form or principle so that careful discrimination is required. The problem type of question, of course, depends largely upon recall and upon immediate reasoning. The difficulty in the construction of such questions, in cases where a large number of them are used, is to formulate them in such a way that they really present a problem, a principle, or a situation calling for thought, and at the same time do not require extensive calculation for their solution. As the amount of calculation required goes up, the number of problems which can be asked decreases.

Making generalizations about the mental processes called into play by various questions is difficult and unsatisfactory at best. With this qualification in mind, however, it may be worth while to draw some illustrations from the test at issue. For example, questions may involve interpretation, as in the instance where a graph is presented in the form of the temperature chart of a hospital patient and several items are based on this chart, as: At what interval was the temperature of the patient taken? What is the lowest temperature recorded on the graph? What was the patient's temperature on the twenty-third day of his illness? Again, a question may involve the identification of a formula, as for example: "The formula for finding the cost of a bond is ...", and then several formulas are listed from which the examinee is expected to choose.
Again, the item may test for knowledge of a law, as the law of exponents involved in the question: What is the numerical value of (5xy)^0? Or again, items may be merely descriptive and definitive, as: Amortization refers to (1) kind of insurance, (2) kind of sinking fund, (3) rate of interest, (4) kind of bond, (5) means of extinguishing a debt. And from these options the student is to select that one which best defines or describes the term in question.

In such an examination, that is, one in which there are 100 to 150 items, no single question plays a highly critical part in determining the total score, and yet the general picture which the total score presents is fairly significant of the extent to which the examinee has achieved a bowing acquaintance with the subject.

It must be admitted, of course, that such measurement is indirect, that it does not include all of the possibilities which might be desired, and that it possibly places a handicap on the student whose persevering tendencies make long-sustained and highly concentrated calculation preferable and more diagnostic. But let me point out that the minute we leave the exact sciences as such and attempt to measure knowledge as possessed by a given individual, we leave behind, for the most part, the possibility of direct measurement. Even in the less exact sciences (geology, astronomy, biology, and the like) our scientists can seldom measure directly the total phenomenon or result or development in which they are interested. Usually they must be content with measuring some aspect of it and then drawing inferences about the meaning of their results as these predict the total process with which they are really concerned. Although of all the disciplines mathematics is indubitably the most exact, nevertheless the attempt at determining the extent of acquaintance with any phase of it on the part of a human being is not and cannot be, in the light of our present state of knowledge, an exact process.

As I have already indicated, 797 freshmen took the examination in functional mathematics, an examination composed of relatively short questions but comprising a total of 150 of such questions. In the process of scoring, each answer was given a value of one, so that the total possible score was 150 points. The mean score obtained for the group was 70 points, or less than half of the total possible score. The root mean square deviation indicates that approximately two-thirds of the group made scores ranging between 46 and 94 points. The highest score made was 135 and the lowest 20. From these figures it is evident that the test was sufficiently difficult so that the best students still had plenty of room at the top, while the poorest ones were able to make some kind of showing. It will be of interest, in passing, to note that students in twelve different colleges were tested and that the mean scores for ten of these colleges were within two-thirds of a standard deviation of each other. As for the means of the other two schools, one was exceptionally high and the other exceptionally low. In general it appears that the range of achievement is roughly as great in one school as in another, or at least the heterogeneity is sufficiently marked as to be very much in evidence.
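The summary figures quoted above can be reproduced mechanically. A modern sketch (the 797 individual scores are not printed in the paper, so the list below is invented purely to show the computation):

```python
import statistics

scores = [20, 46, 53, 61, 70, 74, 86, 94, 110, 135]   # hypothetical sample
mean = statistics.fmean(scores)
rms = statistics.pstdev(scores)   # the "root mean square deviation"
print(mean, "+/-", rms)
# For a roughly normal distribution of scores, about two-thirds lie
# between mean - rms and mean + rms; with mean 70 and rms 24 that
# interval is the paper's 46-to-94 range.
```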
It would, of course, be manifestly unsound to generalize about the effectiveness of study in mere terms of scores made. We cannot assume that because a score falls at the twenty-fifth percentile the course for that student is but 25 per cent efficient. Nevertheless groupings do have differences which ought to have some significance. For example, 25 per cent of the students tested made scores of 53 points or less; 50 per cent got scores ranging between 54 and 86; and 25 per cent made scores between 87 and 135. Now these are wide differences. The problem would be simplified if there were really sharp lines of demarcation, which, of course, do not exist. It would also be further simplified if we were able to say with any certainty just what values have accrued to the 25-percentile student and just how these differ from those acquired by the 75- or 90-percentile students.

Earlier in this paper I called attention to certain values which may arise from the study of mathematics, namely aesthetic, social, and disciplinary. That these do exist is, I think, unquestioned; but I feel certain that all will agree that the extent to which they are present is indeterminate. May I submit, however, that the extent of mastery in terms of purely intellectual and manipulative skill can fairly well be measured, and that whatever values may be inherent in the study of mathematics over and above these factors of skill are largely predicated upon a rather thoroughgoing knowledge.

If this premise is granted, I believe that it follows logically that a considerable percentage of the 797 students tested in the University of Georgia System have not a sufficiently wide mastery of the subject to justify the assumption that they are acquiring a training in character or are achieving other intangible values. And, frankly, it is very likely not possible to devise a course which will get uniform results. Students vary too much in their background and training, in their native equipment, in the degree of interest which they possess, and in the amount of physical and intellectual energy which they have at their command. It is highly probable that these differences have been heightened rather than lessened by high-school training.

The business of the college as a social agency in the training of youth is to help each student to find that level of performance on which he will be successful. There is nothing undemocratic in such a procedure. When we learn that such a course as survey mathematics differentiates students rather sharply, it seems logical that if these students could be sectioned to begin with on the basis of their achievement in mathematics, or in some field which is highly correlated with mathematics, opportunity would be better for stimulating those of exceptional ability to achieve up to their capacity, and also for modifying course work for those whose likelihood of competing favorably with the more gifted students is slight. It is not unreasonable to look forward to such sectioning as significantly influential on the morale of students. In games of skill and in the various sports, ability groupings very nearly take care of themselves automatically. First and second line teams are rather rapidly and rather certainly formed after short periods of tryout; tennis players of considerable skill tend to gravitate for the sake of sport to competitors of approximately equal skill; and even among children indulging in baseball on the side lot there is expended great care to see that the distribution of talent, when sides are being chosen, is fairly even.
We are looking forward in the University System of Georgia to sectioning according to ability as one of the important steps in providing for individual differences, and as a means for studying modifications in course material, to the end that courses may be made suitable to the capacities and needs of students, whatever those capacities and needs may be. Meanwhile, since no adequate measures are available to indicate the kind of opportunity students entering college have had for acquainting themselves with mathematics, and since grades from high schools vary widely as indices of achievement, students need for their own information opportunities under fairly uniform conditions to try themselves out and to learn something about their relative degree of success as measured by a common yardstick.

When we have developed understandable measures of achievement in our school systems, so that a clear record of what has been accomplished during the high-school period comes along with the student when he enters college, we shall probably find that a requirement in mathematics for all students is inadvisable. The fact that an understanding of mathematics in any of its phases depends largely upon cumulative knowledge, to an extent not matched by other subjects, makes it promising that at some future date such records ought to be available. At such a time sectioning ought not to be necessary. If any adequate understanding of mathematics depends, as it probably does, on ability which is more or less fixed, then determining the extent of that ability early in the educational process will do much toward simplifying the curricula. Under such a scheme, undoubtedly, many of the students who now find their way to college work, and many youngsters who now do not even consider the possibility, could be guided toward those types of college work for which they are eminently suited.

ON ELEMENTARY VECTOR ALGEBRA AND THE RULES OF SIGN

BY H. V. CRAIG
The University of Texas

1. Introduction. For some time the writer has been impressed by the fact that there are many topics in advanced mathematics that could be incorporated with profit into elementary courses. This is particularly true with regard to vector analysis. In the present article we shall attempt to show that the rather troublesome rules of sign of beginning algebra may be introduced quite naturally by means of vectors. The writer is of the opinion that these rules should be presented through some device of a physical nature rather than by way of a highly abstract formalism, and since vector analysis possesses to a high degree the appeal of intuitiveness, it seems that the simpler parts of this discipline are admirably suited to elementary instruction. Furthermore, the few facts extraneous to algebra which must be introduced are of considerable interest in themselves and of great scientific importance. The fundamental operations involved (reversal, or rotation through 180 degrees, and stretching) are extremely simple.

2. Physical and mathematical background. The two cardinal features of a force applied to a particle are the magnitude or amount of the force and the direction of the force. This information can be given concisely by a directed stroke or vector, the length of the stroke determining the magnitude and the direction of the stroke the direction. The end of the stroke which bears the arrowhead is called the vertex or terminus, and the other end the origin.
Two vectors are said to be equal if they have the same magnitude and the same direction, regardless of their position in space. Evidently, each pair of distinct points determines two vectors, since either of the two points may serve as the origin. We note that either one of these vectors may be converted into the other by a rotation through 180 degrees (reversal). Furthermore, we shall say that a pair of coincident points determines a degenerate or zero vector. A zero vector is regarded as being of zero length and does not determine a direction.

Two vectors A and B may be made to determine a third, C, in the following way: move B without changing its direction until the origin of B is at the vertex of A, and take the origin of A as the origin of C and the (new) vertex of B as the vertex of C. If A and B represent two forces acting on a particle, then it is an experimental fact that the force corresponding to C is equivalent to the simultaneous forces A and B. For this reason C is called the "sum" of the strokes A and B and is denoted by A + B. This process is of great importance in many branches of physics and engineering, and is the basis of the "dead reckoning" calculations used by aviators.

It should be observed that if B is of the same magnitude as A but oppositely directed, then the origin and vertex of C coincide, i.e., C is a zero vector; thus we write A + B = 0 and B = -A. In this way we are led to associate the symbol - with the operation: reversal of direction. We note in passing that the forces corresponding to A and -A counteract each other. Subtraction of vectors is defined by the equation

A - B = A + (-B).

Evidently, A + A is a vector having the direction of A and twice the magnitude. It is quite natural to designate this sum as 2A, and this generalizes to the following definition: If s is a positive number, then sA is the vector having the direction of A and a magnitude equal to s times that of A, while (-s)A is defined to be -(sA). Evidently, multiplication of a vector by a positive number is equivalent to a stretching, and multiplication by a negative number to a stretching and a reversal.

Let us now turn to the special case in which all of the directed strokes are confined to a single line. Let us select a point on this line and associate with it the number zero, and then let us mark off from 0 in a given direction a vector of unit length and associate with it the number one. Generally, we associate with each positive number s a vector of length s having the direction of 01. Thus if we designate the unit vector 01 by U, then 0s = sU. Now -U, or (-1)U, is the reversal of U and is to be associated with its coefficient -1; similarly, (-s)U = -(sU), the reversal of sU, and is associated with -s.

(Figure: a number line with the points -s, -1, 0, 3, and s marked.)

Evidently, xU = yU if and only if x = y, and if x > 0 and y > 0 then

(1) x(yU) = (xy)U = y(xU).

3. Rules of sign. Obviously, if either or both of x and y are negative numbers, the expression (xy)U of (1) involves the multiplication of negative numbers, an operation which we regard as being undefined at the present moment. Accordingly, we seek a definition which will render (1) valid for all real numbers. By virtue of the definition of the product of a negative number by a vector, we may write

-2(3U) = reversal 2(3U) = reversal 6U = -6U,
2(-3U) = 2(reversal 3U) = reversal 6U = -6U,

and

-2(-3U) = -2(reversal 3U) = reversal 2(reversal 3U) = 6U.

Thus we are led to the rules

(-2)3 = 2(-3) = -6; (-2)(-3) = 6.
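A minimal modern sketch of this model (an editorial illustration, not Craig's): represent a vector on the line by its signed length relative to the unit vector U, let a positive factor stretch it, and let the minus sign reverse it. The sign rules then fall out of the geometry.

```python
def stretch(s, v):
    """Multiply vector v by a positive number s (pure stretching)."""
    assert s > 0
    return s * v

def reverse(v):
    """Rotate v through 180 degrees (reversal)."""
    return -v

U = 1.0                                              # the unit vector 01
print(reverse(stretch(2, stretch(3, U))))            # -2(3U)    -> -6.0
print(stretch(2, reverse(stretch(3, U))))            # 2(-3U)    -> -6.0
print(reverse(stretch(2, reverse(stretch(3, U)))))   # (-2)(-3U) ->  6.0
```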
Also, -(-2V) is the reversal of the reversal of 2V, and so we adopt the definition -(-2) = 2.

Similarly, addition and subtraction of real numbers could be studied through their corresponding vectors and the distributive law

(x + y)U = xU + yU.

Thus to obtain 3 - 5, we replace the numbers by their corresponding vectors, i.e., we solve the problem by stepping it off. Thus,

3U - 5U = 3U + (-5)U = -2U.

Furthermore, the troublesome sqrt(-1) could be introduced by way of rotations through 90 degrees. Thus, since the effect of multiplying a vector by a real number is either to stretch or to stretch and rotate through 180 degrees, the opportunity for inventing new numbers that would rotate a vector through other angles presents itself. Thus if i is a new number (not real) and the symbol iV is defined to be the result of rotating V through 90 degrees (regardless of its position), then i(iU) = -U = (-1)U and we are led to the definition ii = -1.
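Modern complex arithmetic makes this rotation picture concrete. A brief Python sketch (illustrative, not part of Craig's article): treat a plane vector as a complex number; multiplying by i rotates it through 90 degrees, and two such rotations give the reversal, i.e., i*i = -1.

```python
U = 1 + 0j            # the unit vector along the line
print(1j * U)         # 1j      : U rotated through 90 degrees
print(1j * (1j * U))  # (-1+0j) : rotated through 180 degrees, i.e. -U
print(1j * 1j)        # (-1+0j) : hence the definition ii = -1
```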
4. Conclusion. The writer is convinced that the rules of sign should be presented to the uninitiated as definitions rather than as theorems resulting from other definitions and postulates; in either case their status should be given explicitly. Furthermore, in his opinion, if the rules are introduced as definitions they should be supplemented by a rather extensive account of their simplifying and unifying consequences. It is of course a gross misuse of terms to speak of proving a definition, but on the other hand definitions, although logically arbitrary, are seldom aimless, and as a matter of fact our desire for the simplicity resulting from the permanence of forms whose elements are undergoing generalization almost forces the adoption of certain definitions. As an example, consider the extension of exponents from positive integers to positive and negative rational numbers, and the simplicity secured by the well-known definitions.

Historically, the introduction of negative numbers was a very difficult step, and apparently the rule concerning the product of two negative numbers is a perennial stumbling block. In the writer's opinion the notion of direction should be introduced and made fundamental. Most authors of elementary texts do plot the numbers on a straight line, but they fail to take full advantage of their device. However, the directional or vectorial treatment given in Article 3 should not be regarded as sufficient, but should be augmented by other discussions showing the advantages obtained by the definitions adopted. As a result of his own experience and talks with others, the writer is convinced that mathematics students from the elementary level through the level of the calculus suffer from an unbalanced diet, too much stress being placed on memory work and manipulative technique and not enough on the why and the wherefore.

THE BROWN UNIVERSITY PRIZE EXAMINATION

The Brown University Prize Examination for freshmen was given on Saturday, October 9, 1937. The three prizes were awarded as follows:

First: Paul Brill, of Patchogue, N.Y.
Second: R. Whittaker Schmied, of Austin.
Third: Virginia W. Buckner, of New Orleans, La.

The questions were as follows:

1. Under what conditions will the equation

   (2ax - 19a^2)/(x^2 + ax - 6a^2) + (a + x)/(x - 2a) = (x - a)/(x + 3a)

have a solution for x, and what will the solution (or solutions) be?

2. Two places A and B are 168 miles apart, and trains leave A for B and B for A at the same time. They pass each other at the end of 1 hour and 52 minutes, and the first reaches B half an hour before the second reaches A. Find the speed of each train.

3. The hypotenuse of a right triangle is 25 and the altitude on the hypotenuse is 12. Find the lengths of the sides.

4. Construct a triangle, given one median and the angles it forms with the two adjacent sides.