Learning Business Statistics with Microsoft Excel 2000

9780130308788: Learning Business Statistics with Microsoft Excel 2000

This book is designed to reflect the important changes that computers have brought to the pedagogy of statistics, and it takes advantage of the ability of computers to help students understand statistical methods. The book takes a straightforward approach and presents computer usage as a tool, accompanied by the corresponding changes in the practice of statistics. Simplified use of Excel covers topics such as the expected value and variance of a probability distribution and the relationship between the confidence level and the likelihood of an accurate confidence interval. Dynamic use of Excel's graphics lets students visualize topics such as probability density functions and normal distribution relationships. A cultural value approach is used as an alternative to the approach taken by most other books. The book is useful to any professional who uses quantitative analysis, including financial analysts and accountants.


From the Inside Flap:

Preface

What's Different about this Book?

I wrote this book because I have been very dissatisfied with the way statistics is and has been taught to business students. The biggest impact on statistics has been the advent of computers and the dramatic effect they have had on its practice. Incredibly, they have had practically no impact on its pedagogy. Although a number of textbooks now incorporate Excel, they treat it as simply a sophisticated calculator: a convenient way to get an answer. This is, of course, a valid use of computers; it is the way practitioners of statistics use computers all the time, and it is appropriate to teach students how to use computers as such a tool. But it is not, in my opinion, enough. Computer usage should be accompanied by important changes in the way in which statistics is taught. This has not been done. Let's consider a few examples of what I have in mind.

In the days before computers, an important tool for every statistical professional was a set of statistical tables. No longer. Statistical tables, such as the z table and the t table, have become something almost exclusively used by students, who are required to learn how to use tables which, if they become actual users of statistics, they will probably never see again. The impact of tables on statistics is much more than simply a different way of obtaining information now more conveniently available from a computer; tables have introduced compromises in the teaching of statistics, compromises which make the learning of statistics an even more complicated affair than it need be. We can see this in the tendency of many of the most popular statistics texts to interpret problems as ones in which the z distribution can be used as an approximation for the theoretically appropriate distribution. This is almost universal, for example, in the treatment of hypothesis tests on a population proportion. The use of the normal distribution in such problems is based on the fact that the normal distribution, under certain circumstances, is a good approximation for the binomial. So why not use the binomial distribution directly? If one must depend on tables, the binomial distribution is problematic. With a computer, it is not. The use of the normal distribution in these cases suffers from two problems: it makes it more difficult for students to understand the real basis of the hypothesis test, and it can give answers which are wrong.
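The contrast between the exact binomial test and the normal approximation can be sketched in a few lines. The numbers below (20 trials, 14 successes, a one-sided test of H0: π = 0.5) are illustrative choices, not an example from the book:

```python
import math

# Illustrative numbers (not from the book): n = 20 trials, x = 14 successes,
# one-sided test of H0: pi = 0.5 against H1: pi > 0.5.
n, x, pi0 = 20, 14, 0.5

# Exact p-value: P(X >= x) under the binomial(n, pi0) null distribution.
p_exact = sum(math.comb(n, k) * pi0 ** k * (1 - pi0) ** (n - k)
              for k in range(x, n + 1))

# Normal approximation (no continuity correction, as in many texts):
# z = (x - n*pi0) / sqrt(n*pi0*(1 - pi0)), p-value = P(Z >= z).
z = (x - n * pi0) / math.sqrt(n * pi0 * (1 - pi0))
p_normal = 0.5 * (1 - math.erf(z / math.sqrt(2)))

print(f"exact binomial p-value: {p_exact:.4f}")   # about 0.0577
print(f"normal approximation:   {p_normal:.4f}")  # about 0.0368
```

At the 5% level the approximation rejects while the exact test does not: exactly the kind of wrong answer the author warns about.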

Many textbooks repeat a rule, which has gained authority solely through repetition, that the normal distribution can be used in hypothesis tests on a population proportion as long as nπ > 5 and n(1 − π) > 5. This is simply false; counterexamples can easily be found. Several are given in Chapter 10.
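Such counterexamples are easy to find by brute force. The sketch below (a one-sided test of H0: π = 0.5 at α = 0.05, an illustrative setup of my own) flags cases that satisfy the rule yet where the normal approximation and the exact binomial p-value disagree about rejecting:

```python
import math

def p_exact(n, x, pi0):
    """Exact one-sided binomial p-value P(X >= x)."""
    return sum(math.comb(n, k) * pi0 ** k * (1 - pi0) ** (n - k)
               for k in range(x, n + 1))

def p_normal(n, x, pi0):
    """Normal-approximation p-value (no continuity correction)."""
    z = (x - n * pi0) / math.sqrt(n * pi0 * (1 - pi0))
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

pi0, alpha = 0.5, 0.05
counterexamples = []
for n in range(11, 41):
    if n * pi0 <= 5 or n * (1 - pi0) <= 5:
        continue  # the rule claims the approximation is adequate from here on
    for x in range(n + 1):
        pe, pn = p_exact(n, x, pi0), p_normal(n, x, pi0)
        if (pe <= alpha) != (pn <= alpha):  # the two tests disagree at alpha
            counterexamples.append((n, x, pe, pn))

for n, x, pe, pn in counterexamples[:5]:
    print(f"n={n}, x={x}: exact p={pe:.4f}, normal p={pn:.4f}")
```

The search turns up disagreements even though every case it examines satisfies nπ > 5 and n(1 − π) > 5.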

Many textbooks continue to separate problems involving the distribution of sample means into "large sample" and "small sample" cases, with the normal distribution used in the former and the t distribution in the latter. Although the complexity added by this practice is probably small, we should recognize it as an anachronism created because z tables were easier to use and held more information than t tables. With computers, this distinction should be only of historical interest; using the t distribution whenever the population variance is unknown is the simpler, correct approach.
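A computer makes the t distribution just as accessible as the z, table or no table. As a stdlib-only illustration (not the book's method), the sketch below recovers the t critical value for a small sample by numerically integrating the t density, and shows how far it sits from the z value of 1.96:

```python
import math

def t_pdf(t, df):
    # Student's t density with df degrees of freedom.
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + t * t / df) ** (-(df + 1) / 2)

def t_half_prob(c, df, steps=2000):
    # P(0 <= T <= c), trapezoid rule on the density.
    h = c / steps
    s = 0.5 * (t_pdf(0.0, df) + t_pdf(c, df))
    s += sum(t_pdf(i * h, df) for i in range(1, steps))
    return s * h

def t_critical(df, conf=0.95):
    # Bisection for c with P(-c <= T <= c) = conf.
    target = conf / 2
    lo, hi = 0.0, 50.0
    for _ in range(40):
        mid = (lo + hi) / 2
        if t_half_prob(mid, df) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

z_crit = 1.960             # two-sided 95% z critical value
t_crit = t_critical(df=9)  # a sample of n = 10 gives 9 degrees of freedom
print(f"z: {z_crit:.3f}  t(9 df): {t_crit:.3f}")  # t is noticeably larger
```

For 9 degrees of freedom the t critical value is about 2.262, so treating a small sample as a "z problem" understates the margin of error.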

In one way, computers may impede a student's understanding of statistics. A computer allows one to perform a complex statistical procedure, such as multiple regression analysis, without any knowledge of the procedure at all. Simply click an icon or enter a command, and the procedure is done. This may seem a strong reason not to use computers at all in a statistics course. Yet there are many ways in which computers could support a student's understanding of statistics if they were incorporated into the pedagogy. Consider, for example, two topics at opposite ends of an introductory statistics course: the use of standard deviations to measure dispersion and the use of simple regression to measure the relationship between two random variables. I think it makes sense for students to have some experience doing both of these things by hand. Both, however, are computationally tedious, and an understanding of each procedure is complicated if one uses the methods that have been developed because they are computationally easier but tend to obscure the relationships actually being computed. Introducing a computer which can perform the operation in complete "black-box" fashion and always get the correct answer does not encourage student understanding. This book takes a very different approach (see Chapters 2 and 14).

I try to take advantage of Excel's strong visual metaphor by having students use Excel's ability with arithmetic to introduce themselves to the computations involved both with standard deviation and with simple regressions (among others). This makes the structure of the computations as clear as (or even clearer than) solving problems entirely by hand. It also makes it possible for students to easily see how different data, with differences in the characteristic the procedure measures, result in different intermediate as well as final steps. Yet it frees the student from computational drudgery and likely error. After this experience, students are introduced to the facilities built into Excel to perform these computations automatically.
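The column-by-column layout described above can be sketched outside Excel as well; here is the standard deviation worked that way in Python, with made-up data (simple regression gets the same columnar treatment):

```python
# Illustrative data (made up); each intermediate column stays visible,
# just as it would in a spreadsheet layout.
data = [4, 8, 6, 5, 3, 7]
n = len(data)
mean = sum(data) / n

print(f"{'x':>6} {'x - mean':>10} {'(x - mean)^2':>14}")
ss = 0.0
for x in data:
    dev = x - mean
    ss += dev * dev
    print(f"{x:>6} {dev:>10.2f} {dev * dev:>14.2f}")

variance = ss / (n - 1)    # sample variance
std_dev = variance ** 0.5  # sample standard deviation
print(f"mean = {mean:.2f}, s^2 = {variance:.2f}, s = {std_dev:.4f}")
```

Because every deviation and squared deviation is printed, a student can see exactly which numbers change when the data change, which is the pedagogical point being made.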

Computers (and Excel) also allow students to generate pseudorandom variables in ways which can reinforce their understanding of probability distributions and the concepts of expected value and the variance of a distribution. These are used in Chapters 3 and 4.
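The idea can be sketched with any pseudorandom generator; the distribution below is an invented example, not one from Chapters 3 or 4:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# A small discrete distribution chosen for illustration.
values = [1, 2, 3, 4]
probs = [0.1, 0.2, 0.3, 0.4]

# Theoretical moments.
ev = sum(v * p for v, p in zip(values, probs))               # E[X] = 3.0
var = sum(p * (v - ev) ** 2 for v, p in zip(values, probs))  # Var(X) = 1.0

# 100,000 pseudorandom draws, then the matching sample statistics.
draws = random.choices(values, weights=probs, k=100_000)
m = sum(draws) / len(draws)
s2 = sum((d - m) ** 2 for d in draws) / (len(draws) - 1)
print(f"E[X] = {ev:.1f}, sample mean = {m:.3f}")
print(f"Var(X) = {var:.1f}, sample variance = {s2:.3f}")
```

Seeing the sample mean and variance converge on the theoretical values is what makes expected value and variance concrete rather than formulaic.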

Others have used computer simulations to help students understand the Central Limit Theorem. The problem I have seen with these approaches is that if they are simply demonstrations without significant interaction, students who have difficulty understanding this foundation of inferential statistics will simply react as if they were viewing a foreign film in an unknown language without subtitles. By contrast, I try to have the student more actively involved with a demonstration of the theorem (in Chapter 6).
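A minimal sketch of such a demonstration (my own parameters, not the Chapter 6 exercise): draw repeated samples from a deliberately skewed population and watch the sample means behave as the theorem predicts.

```python
import random
import statistics

random.seed(42)  # reproducible
n, reps = 30, 5_000

# Parent population: exponential with mean 1 (so sigma = 1), deliberately skewed.
means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(reps)]

center = statistics.fmean(means)
spread = statistics.stdev(means)
print(f"mean of the sample means: {center:.3f} (theory: 1.000)")
print(f"sd of the sample means:   {spread:.3f} (theory: {1 / n ** 0.5:.3f})")
```

Letting students vary n and rerun the simulation, rather than watch a canned animation, is the kind of active involvement the author describes.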

I am particularly proud of the way Excel is used in this book to help students strengthen their intuitive understanding of confidence intervals and hypothesis testing (Chapters 7 and 8). In my experience teaching statistics, understanding hypothesis testing has often been the most elusive goal of an introductory course. For me, this has changed since I began using Excel to perform 1,000 separate tests of a single null hypothesis on three different populations (Chapter 8). The great majority of my students now have a sophisticated understanding of such topics as the difference between Type I and Type II error and the role of the significance level in establishing the probability of each. The material in the chapter uses Excel to help students understand the power of a test and the relationship between sample size and the power of a test.
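The repeated-testing idea can be sketched as follows (the sample size, population, and repetition count here are invented for illustration): test a null hypothesis that is true by construction 1,000 times and count how often it is wrongly rejected.

```python
import random
import statistics

random.seed(7)  # reproducible
alpha, n, trials = 0.05, 25, 1_000
t_crit = 2.064  # two-sided 95% t critical value, 24 degrees of freedom

rejections = 0
for _ in range(trials):
    # H0: mu = 0 is TRUE by construction; every rejection is a Type I error.
    sample = [random.gauss(0, 1) for _ in range(n)]
    t_stat = statistics.fmean(sample) / (statistics.stdev(sample) / n ** 0.5)
    if abs(t_stat) > t_crit:
        rejections += 1

print(f"rejected {rejections} of {trials} true nulls "
      f"({rejections / trials:.1%}; expected about {alpha:.0%})")
```

Roughly 5% of the true nulls get rejected, making the significance level tangible as the Type I error rate; shifting the population mean away from zero turns the same loop into a demonstration of power.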

Many statistics texts now provide add-ins to Excel for use with their text. In my view, many are counterproductive. They add new "black boxes" to Excel, ones which students are unlikely to have access to once they leave their statistics class. Their purpose, it seems to me, is to change Excel to make it better suited to teaching with the traditional pedagogy. By contrast, I feel it is statistical pedagogy that should change to take advantage of the powerful new capabilities computers (and Excel) offer for the learning of statistics. I do provide an add-in to Excel, but it doesn't add new analysis tools; it provides pedagogical tools. For example, my add-in gives Excel the capability to draw multiple random samples from a population, a capability exploited when students explore the Central Limit Theorem or perform multiple hypothesis tests on a population. These are capabilities centered on the learning of statistics. Students who understand statistics will not need these capabilities after they leave class.

This book provides more support than most for the student who is not "fluent" with Excel. Step-by-step instructions are given as each new topic is introduced. As discussion of a topic progresses, students are given increasingly less detail in those instructions. Finally, problems are provided at the end of each chapter for students to do on their own, with answers given in Appendix E.

Why Excel?

Excel is certainly not the most capable statistical package available today; it is inferior in that respect to Minitab, not to mention real research tools like SAS or Stata. A student who really plans to concentrate in statistics will certainly need exposure to more capable statistical software. Nevertheless, Excel has some important characteristics which make it well suited to students in introductory statistics.

Excel is very visual. An Excel user faces a virtual field of numbers. Most statistical packages, reflecting their mainframe heritage, conceal this field of numbers.


Neufeld, John L. Learning Business Statistics with Microsoft Excel 2000. Prentice Hall, 2001. Paperback. ISBN 10: 0130308781; ISBN 13: 9780130308788.