Probability and Statistics with R, First Edition, by Alan T. Arnholt, Ana F. Militino, Maria Dolores Ugarte – ISBN-13: 978-1584888918, ISBN-10: 1584888911

Product details:
ISBN 10: 1584888911
ISBN 13: 978-1584888918
Authors: Alan T. Arnholt, Ana F. Militino, Maria Dolores Ugarte
Designed for an intermediate undergraduate course, Probability and Statistics with R shows students how to solve various statistical problems using both parametric and nonparametric techniques via the open source software R. It provides numerous real-world examples, carefully explained proofs, end-of-chapter problems, and illuminating graphs to facilitate hands-on learning.
Integrating theory with practice, the text briefly introduces the syntax, structures, and functions of the S language before covering important graphical and numerical descriptive methods. The next several chapters elucidate probability and random variable topics, including univariate and multivariate distributions. After exploring sampling distributions, the authors discuss point estimation, confidence intervals, hypothesis testing, and a wide range of nonparametric methods. With a focus on experimental design, the book also presents fixed- and random-effects models as well as randomized block and two-factor factorial designs. The final chapter describes simple and multiple regression analyses.
Demonstrating that R can be used as a powerful teaching aid, this comprehensive text presents extensive treatments of data analysis using parametric and nonparametric techniques. It effectively links statistical concepts with R procedures, enabling the application of the language to the vast world of statistics.
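To illustrate the kind of linkage between statistical concepts and R procedures the book emphasizes, here is a minimal sketch (not taken from the book, with simulated data) that runs a parametric and a nonparametric test of the same hypothesis:

```r
# Illustrative sketch: parametric vs. nonparametric test of H0: center = 5.
set.seed(1)                       # make the simulated sample reproducible
x <- rnorm(30, mean = 5, sd = 2)  # simulated data, not from the book

t_res <- t.test(x, mu = 5)        # parametric: one-sample t-test
w_res <- wilcox.test(x, mu = 5)   # nonparametric: Wilcoxon signed-rank test

c(t = t_res$p.value, wilcoxon = w_res$p.value)  # compare the two p-values
```

Both `t.test()` and `wilcox.test()` are base R functions; the book's chapters on hypothesis testing (Chapter 9) and nonparametric methods (Chapter 10) cover the theory behind each.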
Table of contents:
1 A Brief Introduction to S
1.1 The Basics of S
1.2 Using S
1.3 Data Sets
1.4 Data Manipulation
1.4.1 S Structures
1.4.2 Mathematical Operations
1.4.3 Vectors
1.4.4 Sequences
1.4.5 Reading Data
1.4.5.1 Using scan()
1.4.5.2 Using read.table()
1.4.5.3 Using write()
1.4.5.4 Using dump() and source()
1.4.6 Logical Operators and Missing Values
1.4.7 Matrices
1.4.8 Vector and Matrix Operations
1.4.9 Arrays
1.4.10 Lists
1.4.11 Data Frames
1.4.12 Tables
1.4.13 Functions Operating on Factors and Lists
1.5 Probability Functions
1.6 Creating Functions
1.7 Programming Statements
1.8 Graphs
1.9 Problems
2 Exploring Data
2.1 What Is Statistics?
2.2 Data
2.3 Displaying Qualitative Data
2.3.1 Tables
2.3.2 Barplot
2.3.3 Dot Charts
2.3.4 Pie Charts
2.4 Displaying Quantitative Data
2.4.1 Stem-and-Leaf Plots
2.4.2 Strip Charts (R Only)
2.4.3 Histograms
2.5 Summary Measures of Location
2.5.1 The Mean
2.5.2 The Median
2.5.3 Quantiles
2.5.4 Hinges and Five-Number Summary
2.5.5 Boxplots
2.6 Summary Measures of Spread
2.6.1 Range
2.6.2 Interquartile Range
2.6.3 Variance
2.7 Bivariate Data
2.7.1 Two-Way Contingency Tables
2.7.2 Graphical Representations of Two-Way Contingency Tables
2.7.3 Comparing Samples
2.7.4 Relationships between Two Numeric Variables
2.7.5 Correlation
2.7.6 Sorting a Data Frame by One or More of Its Columns
2.7.7 Fitting Lines to Bivariate Data
2.8 Multivariate Data (Lattice and Trellis Graphs)
2.8.1 Arranging Several Graphs on a Single Page
2.8.2 Panel Functions
2.9 Problems
3 General Probability and Random Variables
3.1 Introduction
3.2 Counting Rules
3.2.1 Sampling With Replacement
3.2.2 Sampling Without Replacement
3.2.3 Combinations
3.3 Probability
3.3.1 Sample Space and Events
3.3.2 Set Theory
3.3.3 Interpreting Probability
3.3.3.1 Relative Frequency Approach to Probability
3.3.3.2 Axiomatic Approach to Probability
3.3.4 Conditional Probability
3.3.5 The Law of Total Probability and Bayes’ Rule
3.3.6 Independent Events
3.4 Random Variables
3.4.1 Discrete Random Variables
3.4.2 Mode, Median, and Percentiles
3.4.3 Expected Values of Discrete Random Variables
3.4.4 Moments
3.4.4.1 Variance
3.4.4.2 Rules of Variance
3.4.5 Continuous Random Variables
3.4.5.1 Numerical Integration with S
3.4.5.2 Mode, Median, and Percentiles
3.4.5.3 Expectation of Continuous Random Variables
3.4.6 Markov’s Theorem and Chebyshev’s Inequality
3.4.7 Weak Law of Large Numbers
3.4.8 Skewness
3.4.9 Moment Generating Functions
3.5 Problems
4 Univariate Probability Distributions
4.1 Introduction
4.2 Discrete Univariate Distributions
4.2.1 Discrete Uniform Distribution
4.2.2 Bernoulli and Binomial Distributions
4.2.3 Poisson Distribution
4.2.4 Geometric Distribution
4.2.5 Negative Binomial Distribution
4.2.6 Hypergeometric Distribution
4.3 Continuous Univariate Distributions
4.3.1 Uniform Distribution (Continuous)
4.3.2 Exponential Distribution
4.3.3 Gamma Distribution
4.3.4 Hazard Function, Reliability Function, and Failure Rate
4.3.5 Weibull Distribution
4.3.6 Beta Distribution
4.3.7 Normal (Gaussian) Distribution
4.4 Problems
5 Multivariate Probability Distributions
5.1 Joint Distribution of Two Random Variables
5.1.1 Joint pdf for Two Discrete Random Variables
5.1.2 Joint pdf for Two Continuous Random Variables
5.2 Independent Random Variables
5.3 Several Random Variables
5.4 Conditional Distributions
5.5 Expected Values, Covariance, and Correlation
5.5.1 Expected Values
5.5.2 Covariance
5.5.3 Correlation
5.6 Multinomial Distribution
5.7 Bivariate Normal Distribution
5.8 Problems
6 Sampling and Sampling Distributions
6.1 Sampling
6.1.1 Simple Random Sampling
6.1.2 Stratified Sampling
6.1.3 Systematic Sampling
6.1.4 Cluster Sampling
6.2 Parameters
6.2.1 Infinite Populations’ Parameters
6.2.2 Finite Populations’ Parameters
6.3 Estimators
6.3.1 Empirical Probability Distribution Function
6.3.2 Plug-In Principle
6.4 Sampling Distribution of the Sample Mean
6.5 Sampling Distribution for a Statistic from an Infinite Population
6.5.1 Sampling Distribution for the Sample Mean
6.5.1.1 First Case: Sampling Distribution of X̄ when Sampling from a Normal Distribution
6.5.1.2 Second Case: Sampling Distribution of X̄ when X Is not a Normal Random Variable
6.5.2 Sampling Distribution for X̄ − Ȳ when Sampling from Two Independent Normal Populations
6.5.3 Sampling Distribution for the Sample Proportion
6.5.4 Expected Value and Variance of the Uncorrected Sample Variance and the Sample Variance
6.6 Sampling Distributions Associated with the Normal Distribution
6.6.1 Chi-Square Distribution (χ²)
6.6.1.1 The Relationship between the χ² Distribution and the Normal Distribution
6.6.1.2 Sampling Distribution for S² and S when Sampling from Normal Populations
6.6.2 t-Distribution
6.6.3 The F Distribution
6.7 Problems
7 Point Estimation
7.1 Introduction
7.2 Properties of Point Estimators
7.2.1 Mean Square Error
7.2.2 Unbiased Estimators
7.2.3 Efficiency
7.2.4 Consistent Estimators
7.2.5 Robust Estimators
7.3 Point Estimate Techniques
7.3.1 Method of Moments Estimators
7.3.2 Likelihood and Maximum Likelihood Estimators
7.3.2.1 Fisher Information
7.3.2.2 Fisher Information for Several Parameters
7.3.2.3 Properties of Maximum Likelihood Estimators
7.3.2.4 Finding Maximum Likelihood Estimators for Multiple Parameters
7.3.2.5 Multi-Parameter Properties of MLEs
7.4 Problems
8 Confidence Intervals
8.1 Introduction
8.2 Confidence Intervals for Population Means
8.2.1 Confidence Interval for the Population Mean when Sampling from a Normal Distribution with Known Population Variance
8.2.1.1 Determining Required Sample Size
8.2.2 Confidence Interval for the Population Mean when Sampling from a Normal Distribution with Unknown Population Variance
8.2.3 Confidence Interval for the Difference in Population Means when Sampling from Independent Normal Distributions with Known Equal Variances
8.2.4 Confidence Interval for the Difference in Population Means when Sampling from Independent Normal Distributions with Known but Unequal Variances
8.2.5 Confidence Interval for the Difference in Means when Sampling from Independent Normal Distributions with Variances That Are Unknown but Assumed Equal
8.2.6 Confidence Interval for a Difference in Means when Sampling from Independent Normal Distributions with Variances That Are Unknown and Unequal
8.2.7 Confidence Interval for the Mean Difference when the Differences Have a Normal Distribution
8.3 Confidence Intervals for Population Variances
8.3.1 Confidence Interval for the Population Variance of a Normal Population
8.3.2 Confidence Interval for the Ratio of Population Variances when Sampling from Independent Normal Distributions
8.4 Confidence Intervals Based on Large Samples
8.4.1 Confidence Interval for the Population Proportion
8.4.2 Confidence Interval for a Difference in Population Proportions
8.4.3 Confidence Interval for the Mean of a Poisson Random Variable
8.5 Problems
9 Hypothesis Testing
9.1 Introduction
9.2 Type I and Type II Errors
9.3 Power Function
9.4 Uniformly Most Powerful Test
9.5 p-Value or Critical Level
9.6 Tests of Significance
9.7 Hypothesis Tests for Population Means
9.7.1 Test for the Population Mean when Sampling from a Normal Distribution with Known Population Variance
9.7.2 Test for the Population Mean when Sampling from a Normal Distribution with Unknown Population Variance
9.7.3 Test for the Difference in Population Means when Sampling from Independent Normal Distributions with Known Variances
9.7.4 Test for the Difference in Means when Sampling from Independent Normal Distributions with Variances That Are Unknown but Assumed Equal
9.7.5 Test for a Difference in Means when Sampling from Independent Normal Distributions with Variances That Are Unknown and Unequal
9.7.6 Test for the Mean Difference when the Differences Have a Normal Distribution
9.8 Hypothesis Tests for Population Variances
9.8.1 Test for the Population Variance when Sampling from a Normal Distribution
9.8.2 Test for Equality of Variances when Sampling from Independent Normal Distributions
9.9 Hypothesis Tests for Population Proportions
9.9.1 Testing the Proportion of Successes in a Binomial Experiment (Exact Test)
9.9.2 Testing the Proportion of Successes in a Binomial Experiment (Normal Approximation)
9.9.3 Testing Equality of Proportions with Fisher’s Exact Test
9.9.4 Large Sample Approximation for Testing the Difference of Two Proportions
9.10 Problems
10 Nonparametric Methods
10.1 Introduction
10.2 Sign Test
10.2.1 Confidence Interval Based on the Sign Test
10.2.2 Normal Approximation to the Sign Test
10.3 Wilcoxon Signed-Rank Test
10.3.1 Confidence Interval for ψ Based on the Wilcoxon Signed-Rank Test
10.3.2 Normal Approximation to the Wilcoxon Signed-Rank Test
10.4 The Wilcoxon Rank-Sum or the Mann-Whitney U-Test
10.4.1 Confidence Interval Based on the Mann-Whitney U-Test
10.4.2 Normal Approximation to the Wilcoxon Rank-Sum and Mann-Whitney U-Tests
10.5 The Kruskal-Wallis Test
10.6 Friedman Test for Randomized Block Designs
10.7 Goodness-of-Fit Tests
10.7.1 The Chi-Square Goodness-of-Fit Test
10.7.2 Kolmogorov-Smirnov Goodness-of-Fit Test
10.7.3 Shapiro-Wilk Normality Test
10.8 Categorical Data Analysis
10.8.1 Test of Independence
10.8.2 Test of Homogeneity
10.9 Nonparametric Bootstrapping
10.9.1 Bootstrap Paradigm
10.9.2 Confidence Intervals
10.10 Permutation Tests
10.11 Problems
11 Experimental Design
11.1 Introduction
11.2 Fixed Effects Model
11.3 Analysis of Variance (ANOVA) for the One-Way Fixed Effects Model
11.4 Power and the Non-Central F Distribution
11.5 Checking Assumptions
11.5.1 Checking for Independence of Errors
11.5.2 Checking for Normality of Errors
11.5.3 Checking for Constant Variance
11.6 Fixing Problems
11.6.1 Non-Normality
11.6.2 Non-Constant Variance
11.7 Multiple Comparisons of Means
11.7.1 Fisher’s Least Significant Difference
11.7.2 Tukey’s Honestly Significant Difference
11.7.3 Displaying Pairwise Comparisons
11.8 Other Comparisons among the Means
11.8.1 Orthogonal Contrasts
11.8.2 The Scheffé Method for All Contrasts