A COURSE IN MONTE CARLO is a concise explanation of the Monte Carlo (MC) method. In addition to providing guidance for generating samples from diverse distributions, it describes how to design, perform, and analyze the results of MC experiments based on independent replications, Markov chain MC, and MC optimization. The text gives considerable emphasis to the variance-reducing techniques of importance sampling, stratified sampling, Rao-Blackwellization, control variates, antithetic variates, and quasi-random numbers. For solving optimization problems it describes several MC techniques, including simulated annealing, simulated tempering, swapping, stochastic tunneling, and genetic algorithms. Examples from many areas show how these techniques perform in practice. Hands-on exercises enable students to experience the challenges encountered when solving real problems. An answer key to selected problems is included.

### Table of Contents

1. INTRODUCTION.

About this Book. Integration and Summation. Improving Efficiency. Minimizing a Function. Improving Efficiency. Reading Plans. Exercises. Software. Notations. References.

2. INDEPENDENT MONTE CARLO.

Independent Monte Carlo (IMC). Why Monte Carlo?. Generating Samples. Choosing a Monte Carlo Sampling Plan. Importance Sampling. Estimating Volume. Interpreting Relative Error. Product and Non-Product Spaces. Bootstrap Method. Regression Analysis. Lessons Learned. Hands-On Exercises. References.
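To give a flavor of the importance sampling topic listed above, here is a minimal Python sketch (the function name, target, and parameters are illustrative choices, not taken from the book): estimating the rare-event tail probability P(Z > 4) for a standard normal Z by sampling from a proposal shifted into the tail and reweighting, where naive independent sampling would almost never see an exceedance.

```python
import math
import random

def importance_tail_prob(n=100_000, seed=1, t=4.0):
    """Estimate P(Z > t) for Z ~ N(0,1) by importance sampling:
    draw from the shifted proposal N(t, 1), which places half its
    mass past t, and reweight by the likelihood ratio."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(t, 1.0)  # draw from the proposal N(t, 1)
        if x > t:
            # weight = phi(x) / phi_shifted(x) = exp(-t*x + t^2/2)
            total += math.exp(-t * x + t * t / 2)
    return total / n
```

The exact value is P(Z > 4) ≈ 3.17e-5; with this proposal the estimator's relative error is tiny, whereas crude Monte Carlo with the same n would see only a handful of exceedances.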

3. SAMPLING GENERATION.

Selecting a Sampling Algorithm. Independent and Dependent Variates. Inverse-Transform Method. Restricted Sampling. Discrete Distributions. Sampling from a Table. Restricted Sampling. Composition Method. Acceptance-Rejection Method. Squeeze Method. Adaptive Method. Ratio-of-Uniforms Method. Lessons Learned. Exercises. References.
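As a small illustration of the inverse-transform method named above (the function and parameters here are ours, for illustration only): if U is Uniform(0,1) and F is an invertible CDF, then F⁻¹(U) has distribution F. For the Exponential(λ) distribution the inverse is available in closed form.

```python
import math
import random

def exp_inverse_transform(lam, n, seed=0):
    """Inverse-transform method: if U ~ Uniform(0,1), then
    X = -ln(U) / lam solves F(X) = 1 - exp(-lam*X) = 1 - U,
    so X has the Exponential(lam) distribution."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]
```

The sample mean should be close to 1/λ, which gives a quick sanity check on any such generator.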

4. PSEUDORANDOM NUMBER GENERATION.

Linear Congruential Generators. Prime Modulus. Evaluating PRNGs. Theoretical Evaluation. Empirical Testing. Collision Test. Birthday Spacings Test. LCGs with Modulus 2^β. M = 2^32. M = 2^48. Mixed Linear Congruential Generators. Combined Generators. AWC and SWB Generators. Twisted GFSR Generators. Mersenne Twisted GFSRs. Lessons Learned. References.
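The prime-modulus linear congruential generator listed above can be sketched in a few lines (illustrative code, not from the book). This is the classic Lehmer "minimal standard" generator with modulus 2^31 - 1 and multiplier 16807, which has the well-known check value x_10000 = 1043618065 when started from seed 1.

```python
def minstd(seed=1):
    """Lehmer (multiplicative) LCG: x_{k+1} = a * x_k mod m with
    prime modulus m = 2^31 - 1 and multiplier a = 16807, a primitive
    root of m, so the generator has full period m - 1."""
    m, a = 2**31 - 1, 16807
    x = seed
    while True:
        x = (a * x) % m
        yield x
```

The period-and-lattice theory, and the empirical tests (collision, birthday spacings) listed in this chapter, are what separate such a generator from a merely plausible-looking one.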

5. VARIANCE REDUCTION.

Stratified Sampling. Unequal Sample Sizes. Rao-Blackwellization. Exceedance Probabilities for Rare Events. Control Variates. Antithetic Variates. Quasirandom Numbers. Exercises. Lessons Learned. Hands-On Exercises. References.
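The antithetic-variates idea listed above admits a one-screen illustration (the integrand and parameters are our choices, not the book's): estimate the integral of e^x on [0,1] by averaging the pair values at U and 1 - U. Because e^x is monotone, the two pair members are negatively correlated, so the pair average has smaller variance than two independent draws.

```python
import math
import random

def antithetic_integral(n_pairs=50_000, seed=2):
    """Estimate integral_0^1 e^x dx = e - 1 with antithetic
    pairs (U, 1-U); monotonicity of e^x makes the pair members
    negatively correlated, reducing variance."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        u = rng.random()
        total += 0.5 * (math.exp(u) + math.exp(1.0 - u))
    return total / n_pairs
```

The exact answer is e - 1 ≈ 1.71828; comparing the empirical variance of the pair averages against that of single draws shows the reduction directly.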

6. MARKOV CHAIN MONTE CARLO.

Hastings-Metropolis Method. Reversibility. Coordinate Updating. Single-Coordinate Updating. Bayesian MCMC. Joint and Full Conditional Distributions. Gibbs Sampling. Convergence for X_j. Non-connected State Spaces and Mixtures. Convergence for λ_t. Local and Global Moves. Problem Size. Variance of Estimate. Choosing a Nominating Kernel. Independence Hastings-Metropolis Sampling. Random-Walk Nominating Kernels. Chains Favoring Smaller σ∞². General-State Spaces. Polynomial Convergence. Discrete-Event Systems. First-Passage Times. Absorbing States. Lessons Learned. Appendix: Modified Acceptance-Rejection Sampling. Exercises. Hands-On Exercises. References.
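The Hastings-Metropolis method with a random-walk nominating kernel, both listed above, can be sketched as follows (target, step size, and names are illustrative, not from the book): a chain whose stationary distribution is N(0,1), with symmetric uniform proposals accepted by the Metropolis rule.

```python
import math
import random

def random_walk_metropolis(n=50_000, step=1.0, seed=3):
    """Random-walk Hastings-Metropolis chain targeting N(0,1).
    Nominating kernel: x' = x + Uniform(-step, step), which is
    symmetric, so the acceptance probability reduces to
    min(1, pi(x') / pi(x))."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n):
        y = x + rng.uniform(-step, step)
        # log pi(y) - log pi(x) = (x^2 - y^2) / 2 for the N(0,1) target
        if math.log(rng.random()) < (x * x - y * y) / 2:
            x = y
        path.append(x)
    return path
```

Unlike independent Monte Carlo, successive states are correlated; the sample-path analysis of Chapter 7 is what turns such a path into a defensible estimate with an error assessment.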

7. MCMC SAMPLE-PATH ANALYSIS.

Multiple Independent Replications. Single Sample Path. Estimating the Warm-Up Interval. Batch-Means Method. FNB Rule. SQRT Rule. ABATCH Rule. Testing for Independence. Appendix: LABATCH.2. Lessons Learned. Hands-On Exercises. References.
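The batch-means method listed above has a simple core (this sketch is ours and omits the batch-size rules such as FNB, SQRT, and ABATCH that the chapter covers): split the path into nonoverlapping batches, and use the spread of the batch means to estimate the variance of the overall sample mean, which direct formulas misstate when the path is autocorrelated.

```python
def batch_means(path, n_batches=20):
    """Nonoverlapping batch-means analysis of a sample path.
    Returns the grand mean and an estimate of the variance of
    that mean, based on the variability among batch means."""
    b = len(path) // n_batches  # observations per batch
    means = [sum(path[i * b:(i + 1) * b]) / b for i in range(n_batches)]
    grand = sum(means) / n_batches
    s2 = sum((m - grand) ** 2 for m in means) / (n_batches - 1)
    return grand, s2 / n_batches
```

For this to be trustworthy, the batches must be long enough that their means are approximately independent; the chapter's rules automate that choice.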

8. OPTIMIZATION VIA MCMC.

Searching for the Global Optimum. Nominating Kernels. Cooling. Initial Temperature. Temperature Gradient. Stage Length. Stopping Rule. Local Minima. Accelerating Convergence. Simulated Tempering. Swapping. Stochastic Tunneling. Genetic Algorithms. Searching for More Than the Minimum. Lessons Learned. Hands-On Exercises. References.
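Several of the topics above (nominating kernels, cooling, temperature gradient, local minima) come together in a minimal simulated annealing sketch (the objective, schedule, and parameters are illustrative choices, not the book's): uphill moves are accepted with probability exp(-Δ/temperature), and a geometric cooling schedule gradually freezes the chain.

```python
import math
import random

def simulated_annealing(f, x0=0.0, t0=1.0, cooling=0.999,
                        n=20_000, seed=4):
    """Simulated annealing with Gaussian nominating moves and a
    geometric cooling schedule; tracks the best state visited, so
    an early escape from a local minimum is never lost."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n):
        y = x + rng.gauss(0.0, 0.5)
        fy = f(y)
        # accept downhill moves always, uphill with prob exp(-delta/t)
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric temperature gradient
    return best, fbest
```

On a double-well objective such as (x² - 1)² + 0.3x, the high-temperature phase lets the chain cross between wells before cooling traps it, which plain descent cannot do.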

9. ADVANCED CONCEPTS IN MCMC.

Exploiting Reversibility. Rapid Mixing. Markov Random Fields. Gibbs Distribution. Potts Model. Random Cluster Model. Problem Size. More General Models. Slice Sampling. Product Slice Sampling. Partial Decoupling. Coupling from the Past. Monotone Markov Chains. Reusing Randomness. Total Number of Steps. Saving Space. Independent Hastings-Metropolis Sampling. Lessons Learned. Exercises. References.
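The slice sampling topic above can be illustrated on a target simple enough that the slice is available in closed form (the target and names are our illustrative choices; for general targets the slice must be located by search, as the chapter discusses): for the unnormalized N(0,1) density f(x) = exp(-x²/2), the slice {y : f(y) > u} is the interval (-w, w) with w = sqrt(-2 ln u).

```python
import math
import random

def slice_sampler(n=20_000, seed=5):
    """Slice sampling for f(x) = exp(-x^2/2), unnormalized N(0,1):
    draw a height u uniformly under f(x), then draw the next x
    uniformly from the horizontal slice {y : f(y) > u}."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        u = (1.0 - rng.random()) * math.exp(-x * x / 2)  # u in (0, f(x)]
        w = math.sqrt(-2.0 * math.log(u))                # slice is (-w, w)
        x = rng.uniform(-w, w)
        out.append(x)
    return out
```

Each update leaves the target invariant, and for this target the chain mixes rapidly, so the empirical mean and variance converge quickly to 0 and 1.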