"The underlying concept is to use randomness to solve problems that might be deterministic in principle."
Monte Carlo Methods: A class of computational algorithms that rely on repeated random sampling to obtain numerical results.
Markov Chain Monte Carlo (MCMC): A class of algorithms for simulating samples from a probability distribution by constructing a Markov chain that has the desired distribution as its stationary distribution.
Importance Sampling: A technique for estimating the expectation of a function with respect to a given probability distribution by sampling from a different distribution that is easier to simulate.
Metropolis-Hastings Algorithm: An MCMC algorithm for drawing samples from a probability distribution, in which candidate states are drawn from a proposal distribution and then accepted or rejected according to an acceptance probability chosen so that the chain's stationary distribution is the target distribution.
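A minimal Python sketch of the idea (the standard-normal target, the random-walk proposal, and the helper name `metropolis_hastings` are illustrative assumptions, not from the source):

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0):
    """Random-walk Metropolis sampler.  The Gaussian proposal is
    symmetric, so the Hastings correction term cancels and the
    acceptance ratio reduces to target(candidate) / target(current).
    `log_target` is the (unnormalized) log-density of the target."""
    x = x0
    samples = []
    for _ in range(n_samples):
        candidate = x + random.gauss(0.0, step)        # proposal draw
        log_alpha = log_target(candidate) - log_target(x)
        if math.log(random.random()) < log_alpha:      # accept/reject step
            x = candidate
        samples.append(x)
    return samples

# Target: a standard normal, known only up to its normalizing constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 50_000)
mean = sum(samples) / len(samples)
```

Note that only density *ratios* are ever evaluated, which is why the method works when the normalizing constant is unknown — the usual situation for Bayesian posteriors.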
Rejection Sampling: A method for generating random variables from a target distribution by drawing candidates from an easier proposal distribution and accepting each candidate with probability proportional to the ratio of the target density to a scaled proposal density; rejected candidates are discarded.
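A short Python sketch, assuming for illustration a Beta(2, 2) target on [0, 1] with a uniform proposal (the bound M = 1.5 is the maximum of the target density):

```python
import random

def rejection_sample(target_pdf, m_bound, n_samples):
    """Draw from `target_pdf` on [0, 1] using a Uniform(0, 1) proposal.
    `m_bound` must satisfy target_pdf(x) <= m_bound for all x in [0, 1]."""
    samples = []
    while len(samples) < n_samples:
        x = random.random()                  # candidate from the proposal
        u = random.random()                  # uniform draw for the accept test
        if u * m_bound < target_pdf(x):      # accept w.p. f(x) / (M * g(x))
            samples.append(x)
        # otherwise the candidate is rejected and we try again
    return samples

# Target: Beta(2, 2) density f(x) = 6x(1 - x), whose maximum is 1.5 at x = 0.5.
draws = rejection_sample(lambda x: 6 * x * (1 - x), 1.5, 10_000)
```

The tighter the envelope M·g(x) hugs the target, the fewer candidates are wasted; here about 1/M = 2/3 of candidates are accepted.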
Gibbs Sampling: An MCMC algorithm that generates a sequence of samples from a multivariate probability distribution by updating each variable in turn, sampling it from its full conditional distribution given the current values of all the other variables.
Transition Matrix: A matrix whose rows and columns represent the states of a Markov chain and whose elements give the probabilities of moving from one state to another.
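A small worked example, assuming a hypothetical two-state "weather" chain: the stationary distribution can be approximated by repeatedly applying the transition matrix to an initial distribution.

```python
def stationary_distribution(P, iters=200):
    """Approximate the stationary distribution of a Markov chain with
    row-stochastic transition matrix P by power iteration: start from
    the uniform distribution and repeatedly multiply by P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Rows/columns are the states ("sunny", "rainy"); entry P[i][j] is the
# probability of moving from state i to state j.
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary_distribution(P)   # -> roughly [0.833, 0.167]
```

For this chain the exact stationary distribution is [5/6, 1/6], which satisfies pi = pi · P.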
Monte Carlo Integration: A method for estimating an integral or expectation by averaging the integrand over samples drawn from a probability distribution on the domain of integration.
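A minimal sketch, assuming uniform sampling on the interval (the integrand exp(-x^2) is an illustrative choice):

```python
import math
import random

def mc_integrate(f, a, b, n):
    """Estimate the integral of f over [a, b] as (b - a) times the
    average of f evaluated at n uniform random points in [a, b]."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# Integral of exp(-x^2) over [0, 1]; the exact value is
# (sqrt(pi) / 2) * erf(1) ~= 0.7468.
est = mc_integrate(lambda x: math.exp(-x * x), 0.0, 1.0, 100_000)
```

The error of such an estimate shrinks like 1/sqrt(n) regardless of dimension, which is why the approach pays off for high-dimensional integrals.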
Central Limit Theorem: A statistical theorem stating that, for independent and identically distributed samples with finite variance, the distribution of the standardized sample mean approaches a normal distribution as the sample size increases.
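A quick numerical check of the theorem (the Exponential(1) base distribution and the sample sizes are illustrative assumptions): if the normal limit holds, standardized sample means should fall inside ±1.96 about 95% of the time.

```python
import math
import random

# Exponential(1) has mean 1 and variance 1, and is strongly skewed --
# yet means of n = 200 draws are already close to normally distributed.
n, reps = 200, 5_000
inside = 0
for _ in range(reps):
    mean = sum(random.expovariate(1.0) for _ in range(n)) / n
    z = (mean - 1.0) * math.sqrt(n)   # standardize: sd of the mean is 1/sqrt(n)
    if abs(z) < 1.96:
        inside += 1
coverage = inside / reps              # close to the normal value 0.95
```

This is the theorem that justifies attaching error bars of size sigma/sqrt(n) to Monte Carlo estimates.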
Convergence Rate: The speed at which a sequence of random variables approaches its limiting distribution.
Variance Reduction: A technique for improving the efficiency of Monte Carlo methods by reducing the variance of the estimate.
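One concrete variance-reduction technique is antithetic variates: pair each uniform draw u with its mirror 1 − u, so that (for a monotone integrand) the two function values are negatively correlated and their average fluctuates less. A sketch, with E[e^U] = e − 1 as an illustrative target:

```python
import math
import random

def mc_plain(f, n):
    """Plain Monte Carlo estimate of E[f(U)], U ~ Uniform(0, 1)."""
    return sum(f(random.random()) for _ in range(n)) / n

def mc_antithetic(f, n):
    """Antithetic-variates estimate: average f over mirrored pairs (u, 1-u)."""
    pairs = n // 2
    total = 0.0
    for _ in range(pairs):
        u = random.random()
        total += f(u) + f(1.0 - u)
    return total / (2 * pairs)

f = math.exp   # target expectation: E[e^U] = e - 1
plain = [mc_plain(f, 1_000) for _ in range(500)]
anti = [mc_antithetic(f, 1_000) for _ in range(500)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)
```

Both estimators are unbiased and use the same number of function evaluations, but the antithetic one has far smaller variance for this integrand.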
Importance Sampling in Sensitivity Analysis: An application of importance sampling for assessing how sensitive the output of a model is to changes in its input parameters, by reweighting existing samples rather than rerunning the model for each parameter setting.
Nested Sampling: A technique for estimating the marginal likelihood (evidence) of a model by sampling from a sequence of constrained distributions that progressively concentrate on regions of higher likelihood, approximating the posterior distribution along the way.
Bayesian Inference: A method for updating the probability of a hypothesis based on new evidence, where the probability of the hypothesis is expressed as a probability distribution.
Sequential Monte Carlo: A class of Monte Carlo methods for simulating samples from a probability distribution in a sequential manner.
Hybrid Monte Carlo: An MCMC algorithm (also known as Hamiltonian Monte Carlo) that combines Hamiltonian dynamics with Metropolis-Hastings acceptance steps to improve the efficiency of the simulation.
Metropolis-Hastings algorithm: The Metropolis-Hastings algorithm is a technique for obtaining a sequence of samples from a probability distribution for which direct sampling is difficult.
Importance Sampling: Importance sampling attempts to improve the efficiency of Monte Carlo integration by focusing its computational effort on the regions of the sample space that contribute most to the expected value of the integrand.
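A sketch of this focusing of effort on the important region, applied to a rare-event probability (the normal tail probability P(X > 3) and the shifted-exponential proposal are illustrative assumptions):

```python
import math
import random

def tail_prob_importance(threshold, n):
    """Estimate P(X > threshold) for X ~ N(0, 1) by importance sampling.
    Plain Monte Carlo would almost never land in the tail; instead we
    sample from a shifted Exponential proposal concentrated there and
    correct each sample by the density ratio (the importance weight)."""
    def norm_pdf(x):
        return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    total = 0.0
    for _ in range(n):
        x = threshold + random.expovariate(threshold)            # proposal draw, x > threshold
        g = threshold * math.exp(-threshold * (x - threshold))   # proposal density at x
        total += norm_pdf(x) / g                                 # importance weight
    return total / n

est = tail_prob_importance(3.0, 50_000)   # true value is about 0.00135
```

Every proposal draw lands in the tail, so every sample contributes information — the variance is orders of magnitude below that of naive sampling from N(0, 1).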
Gibbs Sampling: Gibbs sampling is a Markov Chain Monte Carlo algorithm, which can be used to simulate a desired distribution, such as a posterior distribution from a Bayesian inference problem.
Markov Chain Monte Carlo (MCMC): MCMC is a general class of algorithms which use a Markov chain to sample from a desired probability distribution.
Hamiltonian Monte Carlo: Hamiltonian Monte Carlo (HMC) is a method for sampling from probability distributions in a high-dimensional space.
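A compact sketch of HMC for the simplest possible target, a one-dimensional standard normal (the step size, trajectory length, and target are illustrative assumptions; real uses are high-dimensional):

```python
import math
import random

def hmc_standard_normal(n_samples, step=0.2, n_leapfrog=20):
    """HMC for a 1-D standard normal: potential U(q) = q^2/2, so the
    gradient is U'(q) = q.  Each iteration resamples a momentum,
    simulates Hamiltonian dynamics with the leapfrog integrator, and
    applies a Metropolis correction for the integration error."""
    q = 0.0
    samples = []
    for _ in range(n_samples):
        p = random.gauss(0.0, 1.0)           # fresh momentum draw
        q_new, p_new = q, p
        # Leapfrog integration: half momentum step, alternating full
        # steps, closing half momentum step.
        p_new -= 0.5 * step * q_new
        for _ in range(n_leapfrog - 1):
            q_new += step * p_new
            p_new -= step * q_new
        q_new += step * p_new
        p_new -= 0.5 * step * q_new
        # Metropolis accept/reject on the change in total energy H = U + K.
        h_old = 0.5 * (q * q + p * p)
        h_new = 0.5 * (q_new * q_new + p_new * p_new)
        if math.log(random.random()) < h_old - h_new:
            q = q_new
        samples.append(q)
    return samples

samples = hmc_standard_normal(20_000)
```

The gradient-guided trajectories let the chain take long, informed moves with high acceptance rates, which is what makes HMC effective in high-dimensional spaces where random-walk proposals stall.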
Parallel Tempering: Parallel tempering is a Monte Carlo method used for simulating systems characterized by rugged energy landscapes.
Swendsen-Wang algorithm: The Swendsen-Wang algorithm is a cluster Monte Carlo method, originally developed for the Ising and Potts models, that flips whole clusters of spins at once to reduce critical slowing down near phase transitions.
Wang-Landau algorithm: The Wang-Landau algorithm is a Monte Carlo method used to estimate the density of states and investigate phase transitions in complex systems.
Transition Path Sampling: Transition path sampling is a Monte Carlo method used to study rare events in complex systems where the dynamics may be difficult to characterize.
Umbrella Sampling: Umbrella Sampling is a Monte Carlo method used to compute the potential of mean force along a reaction coordinate, applying biasing potentials so that otherwise rarely visited regions of the coordinate are adequately sampled.
"Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution."
"They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches."
"Monte Carlo methods are useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures."
"Monte Carlo–based predictions of failure, cost overruns and schedule overruns are routinely better than human intuition or alternative 'soft' methods."
"By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean (a.k.a. the 'sample mean') of independent samples of the variable."
"Evaluation of multidimensional definite integrals with complicated boundary conditions."
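A toy instance of such an integral (the three-dimensional unit ball is an illustrative choice): the "complicated boundary condition" is handled simply by testing each sample point against it.

```python
import random

# Estimate the volume of the unit ball in 3 dimensions by sampling
# points uniformly in the enclosing cube [-1, 1]^3 and counting how
# many satisfy the boundary condition x^2 + y^2 + z^2 <= 1.
n = 200_000
hits = 0
for _ in range(n):
    x, y, z = (random.uniform(-1.0, 1.0) for _ in range(3))
    if x * x + y * y + z * z <= 1.0:
        hits += 1
volume = 8.0 * hits / n   # cube volume times hit fraction; exact is 4*pi/3 ~= 4.19
```

The same hit-or-miss pattern works for any region whose membership test is cheap to evaluate, no matter how irregular the boundary is.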
"When the probability distribution of the variable is parameterized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler. The central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution."
"The samples being generated by the MCMC method will be samples from the desired (target) distribution."
"These flows of probability distributions can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states."
"These mean-field particle techniques rely on sequential interacting samples."
"When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes."
"The computational cost associated with a Monte Carlo simulation can be staggeringly high."
"The embarrassingly parallel nature of the algorithm allows this large cost to be reduced (perhaps to a feasible level) through parallel computing strategies in local processors, clusters, cloud computing, GPU, FPGA, etc."