Let $G$ be a finite group, let $H, K$ be subgroups of $G$, and let $H \backslash G / K$ be the double coset space. If $Q$ is a probability on $G$ which is constant on conjugacy classes ($Q(s^{-1} t s) = Q(t)$), then the random walk driven by $Q$ on $G$ projects to a Markov chain on $H \backslash G / K$. This allows analysis of the lumped chain using the representation theory of $G$. Examples include coagulation-fragmentation processes and natural Markov chains on contingency tables. Our main example projects the random transvections walk on $GL_n(q)$ onto a Markov chain on $S_n$ via the Bruhat decomposition. The chain on $S_n$ has a Mallows stationary distribution and interesting mixing time behavior. The projection illuminates the combinatorics of Gaussian elimination. Along the way, we give a representation of the sum of transvections in the Hecke algebra of double cosets, which describes the Markov chain as a mixture of Metropolis chains. Some extensions and examples of double coset Markov chains with $G$ a compact group are discussed.
Consider the following experiment: a deck with m copies of n different card types is randomly shuffled, and a guesser attempts to guess the cards sequentially as they are drawn. Each time a guess is made, some amount of ‘feedback’ is given. For example, one could tell the guesser the true identity of the card they just guessed (the complete feedback model) or they could be told nothing at all (the no feedback model). In this paper we explore a partial feedback model, where upon guessing a card, the guesser is only told whether or not their guess was correct. We show in this setting that, uniformly in n, at most $m+O(m^{3/4}\log m)$ cards can be guessed correctly in expectation. This resolves a question of Diaconis and Graham from 1981, where even the $m=2$ case was open.
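A quick Monte Carlo sketch (ours, not from the paper) of the partial feedback setting. The "fixed type" strategy below, which always guesses card type 0, is a hypothetical baseline: it scores exactly $m$ correct on every shuffle, matching the leading term of the bound above.

```python
import random

def partial_feedback_game(n, m, strategy, rng):
    """Play one game: the deck has m copies of each of n types.
    strategy(history) -> guess; history is the list of (guess, correct?)
    pairs, the only information available under partial feedback."""
    deck = [t for t in range(n) for _ in range(m)]
    rng.shuffle(deck)
    history, correct = [], 0
    for card in deck:
        guess = strategy(history)
        hit = (guess == card)
        history.append((guess, hit))
        correct += hit
    return correct

rng = random.Random(0)
fixed = lambda history: 0   # always guess type 0: exactly m hits per game
scores = [partial_feedback_game(n=5, m=4, strategy=fixed, rng=rng)
          for _ in range(200)]
print(sum(scores) / len(scores))   # exactly 4.0 = m
```

The theorem says no feedback-using strategy can improve on this baseline by more than $O(m^{3/4}\log m)$, uniformly in $n$.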
Consider the barycentric subdivision which cuts a given triangle along its medians to produce six new triangles. Uniformly choosing one of them and iterating this procedure gives rise to a Markov chain. We show that, almost surely, the triangles forming this chain become flatter and flatter in the sense that their isoperimetric values go to infinity with time. Nevertheless, if the triangles are renormalized through a similitude to have their longest edge equal to [0, 1] ⊂ ℂ (with 0 also adjacent to the shortest edge), their aspect does not converge and we identify the limit set of the opposite vertex with the segment [0, 1/2]. In addition we prove that the largest angle converges to π in probability. Our approach is probabilistic, and these results are deduced from the investigation of a limit iterated random function Markov chain living on the segment [0, 1/2]. The stationary distribution of this limit chain is particularly important in our study.
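The subdivision step is easy to simulate; a minimal sketch in Python with triangle vertices as complex numbers (the function names are ours). Each of the six median triangles has exactly one sixth of the parent's area.

```python
import random

def area(a, b, c):
    # unsigned area via the imaginary part of conj(b-a)*(c-a)
    return abs(((b - a).conjugate() * (c - a)).imag) / 2

def subdivide(a, b, c):
    """The six triangles cut out by the three medians."""
    g = (a + b + c) / 3                      # centroid: the medians meet here
    mab, mbc, mca = (a + b) / 2, (b + c) / 2, (c + a) / 2
    return [(a, mab, g), (mab, b, g), (b, mbc, g),
            (mbc, c, g), (c, mca, g), (mca, a, g)]

def isoperimetric(a, b, c):
    p = abs(b - a) + abs(c - b) + abs(a - c)
    return p * p / area(a, b, c)             # blows up for flat triangles

random.seed(1)
tri = (0j, 1 + 0j, 0.3 + 0.8j)
for _ in range(40):                          # the Markov chain of the abstract
    tri = random.choice(subdivide(*tri))
print(isoperimetric(*tri))
```

Running longer chains, the isoperimetric value tends to grow, consistent with the almost-sure flattening described above.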
Let $X_1, X_2, \ldots$ be an ergodic Markov chain on a countable state space. We construct a strong stationary dual chain $X^*$ whose first hitting times give sharp bounds on the convergence to stationarity for $X$. Examples include birth and death chains, queueing models, and the excess life process of renewal theory. This paper gives the first extension of the stopping time arguments of Aldous and Diaconis [1,2] to infinite state spaces.
We suggest a simple algorithm for Monte Carlo generation of uniformly distributed variables on a compact group. Examples include random permutations, Rubik's cube positions, orthogonal, unitary, and symplectic matrices, and elements of $GL_n$ over a finite field. The algorithm reduces to the “standard” fast algorithm when there is one, but many new examples are included.
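For the orthogonal group, a well-known fast method is QR factorization of a Gaussian matrix with a sign correction; a minimal NumPy sketch (assuming NumPy, not code from the paper):

```python
import numpy as np

def haar_orthogonal(n, rng):
    """Sample from Haar measure on O(n): QR-factor a Gaussian matrix,
    then fix signs so the factorization is the canonical one (R has a
    positive diagonal); without this step the output is not Haar."""
    a = rng.standard_normal((n, n))
    q, r = np.linalg.qr(a)
    return q * np.sign(np.diag(r))   # flip column j by sign of R[j, j]

rng = np.random.default_rng(0)
m = haar_orthogonal(4, rng)
```

The subgroup-based algorithm of the abstract covers groups where no such direct trick is available.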
A deck of n cards is shuffled by repeatedly taking off the top m cards and inserting them in random positions. We give a closed form expression for the distribution after any number of steps. This is used to give the asymptotics of the approach to stationarity: for m fixed and n large, it takes shuffles to get close to random. The formulae lead to new subalgebras in the group algebra of the symmetric group.
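One step of the shuffle is straightforward to simulate. A sketch (ours; the insertion convention, each removed card dropped at an independent uniform position, is our reading of the model):

```python
import random

def top_m_to_random(deck, m, rng):
    """Remove the top m cards, then insert each at a uniform random
    position in the remaining deck."""
    packet, rest = deck[:m], deck[m:]
    for card in packet:
        rest.insert(rng.randrange(len(rest) + 1), card)
    return rest

rng = random.Random(0)
deck = list(range(52))
for _ in range(30):
    deck = top_m_to_random(deck, m=4, rng=rng)
```

Repeating the step and comparing the resulting distribution to uniform is how the mixing-time asymptotics above are phrased.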
Despite a true antipathy to the subject, Hardy contributed deeply to modern probability. His work with Ramanujan begat probabilistic number theory. His work on Tauberian theorems and divergent series has probabilistic proofs and interpretations. Finally, Hardy spaces are a central ingredient in stochastic calculus. This paper reviews his prejudices and accomplishments, through these examples.
Let $M$ be a random matrix chosen from Haar measure on the unitary group $U_n$. Let $Z = X + iY$ be a standard complex normal random variable, with $X$ and $Y$ independent, mean $0$ and variance $1/2$ normal variables. We show that for $j = 1, 2, \ldots$, $\mathrm{Tr}(M^j)$ are independent and distributed as $\sqrt{j}\,Z$ asymptotically as $n \to \infty$. This result is used to study the set of eigenvalues of $M$. Similar results are given for the orthogonal, symplectic, and symmetric groups.
The most frequently discussed method of revising a subjective probability distribution P to obtain a new distribution P*, based on the occurrence of an event E, is Bayes's rule: P*(A) = P(AE)/P(E). Richard Jeffrey (1965, 1968) has argued persuasively that Bayes's rule is not the only reasonable way to update: use of Bayes's rule presupposes that both P(E) and P(AE) have been previously quantified. In many instances this will clearly not be the case (for example, the event E may not have been anticipated), and it is of interest to consider how one might proceed.
Example. Suppose we are thinking about three trials of a new surgical procedure. Under the usual circumstances a probability assignment is made on the eight possible outcomes Ω = {000, 001, 010, 011, 100, 101, 110, 111}, where 1 denotes a successful outcome, 0 not. Suppose a colleague informs us that another hospital had performed this type of operation 100 times, with 80 successful outcomes. This is clearly relevant information and we obviously want to revise our opinion. The information cannot be put in terms of the occurrence of an event in the original eight-point space Ω, and the Bayes rule is not directly available. Among many possible approaches, four methods of incorporating the information will be discussed: (1) complete reassessment; (2) retrospective conditioning; (3) exchangeability; (4) Jeffrey's Rule.
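Jeffrey's rule replaces conditioning on an event by reweighting a partition: if the new evidence fixes fresh probabilities $P^*(E_i)$ on a partition $\{E_i\}$, then $P^*(A) = \sum_i P(A \mid E_i)\, P^*(E_i)$. A toy sketch for the surgical example (the specific numbers below are ours, purely illustrative): partition Ω by the outcome of the first trial and shift its success probability from 0.5 to 0.8.

```python
from itertools import product

# Prior: three independent trials, each succeeding with probability 0.5.
omega = [''.join(bits) for bits in product('01', repeat=3)]
prior = {w: 0.5 ** 3 for w in omega}

def jeffrey_update(prior, partition, new_probs):
    """P*(w) = P(w | E_i) * P*(E_i) for the cell E_i containing w."""
    post = {}
    for cell, q in zip(partition, new_probs):
        total = sum(prior[w] for w in cell)
        for w in cell:
            post[w] = prior[w] / total * q
    return post

# Partition by the first-trial outcome; suppose the colleague's data
# moves our probability of a first success to 0.8.
success = [w for w in omega if w[0] == '1']
failure = [w for w in omega if w[0] == '0']
post = jeffrey_update(prior, [success, failure], [0.8, 0.2])
print(round(sum(post[w] for w in success), 10))
```

Within each cell the conditional probabilities are untouched; only the partition's weights change, which is exactly what distinguishes Jeffrey's rule from a full reassessment.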
For simple random walk on the integers, consider the chance that the walk has traveled distance k from its start given that its first return is at time 2n. We derive a limiting approximation accurate to order 1/n. We give a combinatorial explanation for a functional equation satisfied by the limit and show this yields the functional equation of Riemann's zeta function.
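The conditional law is easy to tabulate exactly for small $n$ by enumerating all $\pm 1$ step sequences; a minimal sketch (our own enumeration, not the paper's method):

```python
from collections import Counter
from itertools import product

def first_return_walks(n):
    """All simple walks of length 2n from 0 to 0 whose first return
    to the origin occurs exactly at time 2n."""
    walks = []
    for steps in product((-1, 1), repeat=2 * n):
        pos, path = 0, []
        for s in steps:
            pos += s
            path.append(pos)
        if path[-1] == 0 and all(p != 0 for p in path[:-1]):
            walks.append(path)
    return walks

n = 4
walks = first_return_walks(n)
# conditional distribution of the maximum distance from the start
dist = Counter(max(abs(p) for p in path) for path in walks)
print(len(walks), dict(dist))
```

For $n = 4$ there are $2\,C_3 = 10$ such walks ($C_k$ the Catalan numbers), split by maximum distance as $\{2\!:\!2,\ 3\!:\!6,\ 4\!:\!2\}$; the paper's limit approximation describes this distribution as $n$ grows.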
Garten and Knopp [7] introduced the notion of infinite iteration of Cesàro ($C_1$) averages, which they called $H_\infty$ summability. Flehinger [6] (apparently unaware of [7]) produced the first nontrivial example of an $H_\infty$ summable sequence: the sequence $\{a_i\}_{i=1}^{\infty}$ where $a_i$ is 1 or 0 as the lead digit of the integer $i$ is one or not. Duran [2] has provided an elegant treatment of $H_\infty$ summability as a special case of summability with respect to an ergodic semigroup of transformations.
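Iterated averaging is needed because Flehinger's sequence is not $C_1$ summable: its running means oscillate forever between roughly $1/9$ (just before a new power of ten) and above $1/2$ (after the block $10^k, \ldots, 2 \cdot 10^k - 1$ of lead-digit-one integers). A quick sketch (ours) exhibiting one such oscillation:

```python
def lead_digit_is_one(i):
    # strip digits until only the leading one remains
    while i >= 10:
        i //= 10
    return i == 1

def cesaro(seq):
    """One C1 averaging pass: the running means of the sequence."""
    out, total = [], 0.0
    for k, x in enumerate(seq, start=1):
        total += x
        out.append(total / k)
    return out

a = [1.0 if lead_digit_is_one(i) else 0.0 for i in range(1, 20000)]
c1 = cesaro(a)
print(c1[9998], c1[-1])   # mean up to 9999, then up to 19999
```

The first value is $1111/9999 \approx 0.111$ and the second is $11111/19999 \approx 0.556$, so a single $C_1$ pass does not converge; iterating the averaging is what $H_\infty$ summability formalizes.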
A needle of length l dropped at random on a grid of parallel lines a distance d apart can have multiple intersections if l > d. The distribution of the number of intersections and approximate moments for large l are derived. The distribution is shown to converge weakly to an arc sine law as l/d → ∞.
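Counting intersections is a one-liner once the needle's projection onto the direction perpendicular to the lines is known; a simulation sketch (variable names ours). The classical mean $2l/(\pi d)$ gives a sanity check.

```python
import math
import random

def intersections(y, theta, l, d):
    """Number of grid lines (at heights 0, ±d, ±2d, ...) crossed by a
    needle of length l centered at height y, at angle theta to the lines."""
    h = (l / 2) * abs(math.sin(theta))   # half the perpendicular extent
    return math.floor((y + h) / d) - math.floor((y - h) / d)

rng = random.Random(0)
l, d, trials = 5.0, 1.0, 100_000
mean = sum(intersections(rng.uniform(0, d), rng.uniform(0, math.pi), l, d)
           for _ in range(trials)) / trials
print(mean)   # should be near 2*l/(pi*d) ~ 3.18
```

The arc sine limit concerns the full distribution of this count as l/d → ∞, not just its mean.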