The spread of infectious disease in a human community or the proliferation of fake news on social media can be modeled as a randomly growing tree-shaped graph. The history of the random growth process is often unobserved but contains important information such as the source of the infection. We consider the problem of statistical inference on aspects of the latent history using only a single snapshot of the final tree. Our approach is to apply random labels to the observed unlabeled tree and analyze the resulting distribution of the growth process, conditional on the final outcome. We show that this conditional distribution is tractable under a shape-exchangeability condition, which we introduce here, and that this condition is satisfied for many popular models for randomly growing trees such as uniform attachment, linear preferential attachment and uniform attachment on a D-regular tree. For inference of the root under shape-exchangeability, we propose computationally scalable algorithms for constructing confidence sets with valid frequentist coverage as well as bounds on the expected size of the confidence sets. We also provide efficient sampling algorithms which extend our methods to a wide class of inference problems.

We propose a market-based scoring (MBS) method for evaluating the performance of probabilistic forecasts. We demonstrate our approach on the 2020 U.S. elections for President, Senate and House of Representatives by evaluating the forecasts posted on the FiveThirtyEight website based on their performance against the prediction markets at PredictIt.

Our analysis finds that PredictIt and FiveThirtyEight perform comparably based on traditional metrics such as calibration and accuracy. For market-based scoring, we find that if we ignore PredictIt's fees and commissions, then FiveThirtyEight forecasts beat the markets overall; but if we factor in fees and commissions, the markets beat FiveThirtyEight. We discuss implications of this analysis for forecasting future election cycles and for betting market design and operations.

In addition to the analysis presented here, a running tally of results from the above analysis was updated and reported throughout the 2020 campaign at https://pivs538.herokuapp.com/.

Basic principles of statistical inference are commonly violated in network data analysis. Under the current approach, it is often impossible to identify a model that accommodates known empirical behaviors, possesses crucial inferential properties, and accurately models the data generating process. In the absence of one or more of these properties, sensible inference from network data cannot be assured. Our proposed framework decomposes every network model into a (relatively) exchangeable data generating process and a sampling mechanism that relates observed data to the population network. This framework, which encompasses all models in current use as well as many new models, such as edge exchangeable models, that lie outside the existing paradigm, offers a sound context within which to develop theory and methods for network analysis.


Three case studies in the challenges and failures of probabilistic reasoning and common sense in the U.S. response to Covid-19.

The most common bets in 19th-century casinos were even-money bets on red or black in Roulette or Trente et Quarante. Many casino gamblers allowed themselves to be persuaded that they could make money for sure in these games by following betting systems such as the d'Alembert. What made these systems so seductive? Part of the answer is that some of the systems, including the d'Alembert, can give bettors a very high probability of winning a small or moderate amount. But there is also a more subtle aspect of the seduction. When the systems do win, their return on investment --- the gain relative to the amount of money the bettor has to take out of their pocket and put on the table to cover their bets --- can be astonishingly high. Systems such as le tiers et le tout, which offer a large gain when they do win rather than a high probability of winning, also typically have a high upside return on investment. In order to understand these high returns on investment, we need to recognize that the denominator --- the amount invested --- is random, as it depends on how successive bets come out.

In this article, we compare some systems on their return on investment and their success in hiding their pitfalls. Systems that provide a moderate gain with a very high probability seem to accomplish this by stopping when they are ahead and more generally by betting less when they are ahead or at least have just won, while betting more when they are behind or have just lost. For historical reasons, we call this martingaling. Among martingales, the d'Alembert seems especially good at making an impressive return on investment quickly, encouraging gamblers' hope that they can use it so gingerly as to avoid the possible large losses, and this may explain why its popularity was so durable.
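The mechanics of martingaling can be made concrete with a small simulation. The sketch below uses a simplified d'Alembert with illustrative stopping and accounting rules (not the article's exact setup): it plays even-money roulette bets at p = 18/37 and tracks both the net gain and the maximum out-of-pocket outlay, which serves as the random denominator of the return on investment.

```python
import random

def dalembert(p=18/37, max_rounds=100, rng=random):
    """Play the d'Alembert on an even-money bet: raise the stake by one unit
    after a loss, lower it by one after a win; stop when the stake returns
    to zero (every loss has been paired with a win) or after max_rounds bets.
    Returns (net gain, maximum out-of-pocket outlay)."""
    stake, profit, outlay = 1, 0, 0
    for _ in range(max_rounds):
        # fresh money is needed only when accumulated profit can't cover the bet
        outlay = max(outlay, stake - profit)
        if rng.random() < p:         # win the even-money bet
            profit += stake
            stake -= 1
        else:                        # lose
            profit -= stake
            stake += 1
        if stake == 0:
            break
    return profit, outlay

random.seed(1)
results = [dalembert() for _ in range(10_000)]
wins = sum(1 for gain, _ in results if gain > 0)
print("P(session ends ahead):", wins / len(results))
roi_when_ahead = sum(g / o for g, o in results if g > 0) / wins
print("mean return on investment when ahead:", roi_when_ahead)
```

The simulation exhibits both seductions described above: most sessions end ahead, and, because the denominator is random, the return on investment conditional on winning looks impressive even though the bet has negative expectation.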

We also discuss the lessons that this aspect of gambling can have for evaluating success in business and finance and for evaluating the results of statistical testing.

Featured by Mark Buchanan in an article for Nature Physics.

Comment on the proposal to rename the R.A. Fisher Lecture.

A response to John Ioannidis's article *A fiasco in the making? As the coronavirus pandemic takes hold, we are making decisions without reliable data* (March 17, 2020) in which he downplays the severe risks posed by the coronavirus pandemic.

This article was also translated into Finnish, Spanish and Greek. The Greek translation was published in the Greek newspaper *To Vima*.

When gambling, think probability.

When hedging, think plausibility.

When preparing, think possibility.

When this fails, stop thinking. Just survive.

Naive probabilism is the (naive) view, held by many technocrats and academics, that all rational thought boils down to probability calculations. This viewpoint is behind the obsession with 'data-driven methods' that has overtaken the hard sciences, soft sciences, pseudosciences and non-sciences. It has infiltrated politics, society and business. It's the workhorse of formal epistemology, decision theory and behavioral economics. Because it is mostly applied in low- or no-stakes academic investigations and philosophical meandering, few have noticed its many flaws. Real-world applications of naive probabilism, however, pose disproportionate risks which scale exponentially with the stakes, ranging from harmless (and also helpless) in many academic contexts to destructive in the most extreme events (war, pandemic). The 2019--2020 coronavirus outbreak (COVID-19) is a living example of the dire consequences of such probabilistic naiveté. As I write this on March 13, 2020, we are in the midst of a six-continent pandemic, the world economy is collapsing and our future is bound to look very different from the recent past. The major damage caused by the spread of COVID-19 is attributable to a failure to act and a refusal to acknowledge what was in plain sight. This shared negligence stems from a blind reliance on naive probabilism and the denial of basic common sense by global and local leaders, and many in the general public.

Every exchangeable Feller process taking values in a suitably nice combinatorial state space can be constructed by a system of iterated random Lipschitz functions. In discrete time, the construction proceeds by iterated application of independent, identically distributed functions, while in continuous time the random functions occur as the atoms of a time-homogeneous Poisson point process. We further show that every exchangeable Feller process projects to a Feller process in an appropriate limit space, akin to the projection of partition-valued processes into the ranked simplex and graph-valued processes into the space of graph limits. Together, our main theorems establish common structural features shared by all exchangeable combinatorial Feller processes, regardless of the dynamics or resident state space, thereby generalizing behaviors previously observed for exchangeable coalescent and fragmentation processes as well as other combinatorial stochastic processes. If, in addition, an exchangeable Feller process evolves on a state space satisfying the *n*-disjoint amalgamation property for all *n* ≥ 1, then its jump measure can be decomposed explicitly in the sense of Levy--Ito--Khintchine.

I prove a connection between the logical framework for intuitive probabilistic reasoning (IPR) introduced by Crane (2017) and sets of imprecise probabilities. More specifically, this connection provides a straightforward interpretation to sets of imprecise probabilities as subjective credal states, giving a formal semantics for Crane's formal system of IPR. The main theorem establishes the IPR framework as a potential logical foundation for imprecise probability that is independent of the traditional probability calculus.

The notion of typicality appears in scientific theories, philosophical arguments, mathematical inquiry, and everyday reasoning. Typicality is invoked in statistical mechanics to explain the behavior of gases. It is also invoked in quantum mechanics to explain the appearance of quantum probabilities. Typicality plays an implicit role in non-rigorous mathematical inquiry, as when a mathematician forms a conjecture based on personal experience of what seems typical in a given situation. Less formally, the language of typicality is a staple of the common parlance: we often claim that certain things are, or are not, typical. But despite the prominence of typicality in science, philosophy, mathematics, and everyday discourse, no formal logics for typicality have been proposed. In this paper, we propose two formal systems for reasoning about typicality. One system is based on propositional logic: it can be understood as formalizing objective facts about what is and is not typical. The other system is based on the logic of intuitionistic type theory: it can be understood as formalizing subjective judgments about typicality.

We define a relationally exchangeable structure as a random combinatorial structure whose law is invariant with respect to relabeling its relations, instead of its elements. Examples of relationally exchangeable structures include edge exchangeable random graphs and hypergraphs and also exchangeable random set partitions (when viewed from a certain perspective). Relationally exchangeable models arise most naturally in certain network science applications, particularly for network datasets generated by processes of interactions. We prove a representation theorem for the class of relationally exchangeable structures and discuss some consequences and open problems. The representation refines Kingman's paintbox correspondence for exchangeable random partitions.

I make the distinction between *academic* probabilities, which are not rooted in reality and thus have no tangible real-world meaning, and *real* probabilities, which attain a real-world meaning as the odds that the subject asserting the probabilities is forced to accept for a bet against the stated outcome. With this I discuss how the replication crisis can be resolved easily by requiring that probabilities published in the scientific literature are real, instead of academic. At present, all probabilities and derivatives that appear in published work, such as P-values, Bayes factors, confidence intervals, etc., are the result of academic probabilities, which are not useful for making meaningful assertions about the real world.

I compare forecasts of the 2018 U.S. midterm elections based on (i) probabilistic predictions posted on the FiveThirtyEight blog and (ii) prediction market prices on PredictIt.com. Based on empirical forecast and price data collected prior to the election, the analysis assesses the calibration and accuracy according to Brier and logarithmic scoring rules. I also analyze the performance of a strategy that invests in PredictIt based on the FiveThirtyEight forecasts.
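As an illustration of the scoring rules involved, the sketch below computes Brier and logarithmic scores on toy data, together with a simplified market-based profit rule. The payoff and fee conventions here are illustrative assumptions, not PredictIt's actual contract or fee rules, and the data are invented.

```python
import math

def brier(p, outcome):
    """Brier score of a binary forecast p against outcome in {0, 1} (lower is better)."""
    return (p - outcome) ** 2

def log_score(p, outcome):
    """Logarithmic score: negative log-probability assigned to the realized outcome."""
    return -math.log(p if outcome else 1.0 - p)

def mbs_profit(p, price, outcome, fee=0.0):
    """Illustrative market-based score: back the forecast against the market.
    Buy a 'yes' contract at `price` when the forecast exceeds it, else a 'no'
    contract at 1 - price; `fee` is a flat commission on winnings (a stand-in
    assumption, not PredictIt's actual fee schedule)."""
    if p > price:
        gross = (1.0 - price) if outcome else -price
    else:
        gross = price if not outcome else -(1.0 - price)
    return gross * (1.0 - fee) if gross > 0 else gross

# toy data: (forecast probability, market price, outcome)
races = [(0.89, 0.80, 1), (0.30, 0.42, 0), (0.65, 0.55, 1)]
for name, idx in [("forecast", 0), ("market", 1)]:
    scored = [(race[idx], race[2]) for race in races]
    print(name,
          "Brier:", sum(brier(p, y) for p, y in scored) / len(scored),
          "Log:", sum(log_score(p, y) for p, y in scored) / len(scored))
print("MBS profit, no fees:", sum(mbs_profit(p, m, y) for p, m, y in races))
print("MBS profit, 10% fee:", sum(mbs_profit(p, m, y, fee=0.10) for p, m, y in races))
```

Comparing the no-fee and with-fee totals shows how commissions alone can flip the sign of a forecast-versus-market comparison.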

This article describes our motivation behind the development of RESEARCHERS.ONE, our mission, and how the new platform will fulfill this mission. We also compare our approach with other recent reform initiatives such as post-publication peer review and open access publications.

I introduce a formalization of probability which takes the concept of 'evidence' as primitive. In parallel to the intuitionistic conception of truth, in which 'proof' is primitive and an assertion A is judged to be true just in case there is a proof witnessing it, here 'evidence' is primitive and A is judged to be probable just in case there is evidence supporting it. I formalize this outlook by representing propositions as types in Martin-Lof type theory (MLTT) and defining a 'probability type' on top of the existing machinery of MLTT, whose inhabitants represent pieces of evidence in favor of a proposition. One upshot of this approach is the potential for a mathematical formalism which treats 'conjectures' as mathematical objects in their own right. Other intuitive properties of evidence occur as theorems in this formalism.

Publication of scientific research all but requires a supporting statistical analysis, anointing statisticians the de facto gatekeepers of modern scientific discovery. While the potential of statistics for providing scientific insights is undeniable, there is a crisis in the scientific community due to poor statistical practice. Unfortunately, widespread calls to action have not been effective, in part because of statisticians' tendency to make statistics appear simple. We argue that statistics can meet the needs of science only by empowering scientists to make sound judgments that account for both the nuances of the application and the complexity inherent in effective statistical practice. In particular, we emphasize a set of statistical principles that scientists can adapt to their ever-expanding scope of problems.

A recent proposal to "redefine statistical significance" (Benjamin et al. *Nature Human Behaviour*, 2017) claims that lowering the default cutoff for statistical significance from 0.05 to 0.005 would "immediately improve the reproducibility of scientific research in many fields".
Benjamin et al. assert specifically that false positive rates would fall below 10% and replication rates would double under the lower cutoff.
I analyze these claims here, showing how the failure to account for P-hacking and other widespread reporting issues leads to exaggerated and misleading conclusions about the potential impact of the Benjamin et al. proposal. My analysis shows that the benefits of redefining statistical significance are far less certain than projected by Benjamin et al., with most plausible scenarios showing only a marginal improvement, and a possible decline, in replication rate under the proposed 0.005 cutoff.

Commentary by Andrew Gelman.

Response by one of the authors (E.J. Wagenmakers) on the Bayesian Spectacles blog.

More discussion by Tim van der Zee.

Exchangeable models for countable vertex-labeled graphs cannot replicate the large sample behaviors of sparsity and power law degree distribution observed in many network datasets. Out of this mathematical impossibility emerges the question of how network data can be modeled in a way that reflects known empirical behaviors and respects basic statistical principles. We address this question by observing that edges, not vertices, act as the statistical units in networks constructed from interaction data, making a theory of edge-labeled networks more natural for many applications. In this context we introduce the concept of edge exchangeability, which unlike its vertex exchangeable counterpart admits models for networks with sparse and/or power law structure. Our characterization of edge exchangeable networks gives rise to a class of nonparametric models, akin to graphon models in the vertex exchangeable setting. Within this class, we identify a tractable family of distributions with a clear interpretation and suitable theoretical properties, whose significance in estimation, prediction, and testing we demonstrate.

We study random relational structures that are *relatively exchangeable*---that is, whose distributions are invariant under the automorphisms of a reference structure *M*. When *M* has *trivial definable closure*, every relatively exchangeable structure satisfies a general Aldous-Hoover-type representation. If *M* satisfies the stronger properties of *ultrahomogeneity* and the *n*-*disjoint amalgamation property* (*n*-DAP) for every *n* ≥ 1, then relatively exchangeable structures have a more precise description whereby each component depends locally on *M*.

We use various combinatorial and probabilistic techniques to study growth rates for the probability that a random permutation from the Mallows distribution avoids consecutive patterns. The Mallows distribution behaves like a *q*-analogue of the uniform distribution by weighting each permutation \pi by *q*^{inv(\pi)}, where inv(\pi) is the number of inversions in \pi and *q* is a positive, real-valued parameter. We prove that the growth rate exists for all patterns and all *q* > 0, and we generalize Goulden and Jackson's cluster method to keep track of the number of inversions in permutations avoiding a given consecutive pattern. Using singularity analysis, we approximate the growth rates for length-3 patterns, monotone patterns, and non-overlapping patterns starting with 1, and we compare growth rates between different patterns. We also use Stein's method to show that, under certain assumptions on *q*, the length of \sigma, and inv(\sigma), the number of occurrences of a given pattern \sigma is well approximated by the normal distribution.
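The q^{inv} weighting lends itself to direct simulation. The following sketch (an independent illustration, not the paper's analytic method) samples Mallows permutations via the Lehmer code, whose entries are independent truncated geometric variables, and estimates the probability of avoiding a consecutive pattern.

```python
import random

def mallows(n, q, rng=random):
    """Sample a Mallows(q) permutation of 1..n, P(pi) proportional to q^inv(pi),
    via its Lehmer code: the code entries are independent truncated geometrics."""
    code = []
    for i in range(n):
        m = n - 1 - i                        # code entry c_i ranges over 0..m
        weights = [q ** k for k in range(m + 1)]
        u, k = rng.random() * sum(weights), 0
        while u > weights[k]:
            u -= weights[k]
            k += 1
        code.append(k)
    remaining = list(range(1, n + 1))
    return [remaining.pop(c) for c in code]  # decode the Lehmer code

def inversions(pi):
    n = len(pi)
    return sum(1 for i in range(n) for j in range(i + 1, n) if pi[i] > pi[j])

def avoids_consecutive(pi, pattern):
    """True if no window of adjacent entries of pi is order-isomorphic to pattern."""
    k = len(pattern)
    order = lambda w: tuple(sorted(range(k), key=lambda t: w[t]))
    return all(order(pi[i:i + k]) != order(pattern)
               for i in range(len(pi) - k + 1))

random.seed(0)
sample = [mallows(8, 0.5) for _ in range(2000)]
print("mean inversions at q = 0.5:",
      sum(inversions(p) for p in sample) / len(sample))
print("P(avoids consecutive 123):",
      sum(avoids_consecutive(p, (1, 2, 3)) for p in sample) / len(sample))
```

Since the code entries sum to inv(\pi), the product of their truncated geometric weights is exactly q^{inv(\pi)}, which is what makes this sampler valid.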

We describe an Aldous-Hoover-type characterization of random relational structures that are exchangeable relative to a fixed structure which may have various equivalence relations. Our main theorem gives the common generalization of the results on relative exchangeability due to Ackerman (2015) and Crane and Towsner (2015) and hierarchical exchangeability results due to Austin and Panchenko (2014).

Combinatorial Levy processes evolve on general state spaces of countable combinatorial structures. In this setting, the usual Levy process properties of stationary, independent increments are defined in an unconventional way in terms of the symmetric difference operation on sets. In discrete time, the description of combinatorial Levy processes gives rise to the notion of combinatorial random walks. These processes behave differently than random walks and Levy processes on other state spaces. Standard examples include processes on sets, graphs, and n-ary relations, but the framework permits far more general possibilities. The main theorems characterize both finite and infinite state space combinatorial Levy processes by a unique sigma-finite measure. Under the additional assumption of exchangeability, we obtain a more explicit characterization by which every exchangeable combinatorial Levy process corresponds to a Poisson point process on the same state space. Associated behavior of the projection into a space of limiting objects reflects certain structural features of the covering process.

Using techniques from Poisson approximation, we prove explicit error bounds on the number of permutations that avoid any pattern. Most generally, we bound the total variation distance between the joint distribution of pattern occurrences and a corresponding joint distribution of independent Bernoulli random variables, which as a corollary yields a Poisson approximation for the distribution of the number of occurrences of any pattern. We also investigate occurrences of consecutive patterns in random Mallows permutations, of which uniform random permutations are a special case. These bounds allow us to estimate the probability that a pattern occurs any number of times and, in particular, the probability that a random permutation avoids a given pattern.

An alternative approach to modeling latent time-varying sequences of clusters demonstrates certain benefits over existing methods for analyzing Supreme Court voting data. The family of Markov chains presented satisfies important statistical properties, including exchangeability, consistency under subsampling, and reversibility with respect to a tractable class of two-parameter partition models. These properties make the model robust to missing data, choice of labels, and changes in the sample over time. When combined with an appropriate model for the response, exchangeability and consistency give rise to the stronger model properties of label equivariance and non-interference, respectively, which together make inferences readily interpretable. These and other aspects of the approach are illustrated with a detailed analysis of voting data in the Supreme Court over the period 1946--2012. The results of this analysis agree with what is known about the Court's behavior during this period of time. In some cases, this approach detects aspects of the Court that other quantitative analyses, such as Martin--Quinn scores, do not.

The transition law of every exchangeable Feller process on the space of countable graphs is determined by a sigma-finite measure on the space of {0,1}x{0,1}-valued arrays. In discrete-time, this characterization amounts to a construction from an independent, identically distributed sequence of exchangeable random functions. In continuous-time, the behavior is enriched by a Levy--Ito-type decomposition of the jump measure into mutually singular components that govern global, vertex-level, and edge-level dynamics. Furthermore, every such process almost surely projects to a Feller process in the space of graph limits.

Motivated by the gene tree/species tree problem from statistical phylogenetics, we extend the class of Markov branching trees to a parametric family of distributions on fragmentation trees that satisfies a generalized Markov branching property. The main theorems establish important statistical properties of this model, specifically necessary and sufficient conditions under which a family of trees can be constructed consistently as sample size grows. We also consider the question of attaching random edge lengths to these trees.

Ewens's sampling formula exemplifies the harmony of mathematical theory, statistical application, and scientific discovery. The formula not only contributes to the foundations of evolutionary molecular genetics, the neutral theory of biodiversity, Bayesian nonparametrics, combinatorial stochastic processes, and inductive inference but also emerges from fundamental concepts in probability theory, algebra, and number theory. With an emphasis on its far-reaching influence throughout statistics and probability, we highlight these and many other consequences of Ewens's seminal discovery.

We study a broad class of stochastic process models for dynamic networks that satisfy the minimal regularity conditions of (i) exchangeability and (ii) cadlag sample paths. Our main theorems characterize these processes through their induced behavior in the space of graph limits. Under the assumption of time-homogeneous Markovian dependence, we classify the discontinuities of these processes into three types, prove bounded variation of the sample paths in graph limit space and express the process as a mixture of time-inhomogeneous, exchangeable Markov processes with cadlag sample paths.

Any exchangeable, time-homogeneous Markov process on [k]^{\mathbb{N}} with cadlag sample paths projects to a Markov process on the simplex whose sample paths are cadlag and of locally bounded variation. Furthermore, any such process has a de Finetti-type description as a mixture of independent, identically distributed copies of time-inhomogeneous Markov processes on [k]. In the Feller case, these time-inhomogeneous Markov processes have a relatively simple structure; however, in the non-Feller case, a greater variety of behaviors is possible since the transition law of the underlying Markov process on [k]^{\mathbb{N}} can depend in a nontrivial way on its exchangeable sigma-algebra.

The three parameter cluster model is a combinatorial stochastic process that generates categorical response sequences by randomly perturbing a fixed clustering parameter. This clear relationship between the observed data and the underlying clustering is particularly attractive in cluster analysis, in which supervised learning is a common goal and missing data is a familiar issue. The model is well-equipped for this task, as it can handle missing data, perform out-of-sample inference, and accommodate both independent and dependent data sequences. Moreover, its clustering parameter lies in the unrestricted space of partitions, so that the number of clusters need not be specified beforehand. We establish these and other theoretical properties and also demonstrate the model on data sets from epidemiology, genetics, political science, and legal studies.

We propose a Bayesian method for clustering from discrete data structures that commonly arise in genetics and other applications. This method is equivariant with respect to relabeling units; unsampled units do not interfere with sampled data; and missing data do not hinder inference. Cluster inference using the posterior mode performs well on simulated and real data sets, and the posterior predictive distribution enables supervised learning based on a partial clustering of the sample.

We introduce the exchangeable rewiring process for modeling time-varying networks. The process fulfills fundamental mathematical and statistical properties and can be easily constructed from the novel operation of random rewiring. We derive basic properties of the model, including consistency under subsampling, exchangeability, and the Feller property. A reversible sub-family related to the Erdos-Renyi model arises as a special case.

We introduce a family of Markov processes on set partitions with a bounded number of blocks, called Lipschitz partition processes. We construct these processes explicitly by a Poisson point process on the space of Lipschitz continuous maps on partitions. By this construction, the Markovian consistency property is readily satisfied; that is, the finite restrictions of any Lipschitz partition process comprise a compatible collection of finite state space Markov chains. We further characterize the class of exchangeable Lipschitz partition processes by a novel set-valued matrix operation.

Superposition is a mapping on point configurations that sends the n-tuple (x_1, ..., x_n) ∈ X^n to the n-point configuration {x_1, ..., x_n} ⊂ X, counted with multiplicity. It is an additive set operation under which the superposition of a k-point configuration in X^n is a kn-point configuration in X. A Poisson superposition process is the superposition in X of a Poisson process in the space of finite-length X-valued sequences. From properties of Poisson processes as well as some algebraic properties of formal power series, we obtain an explicit expression for the Janossy measure of Poisson superposition processes, and we study their law under domain restriction. Examples of well-known Poisson superposition processes include compound Poisson, negative binomial, and permanental (boson) processes.
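One of the named examples can be simulated directly: the negative binomial distribution arises as a compound Poisson law with logarithmic-series cluster sizes, which fits the superposition picture as a Poisson number of sequences, each contributing its length in points. A minimal sketch (function names are my own, and the samplers use simple inversion rather than anything from the paper):

```python
import math
import random

def log_series(p, rng):
    """Sample the logarithmic series distribution, P(k) = -p^k / (k log(1-p)), k >= 1."""
    norm = -math.log(1.0 - p)
    u, k, cum = rng.random(), 1, 0.0
    while True:
        cum += p ** k / (k * norm)
        if u <= cum or k >= 500:   # k >= 500 guards against float round-off
            return k
        k += 1

def poisson(lam, rng):
    """Sample Poisson(lam) by inverting the cumulative distribution."""
    m, term = 0, math.exp(-lam)
    u, cum = rng.random(), term
    while u > cum:
        m += 1
        term *= lam / m
        cum += term
    return m

def nb_by_superposition(r, p, rng):
    """Negative binomial NB(r, p) built as a Poisson superposition: a Poisson
    number of clusters, each placing a logarithmic-series number of points."""
    clusters = poisson(-r * math.log(1.0 - p), rng)
    return sum(log_series(p, rng) for _ in range(clusters))

rng = random.Random(42)
r, p = 3, 0.4
draws = [nb_by_superposition(r, p, rng) for _ in range(20_000)]
print("simulated mean:", sum(draws) / len(draws))
print("theoretical mean r*p/(1-p):", r * p / (1 - p))
```

The simulated mean should sit close to rp/(1-p), the mean of NB(r, p) in the failure-count parameterization used here.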

We study k-divisible partition structures, which are families of random set partitions whose block sizes are divisible by a fixed integer k ≥ 1. In this setting, exchangeability corresponds to the usual invariance under relabeling by arbitrary permutations; however, for k > 1, the ordinary deletion maps on partitions no longer preserve divisibility, and so a random deletion procedure is needed to obtain a partition structure. We describe explicit Chinese restaurant-type seating rules for generating families of exchangeable k-divisible partitions that are consistent under random deletion. We further introduce the notion of Markovian partition structures, which are ensembles of exchangeable Markov chains on k-divisible partitions that are consistent under a random process of Markovian deletion. The Markov chains we study are reversible and refine the class of Markov chains introduced in J. Appl. Probab. 48(3):778–791.

We show structural properties of the system of ordered partitions of [n] := {1, ..., n} all of whose left-to-right minima occur in odd locations, called left-to-right arrangements. Our main objectives are (i) to show that the set of all finite left-to-right arrangements is a projective system under a natural choice of restriction operation, (ii) to establish a non-trivial embedding of set partitions of [n] into the set of left-to-right arrangements of [n], and (iii) to illustrate how this embedding can be used to easily enumerate certain sets of pattern-avoiding set partitions.

We characterize the class of exchangeable Feller processes evolving on partitions with boundedly many blocks. In continuous-time, the jump measure decomposes into two parts: a $\sigma$-finite measure on stochastic matrices and a collection of nonnegative real constants. This decomposition prompts a Levy-Ito representation. In discrete-time, the evolution is described more simply by a product of independent, identically distributed random matrices.

We study both time-invariant and time-varying Gibbs distributions for configurations of particles into disjoint clusters. Specifically, we introduce and give some fundamental properties for a class of partition models, called permanental partition models, whose distributions are expressed in terms of the α-permanent of a similarity matrix parameter. We show that, in the time-invariant case, the permanental partition model is a refinement of the celebrated Pitman–Ewens distribution; whereas, in the time-varying case, the permanental model refines the Ewens cut-and-paste Markov chains (J. Appl. Probab. 43(3):778–791, 2011). By a special property of the α-permanent, the partition function can be computed exactly, allowing us to make several precise statements about this general model, including a characterization of exchangeable and consistent permanental models.

We show that the permanent of a matrix is a linear combination of determinants of block diagonal matrices which are simple functions of the original matrix. To prove this, we first show a more general identity involving \alpha-permanents: for arbitrary complex numbers \alpha and \beta, we show that the \alpha-permanent of any matrix can be expressed as a linear combination of \beta-permanents of related matrices. Some other identities for the \alpha-permanent of sums and products of matrices are shown, as well as a relationship between the \alpha-permanent and general immanants. We conclude with a discussion of the computational complexity of the \alpha-permanent and provide some numerical illustrations.
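The \alpha-permanent is easy to compute naively for small matrices, which makes such identities directly checkable. The sketch below (brute force over all n! permutations, for illustration only) evaluates per_\alpha and confirms two standard special cases: \alpha = 1 gives the permanent and \alpha = -1 gives (-1)^n det(A).

```python
import itertools
import math

def cycle_count(sigma):
    """Number of cycles of a permutation sigma given as a tuple mapping i -> sigma[i]."""
    seen, count = set(), 0
    for i in range(len(sigma)):
        if i not in seen:
            count += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = sigma[j]
    return count

def alpha_permanent(A, alpha):
    """Brute-force alpha-permanent: sum over all permutations sigma of
    alpha^{#cycles(sigma)} * prod_i A[i][sigma(i)].  alpha = 1 recovers the
    permanent; alpha = -1 gives (-1)^n times the determinant."""
    n = len(A)
    return sum(alpha ** cycle_count(s) * math.prod(A[i][s[i]] for i in range(n))
               for s in itertools.permutations(range(n)))

A = [[1, 2], [3, 4]]
print("permanent:", alpha_permanent(A, 1))       # 1*4 + 2*3
print("(-1)^n det:", alpha_permanent(A, -1))     # (+1) * (1*4 - 2*3)
```

For the all-ones n×n matrix, per_\alpha reduces to the rising factorial \alpha(\alpha+1)···(\alpha+n-1), a handy sanity check; the factorial cost of this enumeration is of course exactly the computational-complexity issue the abstract alludes to.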

We study consistent collections of random fragmentation trees with random integer-valued edge lengths. We prove several equivalent necessary and sufficient conditions under which Geometrically distributed edge lengths can be consistently assigned to a Markov branching tree. Among these conditions is a characterization by a unique probability measure, which plays a role similar to the dislocation measure for homogeneous fragmentation processes. We discuss this and other connections to previous work on Markov branching trees and homogeneous fragmentation processes.

We study the convergence rate to stationarity for a class of exchangeable partition-valued Markov chains called cut-and-paste chains. The law governing the transitions of a cut-and-paste chain is determined by products of i.i.d. stochastic matrices, which describe the chain induced on the simplex by taking asymptotic frequencies. Using this representation, we establish upper bounds for the mixing times of ergodic cut-and-paste chains, and under certain conditions on the distribution of the governing random matrices we show that the "cutoff phenomenon" holds.

We study a family of Markov processes on $\mathcal{P}^{(k)}$, the space of partitions of the natural numbers with at most *k* blocks. The process can be constructed from a Poisson point process on $\mathbb{R}^+\times\prod_{i=1}^k\mathcal{P}^{(k)}$ with intensity $dt\otimes\varrho_{\nu}^{(k)}$, where $\varrho_{\nu}$ is the distribution of the paintbox based on the probability measure $\nu$ on the set of ranked-mass partitions of 1 and $\varrho_{\nu}^{(k)}$ is the product measure on $\prod_{i=1}^k\mathcal{P}^{(k)}$. We show that these processes possess a unique stationary measure, and we discuss a particular set of reversible processes for which transition probabilities can be written down explicitly.