# Probabilistic multi-conclusion validity

I’ve been thinking a bit recently about how to generalize standard results relating probability to validity to a multi-conclusion setting.

The standard result is the following (where the uncertainty of p is 1 minus the probability of p):

An argument is classically valid
iff
for all classical probability functions, the sum of the uncertainties of the premises is at least as great as the uncertainty of the conclusion.
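To make the standard result concrete, here is a small sketch (my own illustration, not part of the result itself): model a classical probability function as a distribution over the truth-value assignments to the atoms, and check the uncertainty bound for modus ponens (p, p → q ⊨ q) on randomly sampled distributions.

```python
import itertools
import random

# A classical probability function can be modelled as a distribution over
# the valuations (truth-value assignments) of the atoms; Pr(formula) is
# the total weight of the valuations that make the formula true.
atoms = ["p", "q"]
valuations = [dict(zip(atoms, bits))
              for bits in itertools.product([True, False], repeat=len(atoms))]

def pr(formula, dist):
    return sum(w for v, w in zip(valuations, dist) if formula(v))

def u(formula, dist):
    """Uncertainty = 1 - probability."""
    return 1 - pr(formula, dist)

# Modus ponens: p, p -> q |= q (classically valid).
prem1 = lambda v: v["p"]
prem2 = lambda v: (not v["p"]) or v["q"]
concl = lambda v: v["q"]

random.seed(0)
for _ in range(1000):
    ws = [random.random() for _ in valuations]
    dist = [w / sum(ws) for w in ws]
    # Sum of premise uncertainties is at least the conclusion's uncertainty.
    assert u(prem1, dist) + u(prem2, dist) >= u(concl, dist) - 1e-9
```

Sampling of course doesn't prove the "for all" direction; it just illustrates what the inequality says.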

It’ll help if we restate this as follows:

An argument is classically valid
iff
for all classical probability functions, the sum of the uncertainties of the premises + the probability of the conclusion is at least 1.
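The restatement is just algebra on the definition of uncertainty: writing u for uncertainty and p for probability, since u(C) = 1 − p(C), for premises A_1, …, A_n and conclusion C we have

```latex
\sum_{i=1}^{n} u(A_i) \ge u(C)
\quad\Longleftrightarrow\quad
\sum_{i=1}^{n} u(A_i) \ge 1 - p(C)
\quad\Longleftrightarrow\quad
\sum_{i=1}^{n} u(A_i) + p(C) \ge 1.
```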

Stated this way, there’s a natural generalization available:

A multi-conclusion argument is classically valid
iff
for all classical probability functions, the sum of the uncertainties of the premises + the probabilities of the conclusions is greater than or equal to 1.
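A quick sanity check on the generalized criterion (my own choice of example): take p ∨ q ⊨ p, q, which is multi-conclusion valid even though neither conclusion follows on its own. For any probability function Pr(p) + Pr(q) ≥ Pr(p ∨ q), so the required sum is always at least 1:

```python
import itertools
import random

atoms = ["p", "q"]
valuations = [dict(zip(atoms, bits))
              for bits in itertools.product([True, False], repeat=len(atoms))]

def pr(formula, dist):
    return sum(w for v, w in zip(valuations, dist) if formula(v))

disj = lambda v: v["p"] or v["q"]   # the single premise, p or q
conc_p = lambda v: v["p"]           # the two conclusions
conc_q = lambda v: v["q"]

random.seed(0)
for _ in range(1000):
    ws = [random.random() for _ in valuations]
    dist = [w / sum(ws) for w in ws]
    # u(p or q) + Pr(p) + Pr(q) >= 1, since Pr(p) + Pr(q) >= Pr(p or q).
    total = (1 - pr(disj, dist)) + pr(conc_p, dist) + pr(conc_q, dist)
    assert total >= 1 - 1e-9
```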

And once we’ve got it stated, it’s a corollary of the standard result (I believe).
It’s pretty easy to see directly that this works in the “if” direction, just by considering classical probability functions which only assign 1 or 0 to propositions: if the argument is invalid, some valuation makes all the premises true and all the conclusions false, and the corresponding 0/1 probability function makes the sum in question 0, which is less than 1.
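Here is the 0/1 trick made concrete (with my own toy example, the invalid argument p ⊨ q): the counter-valuation, read as a point-mass probability function, drives the sum to 0.

```python
import itertools

atoms = ["p", "q"]
valuations = [dict(zip(atoms, bits))
              for bits in itertools.product([True, False], repeat=len(atoms))]

# An invalid argument: p |= q.
premises = [lambda v: v["p"]]
conclusions = [lambda v: v["q"]]

# A counter-valuation makes every premise true and every conclusion false.
counter = next(v for v in valuations
               if all(P(v) for P in premises)
               and not any(C(v) for C in conclusions))

# The 0/1 probability function concentrated on that valuation.
def pr(formula):
    return 1.0 if formula(counter) else 0.0

# Premise uncertainties are all 0 and conclusion probabilities are all 0,
# so the criterion's sum is 0 < 1, witnessing the failure.
total = sum(1 - pr(P) for P in premises) + sum(pr(C) for C in conclusions)
assert total == 0.0
```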

In the “only if” direction (writing u for uncertainty and p for probability):

Consider A,B|=C,D. By a standard premise/conclusion swap result, this holds iff A,B,~C,~D|= holds, i.e. the argument with those premises and an empty conclusion set, which we can treat as having a contradiction as its conclusion; a contradiction has probability 0, hence uncertainty 1. We also know u(~C)=p(C) and u(~D)=p(D). By the standard result, the swapped argument holds iff, for every classical probability function, the sum of the uncertainties of its premises is at least as great as the uncertainty of its conclusion. That is, it holds iff u(A)+u(B)+u(~C)+u(~D) is greater than or equal to 1. But by the above identities, that holds iff u(A)+u(B)+p(C)+p(D) is greater than or equal to 1, which is exactly the multi-conclusion criterion. The same reasoning should generalize to arbitrary finite cases. QED.
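The whole chain can be brute-force checked on a concrete instance (my own choice of formulas): take A = p, B = p → (q ∨ r), C = q, D = r, for which A,B ⊨ C,D holds. The sketch verifies the premise/conclusion swap by enumerating valuations, then samples probability functions to confirm the inequality.

```python
import itertools
import random

atoms = ["p", "q", "r"]
valuations = [dict(zip(atoms, bits))
              for bits in itertools.product([True, False], repeat=len(atoms))]

A = lambda v: v["p"]
B = lambda v: (not v["p"]) or v["q"] or v["r"]   # p -> (q or r)
C = lambda v: v["q"]
D = lambda v: v["r"]

# Multi-conclusion validity: every valuation satisfying all the premises
# satisfies at least one of the conclusions.
valid = all(C(v) or D(v) for v in valuations if A(v) and B(v))

# Premise/conclusion swap: A,B |= C,D iff A,B,~C,~D has no model.
swap_unsat = not any(A(v) and B(v) and not C(v) and not D(v)
                     for v in valuations)
assert valid == swap_unsat

def pr(formula, dist):
    return sum(w for v, w in zip(valuations, dist) if formula(v))

random.seed(0)
for _ in range(1000):
    ws = [random.random() for _ in valuations]
    dist = [w / sum(ws) for w in ws]
    # u(A) + u(B) + p(C) + p(D) >= 1, as the generalized result predicts.
    lhs = ((1 - pr(A, dist)) + (1 - pr(B, dist))
           + pr(C, dist) + pr(D, dist))
    assert lhs >= 1 - 1e-9
```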