Thursday, June 19, 2025

Duality in logic and algebra

In logic, we consider a proposition $A$ and its negation $\lnot A$.
If we have a logical implication $A\to B$, 
then we also have its contrapositive $\lnot A \leftarrow \lnot B$.
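
Here is that step written out in Lean 4, just as a sketch (Lean's $\lnot A$ is itself defined as the implication from $A$ to the false proposition, which matches the intuitionistic convention I adopt below):

    -- Given α : A → B, precomposing with α turns a refutation of B
    -- into a refutation of A: this is the contrapositive.
    example (A B : Prop) (α : A → B) : ¬B → ¬A :=
      fun nb a => nb (α a)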

In linear algebra, we consider a vector space $V$ and its dual $V^\star$.
https://en.wikipedia.org/wiki/Dual_space
If we have a linear transformation $L : V\to W$, 
then we also have its dual transformation $V^\star \leftarrow W^\star : L^\star$.
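
Lean's Mathlib packages this construction; if I recall the names correctly they are Module.Dual and LinearMap.dualMap, so the following sketch assumes those names and has not been checked against any particular Mathlib version:

    import Mathlib

    -- A linear map L : V →ₗ[k] W should induce L.dualMap going the
    -- other way between the dual spaces, by precomposition.
    example (k V W : Type) [Field k]
        [AddCommGroup V] [Module k V] [AddCommGroup W] [Module k W]
        (L : V →ₗ[k] W) : Module.Dual k W →ₗ[k] Module.Dual k V :=
      L.dualMap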


To see the similarity between the situations, 
we need to write $\lnot B$ and elements of $W^\star$ as arrows, and 
the passage from 
$A\to B$ to $\lnot A \leftarrow \lnot B$, and from $V\to W$ to $V^\star \leftarrow W^\star$,
as simply composition of arrows.
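
The common pattern can be stated once: fix a target $R$; then any arrow $f : X\to Y$ induces an arrow the other way between the "arrows into $R$", namely precomposition with $f$. A minimal Lean 4 sketch (plain functions only, so the linear-algebra instance here ignores linearity):

    universe u v w

    -- Precomposition: an arrow f : X → Y pulls arrows-into-R on Y
    -- back to arrows-into-R on X.
    def dualize {X : Sort u} {Y : Sort v} {R : Sort w}
        (f : X → Y) (t : Y → R) : X → R :=
      fun x => t (f x)

    -- With R := False this is the contrapositive ...
    example (A B : Prop) (α : A → B) : ¬B → ¬A :=
      fun t => dualize (R := False) α t

    -- ... and with R := k it is the dual transformation L^⋆, minus linearity.
    example (k V W : Type) (L : V → W) (t : W → k) : V → k :=
      dualize (R := k) L t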

In logic, we follow the intuitionistic convention 
and define $\bot$, falsum, as the always-false proposition, 
and the negation of a proposition $B$ as the implication $B\to \bot$, i.e., $B$ implies the false proposition $\bot$.
Thus we write the proposition $\lnot B$ as the implication arrow $B\to\bot$: $B$ implies $\bot$.
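
This is exactly how negation is set up in Lean, for instance, where the identification holds by definition:

    -- ¬B is literally defined as the implication B → False.
    example (B : Prop) : ¬B = (B → False) := rfl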

In linear algebra, each vector space has, by definition of vector space, a field $k$ of scalars which it is a vector space "over".
In the basic examples the field $k$ is either the real numbers $\mathbb R$ or the complex numbers $\mathbb C$.
Then given a vector space $W$ we define its dual $W^\star$ as the vector space of linear transformations from $W$ to its ground field $k$.
Thus an element of $W^\star$ is a linear transformation $W\to k$.
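
For a concrete (made-up) example, take $k = \mathbb R$ and $V = W = \mathbb R^2$. Then $t(x,y) = 2x + 3y$ is an element of $W^\star$, and if $L(x,y) = (x+y,\ x-y)$, the dual transformation acts by precomposition:

$$(L^\star t)(x,y) = t(L(x,y)) = 2(x+y) + 3(x-y) = 5x - y.$$

In matrix terms, with vectors as columns and elements of the dual space as rows, $L^\star$ multiplies a row vector by the matrix of $L$ on the right; its matrix with respect to the dual bases is the transpose of the matrix of $L$.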

Without further ado, here are some diagrams illustrating the above, comparing fragments of logic and algebra, 
including writing logical implications as if they were linear transformations between vector spaces (this is definitely non-standard):

$$\begin{array} {} A && \xrightarrow[\kern8em]{\textstyle\alpha} && B \\ & \llap{\lnot A} \searrow & \xleftarrow{\textstyle\alpha\kern.2em \text{contrapositive} = \alpha^\star} & \swarrow \rlap{\lnot B} \\ \\ && \bot & \\ \end{array}$$

$$\begin{array} {} V && \xrightarrow[\kern4em]{\textstyle L} && W \\ & \llap{V^\star \ni L^\star t = t\circ L} \searrow & \xleftarrow[\kern 1em]{\textstyle L^\star} & \swarrow \rlap{t \in W^\star} \\ && k & \\ \end{array}$$

$$\begin{array} {} A && \xleftarrow[\kern8em]{\textstyle \alpha \kern.2em \text{converse} = \alpha^{-1}} && B \\ & \llap{\lnot A} \searrow & \xrightarrow{ \textstyle \alpha \kern.2em \text{inverse} = {(\alpha^{-1})}^\star } & \swarrow \rlap{\lnot B} \\ \\ && \bot & \\ \end{array}$$

$$\begin{array} {} V && \xleftarrow[\kern4em]{\textstyle L^{-1}} && W \\ & \llap{V^\star \ni s} \searrow & \xrightarrow[\kern 1em]{\textstyle (L^{-1})^\star} & \swarrow \rlap{s\circ L^{-1} = (L^{-1})^\star s \in W^\star} \\ && k & \\ \end{array}$$

So $\alpha \kern.2em \text{inverse}$ is the contrapositive of $\alpha \kern.2em \text{converse}$.
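
In Lean this is again a one-liner (a sketch): given the converse $B\to A$, precomposition yields the arrow $\lnot A\to\lnot B$.

    -- The converse β : B → A has contrapositive ¬A → ¬B,
    -- which is the arrow labelled "α inverse" in the diagram above.
    example (A B : Prop) (β : B → A) : ¬A → ¬B :=
      fun na b => na (β b)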

AFAIK it is not customary in logic to label implications.
However, to emphasize the similarity of logic to algebra, I have done that here.
Again, the suggested correspondence is between $\lnot B$ and $W^\star$, etc.
$t$ and $s$ are included just to show how the dual transformations act on elements of the dual spaces.
