Background
Let $\mathcal{L}$ be an $m \times n$ square lattice on a torus, and let $\Sigma$ be a finite set. We think of $\Sigma$ as the possible spin values that can be assigned to the points of the lattice; a state is therefore a map $\mathcal{L} \to \Sigma$. In the Interactions-Round-a-Face (IRF) model, there is a weight function $W: \Sigma^4 \to \mathbb{R}$, and the total Boltzmann weight of a state $s: \mathcal{L} \to \Sigma$ is $\overline{W}(s) = \prod_{\text{faces } (i, j, k, l)} W(s_i, s_j, s_k, s_l)$. (The vertices of a face are labeled starting from the upper left and going clockwise, say.) The partition function is defined to be $Z = \sum_{\text{states } s} \overline{W}(s)$. Many exactly solvable two-dimensional lattice models arise as special cases of this model.
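For concreteness, here is a minimal brute-force sketch of this partition function in Python/NumPy. The encoding of spins as integers 0, ..., q - 1, the random choice of `W`, and the function name are my own illustrative choices:

```python
import itertools
import numpy as np

q = 2          # |Sigma|, with spins encoded as 0, ..., q - 1
m, n = 3, 3    # dimensions of the torus
rng = np.random.default_rng(0)
# W[a, b, c, d]: weight of a face whose corners read (a, b, c, d)
# clockwise from the upper left
W = rng.random((q, q, q, q))

def brute_force_Z(W, m, n):
    """Sum the Boltzmann weight of every state s: L -> Sigma directly."""
    q = W.shape[0]
    Z = 0.0
    for flat in itertools.product(range(q), repeat=m * n):
        s = np.array(flat).reshape(m, n)
        w = 1.0
        for r in range(m):
            for c in range(n):
                # face whose upper-left corner is site (r, c);
                # both directions wrap around the torus
                w *= W[s[r, c], s[r, (c + 1) % n],
                       s[(r + 1) % m, (c + 1) % n], s[(r + 1) % m, c]]
        Z += w
    return Z

print(brute_force_Z(W, m, n))
```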
The standard analysis of this partition function begins by considering the row transfer matrix, defined as follows. Given a pair of adjacent rows with spin vectors $\phi = (s_1, \ldots, s_n)$, $\phi' = (s_1', \ldots, s_n')$, we define $T_{\phi, \phi'} = \prod_{i = 1}^n W(s_i, s_{i + 1}, s_{i + 1}', s_i')$, where indices are taken mod $n$ (so $s_{n + 1} = s_1$), since each row wraps around the torus. We think of $T$ as defining a $\lvert \Sigma \rvert^n \times \lvert \Sigma \rvert^n$ matrix. One then observes that the partition function is simply the trace of $T^m$.
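Continuing the sketch above (and reusing `q`, `m`, `n`, `W`, and `brute_force_Z` from it), one can build the transfer matrix explicitly and check the trace identity numerically:

```python
def transfer_matrix(W, n):
    q = W.shape[0]
    rows = list(itertools.product(range(q), repeat=n))  # all q^n spin vectors
    T = np.empty((q ** n, q ** n))
    for a, top in enumerate(rows):
        for b, bot in enumerate(rows):
            # product over the n faces between the two rows (columns wrap mod n)
            T[a, b] = np.prod([W[top[i], top[(i + 1) % n],
                                 bot[(i + 1) % n], bot[i]] for i in range(n)])
    return T

T = transfer_matrix(W, n)
Z = np.trace(np.linalg.matrix_power(T, m))
assert np.isclose(Z, brute_force_Z(W, m, n))   # Z = tr(T^m)
```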
More generally, one could take each face to have a different weight function $W$; we would then get a different transfer matrix for each row, and the partition function would be the trace of the product of all these transfer matrices.
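A sketch of this more general situation, reusing the definitions above; here the weights vary only from row to row (letting them vary within a row as well would require a column-dependent weight inside `transfer_matrix`):

```python
# One face-weight tensor per row of faces; then Z = tr(T_1 T_2 ... T_m).
Ws = [rng.random((q, q, q, q)) for _ in range(m)]
Ts = [transfer_matrix(Wr, n) for Wr in Ws]
Z = np.trace(np.linalg.multi_dot(Ts))
```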
Question
Computations in the IRF model can be viewed as a sort of graphical calculus, where we think of the row transfer matrix as a map from $V = \mathbb{C}^{\Sigma^n}$ to itself: the input is the assignment of spins to the top row, and the output is the assignment of spins to the bottom row. More explicitly, $T$ should be thought of as a bilinear form on $V$, i.e., a linear functional on $V \otimes V$, which we then convert into a map $V \to V$ by using the given basis to identify $V$ with $V^\ast$. Stacking rows corresponds to composing $T$ with itself, and identifying the $(m + 1)$-st row with the first row corresponds to taking the trace. This interpretation also works when $T$ varies from row to row.
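These identifications can be made concrete by reshaping. The following sketch (reusing `q`, `n`, and `T` from above) checks that contracting the bottom indices of one form against the top indices of the next is just matrix multiplication:

```python
# T as a bilinear form: an array with n "top" and n "bottom" spin indices.
T_form = T.reshape((q,) * (2 * n))

# Stacking two rows: contract the bottom indices of the upper form with the
# top indices of the lower one -- matrix multiplication in disguise.
TT = np.tensordot(T_form, T_form,
                  axes=(list(range(n, 2 * n)), list(range(n))))
assert np.allclose(TT.reshape(q ** n, q ** n), T @ T)

# Identifying row m+1 with row 1 contracts the remaining top and bottom
# indices pairwise, i.e. takes a trace: Z = tr(T^m).
```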
What's not clear to me is how to extend this graphical calculus to horizontal composition. One would like to start not from the row transfer matrix $T$, whose definition seems somewhat artificial, but from the face weight $W$, which can be thought of as a $4$-linear form on $V' = \mathbb{C}^{\Sigma}$, or as a map $W: V'^{\otimes 2} \to V'^{\otimes 2}$ by identifying the "bottom" two copies of $V'$ with $V'^{\ast}$ as above. Vertical composition is again given by powering $W$, but horizontal composition is strange. It sends the $(2, 2)$-tensors $W_{s_1's_2'}^{s_1s_2}$, $W_{s_2's_3'}^{s_2s_3}$ to the "$(3, 3)$-tensor" $X_{s_1's_2's_3'}^{s_1s_2s_3} = W_{s_1's_2'}^{s_1s_2} W_{s_2's_3'}^{s_2s_3}$ (with no summation over the repeated indices $s_2$ and $s_2'$), which doesn't seem to be a "categorical" construction. The problem is that we need to use the middle input (and output) twice, so the resulting map will be quadratic, rather than linear, in these variables.
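The failure is easy to see in index notation. In the NumPy sketch below (reusing `W`, `q`, and `transfer_matrix` from above; the axis ordering of `W22` is my own convention), vertical composition is an honest contraction, while horizontal composition repeats the middle indices in the output; iterating it and gluing the ends cyclically recovers the row transfer matrix:

```python
# The face weight as a (2,2)-tensor: axes ordered (s1, s2, s1', s2'),
# i.e. W22[a, b, ap, bp] = W[a, b, bp, ap] in the clockwise convention above.
W22 = W.transpose(0, 1, 3, 2)

# Vertical composition is composition of linear maps: the middle row of
# spins (c, d) is summed over (contracted).
W_squared = np.einsum('abcd,cdef->abef', W22, W22)

# Horizontal composition is not: the shared spins s2, s2' (here b, B) are
# repeated in the output, so they are *not* summed over, and the result is
# quadratic in the middle slice of W.
X = np.einsum('abAB,bcBC->abcABC', W22, W22)

# Iterating this and gluing the ends cyclically recovers the row transfer
# matrix, e.g. for n = 3:
T3 = np.einsum('abAB,bcBC,caCA->abcABC', W22, W22, W22).reshape(q ** 3, q ** 3)
assert np.allclose(T3, transfer_matrix(W, 3))
```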
What is the correct categorical framework for the IRF model?