\section{The Formulas} \label{sec:Formulas} One of the selling points of this
article is that its formulas are concise. Thus we start by running
through these formulas for a knot invariant $\rho_1$ as quickly as we
can. In Section~\ref{sec:Implementation} we turn the formulas into a
short yet very fast computer program, in Section~\ref{sec:Proofs} we
give a partial interpretation of the formulas in terms of car traffic
on a knot diagram and use it to prove the invariance of $\rho_1$, and
in Section~\ref{sec:Context} we quickly sketch the context: Alexander,
Burau, Jones, Melvin, Morton, Rozansky, Overbay, and our own prior
work. This article accompanies two talks,~\cite{Talk:Waco-2203} and~\cite{Talk:Geneva-2206}
(videos and handouts available).
\parpic[r]{\input{figs/SampleKnot.pdf_t}}
Given an oriented $n$-crossing knot $K$, we draw it in the plane as a
long knot diagram $D$ in such a way that the two strands intersecting
at each crossing point up (this is always possible, as crossings can
be rotated as needed), and so that at its beginning
and at its end the knot is oriented upward. We call such a diagram an
{\em upright knot diagram.} An example of an upright knot diagram $D_3$
is shown on the right.
We then label each edge of the diagram with two integer labels: a running index $k$ which runs from 1 to $2n+1$, and a ``rotation number'' $\varphi_k$, the geometric rotation number of that edge (the signed number of times the tangent to the edge is horizontal and heading right, with cups counted with $+1$ signs and caps with $-1$; this number is well defined because at their ends, all edges are headed up). On the right the running index runs from $1$ to $7$, and the rotation numbers for all edges are $0$ (and hence are omitted) except for $\varphi_4$, which is $-1$.
{\em A Technicality.} Some Reidemeister moves create or lose an edge and to avoid the need for renumbering it is beneficial to also allow labelling the edges with non-consecutive labels. Hence we allow that, and write $\ip$ for the successor of the label $i$ along the knot, and $\ipp$ for the successor of $\ip$ (these are $i+1$ and $i+2$ if the labelling is by consecutive integers). Also, ``$1$'' will always refer to the label of the first edge, and ``$2n+1$'' will always refer to the label of the last.
We let $A$ be the $(2n+1)\times(2n+1)$ matrix of Laurent polynomials in the formal variable $T$ defined by
\[ A \coloneqq I - \sum_c \left( T^sE_{i,\ip} + (1-T^s)E_{i,\jp} + E_{j,\jp} \right), \]
where $I$ is the identity matrix and $E_{\alpha\beta}$ denotes the elementary
matrix with $1$ in row $\alpha$ and column $\beta$ and zeros elsewhere.
The summation is over the crossings $c$ of the diagram $D$, and once $c$
is chosen, $s$ denotes its sign and $i$ and $j$ denote the labels below
the crossing where the label $i$ belongs to the over-strand and $j$
to the under-strand.
Alternatively, $A=I + \sum_c A_c$, where $A_c$ is a matrix of zeros except for the block of entries shown below:
\begin{equation} \label{eq:A}
\begin{array}{c}\input{figs/Xings.pdf_t}\end{array}
\qquad\longrightarrow\qquad
\begin{array}{c|cccc}
A_c & \text{column }\ip & \text{column }\jp \\
\hline
\text{row }i & -T^s & T^s-1 \\
\text{row }j & 0 & -1
\end{array}
\end{equation}
\parpic[r]{\input{figs/D2.pdf_t}}
For example, if $D=D_1$ is the diagram with no crossings (as shown on
the right), the resulting matrix $A$ is the $1\times 1$ identity matrix
$(1)$. If $D=D_2$ is the second diagram on the right (here $s=+1$,
$(i,j)=(2,1)$, and $(\ip,\jp)=(3,2)$), then
\[ A =
\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}
+ \begin{pmatrix}0&-1&0\\0&T-1&-T\\0&0&0\end{pmatrix}
= \begin{pmatrix}1&-1&0\\0&T&-T\\0&0&1\end{pmatrix},
\]
and for $D_3$ as on the first page, we have
\[ A = \footnotesize \left(
\begin{array}{ccccccc}
1 & -T & 0 & 0 & T-1 & 0 & 0 \\
0 & 1 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & -T & 0 & 0 & T-1 \\
0 & 0 & 0 & 1 & -1 & 0 & 0 \\
0 & 0 & T-1 & 0 & 1 & -T & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & -1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{array}
\right). \]
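These matrices can be sanity-checked mechanically. Here is a minimal Python/SymPy sketch (independent of the Mathematica implementation of Section~\ref{sec:Implementation}), assuming consecutive edge labels and taking the crossing data $(s,i,j)$ to be $(+1,2,1)$ for $D_2$ and $(+1,1,4)$, $(+1,3,6)$, $(+1,5,2)$ for $D_3$, as read off the diagrams:

```python
import sympy as sp

T = sp.symbols('T')

def make_A(n, crossings):
    """The (2n+1)x(2n+1) matrix A built from crossing data (s, i, j),
    with labels assumed consecutive, so that i^+ = i+1 and j^+ = j+1."""
    A = sp.eye(2*n + 1)
    for s, i, j in crossings:
        A[i-1, i] -= T**s        # -T^s in row i, column i^+
        A[i-1, j] -= 1 - T**s    # T^s - 1 in row i, column j^+
        A[j-1, j] -= 1           # -1 in row j, column j^+
    return A

A2 = make_A(1, [(1, 2, 1)])                        # D_2: s=+1, (i,j)=(2,1)
A3 = make_A(3, [(1, 1, 4), (1, 3, 6), (1, 5, 2)])  # D_3
```

Both computed matrices agree, entry by entry, with the matrices displayed above.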
\picskip{0}
We note without supplying details that the matrix $A$ comes in a
straightforward way from Fox calculus as it is applied to the Wirtinger
presentation of the fundamental group of the complement of $K$ (using
the diagram $D$). Hence the determinant of $A$ is equal up to a unit
to the normalized Alexander polynomial $\Delta$ of $K$ (which satisfies
$\Delta(T)=\Delta(T^{-1})$ and $\Delta(1)=1$). In fact, we have that
\begin{equation} \label{eq:Delta} \Delta = T^{(-\varphi(D)-w(D))/2}\det(A), \end{equation}
where $\varphi(D)\coloneqq\sum_k\varphi_k$ is the total rotation number of $D$ and where $w(D)=\sum_cs_c$ is the writhe of $D$, namely the sum of the signs $s_c$ of all the crossings $c$ in $D$.
For our example $D_2$, $\det(A)=T$, $\varphi(D)=1$, and $w(D)=1$, so $\Delta=T^{(-1-1)/2}\cdot T=1$, as expected for a diagram of the unknot.
For $D_3$, $\det(A)=1-T+T^2$, $\varphi(D)=-1$, and $w(D)=3$, so $\Delta=T^{(1-3)/2}(1-T+T^2)=T-1+T^{-1}$, as expected for the trefoil knot.
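The two evaluations of~\eqref{eq:Delta} can be repeated in a few lines of SymPy; a sketch, with the same assumed crossing data as before and, for $D_2$, the single rotation number taken to sit on edge $2$:

```python
import sympy as sp

T = sp.symbols('T')

def alexander(n, crossings, rot):
    """Delta = T^((-phi(D)-w(D))/2) det(A), with A as in this section.
    crossings: list of (s, i, j); rot: {edge k: rotation number phi_k}."""
    A = sp.eye(2*n + 1)
    for s, i, j in crossings:
        A[i-1, i] -= T**s
        A[i-1, j] -= 1 - T**s
        A[j-1, j] -= 1
    phi = sum(rot.values())               # total rotation number phi(D)
    w = sum(s for s, _, _ in crossings)   # writhe w(D)
    return sp.expand(T**sp.Rational(-phi - w, 2) * A.det())

Delta2 = alexander(1, [(1, 2, 1)], {2: 1})                         # unknot
Delta3 = alexander(3, [(1, 1, 4), (1, 3, 6), (1, 5, 2)], {4: -1})  # trefoil
```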
We set\footnote{At $T=1$ the matrix $A$ has $1$'s on the main diagonal, $(-1)$'s on the diagonal above it, and $0$'s everywhere else. Hence $A$ is invertible at $T=1$ and hence over the field of rational functions.} $G=(g_{\alpha\beta})=A^{-1}$, and taking our inspiration from physics, we name $g_{\alpha\beta}$ the {\em Green function} for the diagram $D$. For our three examples $D_1$, $D_2$, and $D_3$, the Green function $G$ is respectively
\begin{equation} \label{eq:GExamples}
\begin{pmatrix} 1 \end{pmatrix},
\quad \begin{pmatrix} 1&T^{-1}&1\\0&T^{-1}&1\\0&0&1 \end{pmatrix},
\quad \footnotesize \left(
\begin{array}{ccccccc}
1 & \frac{T^3-T^2+T}{T^2-T+1} & 1 &
\frac{T^3-T^2+T}{T^2-T+1} & 1 &
\frac{T^3-T^2+T}{T^2-T+1} & 1 \\
0 & 1 & \frac{1}{T^2-T+1} & \frac{T}{T^2-T+1} &
\frac{T}{T^2-T+1} & \frac{T^2}{T^2-T+1} & 1 \\
0 & 0 & \frac{1}{T^2-T+1} & \frac{T}{T^2-T+1} &
\frac{T}{T^2-T+1} & \frac{T^2}{T^2-T+1} & 1 \\
0 & 0 & \frac{1-T}{T^2-T+1} & \frac{1}{T^2-T+1} &
\frac{1}{T^2-T+1} & \frac{T}{T^2-T+1} & 1 \\
0 & 0 & \frac{1-T}{T^2-T+1} & \frac{T-T^2}{T^2-T+1}
& \frac{1}{T^2-T+1} & \frac{T}{T^2-T+1} & 1 \\
0 & 0 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{array}
\right).
\end{equation}
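The $D_2$ entry of~\eqref{eq:GExamples}, along with the invertibility-at-$T=1$ claim of the footnote, can be confirmed with a short sketch:

```python
import sympy as sp

T = sp.symbols('T')
A2 = sp.Matrix([[1, -1, 0], [0, T, -T], [0, 0, 1]])  # the matrix A for D_2
G2 = A2.inv()   # the Green function G = A^{-1} for D_2
```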
We can now define our invariant $\rho_1$. It is the sum of two sums. The first is a sum of a term $R_1(c)$ over all crossings $c$ in $D$, where for such a crossing we let $s$ denote its sign and we let $i$ and $j$ denote the edge labels of the incoming over- and under-strands, respectively, and where
\begin{equation} \label{eq:R1}
R_1(c) \coloneqq s \left(
g_{ji} \left(g_{\jp,j}+g_{j,\jp}-g_{ij}\right)
-g_{ii} \left(g_{j,\jp}-1\right)
-1/2
\right).
\end{equation}
The second sum is a sum over the edges $k$ of $D$ of a correction term dependent on the rotation number $\varphi_k$. We multiply the result by $\Delta^2$ to ``clear the denominators''\footnote{$R_1(c)$ is quadratic in the entries of $G$ and hence it has denominators proportional to $\Delta^2$.}:
\begin{equation} \label{eq:rho1}
\rho_1 \coloneqq \Delta^2\left(\sum_c R_1(c) - \sum_k\varphi_k\left(g_{kk}-1/2\right)\right).
\end{equation}
Direct calculations show that $\rho_1(D_1)=0$ (as the sums are empty), $\rho_1(D_2)=0$, and $\rho_1(D_3) = -T^2 +2T - 2 + 2T^{-1} - T^{-2}$.
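The direct calculations just mentioned fit in a few lines of SymPy; a sketch of the full evaluation of~\eqref{eq:Delta}, \eqref{eq:R1}, and~\eqref{eq:rho1} (assuming consecutive edge labels; this is a toy version, not the implementation of Section~\ref{sec:Implementation}):

```python
import sympy as sp

T = sp.symbols('T')

def rho1(n, crossings, rot):
    """crossings: list of (s, i, j) with i the incoming over-strand and
    j the incoming under-strand; rot: {edge k: rotation number phi_k}."""
    N = 2*n + 1
    A = sp.eye(N)
    for s, i, j in crossings:
        A[i-1, i] -= T**s
        A[i-1, j] -= 1 - T**s
        A[j-1, j] -= 1
    phi, w = sum(rot.values()), sum(s for s, _, _ in crossings)
    Delta = T**sp.Rational(-phi - w, 2) * A.det()   # Alexander polynomial
    G = A.inv()                                      # Green function
    g = lambda a, b: G[a-1, b-1]
    total = 0
    for s, i, j in crossings:                        # the terms R_1(c)
        total += s*(g(j, i)*(g(j+1, j) + g(j, j+1) - g(i, j))
                    - g(i, i)*(g(j, j+1) - 1) - sp.Rational(1, 2))
    for k, ph in rot.items():                        # rotation corrections
        total -= ph*(g(k, k) - sp.Rational(1, 2))
    return sp.expand(sp.simplify(Delta**2 * total))
```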
\ifpub{}{\needspace{12mm}}
\begin{theorem}[``Invariance'', proofs in Section~\ref{sec:Proofs}] \label{thm:Main} The quantity $\rho_1$ is a knot invariant.
\end{theorem}
As we shall see in the next section, $\rho_1$ has more separation power than the Jones polynomial, yet it is closer to the more topologically meaningful Alexander polynomial $\Delta$: it is cooked up from the same matrix $A$ and in terms of computational complexity, computing $\rho_1$ is not very different from computing $\Delta$. In order to compute $\Delta$ we need to compute the determinant of $A$, while to compute $\rho_1$ we need to invert $A$ and then compute a sum of $O(n)$ terms that are quadratic in the entries of $A^{-1}$.\footnote{We prefer not to be more specific about the complexity of computing $\rho_1$. It is the same as the complexity of inverting $A$, and matrix inversion is poly-time, with a rather small exponent, even for matrices with entries in a ring of polynomials (e.g.~\cite{Storjohann:ComplexityOfInversion}). We have not explored how much one can further gain by exploiting the fact that $A$ is very sparse.} We have computed $\rho_1$ for knots with over 200 crossings using the unsophisticated implementation presented in Section~\ref{sec:Implementation}.
Topologists should be intrigued! $\rho_1$ is derived from the same matrix as the Alexander polynomial $\Delta$, yet we have no topological interpretation for $\rho_1$.
\section{Implementation and Power} \label{sec:Implementation}
\input APAINB.tex
\section{Proofs of Theorem~\ref{thm:Main}, the Invariance Theorem} \label{sec:Proofs}
We tell the proof of the Invariance Theorem (Theorem~\ref{thm:Main}) in two ways: an elegant and intuitive though slightly lacking telling in Section~\ref{ssec:Cars}, and a complete though slightly dull telling in Section~\ref{ssec:dull}. But first, a few common elements.
\subsection{Common Elements}
Two upright knot diagrams are considered the same (as diagrams) if their underlying knot diagrams are the same, and if the respective rotation numbers of their edges are all the same. It is clear that $\rho_1$ is well defined on upright knot diagrams. To prove Theorem~\ref{thm:Main} we need to know what to prove. Namely, when do two upright knot diagrams represent the same knot? This is answered in the spirit of the classical Reidemeister theorem by the following:
\begin{figure}
\ifpub
{\[ \resizebox{\linewidth}{!}{\input{figs/UprightRMoves.pdf_t}} \]}
{\[ \input{figs/UprightRMoves.pdf_t} \]}
\caption{
The upright Reidemeister moves: Reidemeister 1 left and right,
Reidemeister 2 braid-like and cyclic, Reidemeister 3, and (the $+$)
Swirl.
} \label{fig:UprightRMoves}
\end{figure}
\begin{theorem}[``Upright Reidemeister''] \label{thm:UprightRMoves}
Two upright knot diagrams represent the same knot if and only if they
differ by a sequence of R1l, R1r, R2b, R2c, R3, and Sw$^+$ moves as in
Figure~\ref{fig:UprightRMoves}.
\end{theorem}
\noindent{\em Sketch of the proof.} In the case of round knots (i.e.,~
not ``long''), knot diagrams can be turned upright by rotating individual
crossings. The only ambiguity here is by powers of the full rotation,
the {\em swirls} Sw$^+$ and Sw$^-$ (where Sw$^-$ is the same as Sw$^+$
except with a negative crossing, and we don't need to impose it separately
as it follows from Sw$^+$ and R2). Hence we have a well-defined map
from $\{\text{knot diagrams}\}$ to $\{\text{upright knot diagrams}\}/$Sw$^\pm$.
It remains to write the usual Reidemeister moves between knot diagrams as
moves between upright knot diagrams. The result is the set of moves R1l, R1r,
R2b, R2c, and R3. Note that unoriented knot theory is presented with
just three Reidemeister moves, but these split into several versions
in the oriented case. The sufficiency of the versions we picked can be
found in~\cite{Polyak:RMoves}. In the case of long knots a minor further
complication arises, regarding the rotation numbers of the initial and
final edges. We leave the details of the problem and its resolution to
the reader. \qed
Our key formulas,~\eqref{eq:R1} and~\eqref{eq:rho1}, involve the Green function $g_{\alpha\beta}$. We need to know that it is subject to some relations, the {\em $g$-rules} of Lemma~\ref{lem:gRules} below, whose proof is so easy that it comes first:
\noindent{\em Proof of Lemma~\ref{lem:gRules}.} The first set of $g$-rules
reads out column $\beta$ of the equality $AG=I$, and the second set of
$g$-rules reads out row $\alpha$ of the equality $GA=I$. \qed
\begin{lemma}[``$g$-rules''] \label{lem:gRules} Given a fixed upright knot diagram $D$, its corresponding matrix $A$, and its inverse $G=(g_{\alpha\beta})$, and given a crossing $c=(s,i,j)$ in $D$ (with $s$, $i$, and $j$ as before), the following two sets of relations (the $g$-rules) hold (with $\delta$ denoting the Kronecker delta):
\begin{equation} \label{eq:CarRules}
g_{i\beta} = \delta_{i\beta}+T^sg_{\ip,\beta}+(1-T^s)g_{\jp,\beta},
\qquad g_{j\beta} = \delta_{j\beta}+g_{\jp,\beta},
\qquad g_{2n+1,\beta} = \delta_{2n+1,\beta}
\end{equation}
and
\begin{equation} \label{eq:CounterRules}
g_{\alpha i} = T^{-s}(g_{\alpha,\ip}-\delta_{\alpha,\ip}),
\qquad g_{\alpha j} = g_{\alpha,\jp} - (1-T^s)g_{\alpha i} - \delta_{\alpha,\jp},
\qquad g_{\alpha,1} = \delta_{\alpha,1}.
\end{equation}
Furthermore, for each fixed $\beta$ there are $2n+1$ $g$-rules of type~\eqref{eq:CarRules} (the first two depend on a choice of one of $n$ crossings, and the third is fixed, to a total of $2n+1$ rules). These fully determine the $2n+1$ scalars $g_{\alpha\beta}$ corresponding to varying $\alpha$. Similarly, for each fixed $\alpha$ there are $2n+1$ $g$-rules of type~\eqref{eq:CounterRules}. These fully determine the $2n+1$ scalars $g_{\alpha\beta}$ corresponding to varying $\beta$.
\end{lemma}
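The lemma can be spot-checked symbolically; a sketch for $D_3$, with the same assumed crossing data as in Section~\ref{sec:Formulas}:

```python
import sympy as sp

T = sp.symbols('T')
n, crossings = 3, [(1, 1, 4), (1, 3, 6), (1, 5, 2)]  # the diagram D_3
N = 2*n + 1
A = sp.eye(N)
for s, i, j in crossings:
    A[i-1, i] -= T**s
    A[i-1, j] -= 1 - T**s
    A[j-1, j] -= 1
G = A.inv()
g = lambda a, b: G[a-1, b-1]
d = lambda a, b: 1 if a == b else 0   # Kronecker delta

def ok(expr):                          # does expr vanish identically?
    return sp.cancel(expr) == 0

# the g-rules read off columns of AG = I ...
rows_ok = all(
    ok(g(i, b) - d(i, b) - T**s*g(i+1, b) - (1 - T**s)*g(j+1, b)) and
    ok(g(j, b) - d(j, b) - g(j+1, b))
    for s, i, j in crossings for b in range(1, N+1))
# ... and rows of GA = I
cols_ok = all(
    ok(g(a, i) - T**(-s)*(g(a, i+1) - d(a, i+1))) and
    ok(g(a, j) - g(a, j+1) + (1 - T**s)*g(a, i) + d(a, j+1))
    for s, i, j in crossings for a in range(1, N+1))
ends_ok = all(ok(g(N, b) - d(N, b)) and ok(g(b, 1) - d(b, 1))
              for b in range(1, N+1))
```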
For later use, we teach our computer about $g$-rules:
\input{gRules.tex}
\subsection{Cars, Traffic Counters, and Interchanges} \label{ssec:Cars}
Our first proof of Theorem~\ref{thm:Main} is slightly informal as it
uses the language and intuition of probability theory even though our
``probabilities'' are merely algebraic formulae and not numbers between
$0$ and $1$. Seasoned mathematicians should see that there is no real
problem here. Yet just to be safe, we also include a fully formal proof
in Section~\ref{ssec:dull}.
Cars ($\car$, $\rac$) travel on knot diagrams subject to the following three rules, inspired by Jones' ``bowling balls''~\cite{Jones:Hecke} and by Lin, Tian, and Wang's ``random walks''~\cite{LinTianWang:RandomWalk} (within the proof of Proposition~\ref{prop:CarsAreGreen} below we will see that these rules are equivalent to the $g$-rules of Equation~\eqref{eq:CarRules} above):
\begin{itemize}
\item On plain roads (edges) they travel following the orientation of the edge.
\item When reaching an underpass (the lower strand of a crossing), cars just pass through.
\item When reaching an overpass cars pass through with probability $T^s$
(where $s=\pm 1$ is the sign of the crossing), yet drop over to the
lower strand with the complementary probability of $1-T^s$.
\end{itemize}
These rules can be summarized by the following pictures:
\[ \resizebox{\linewidth}{!}{\input{figs/CarRules2.pdf_t}} \]
In these pictures the horizontal struts represent ``traffic counters''
which measure the amount of traffic that passes through their respective
roads, and the output reading of these counters is printed above
them. Thus for example, the last interchange picture indicates that if
a unit stream of cars is injected into the diagram on the bottom right
and two traffic counters are placed at the top, then the first of these
will read a car intensity of $T^{-1}$ and the second $(1-T^{-1})$.
Note that our probabilities aren't really probabilities, if only because
$T$ and $T^{-1}$ cannot both be between $0$ and $1$ simultaneously. Yet
we will manipulate them algebraically as if they are probabilities,
restricting ourselves to equalities and avoiding inequalities. With
this restriction, we can use intuition from probability theory. We will
pretend that $T^s\sim 1$, or equivalently, that $1-T^s\sim 0$. This
has an algebraic meaning that does not refer to inequalities. Namely,
certain series can be deemed summable. For example,
\[ \sum_{r\geq 0}(1-T^s)^r = \frac{1}{1-(1-T^s)} = T^{-s}. \]
\parpic[r]{\input{figs/D2Traffic.pdf_T}}
\begin{example} Cars are injected on edge $\#1$ of the diagram $D_2$ of Section~\ref{sec:Formulas} as indicated on the right. What does the indicated traffic counter on edge $\#2$ measure?
\picskip{2}
\noindent{\em Solution.} Every car coming through the interchange from
$\#1$ passes through the underpass and comes to $\#2$, so the counter
reads ``$1$'' just for this traffic. But then these cars continue and
pass on the overpass, and $(1-T)$ of them fall down and continue through
edge $\#2$ and get counted again. But then these fallen cars continue and
pass on the overpass once again, and $(1-T)$ of them, meaning $(1-T)^2$
of the original traffic, fall once more and contribute a further reading
of $(1-T)^2$. This process continues and the overall counter reading is
\[ 1+(1-T)+(1-T)^2+(1-T)^3+\ldots = \frac{1}{1-(1-T)} = T^{-1}. \]
Note that this is exactly the row $1$ column $2$ entry of the matrix $G$ computed for this diagram in~\eqref{eq:GExamples}.
\end{example}
We claim that this is general:
\begin{proposition} \label{prop:CarsAreGreen} For a general knot diagram $D$, the entry
$g_{\alpha\beta}$ of its Green function is equal to the reading of a
traffic counter placed at $\beta$ given that traffic is injected into $D$
at $\alpha$. (In the case where $\alpha=\beta$, the counter is placed
after where the traffic is injected, not before).
\end{proposition}
\noindent{\em Proof.} Consider the $g$-rules of type~\eqref{eq:CarRules}. The third, $g_{2n+1,\beta} = \delta_{2n+1,\beta}$ is the statement that if traffic is injected on the outgoing edge of $D$, it can only be measured on the outgoing edge of $D$ (so traffic never flows backwards). The second, $g_{j\beta} = \delta_{j\beta}+g_{\jp,\beta}$, is the statement that traffic goes through underpasses undisturbed, so $g_{j\beta} = g_{\jp,\beta}$ unless the traffic counter $\beta$ is placed between $j$ and $\jp$, in which case it measures one unit more if the cars are injected before it, at $j$, rather than after it, at $\jp$. Similarly the first of these $g$-rules, $g_{i\beta} = \delta_{i\beta}+T^sg_{\ip,\beta}+(1-T^s)g_{\jp,\beta}$, is the statement of the behaviour of traffic at overpasses. Thus the rules in~\eqref{eq:CarRules} are obeyed by cars and traffic counters, and as the rules in~\eqref{eq:CarRules} determine $g_{\alpha\beta}$, the proposition follows. \qed
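The proposition can also be probed numerically: writing $A=I-P$, where $P$ is the one-step car-propagation matrix, the truncated sum of car paths $\sum_{r<N}P^r$ should approach $G=A^{-1}$ whenever the sums converge. A sketch for $D_3$, at a numeric value of $T$ for which $1-T^s$ is genuinely small:

```python
import numpy as np

Tval = 0.9   # a numeric stand-in for T, chosen so the series converge
n, crossings = 3, [(1, 1, 4), (1, 3, 6), (1, 5, 2)]  # assumed data for D_3
N = 2*n + 1
A = np.eye(N)
for s, i, j in crossings:
    A[i-1, i] -= Tval**s
    A[i-1, j] -= 1 - Tval**s
    A[j-1, j] -= 1
P = np.eye(N) - A            # one-step car propagation, A = I - P
S = np.zeros((N, N))
Q = np.eye(N)
for _ in range(300):         # cars making fewer than 300 moves
    S += Q
    Q = Q @ P
G = np.linalg.inv(A)         # the Green function, exactly
```

With these weights the only cycle in $D_3$ is traversed with weight $(1-T)T\cdot 1\approx 0.09$, so the truncation error is negligible and `S` matches `G` to high precision.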
\begin{proposition} The quantity $\rho_1$ is invariant under R3.
\end{proposition}
\noindent{\em Proof.} We first show that cars entering a multiple
interchange styled as the left hand side of the R3 move, exit it with
the exact same distribution as cars entering the multiple interchange
styled as the right hand side. The hardest part of that computation is
when cars enter at the bottom left (at $i$) and it boils down to the
equality $1\!-\!T = (1\!-\!T)^2\!+\!T(1\!-\!T)$:
\[
\def\messA{{$(1\!-\!T)^2\!+\!T(1\!-\!T)$}}
\def\messB{{$(1\!-\!T)T$}}
\def\messC{{$T(1\!-\!T)$}}
\def\messD{{$1\!-\!T$}}
\ifpub
{\resizebox{0.95\linewidth}{!}{\input{figs/R3Traffic.pdf_t}}}
{\input{figs/R3Traffic.pdf_t}}
\]
If cars enter in the middle or at the bottom right (at $j$ or at $k$), the computation is even easier.\footnote{Note that this computation is exactly the one that proves that the Burau representation~\cite{Burau:Zopfgruppen} respects R3.}
The conclusion is that performing the R3 move does not affect traffic patterns outside the area of the move itself; namely, the Green function $g_{\alpha\beta}$ is unchanged if both $\alpha$ and $\beta$ are outside the area of the move.
Thus the only contribution to $\rho_1$ that may change
(see~\eqref{eq:rho1}) is the contribution coming from the three $R_1$
terms corresponding to the crossings that move, and we need to know if
the following equality holds:
\ifpub
{\begin{multline*}
R_1(+1,j,k) + R_1(+1,i,\kp) + R_1(+1,\ip,\jp) \\
\overset{?}{=} R_1(+1,i,j) + R_1(+1,\ip,k) + R_1(+1,\jp,\kp)
\end{multline*}}
{\[
R_1(+1,j,k) + R_1(+1,i,\kp) + R_1(+1,\ip,\jp)
\overset{?}{=} R_1(+1,i,j) + R_1(+1,\ip,k) + R_1(+1,\jp,\kp)
\]}
Both sides here are messy quadratics in the $g_{\alpha\beta}$'s, evaluated at $\alpha,\beta\in\{i,j,k,\ip,\jp,\kp,\ipp,\jpp,\kpp\}$. But we can use the traffic rules (aka the $g$-rules) to rewrite these quadratics in terms of the $g_{\alpha\beta}$'s with $\alpha,\beta\in\{\ipp,\jpp,\kpp\}$, and these are unchanged between the sides. So we simply need to know whether the above equality holds {\em after} the relevant $g$-rules have been applied to both sides. We could do that by hand, but it's simpler to appeal to a higher wisdom:
\input{Invariance-R3-Short.tex} \ \qed
\noindent{\em First Proof of Theorem~\ref{thm:Main}, ``Invariance''.} We've shown invariance under R3. Invariance under the other moves is shown in a similar way: first one shows that overall traffic patterns are unchanged by each of the moves, and then one verifies that the local contributions to $\rho_1$ coming from the area changed by each move are equal once the $g$-rules are used to rewrite them in terms of $g_{\alpha\beta}$'s that are unaffected by the moves. This is shown in greater detail in the following section. \qed
\subsection{A More Formal Version of the Proof} \label{ssec:dull}
Again we start with the hardest, R3:
\begin{proposition} \label{prop:R3} The quantity $\rho_1$ is invariant under R3.
\end{proposition}
\noindent{\em Proof.} We need to know how the Green function
$g_{\alpha\beta}$ changes under R3. Here are the two sides of the move,
along with the $g$-rules of type~\eqref{eq:CarRules} corresponding to
the crossings within, written with the assumption that $\beta$ isn't in
$\{\ip,\jp,\kp\}$, so several of the Kronecker deltas can be ignored. We
use $g$ for the Green function at the left-hand side of R3, and $g'$ for the
right-hand side:
\[
\def\grulesA{{$\begin{array}{l}
g_{j,\beta} = \delta_{j\beta} \!+\! Tg_{\jp,\beta} \!+\! (1\!-\!T)g_{\kp,\beta} \\
g_{k,\beta} = \delta_{k\beta} \!+\! g_{\kp,\beta}
\end{array}$}}
\def\grulesB{{$\begin{array}{l}
g_{i,\beta} = \delta_{i\beta} \!+\! Tg_{\ip,\beta} \!+\! (1\!-\!T)g_{\kpp,\beta} \\
g_{\kp,\beta} = g_{\kpp,\beta}
\end{array}$}}
\def\grulesC{{$\begin{array}{l}
g_{\ip,\beta} = Tg_{\ipp,\beta} \!+\! (1\!-\!T)g_{\jpp,\beta} \\
g_{\jp,\beta} = g_{\jpp,\beta}
\end{array}$}}
\def\gprulesA{{$\begin{array}{l}
g'_{i,\beta} = \delta_{i\beta} \!+\! Tg'_{\ip,\beta} \!+\! (1\!-\!T)g'_{\jp,\beta} \\
g'_{j,\beta} = \delta_{j\beta} \!+\! g'_{\jp,\beta}
\end{array}$}}
\def\gprulesB{{$\begin{array}{l}
g'_{\ip,\beta} = Tg'_{\ipp,\beta} \!+\! (1\!-\!T)g'_{\kp,\beta} \\
g'_{k,\beta} = \delta_{k\beta} \!+\! g'_{\kp,\beta}
\end{array}$}}
\def\gprulesC{{$\begin{array}{l}
g'_{\jp,\beta} = Tg'_{\jpp,\beta} \!+\! (1\!-\!T)g'_{\kpp,\beta} \\
g'_{\kp,\beta} = g'_{\kpp,\beta}
\end{array}$}}
\ifpub
{\resizebox{\linewidth}{!}{\input{figs/R3.pdf_t}}}
{\input{figs/R3.pdf_t}}
\]
A routine computation (eliminating $g_{\ip,\beta}$, $g_{\jp,\beta}$, and $g_{\kp,\beta}$) shows that the first system of 6 equations is equivalent to the following 3 equations:
\[ g_{i,\beta} = \delta_{i\beta} + T^2g_{\ipp,\beta} + T(1-T)g_{\jpp,\beta} + (1-T)g_{\kpp,\beta}, \]
\[
g_{j,\beta} = \delta_{j\beta} + Tg_{\jpp,\beta} + (1-T)g_{\kpp,\beta},
\qquad\text{and}\qquad
g_{k,\beta} = \delta_{k\beta} + g_{\kpp,\beta}.
\]
Similarly eliminating $g'_{\ip,\beta}$, $g'_{\jp,\beta}$, and $g'_{\kp,\beta}$ from the second set of equations, we find that it is equivalent to
\[ g'_{i,\beta} = \delta_{i\beta} + T^2g'_{\ipp,\beta} + T(1-T)g'_{\jpp,\beta} + (1-T)g'_{\kpp,\beta}, \]
\[
g'_{j,\beta} = \delta_{j\beta} + Tg'_{\jpp,\beta} + (1-T)g'_{\kpp,\beta},
\qquad\text{and}\qquad
g'_{k,\beta} = \delta_{k\beta} + g'_{\kpp,\beta}.
\]
But these two sets of equations are the same, and as stated in the
$g$-rules lemma (Lemma~\ref{lem:gRules}), along with the $g$-rules
corresponding to the other crossings in $D$ (which are also the same
between $g$ and $g'$), these equations determine $g_{\alpha\beta}$
and $g'_{\alpha\beta}$, for $\alpha,\beta\notin\{\ip,\jp,\kp\}$. So
with this exclusion on $\alpha$ and $\beta$, we have that
$g_{\alpha\beta}=g'_{\alpha\beta}$. But this means that the
summations~\eqref{eq:rho1} in the definitions of $\rho_1$ are equal for
the two sides of R3, except perhaps for the three summands on each side
that come from the crossings that touch $\{\ip,\jp,\kp\}$.
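The routine eliminations above are easily delegated to a computer algebra system; a sketch, treating the far values $g_{\ipp,\beta}$, $g_{\jpp,\beta}$, $g_{\kpp,\beta}$ and the Kronecker deltas as formal knowns:

```python
import sympy as sp

T, di, dj, dk = sp.symbols('T delta_i delta_j delta_k')
gi, gj, gk = sp.symbols('g_i g_j g_k')
gip, gjp, gkp = sp.symbols('g_ip g_jp g_kp')       # to be eliminated
gipp, gjpp, gkpp = sp.symbols('g_ipp g_jpp g_kpp')  # the knowns
unknowns = [gi, gj, gk, gip, gjp, gkp]

# the six g-rules on the left-hand side of R3
lhs = sp.solve([
    gj - (dj + T*gjp + (1 - T)*gkp),  gk - (dk + gkp),
    gi - (di + T*gip + (1 - T)*gkpp), gkp - gkpp,
    gip - (T*gipp + (1 - T)*gjpp),    gjp - gjpp,
], unknowns, dict=True)[0]

# the six g-rules on the right-hand side (g' playing the role of g)
rhs = sp.solve([
    gi - (di + T*gip + (1 - T)*gjp),  gj - (dj + gjp),
    gip - (T*gipp + (1 - T)*gkp),     gk - (dk + gkp),
    gjp - (T*gjpp + (1 - T)*gkpp),    gkp - gkpp,
], unknowns, dict=True)[0]
```

Both systems solve to the same expressions for $g_{i,\beta}$, $g_{j,\beta}$, and $g_{k,\beta}$ in terms of the knowns, as claimed.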
What remains is completely mechanical. We just need to compute the sum of those three summands for both sides of R3, and apply to it the $g$-rules of types~\eqref{eq:CarRules} and~\eqref{eq:CounterRules} that eliminate the indices $\{\ip,\jp,\kp\}$. The computation is easy enough to be done by hand, yet why bother? Here's the machine version (it takes less typing to apply all relevant $g$-rules and also eliminate the indices $\{i,j,k\}$):
\input{Invariance-R3.tex} \ \qed
\begin{proposition} \label{prop:R2c} The quantity $\rho_1$ is invariant under R2c.
\end{proposition}
\noindent{\em Proof.} We follow the exact same steps as in the case
of R3. First, we write the $g$-rules, assuming that $\beta$ is not in
$\{i,j,\ip,\jp\}$:
\[
\def\grulesA{{$\begin{array}{l}
g_{i,\beta} = T^{-1}g_{\ip,\beta}+(1-T^{-1})g_{\jpp,\beta} \\
g_{\jp,\beta} = g_{\jpp,\beta}
\end{array}$}}
\def\grulesB{{$\begin{array}{l}
g_{\ip,\beta} = Tg_{\ipp,\beta}+(1-T)g_{\jp,\beta} \\
g_{j,\beta} = g_{\jp,\beta}
\end{array}$}}
\def\rotA{{$\varphi_{j^+}\!=\!1$}}
\def\rotB{{$\varphi_{j^{+\!+}}\!=\!1$}}
\input{figs/R2c.pdf_t}
\]
Note that for the right hand side we allowed ourselves to label the edges $\ipp$ and $\jpp$, as the computation is independent of the labelling, and the labelling need not be by contiguous integers (outside of the move area, we assume that the left hand side and the right hand side are labelled in the same way). Note also that for the right hand side, there are no relevant $g$-rules. Now, as in the case of R3, for the left hand side we eliminate $g_{\ip,\beta}$ and $g_{\jp,\beta}$ and we are left with the relations
\[ g_{i,\beta}=g_{\ipp,\beta} \qquad\text{and}\qquad g_{j,\beta}=g_{\jpp,\beta}. \]
Otherwise the $g$-rules for the left and for the right are the same, and so their Green functions are the same except if the indices are in $\{i,j,\ip,\jp\}$ (these indices do not even appear in the right hand side). Thus the contribution to $\rho_1$ from outside the area of the move is the same for both sides.
Next we write the contribution to $\rho_1$ coming from the two crossings
and one rotation that appear on the left, and use the $g$-rules to push
all the indices in $\{i,j,\ip,\jp\}$ up to $\ipp$ and $\jpp$. This can
be done by hand, but seeing that we have tools, we use them as follows:
\input{Invariance-R2c.tex}
This result is clearly equal to the single rotation contribution to $\rho_1$ that comes from the right hand side. \qed
\begin{proposition} \label{prop:R1l} The quantity $\rho_1$ is invariant under R1l.
\end{proposition}
\noindent{\em Proof.} We start with the relevant $g$-rules:
\[
\def\grules{{$\begin{array}{l}
g_{\ip,\beta} = \delta_{\ip,\beta} + Tg_{\ipp,\beta} + (1-T)g_{\ip,\beta} \\
g_{i,\beta}=\delta_{i,\beta}+g_{\ip,\beta}
\end{array}$}}
\input{figs/R1l.pdf_t}
\]
The first of these rules is equivalent to $g_{\ip,\beta}=T^{-1}\delta_{\ip,\beta} + g_{\ipp,\beta}$. For $\beta\neq i,\ip$ we find as before that $g_{i,\beta}=g_{\ipp,\beta}$ and we can ignore the contributions to $\rho_1$ coming from outside the area of the move. The contribution to $\rho_1$ coming from the single crossing and single rotation on the left hand side is computed below, and is equal to the empty contribution coming from the right hand side:
\input{Invariance-R1l.tex} \ \qed
\vskip 1mm
\noindent{\em Second Proof of Theorem~\ref{thm:Main}, ``Invariance''.} After the Upright Reidemeister Theorem (Theorem~\ref{thm:UprightRMoves}) which sets out what we need to do, and Propositions~\ref{prop:R3}, \ref{prop:R2c}, and~\ref{prop:R1l} which prove invariance under R3, R2c, and R1l, it remains to show the invariance of $\rho_1$ under R1r, R2b, and Sw$^+$. This is done exactly as in the examples already shown, so in each case we show only the punch line:
\input{Invariance-Rest.tex} \ \qed
\section{Some Context and Some Morals} \label{sec:Context} We would
like to emphasize again that $\rho_1$ seems very close to the Alexander
polynomial, yet we have no topological interpretation for it. Until that
changes, where is $\rho_1$ coming from?
It comes via a lengthy path, which we will only sketch here. For
a while now \cite{PP1, PG, Talk:NCSU-1604, Talk:Indiana-1611,
Talk:LesDiablerets-1708, Talk:Matemale-1804, Talk:Toronto-1811,
Talk:DaNang-1905, Talk:UCLA-191101} we've been studying quantum invariants
related to the Lie algebra $\sleps$, the 4-dimensional Lie algebra with generators $y,b,a,x$ and
brackets
\[ [b,x]=\eps x, \quad [b,y]=-\eps y, \quad [b,a]=0, \quad [a,x]=x, \quad [a,y]=-y, \quad [x,y]=b+\eps a, \]
where $\eps$ is a scalar. The beauty of this algebra stems from the following:
\begin{itemize}
\item It is a ``classical double'' of the two-dimensional Lie bialgebra $\langle a,x\rangle$, with
\[ [a,x]=x,\qquad \delta(a)=0, \qquad \delta(x)=\eps x\wedge a, \]
and hence quantization tools are available and are used below (e.g.~\cite{EtingofSchiffman:QuantumGroups}).
\item At invertible $\eps$ it is isomorphic to $sl_2\oplus\langle
t\rangle$, where $t$ is a central element\footnotemark. Quantum
topology tells us that the algebra $sl_2$ is related to the
Jones polynomial. In fact, the universal quantum invariant
(see \cite{Lawrence:UniversalUsingQG, Lawrence:Universal,
Ohtsuki:QuantumInvariants}) for the Lie algebra $sl_2$ is equivalent
to the coloured Jones polynomial of~\cite{Jones:Hecke}.
\item At $\eps=0$ it becomes the diamond Lie algebra $\diamond$, a solvable
algebra in which computations are easier. The algebra $\diamond$
is the semi-direct product of the unique non-commutative 2D Lie
algebra $\fraka$ with its dual, and quantum topology tells us that it
is related to the Alexander polynomial~\cite{Talk:Chicago-1009, WKO1}.
\end{itemize}
The last two facts taken together tell us that the Alexander
polynomial is some limit of the coloured Jones polynomial
(originally conjectured~\cite{MelvinMorton:Coloured, Rozansky:TrivialFlatI} and proven
by other means~\cite{Bar-NatanGaroufalidis:MMR}).
\footnotetext{{\arraycolsep=1pt\def\arraystretch{1}
Via the isomorphism
$\left(\begin{array}{cc}1&0\\0&-1\end{array}\right) \leftrightarrow \eps^{-1}b+a$,
$\left(\begin{array}{cc}0&1\\0&0\end{array}\right) \leftrightarrow x$,
$\left(\begin{array}{cc}0&0\\1&0\end{array}\right) \leftrightarrow \eps^{-1}y$,
and $t \leftrightarrow b-\eps a$.
}}
We can make this a bit more explicit. By using the Drinfel'd quantum
double construction~\cite{Drinfeld:QuantumGroups} we find that the
universal enveloping algebra $\calU(\sleps)$ has a quantization $QU$,
which has an $R$-matrix solving the Yang-Baxter equation (meaning,
satisfying the R3 move, in the appropriate sense). These are given by:
\[ QU =
A\langle y,b,a,x\rangle\left/\begin{pmatrix}
[b,a]=0, \quad [b,x]=\eps x,\quad [b,y]=-\eps y, \\
[a,x]=x, \quad [a,y]=-y, \quad xy-qyx=\frac{1-\bbe^{-\hbar(b+\eps a)}}{\hbar}
\end{pmatrix}\right.,
\]
where $A\langle\text{\it gens}\rangle$ is the free associative algebra
with generators {\it gens}, and $q=\bbe^{\hbar\eps}$, and
\begin{multline*}
R=\sum_{m,n\geq 0}\frac{y^nb^m\otimes (\hbar a)^m(\hbar x)^n}{m![n]_q!}
\ifpub{\\}{\qquad}
\left(\text{where $[n]_q!=\prod_{k=1}^n\frac{1-q^k}{1-q}$ is a ``quantum factorial''}\right).
\end{multline*}
Thus there is an associated universal quantum invariant of knots $Z_\eps(K)\in QU$
(which, as stated, is equivalent to the coloured Jones polynomial). In
our talks and papers we show that $Z_\eps$ can be expanded as a power
series in $\eps$, that at $\eps=0$ it is equivalent to the Alexander
polynomial, and that in general, the coefficient $Z_{(k)}$ of $\eps^k$
in $Z_\eps$ can be computed in polynomial time and is homomorphic,
meaning that it leads to an ``algebraic knot theory'' in the sense of
(say)~\cite{Talk:Sydney-190916}. We also know that the excess information in $Z_{(k)}$ (beyond the
information in $\{Z_{(0)},\ldots,Z_{(k-1)}\}$) is contained in a single polynomial, $\rho_k$. The first of
these polynomials is $\rho_1$ of this paper.
But how did we arrive at the specific formulas of this paper? As
often seen with quantizations, $QU$ is isomorphic (though only as
an algebra, not as a Hopf algebra) to $\calU(\sleps)$, and the
latter admits a representation into the Heisenberg algebra
\ifpub
{\[ \bbH=A\langle p,x\rangle/([p,x]=1) \]}
{$\bbH=A\langle p,x\rangle/([p,x]=1)$}
via
\[ y\to -tp-\eps\cdot xp^2,\qquad b\to t+\eps\cdot xp,\qquad a\to xp,\qquad x\to x, \]
(abstractly, $\sleps$ acts on its Verma module
$\calU(\sleps)/(\calU(\sleps)\langle y,a,b-\eps a-t\rangle)\cong\bbQ[x]$
by differential operators, namely via $\bbH$). So $QU$'s $R$-matrix
can be expanded in powers of $\eps$ and pushed to $\calU(\sleps)$ and on
to $\bbH$, resulting in $\calR=\calR_0(1+\eps \calR_1+\cdots)$, with
$\calR_0=\bbe^{t(xp\otimes 1-x\otimes p)}$ and $\calR_1$ a quartic
polynomial in $p$ and $x$. Now all the computations for $\rho_1$ can be carried out by pushing around a
rather small number of $p$'s and $x$'s (at most 4), and this can be done using the rules
\begin{align*}
(p\otimes 1)\calR_0 &= \calR_0(\bbe^t(p\otimes 1)+(1-\bbe^t)(1\otimes p)), \\
(1\otimes p)\calR_0 &= \calR_0(1\otimes p),
\end{align*}
which, after setting $T=\bbe^t$, must remind the reader of
Equation~\eqref{eq:A}. When all the dust settles, the resulting formulas
are similar to the ones in Equations~\eqref{eq:R1} and~\eqref{eq:rho1}
(but only similar, because we applied some ad hoc cosmetics to make the
formulas appear nicer).
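Both the Heisenberg-algebra representation and the commutation rules above can be checked mechanically. The following sketch (our illustration, not from the paper) realizes $p$ as $d/dx$ acting on polynomials and verifies the $\sleps$ relations (in the classical $\hbar\to 0$ limit, where the last relation reads $[x,y]=b+\eps a$), then verifies the first commutation rule for $\calR_0$ order by order in $t$:

```python
# Illustrative check (not from the paper): realize H = A<p,x>/([p,x]=1)
# by letting p act as d/dx and x as multiplication on polynomials.
import sympy as sp
from math import factorial

x, t, eps = sp.symbols('x t epsilon')

P = lambda f: sp.diff(f, x)                # p = d/dx
Y = lambda f: -t*P(f) - eps*x*P(P(f))      # y -> -t*p - eps*x*p^2
B = lambda f: t*f + eps*x*P(f)             # b -> t + eps*x*p
A = lambda f: x*P(f)                       # a -> x*p
X = lambda f: x*f                          # x -> x (multiplication)

def comm(F, G, f):                         # the commutator [F, G] applied to f
    return sp.expand(F(G(f)) - G(F(f)))

f = x**5 + 3*x**2 + 7                      # a generic test polynomial
assert comm(B, A, f) == 0                            # [b, a] = 0
assert comm(B, X, f) == sp.expand(eps*X(f))          # [b, x] = eps*x
assert comm(B, Y, f) == sp.expand(-eps*Y(f))         # [b, y] = -eps*y
assert comm(A, X, f) == sp.expand(X(f))              # [a, x] = x
assert comm(A, Y, f) == sp.expand(-Y(f))             # [a, y] = -y
assert comm(X, Y, f) == sp.expand(B(f) + eps*A(f))   # [x, y] = b + eps*a

# Next, (p(x)1) R0 = R0 (e^t (p(x)1) + (1 - e^t)(1(x)p)), where
# R0 = exp(t(xp(x)1 - x(x)p)) acts on polynomials in x1 (factor 1) and x2.
x1, x2 = sp.symbols('x1 x2')
N = 4                                      # truncation order in t

def A0(f):                                 # xp(x)1 - x(x)p as a diff. operator
    return x1*sp.diff(f, x1) - x1*sp.diff(f, x2)

def R0(f):                                 # exp(t*A0) f, up to order t^N
    total, g = f, f
    for k in range(1, N + 1):
        g = A0(g)
        total += t**k * g / factorial(k)
    return sp.expand(total)

def trunc(expr):                           # drop powers of t above t^N
    return sp.expand(expr).series(t, 0, N + 1).removeO()

et = sum(t**k / sp.Integer(factorial(k)) for k in range(N + 1))  # e^t, truncated
g = x1**2 * x2 + x1 * x2**3                # a test function of the two factors
lhs = sp.diff(R0(g), x1)
rhs = R0(et*sp.diff(g, x1) + (1 - et)*sp.diff(g, x2))
assert trunc(lhs - rhs) == 0
print("all checks passed")
```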
There are some morals to this story:
\begin{enumerate}
\item The definition of $\rho_1$ in Section~\ref{sec:Formulas} and the
proofs of its invariance in Section~\ref{sec:Proofs} are clearly much
simpler than the origin story, as outlined above. So quite clearly, we
still don't understand $\rho_1$. There ought to be room for it directly
within topology, requiring no knowledge of
quantum algebra. (And better if that room is large enough to accommodate
morals~(\ref{mor:rhok}) and~(\ref{mor:frakg}) below).
\item \label{mor:rhok}
Just as there is $\rho_1$, there are $\rho_k$. The origin story tells us that
$\rho_k$ should have a formula as a summation over choices of $k$-tuples
of features of the knot (crossings and rotations), just as the formula
for $\rho_1$ is a single summation over these features. The summand
for $\rho_k$ will be a degree $2k$ polynomial in the Green function
$g_{\alpha\beta}$ (compare with~\eqref{eq:R1}, which is quadratic). As a
$k$-fold summation, after inverting $A$, $\rho_k$ should be computable in
$O(n^k)$ additions and multiplications of polynomials in $T$, where $n$
is the crossing number.
\item These $\rho_k$ should be equivalent to the invariants in our earlier
works \cite{PP1, PG, Talk:NCSU-1604, Talk:Indiana-1611,
Talk:LesDiablerets-1708, Talk:Matemale-1804, Talk:Toronto-1811,
Talk:DaNang-1905, Talk:UCLA-191101}.
\item These $\rho_k$ should be equivalent to the invariants studied
earlier by Rozansky and Overbay \cite{Rozansky:TrivialFlatI,
Rozansky:Burau, Rozansky:U1RCC, Overbay:Thesis}, as their quantum
origin is essentially the same (though strictly speaking, we have not
written proofs of that, and normalizations may differ). Our formulas are
significantly simpler and faster to compute than the Rozansky-Overbay
formulas, and in our language it is easier to see the behaviour of
$\rho_1$ under mirror reflection (see Section~\ref{ssec:Demo}).
\item Like the Rozansky-Overbay invariants, $\rho_k$ should be equivalent to the ``higher diagonals'' of the
Melvin-Morton expansion (e.g.~\cite{Rozansky:U1RCC}) and should be dominated by the ``loop expansion'' of
the Kontsevich integral~\cite{Kricker:Lines, GaroufalidisRozansky:LoopExpansion}.
\item \label{mor:frakg} The quantum algebra story extends to other Lie
algebras, beyond $sl_2$. So there should be variants $\rho_k^\frakg$
of $\rho_k$ at least for every semisimple Lie algebra $\frakg$, given by
more or less similar formulas. Quantum algebra suggests that
$\rho_k^\frakg$ should be a polynomial in as many variables as the
rank of $\frakg$, and should in general be stronger than the ``base''
$\rho_k$. We have not seriously explored $\rho_k^\frakg$ yet, though
some preliminary work was done by Schaveling~\cite{Schaveling:Thesis}.
\item It appears that $QU$ has interesting traces and therefore that there should be a link version of
$\rho_1$. We have not pursued this formally.
\item $QU$ has a co-product and an antipode, and so the universal
tangle invariant associated with $QU$ has formulas for strand reversal
and strand doubling (e.g.~\cite{PG, Talk:DaNang-1905}). This implies
(e.g., by following the ideas of~\cite{Talk:Sydney-190916}) that there
should be formulas for $\rho_1$ that start with a Seifert surface for
the knot. We are pursuing such formulas now; we already know that the
degree of $\rho_1$ is bounded by $2g$, where $g$ is the genus of the knot~\cite{PG}.
\item For the same reasons, for ribbon knots $\rho_1$ should
have a formula computable from a ribbon presentation, and its
values might be restricted in a manner similar to the Fox-Milnor
condition~\cite{FoxMilnor:CobordismOfKnots}. We are pursuing this now.
\item The coloured Jones polynomial is invariant under mutation, so we
expect $\rho_1$ (and indeed, also $\rho_k$) to likewise be invariant under
mutation, yet we do not have a direct proof of that yet. Note that we can expect $\rho_k^\frakg$ for
higher-rank $\frakg$ to no longer be invariant under mutation.
\end{enumerate}