\section{Introduction}

\begin{center}\footnotesize
(The impatient reader may skip straight to Section~\ref{ssec:Statement}, ``Statement of the Main Theorem'')
\end{center}


\subsection{From Algebras to Invariants}

There is a standard ``quantum algebra'' methodology that associates
a framed knot invariant to certain triples $(U,R,C)$, where $U$ is
a unital algebra and $R\in U\otimes U$ and $C\in U$ are invertible
(see e.g.~\cite[Section~4.2]{Ohtsuki:QuantumInvariants}). 
For convenience, we recall this methodology in Aside~\ref{aside:URCMethod}.

\begin{Aside}\fbox{\begin{minipage}{0.9\linewidth}\sl
\parpic[r]{\parbox[t]{1.9in}{\begin{center}
  \input{figs/URCMethod.pdf_t}
  \newline
  $\displaystyle z(K) = \sum_{i,j,k} b_ia_jb_kCa_ib_ja_k$
\end{center}}}
Draw $K$ as a long knot in the plane so that at each crossing the two
crossing strands are pointing up, and so that the two ends of $K$ are
pointing up.

Put a copy of $R=\sum a_i\otimes b_i$ on every positive crossing of
$K$ with the ``$a$'' side on the over-strand and the ``$b$''
side on the under-strand, labeling these $a$'s and $b$'s with distinct
indices $i,j,k,\ldots$ (similarly put copies of $R^{-1}=\sum a'_i\otimes
b'_i$ on the negative crossings; these are absent in our example). Put
a copy of $C^{\pm 1}$ on every cuap where the tangent to the knot is
pointing to the right (meaning, a $C$ on every such cup and a $C^{-1}$
on every such cap).

\picskip{2}
Form an expression $z(K)$ in $U$ by multiplying
all the $a$, $b$, $C$ letters as they are seen when traveling along
$K$ and then summing over all the indices, as shown.

If $R$ and $C$ satisfy some conditions dictated by the standard
Reidemeister moves of knot theory, the resulting $z(K)$ is a knot
invariant.

Abstractly, $z(K)$ is obtained by tensoring together several copies of
$R^{\pm 1}\in U^{\otimes 2}$ and $C^{\pm 1}\in U$ to get an intermediate
result $z_0\in U^{\otimes S}$, where $S$ is a finite set with two
elements for each crossing of $K$ and one element for each right-pointing cuap. 
We then multiply the different tensor factors in $z_0$ in an order dictated by $K$
to get an output in a single copy of $U$.

\caption{The standard methodology on an example knot.} \label{aside:URCMethod}
\end{minipage}
}\end{Aside}
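To make the computational content of Aside~\ref{aside:URCMethod} concrete, here is a minimal sketch in computer code of its contraction, for the 3-crossing example knot shown there. The algebra $U$ is replaced by $d\times d$ matrices (as happens after choosing a $d$-dimensional representation), and the tensors below are random placeholders that do not satisfy the Reidemeister conditions, so the output is not a knot invariant; the point is only the shape of the sum, with one summation index per crossing:

```python
import numpy as np

# Sketch of the (U, R, C) contraction of the Aside, with U replaced by
# d-by-d matrices.  R = sum_i a_i (x) b_i is stored as two stacks of
# matrices A[i], B[i].  All entries are random placeholders, so z is
# NOT an invariant -- this only illustrates the contraction pattern.
rng = np.random.default_rng(0)
d, m = 3, 4                       # representation dimension, terms in R
A = rng.standard_normal((m, d, d))
B = rng.standard_normal((m, d, d))
C = rng.standard_normal((d, d))

def z_example(A, B, C):
    """z(K) = sum_{i,j,k} b_i a_j b_k C a_i b_j a_k, the expression in
    the Aside for its 3-crossing example knot, as a d-by-d matrix."""
    m, d, _ = A.shape
    z = np.zeros((d, d))
    # one summation index per crossing: m**3 summands here, and m**n
    # summands for an n-crossing knot -- exponential in n
    for i in range(m):
        for j in range(m):
            for k in range(m):
                z += B[i] @ A[j] @ B[k] @ C @ A[i] @ B[j] @ A[k]
    return z
```

The same contraction can equivalently be written as a single `np.einsum` call; reorganizing the order in which such contractions are performed is what the ``divide and conquer'' speedups amount to.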

The best algebras with which to apply this methodology, at least as
of 2017, are certain completions $\hatcalU(\frakg)$ of the universal
enveloping algebras $\calU(\frakg)$ of semi-simple Lie algebras $\frakg$
(or their quantizations). But these algebras are infinite dimensional,
and the sum in Aside~\ref{aside:URCMethod} is infinite and not
immediately computable.

The dogma solution is to pick a finite dimensional representation
of $\frakg$ and use it to represent all the elements appearing in
Aside~\ref{aside:URCMethod}, effectively replacing the algebra by the
algebra of endomorphisms of some finite dimensional vector space. This
makes the sum finite; yet if the knot $K$ has $n$ crossings, our
sum becomes a sum over $n$ indices $i_1,\ldots,i_n$. Thus there are
exponentially-many summands to consider and it takes an exponential amount
of time to compute $z(K)$, limiting its computation to relatively
small knots.\footnoteT{``Divide and conquer'' methods often improve the
computation time to $O(e^{c\sqrt{n}})$ for some constant $c$. Utilizing
this, the simplest of these ``quantum invariants'', the Jones, HOMFLY-PT
and Kauffman polynomials, corresponding to $sl_2$, $sl_N$ and $so_N$
in their defining representations, can be computed for surprisingly large
knots even though ultimately $e^{c\sqrt{n}}$ grows more rapidly than any
polynomial.}$^,$\footnoteT{Note that almost any time the phrases ``braided
monoidal category'' or ``TQFT'' are used within low dimensional topology,
some tensor powers of some vector spaces need to be considered
at some point, and dimensions grow exponentially. Thus our criticism
applies in these cases too~\cite{Talk:Dogma}.} In addition, by choosing
a specific representation of $\frakg$, one loses the good behaviour of
$z$ under strand-doubling. In Section~\ref{sec:ops} we explain why such
good behaviour is a desirable property.

Alternatively, one may extract finite-type~\cite{Bar-Natan:OnVassiliev,
ChmutovDuzhinMostovoy:Vassiliev}
information out of $z$ by reducing modulo appropriate filtrations of $U$
and its tensor powers. Invariants of type $d$ are computable in time
$O(n^d)$~\cite{Bar-Natan:PolyPoly}, and thus for small
$d$, they are effectively computable. But there are only a few invariants
of sufficiently small type $d$, they are not very powerful, and there
are some no-go theorems that limit the power of any finite number of
finite-type invariants to resolve certain topological questions~\cite{Ng:Ribbon,
Stoimenow:Unknotting}.

Our approach to the computation of $z(K)$ is different. Instead of
working directly in $U^{\otimes S}$ (see Aside~\ref{aside:URCMethod}),
we work in relatively small\footnoteT{Ranks grow polynomially in $|S|$.}
spaces $\calF(S)$ of ``closed-form formulas for elements of $U^{\otimes
S}$''. For this to work, we need to ensure that the fundamental elements $R$ and
$C$ can be described by ``closed-form formulas'', and that the most basic
operations necessary for the computation of $z$, namely multiplication
of factors in $U^{\otimes S}$, can be implemented ``in closed form''.

In practice, the kind of terms that appear within formulas for $R$ and $C$
are exponentials of the form $\bbe^{\xi x}$, where $x$ is a generator of
$U$ and $\xi$ is a formal scalar variable, their iterated derivatives
$(\partial_\xi)^k\bbe^{\xi x}=x^k\bbe^{\xi x}$, and exponentials of
quadratics like $\bbe^{\lambda xy}$ or $\bbe^{\lambda x\otimes y}$, with
scalar $\lambda$ and $x,y\in U$.  We then need to multiply several such
exponentials and differentiated exponentials, and we need to learn how
to bring such products into some pre-chosen ``canonical order''. In the
standard $U\sim\hatcalU(\frakg)$ case, where $\frakg$ is semi-simple,
this is complicated. Yet if $\frakg$ is solvable, this is often easy
(see Aside~\ref{aside:SvsSS}). Wouldn't it be nice if it were possible
to approximate semi-simple Lie algebras with solvable ones?

\begin{Aside}\fbox{\begin{minipage}{0.9\linewidth}\sl
Indeed, here's a reordering exercise that we will care about deeply later
in this paper. The semi-simple Lie algebra $sl_2$ is generated by elements $y$, $a$,
and $x$ with relations $[a,x]=2x$, $[a,y]=-2y$, and $[x,y]=a$.
If $\eta_i$, $\alpha_i$, and $\xi_i$ are scalars,
we can reorder $\bbe^{\eta_1 y}\bbe^{\alpha_1 a}\bbe^{\xi_1 x}
\bbe^{\eta_2 y}\bbe^{\alpha_2 a}\bbe^{\xi_2 x}$
to become $\bbe^{\eta_0 y}\bbe^{\alpha_0 a}\bbe^{\xi_0 x}$
where
\[
  (\eta_0,\alpha_0,\xi_0) = \left(
    \frac{e^{-2 \alpha _1} \eta _2}{\eta _2 \xi _1+1}+\eta _1,
    \alpha _1+\alpha _2+\log \left(\eta _2 \xi _1+1\right),
    \frac{\xi _1 \left(e^{-2 \alpha _2}+\eta _2 \xi _2\right)+\xi _2}{\eta _2 \xi _1+1}
  \right).
\]
In the solvable Lie algebra $sl_2^0$ obtained from $sl_2$ by adding a
central generator $c$ and replacing the last $sl_2$ relation with $[x,y]=c$
while keeping $[a,x]=2x$ and $[a,y]=-2y$ (thus separating the roles of $a$
as a ``number operator'' and as a ``Heisenberg-like commutator''), we have
$\bbe^{\eta_1 y}\bbe^{\alpha_1 a}\bbe^{\xi_1 x} \bbe^{\eta_2
y}\bbe^{\alpha_2 a}\bbe^{\xi_2 x} = \bbe^{\eta_0 y}\bbe^{\alpha_0
a}\bbe^{\xi_0 x}\bbe^{\gamma_0 c}$, where
\[
  (\eta_0,\alpha_0,\xi_0,\gamma_0) = \left(
    e^{-2 \alpha _1} \eta _2+\eta _1,
    \alpha _1+\alpha _2,
    e^{-2 \alpha _2} \xi _1+\xi _2,
    \eta _2 \xi _1
  \right).
\]
The $sl_2^0$ formulas are visibly simpler than the $sl_2$ formulas. What is
even more important is that the iterated derivatives of the $sl_2^0$ formulas
stay within a finite dimensional space of expressions. This is not the case
for the $sl_2$ formulas.

Notes. $\bullet$ The formulas within this Aside are
proven in Section~\ref{ssec:SvsSSProofs}. $\bullet$ Over
$\bbC$, $sl_2^0$ is isomorphic to the ``diamond Lie algebra''
of~\cite[Chapter~4.3]{Kirillov:OrbitMethod}, which is sometimes called
``the Nappi-Witten algebra''~\cite{NappiWitten:Nonsemisimple}.

\captionsetup{textfont=sf,width=0.9\linewidth}
\caption{Reordering differentiated exponentials is easier in the solvable
case than in the semi-simple case.} \label{aside:SvsSS}
\end{minipage}
}\end{Aside}
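The two reordering formulas in Aside~\ref{aside:SvsSS} (proven in Section~\ref{ssec:SvsSSProofs}) are easy to sanity-check numerically. The sketch below does so using the defining 2-dimensional representation of $sl_2$, and, for $sl_2^0$, the 3-dimensional representation $x=E_{12}$, $y=E_{23}$, $c=E_{13}$, $a=\operatorname{diag}(1,-1,1)$; this representation is our choice, made only for the purpose of this check. Since $x$ and $y$ act by matrices that square to zero, all exponentials have simple closed forms:

```python
import numpy as np

# Numerical check of the reordering formulas of the Aside, in matrix
# representations.  For sl_2 we use the defining 2-dim representation;
# for sl_2^0 we use the 3-dim representation x = E_12, y = E_23,
# c = E_13, a = diag(1,-1,1) (one checks [a,x] = 2x, [a,y] = -2y,
# [x,y] = c, and that c is central).  As x*x = y*y = 0, e^{t x} = 1 + t*x.

def triple_sl2(eta, alpha, xi):
    """e^{eta y} e^{alpha a} e^{xi x} in the 2-dim representation of sl_2."""
    return (np.array([[1.0, 0.0], [eta, 1.0]])
            @ np.diag([np.exp(alpha), np.exp(-alpha)])
            @ np.array([[1.0, xi], [0.0, 1.0]]))

def sl2_reorder(eta1, alpha1, xi1, eta2, alpha2, xi2):
    """The (eta0, alpha0, xi0) of the Aside's sl_2 formula."""
    D = eta2 * xi1 + 1
    return (np.exp(-2 * alpha1) * eta2 / D + eta1,
            alpha1 + alpha2 + np.log(D),
            (xi1 * (np.exp(-2 * alpha2) + eta2 * xi2) + xi2) / D)

def quad_sl20(eta, alpha, xi, gamma):
    """e^{eta y} e^{alpha a} e^{xi x} e^{gamma c} in the 3-dim rep of sl_2^0."""
    I = np.eye(3)
    x, y, c = np.zeros((3, 3)), np.zeros((3, 3)), np.zeros((3, 3))
    x[0, 1], y[1, 2], c[0, 2] = 1.0, 1.0, 1.0
    ea = np.diag(np.exp(alpha * np.array([1.0, -1.0, 1.0])))
    return (I + eta * y) @ ea @ (I + xi * x) @ (I + gamma * c)

def sl20_reorder(eta1, alpha1, xi1, eta2, alpha2, xi2):
    """The (eta0, alpha0, xi0, gamma0) of the Aside's sl_2^0 formula."""
    return (np.exp(-2 * alpha1) * eta2 + eta1,
            alpha1 + alpha2,
            np.exp(-2 * alpha2) * xi1 + xi2,
            eta2 * xi1)
```

For example, with $p=(\eta_1,\alpha_1,\xi_1)=(0.3,0.2,0.5)$ and $q=(\eta_2,\alpha_2,\xi_2)=(0.4,-0.1,0.7)$, the product `triple_sl2(*p) @ triple_sl2(*q)` agrees with `triple_sl2(*sl2_reorder(*p, *q))` to machine precision, and similarly in the $sl_2^0$ case.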

In this paper we exploit the little-known fact that this is (nearly)
possible. Precisely, given a semisimple $\frakg$, there exists a 
Lie algebra $\frakg^\epsilon$ defined over the ring $\bbQ[\epsilon]$ of
polynomials in a formal variable $\epsilon$ (in other words, $\frakg^\epsilon$ is
a ``one-parameter family of Lie algebras''), so that
\begin{enumerate}
\item If $\epsilon$ is fixed to be some constant not equal to zero, then
$\frakg^\epsilon$ is isomorphic to $\frakg^+\coloneqq\frakg\oplus\frakh$,
which is the original $\frakg$ with an additional copy of its own
(Abelian) Cartan subalgebra $\frakh$ added.
\item At $\epsilon=0$, $\frakg^0$ is solvable. Furthermore, $\frakg^\epsilon$ is
solvable in a formal neighborhood of $\epsilon=0$: for any natural number
$k\geq 0$ the reduction $\frakg^{\leq k}$ of $\frakg^\epsilon$ to the ring
$\bbQ[\epsilon]/(\epsilon^{k+1}=0)$ is solvable as a Lie algebra over
$\bbQ$ (whose dimension is $(k+1)\dim\frakg$).
\end{enumerate}

As $k$ gets larger, the solvable $\frakg^{\leq k}$ is closer and closer to
$\frakg^\epsilon$, as the reduction modulo $\epsilon^{k+1}=0$ means less and
less, and so at least informally,
$\frakg^{\leq k}\xrightarrow[k\to\infty]{}\frakg^+\sim\frakg$. See also
Aside~\ref{aside:ModuliOfLie}.

\begin{Aside}\fbox{\begin{minipage}{0.9\linewidth}\sl

Semisimple Lie algebras are famously ``rigid'' and allow no
deformations~\cite{Hanno:1219971} (within the universe of Lie algebras;
outside of it, there are ``quantum groups'', of course).  How is this
consistent with the existence of the family $\frakg^\epsilon$?

The short answer is that $\frakg^\epsilon$ is a deformation
of the solvable $\frakg^0$, not of $\frakg$. So $\frakg$ is a
deformation of $\frakg^0$, but $\frakg^0$ is not a deformation but a
contraction~\cite{InonuWigner:ContractionOfGroups, Gilmore:LieGroups}
of $\frakg$.

\parpic[r]{\input{figs/ModuliOfLie.pdf_t}}
It is perhaps a bit clearer to think in terms of the ``space of Lie
brackets''. Given a vector space $V$, a Lie bracket on $V$ is an element
$b\in V^\ast\otimes V^\ast\otimes V$ which satisfies a linear equation
(being anti-symmetric) and a quadratic equation (Jacobi; it is quadratic
as a function of $b$). So we can consider the variety $\calB(V)$ of all
Lie brackets on $V$. A very schematic depiction is on the right. Within
$\calB(V)$, Lie algebras isomorphic to some specific semisimple $\frakg^+$
make an open chamber $\calG$. Indeed that's the meaning of rigidity ---
when you move a tiny bit away from $\frakg^+$ what you see is isomorphic
to $\frakg^+$. Yet the closure $\bar\calG$ of $\calG$ contains other
Lie algebras, including $\frakg^0$.

\picskip{0}
Note that every cell in $\calB$ contains in its closure
the $0$ bracket, belonging to the
Abelian Lie algebra $\fraka$ on $V$. If a path $\frakg^\epsilon$ is
chosen as above but with $\frakg^0=\fraka$, then in the same sense as
above, $\frakg^\epsilon$ is nilpotent in a neighborhood of $\epsilon=0$. So in
the same sense as above, {\em every} Lie algebra can be approximated by
nilpotent Lie algebras. Why are we not exploiting this fact in this paper?
Because the knot invariants that arise from nilpotent approximation are
finite-type invariants, and with solvable approximation we do better.

\captionsetup{textfont=sf,width=0.9\linewidth}
\caption{How is this possible? The moduli of Lie algebras.}
\label{aside:ModuliOfLie}
\end{minipage}
}\end{Aside}

It remains to sketch why $\frakg^\epsilon$ exists. The short, precise, but
jargon-heavy answer is in the next paragraph. A jargon-free example, in
the case of $\frakg=gl_n$, is in Aside~\ref{aside:gln}.

Let $\frakg$ be a semisimple Lie algebra and let $\frakb^+$ and
$\frakb^-$ be its upper and lower Borel subalgebras, respectively.
Then $(\frakb^+)^\ast$ is $\frakb^-$, and as the latter has a Lie
bracket, it follows that $\frakb^+$ has a co-bracket $\delta$. In
fact, $\frakb^+$ along with its bracket $[\cdot,\cdot]$
and co-bracket $\delta$ is a ``Lie bialgebra'', and one may
recover $\frakg^+=\frakg\oplus\frakh=\frakb^-\oplus\frakb^+$
as the ``Drinfel'd double'' $\calD(\frakb^+,[\cdot,\cdot],\delta)$ of $\frakb^+$
(see e.g.~\cite[Chapter~4]{EtingofSchiffman:QuantumGroups}). By
a quick inspection, the axioms of a Lie bialgebra are homogeneous
in $\delta$, meaning that $(\frakb^+,[\cdot,\cdot],\epsilon\delta)$
is again a Lie bialgebra for any scalar $\epsilon$, and one may set
$\frakg^\epsilon\coloneqq\calD(\frakb^+,[\cdot,\cdot],\epsilon\delta)$. The
required properties are all easy to check. Perhaps the
most interesting is the solvability of $\frakg^0$: indeed
$\frakg^0=I\frakb^+\coloneqq(\frakb^+)^\ast\rtimes\frakb^+$
with $(\frakb^+)^\ast$ regarded as an Abelian Lie algebra and with
$\frakb^+$ acting on $(\frakb^+)^\ast$ by the co-adjoint action;
the solvability of $I\frakb^+$ then easily follows from the
solvability of $\frakb^+$. It is worth noting that the knot-theoretic
significance of $\frakb^\ast\rtimes\frakb$ for a general Lie algebra
$\frakb$ was studied extensively in the context of ``w-knots'' in
\cite{WKO1,WKO2,WKO3,WKO4,KBH}, and that these studies along with the
observations in this paragraph were in some sense the starting points
for our current study.

\begin{Aside}\fbox{\begin{minipage}{0.9\linewidth}\sl

The Lie algebra $gl_n^+$, namely $gl_n$ plus an additional $n$-dimensional
Abelian factor of ``diagonal matrices'', is the direct sum (as a vector
space) of two subalgebras of $gl_n$: the upper triangular matrices
$\uppertriang$ and the lower triangular matrices $\lowertriang$. With
some ambiguity regarding diagonal matrices, $gl_n^\epsilon$ is obtained
from $gl_n^+=\uppertriang\oplus\lowertriang$ by selectively multiplying
{\em some} of the structure constants of the latter by $\epsilon$.
In summary form, this is $[\uppertriang,\uppertriang]=\uppertriang$,
$[\lowertriang,\lowertriang]=\epsilon\lowertriang$, and
$[\uppertriang,\lowertriang]=\lowertriang+\epsilon\uppertriang$, which
stands for ``brackets within $\uppertriang$ are unchanged, brackets within
$\lowertriang$ are multiplied by $\epsilon$, and in a bracket of something
in $\uppertriang$ with something in $\lowertriang$, the part of the
output in $\lowertriang$ is unchanged and the part in $\uppertriang$
is multiplied by $\epsilon$''.

\parpic[r]{\input{figs/GLnUL.pdf_t}}
\picskip{8}
Even more concretely, in terms of generators and relations, we have that
$gl_n^+$ is generated by $\{x_{ij}, y_{ji}\colon 1\leq i<j\leq n\} \cup
\{a_i,b_i\colon 1\leq i\leq n\}$, with relations
\newline
$[x_{ij},x_{kl}] = \delta_{j=k}x_{il}-\delta_{l=i}x_{kj}$,
  \hfill$[y_{ij},y_{kl}] = \epsilon\delta_{j=k}y_{il}-\epsilon\delta_{l=i}y_{kj}$,
\newline
$[x_{ij},y_{kl}]  = 
  \delta_{j=k}(\epsilon\delta_{i<l}x_{il}+\delta_{i=l}(b_i+\epsilon a_i)/2+\delta_{i>l}y_{il})$
\newline\null\hfill
  $-\delta_{l=i}(\epsilon\delta_{k<j}x_{kj}+\delta_{k=j}(b_j+\epsilon a_j)/2+\delta_{k>j}y_{kj})$,
\newline
$[a_i,x_{jk}]  =  (\delta_{i=j}-\delta_{i=k})x_{jk}$,
  \hfill$[b_i,x_{jk}]  =  \epsilon(\delta_{i=j}-\delta_{i=k})x_{jk}$,
\newline
$[a_i,y_{jk}]  =  (\delta_{i=j}-\delta_{i=k})y_{jk}$,
  \hfill$[b_i,y_{jk}]  =  \epsilon(\delta_{i=j}-\delta_{i=k})y_{jk}$,

where $\delta_{\text{cond}}$ is $1$ if cond is true and is $0$ otherwise.
As matrices, $x_{ij}$ is the upper triangular matrix with $1$ in position
$ij$ and $0$ elsewhere, $y_{ji}$ is the lower triangular matrix with $1$ at
$ji$ and $0$ elsewhere, and $a_i$ and $b_i$ are both diagonal with $1$ at
$ii$ and $0$ elsewhere, except $a_i$ is regarded as an upper triangular
matrix and $b_i$ as a lower triangular matrix.

\captionsetup{textfont=sf,width=0.9\linewidth}
\caption{A solvable approximation of $gl_n$.}
\label{aside:gln}
\end{minipage}
}\end{Aside}
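To connect Aside~\ref{aside:gln} with Aside~\ref{aside:SvsSS}, it may be worth specializing to $n=2$. Setting $x\coloneqq x_{12}$, $y\coloneqq y_{21}$, $a\coloneqq a_1-a_2$, and $b\coloneqq b_1-b_2$ (a choice of names we make here only for illustration), the relations of the Aside reduce to
\[
  [a,x]=2x,\qquad [a,y]=-2y,\qquad [b,x]=2\epsilon x,\qquad
  [b,y]=-2\epsilon y,\qquad [x,y]=\tfrac{1}{2}(b+\epsilon a).
\]
At $\epsilon=0$ the element $b$ becomes central and $c\coloneqq b/2$ satisfies $[x,y]=c$, recovering the $sl_2^0$ of Aside~\ref{aside:SvsSS}. For a fixed $\epsilon\neq 0$, the elements $x$, $y/\epsilon$, and $(b+\epsilon a)/(2\epsilon)$ span a copy of $sl_2$, while $b-\epsilon a$, $a_1+a_2$, and $b_1+b_2$ are central, in line with the isomorphism $\frakg^\epsilon\cong\frakg^+$.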

We would have liked our own story a lot better if it had ended here,
namely if for any semi-simple $\frakg$ and any $k\geq 0$ we knew how
to construct $R$ and $C$ in ``small'' spaces $\calF(S)$ of formulas for
elements of $\hat\calU(\frakg^{\leq k})^{\otimes S}$, and if we knew how
to efficiently ``multiply'' in $\calF(S)$. This in fact is almost true:
the only thing we miss are explicit formulas for $R$ and $C$. In order
to obtain such formulas we first have to replace $\frakg^\epsilon$ by
its ``quantized'' version $\frakg^\epsilon_q$, which is obtained from
$(\frakb^+,[\cdot,\cdot],\epsilon\delta)$ using Drinfel'd's ``quantum double'' construction. Very little of substance
changes in the formulas associated with $\frakg^\epsilon_q$ as opposed to $\frakg^\epsilon$; they are just a bit
uglier.

In addition, we have so far worked out in detail only the case of $\frakg=sl_2$. Almost everything seems to generalize
to arbitrary semi-simple $\frakg$, and we hope to return to the more general case in a later publication.

\subsection{Statement of the Main Theorem} \label{ssec:Statement}

{\red MORE.}

\subsection{Section Summaries and Dependencies}

{\red MORE.}

\subsection{Acknowledgement}

{\red MORE.}
