\draftcut
\section{w-Tangles} \label{sec:w-tangles}
\begin{quote} \small {\bf Section Summary. }
\summarytangles
\end{quote}
\subsection{v-Tangles and w-Tangles} \label{subsec:vw-tangles} With
the task of defining circuit algebras completed
in Section~\ref{subsec:CircuitAlgebras}, the definition of v-tangles
and w-tangles is simple.
\begin{definition}\label{def:vtanglediagrams}
The ($\calS$-graded) circuit algebra $\vD$ of v-tangle diagrams is
the $\calS$-graded directed circuit algebra freely generated by
two generators in $C_{2,2}$ called the {\em positive crossing},
$\tensor*[_1^4]{\text{\large$\overcrossing$}}{_2^3}$,
and the {\em negative crossing},
$\tensor*[_1^4]{\text{\large$\undercrossing$}}{_2^3}$. As much
as possible, we suppress the leg-numbering below; with this in
mind, $\vD:=$\raisebox{-1.5mm}{\input{figs/vDDef.pstex_t}}.
The skeleton of both crossings is the element
$\tensor*[_1^4]{\text{\large$\virtualcrossing$}}{_2^3}$
(the pairing of 1\&3 and 2\&4) in $\calS_{2,2}$. That is,
$\varsigma(\overcrossing)=\varsigma(\undercrossing)=\virtualcrossing$.
\end{definition}
\begin{figure}[b]
\input figs/wTangleExample.pstex_t
\caption{$V \in \vD_{3,3}$ is a $v$-tangle diagram. $V$ is the result of applying the circuit algebra operation
$D: C_{2,2} \times C_{2,2} \times C_{2,2} \to C_{3,3}$,
given by the wiring diagram shown, acting on two negative crossings and one positive crossing. In other words $V=D(\undercrossing, \undercrossing, \overcrossing)$.
The skeleton of $V$ is given by $\varsigma(V)=D(\virtualcrossing, \virtualcrossing, \virtualcrossing)$, which is equal in $\calS$ to the diagram shown here.
Note that we usually suppress the circuit algebra numbering of boundary points. Note also that the apparent ``virtual crossings'' of $V$ are not
virtual crossings but merely part of the circuit algebra structure; see Warning~\ref{warn:virtualxings}. The same is true for the
crossings appearing in the skeleton $\varsigma(V)$.}
\label{fig:wTangleExample}
\end{figure}
\begin{example}\label{ex:vDiag}
An example of a v-tangle diagram $V$ is shown on the left side of Figure~\ref{fig:wTangleExample}. $V$ is a circuit algebra composition of two negative crossings
and one positive crossing by the wiring diagram $D$, as shown. The right side of the same figure shows the skeleton $\varsigma(V)$ of $V$:
to produce the skeleton, replace each crossing by the element
$\virtualcrossing$ in $\calS$ and apply the same wiring diagram. The elements of $\calS$ are oriented 1-manifolds with numbered boundary points,
and hence the result is equal to the one shown in the figure.
\end{example}
\begin{warning}\label{warn:virtualxings}
People familiar with the planar presentation of virtual tangles may be accustomed to
the notion of there being another type of crossing: the ``virtual crossing''. The main point
of introducing circuit algebras (as opposed to working with planar algebras) is to eliminate the
need for virtual crossings: they become part of the CA structure. This greatly simplifies the presentation of both $v$- and $w$-tangles:
there is one less generator, as seen above, and far fewer relations, as we explain in Remark~\ref{rmk:VirtualXings}.
\end{warning}
\begin{definition}\label{def:vw-tangles} The ($\calS$-graded) circuit algebra $\glos{\vT}$ of
v-tangles is the $\calS$-graded directed circuit algebra of v-tangle diagrams $\vD$,
modulo the \Rs, R2 and R3 moves as depicted in
Figure~\ref{fig:VKnotRels}. These relations make sense as circuit
algebra relations between the two generators, and preserve skeleta.
To obtain the circuit algebra $\wT$ of $w$-tangles we also mod out by the OC relation of Figure~\ref{fig:VKnotRels}
(note that each side in that relation involves only two generators,
with the apparent third ``virtual'' crossing being merely a circuit algebra artifact).
In fewer words, $\vT:=$\raisebox{-1.5mm}{\input{figs/vTDef.pstex_t}}, and
$\glos{\wT}:=$\raisebox{-1.8mm}{\input{figs/wTDef.pstex_t}}.
\end{definition}
\begin{figure}[t]
\input figs/VKnotRels.pstex_t
\caption{The relations (``Reidemeister moves'') \Rs, R2 and R3 define $v$-tangles; adding OC to these defines $w$-tangles. VR1, VR2, VR3 and M
are not necessary, as the circuit algebra presentation eliminates the need for ``virtual crossings'' as generators. R1 is not imposed for
framing reasons, and not imposing UC breaks the symmetry between over- and under-crossings in $\wT$.}
\label{fig:VKnotRels}
\end{figure}
\begin{remark}\label{rmk:VirtualXings}
One may also define v-tangles and w-tangles using the
language of planar algebras, except then another generator is required
(the ``virtual crossing'') and also a number of further relations shown in Figure \ref{fig:VKnotRels} (VR1--VR3,
M), and some of the operations (non-planar wirings) become less
elegant to define.
In our context ``virtual crossings'' are automatically
present (but unimportant) as part of the circuit algebra structure, and the ``virtual Reidemeister moves'' VR1--VR3 and M are
also automatically true. In fact, the ``rerouting move'' known in the planar presentation, which says that a purely virtual strand
of a $v$-tangle diagram can be re-routed in any other purely virtual way, is precisely the statement that virtual crossings are
unimportant, and the language of circuit algebras makes this fact manifest.
\end{remark}
\begin{remark}\label{rmk:skeleta}
For $S\in \calS$ a given skeleton, that is, an oriented 1-manifold with numbered ends, let us denote by
$\vT(S)$ and $\wT(S)$, respectively, the $v$- and $w$-tangles with skeleton $S$.
That is, $\vT(S)$ and $\wT(S)$ are the pre-images of $S$ under the skeleton map $\varsigma$.
Note that in our case the skeleton map
``forgets the topology''; in other words, it forgets the under/over information of crossings, leaving only empty circuits.
With this notation, $\wT(\uparrow)$, the set of w-tangles whose skeleton is a single line, is exactly
the set of (long) w-knots discussed in \cite[Section 3]{Bar-NatanDancso:WKO1}. Note also that $\wT(\uparrow_n)$, the set of w-tangles whose skeleton
is $n$ lines, includes w-braids with $n$ strands (\cite[Section 2]{Bar-NatanDancso:WKO1})
but it is more general. Neither w-knots nor w-braids are circuit algebras.
\end{remark}
\begin{remark}\label{rmk:Framing}
Since we do not mod out by the R1 relation, only by its weak (or
``spun'') version \Rs, it is more appropriate to call our class of
$v$/$w$-tangles {\em framed} $v$/$w$-tangles. (Recall that framed u-tangles
are characterized as the planar algebra generated by the positive and
negative crossings modulo the \Rs, R2 and R3 relations.) However, since
we are for the most part interested in studying the framed theories
(cf. Comment \ref{com:wTFFraming}), we will reserve the unqualified name
for the framed case, and will explicitly write ``unframed v/w-tangles''
if we wish to mod out by R1. For a more detailed explanation of framings
and R1 moves, see \cite[Remark~\ref{1-rem:Framing}]{Bar-NatanDancso:WKO1}.
\end{remark}
Our next task is to study the associated graded structures $\grad\vT$ and $\grad\wT$
of $\vT$ and $\wT$. These are ``arrow diagram spaces on tangle skeletons'':
directed analogues of the chord diagram spaces of ordinary finite type invariant theory,
and even more similar to the arrow diagram spaces for braids and knots discussed
in \cite{Bar-NatanDancso:WKO1}. Our convention for figures will be to show skeletons
as thick lines with thin arrows (directed chords).
Again, the language of circuit algebras makes defining these spaces
exceedingly simple.
\parpic[r]{\raisebox{-8mm}{$\pstex{arrows}$}}
\begin{definition} The ($\calS$-graded) circuit algebra
$\glos{\calD^v}=\glos{\calD^w}$ of arrow diagrams is the graded and
$\calS$-graded directed circuit algebra generated by a single degree
1 generator $a$ in $C_{2,2}$ called ``the arrow'' as shown on the
right, with the obvious meaning for its skeleton. There are morphisms
$\pi\colon \calD^v\to\vT$ and $\pi\colon \calD^w\to\wT$ defined by
mapping the arrow to an overcrossing minus a no-crossing. (On the
right some virtual crossings were added to make the skeleta match). Let
$\glos{\calA^v}$ be $\calD^v/6T$, let
$\glos{\calA^w}:=\calA^v/TC=\calD^w/(\aft,TC)$, and let
$\glos{\calA^{sv}}:=\calA^v/RI$ and $\glos{\calA^{sw}}:=\calA^w/RI$, with RI, $6T$, $\aft$, and $TC$ being the relations shown
in Figures~\ref{fig:ADand6T} and~\ref{fig:TCand4T}. Note that the pair of relations $(\aft, TC)$ is
equivalent to the pair $(6T,TC)$,
as discussed in \cite[Section 2.3.1]{Bar-NatanDancso:WKO1}.
\end{definition}
\begin{figure}
\input{figs/ADand6T.pstex_t}
\caption{Relations for v-arrow diagrams on tangle skeletons.
Skeleton parts that are not connected may lie on separate
skeleton components, and the dotted arrow that remains in the same position means
``all other arrows remain the same throughout''.}
\label{fig:ADand6T}
\end{figure}
\begin{figure}
\input{figs/TCand4T.pstex_t}
\caption{Relations for w-arrow diagrams on tangle skeletons.}
\label{fig:TCand4T}
\end{figure}
\begin{proposition} The maps $\pi$ above induce surjections
$\pi\colon \calA^{sv}\to\grad\vT$ and $\pi\colon
\calA^{sw}\to\grad\wT$. Hence in the language of
Definition~\ref{def:CanGrad}, $\calA^{sv}$ and $\calA^{sw}$ are candidate
associated graded structures of $\vT$ and $\wT$.
\end{proposition}
\begin{proof} Proving that $\pi$ is well-defined amounts to checking
directly that the RI and 6T relations (respectively the RI, $\aft$ and TC relations) are in the
kernel of $\pi$. (Just as in the finite type theory of virtual knots and
braids.) Thanks to the circuit algebra structure, it is enough to verify
the surjectivity of $\pi$ in degree 1. We leave this as an exercise for
the reader. \qed
\end{proof}
We do not know if $\calA^{sv}$ is indeed the associated graded of $\vT$ (also
see~\cite{Bar-NatanHalachevaLeungRoukema:v-Dims}). Yet in the w case, the
picture is simple:
\begin{theorem}\label{thm:ExpansionForTangles} The assignment $\overcrossing\mapsto e^a$ (with $e^a$
denoting the exponential of a single arrow from the over strand to the
under strand, interpreted via its power series) extends to a well-defined map $Z\colon \wT\to\calA^{sw}$. The
resulting $Z$ is a homomorphic $\calA^{sw}$-expansion; in particular,
$\calA^{sw}\cong\grad\wT$.
\end{theorem}
\begin{proof} The proof is essentially the same as the proof of
\cite[Theorem~\ref{1-thm:RInvariance}]{Bar-NatanDancso:WKO1},
and follows \cite{BerceanuPapadima:BraidPermutation,
AlekseevTorossian:KashiwaraVergne}.
One needs to check that $Z$ satisfies the
Reidemeister moves and the OC relation. \Rs~follows easily from RI, R2 is obvious, and TC implies OC.
For R3, let $\calA^{sw}(\uparrow_n)$ denote the space of ``arrow diagrams on $n$ vertical strands''.
We need to verify that $R:=e^a\in \calA^{sw}(\uparrow_2)$ satisfies the Yang-Baxter equation
$$R^{12}R^{13}R^{23}=R^{23}R^{13}R^{12}, \quad \text{ in } \calA^{sw}(\up_3),$$
where $R^{ij}=e^{a_{ij}}$ means ``place R on strands $i$ and $j$''.
By the $\aft$ and TC relations, both sides of the equation can be reduced to $e^{a_{12}+a_{13}+a_{23}}$,
proving the Reidemeister invariance of $Z$.
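In more detail, one way to carry out this reduction is as follows: by TC, $[a_{12},a_{13}]=0$ (both tails lie on strand 1), and by $\aft$, $[a_{12}+a_{13},a_{23}]=0$, so
\[ R^{12}R^{13}R^{23}=e^{a_{12}}e^{a_{13}}e^{a_{23}}=e^{a_{12}+a_{13}}e^{a_{23}}=e^{a_{12}+a_{13}+a_{23}}, \]
and symmetrically $R^{23}R^{13}R^{12}=e^{a_{23}}e^{a_{13}+a_{12}}=e^{a_{12}+a_{13}+a_{23}}$.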
$Z$ is by definition a circuit algebra homomorphism.
Hence to show that $Z$ is an $\calA^{sw}$-expansion we only need to check the
universality property in degree one, where it is very easy.
The rest follows from Proposition~\ref{prop:CanGrad}. \qed
\end{proof}
\begin{remark}
Note that the restriction of $Z$ to w-knots and w-braids (in the sense of Remark~\ref{rmk:skeleta})
recovers the expansions constructed in \cite{Bar-NatanDancso:WKO1}. Note also that the
filtration and associated graded structure for w-braids fits into the general algebraic
framework of Section~\ref{sec:generalities} by applying the machinery to the skeleton-graded group
of w-braids instead of to the circuit algebra of w-tangles. (The skeleton of a w-braid is the permutation it represents.)
However, as w-knots do not form a finitely
presented algebraic structure in the sense of Section~\ref{sec:generalities}, the ``finite type'' filtration
used in \cite{Bar-NatanDancso:WKO1} does not arise as powers of any augmentation ideal.
This captures the reason why w-knots are ``the wrong objects to study'', as we have
mentioned at the beginning of Section 3 of \cite{Bar-NatanDancso:WKO1}.
\end{remark}
In a similar spirit to
\cite[Definition~\ref{1-def:wJac}]{Bar-NatanDancso:WKO1}, one may define a
``w-Jacobi diagram'' on an arbitrary skeleton:
\begin{definition}\label{def:wJac} A ``w-Jacobi diagram on a tangle
skeleton''\footnote{We usually shorten this to
``w-Jacobi diagram'', or sometimes ``arrow diagram'' or just ``diagram''.}
is a graph made of the following ingredients:
\begin{itemize}
\item An oriented ``skeleton'' consisting of long lines and circles
(i.e., an oriented one-manifold). In figures we draw the skeleton
lines thicker.
\item Other directed edges, usually called ``arrows''.
\item Trivalent ``skeleton vertices'' in which an arrow starts or ends on
the skeleton line.
\item Trivalent ``internal vertices'' in which two arrows end and one arrow
begins. The internal vertices are cyclically oriented; in figures the assumed
orientation is always counterclockwise unless marked otherwise. Furthermore,
all trivalent vertices must be connected to the skeleton via arrows (but not necessarily
following the direction of the arrows).
\end{itemize}
\end{definition}
Note that we allow multiple and loop arrow edges, as long as trivalence and the
two-in-one-out rule are respected.
Formal linear combinations of (w-Jacobi) arrow diagrams form a circuit algebra.
We denote by $\calA^{wt}$ the quotient of the circuit algebra of arrow diagrams modulo the $\aSTU_1$, $\aSTU_2$ relations of
Figure \ref{fig:STU}, and the TC relation. We denote $\calA^{wt}$ modulo the RI relation by $\calA^{swt}$. We then
have the following ``bracket-rise'' theorem:
\begin{figure}
\input{figs/aSTU.pstex_t}
\caption{The $\protect\aSTU$ relations for arrow diagrams, with their ``central edges'' marked $e$ for easier memorization.}
\label{fig:STU}
\end{figure}
\begin{figure}
\input{figs/aIHX.pstex_t}
\caption{The $\protect\aAS$ and $\protect\aIHX$ relations.}
\label{fig:aIHX}
\end{figure}
\begin{theorem}\label{thm:BracketRise} The obvious inclusion of arrow diagrams (with no internal vertices) into
w-Jacobi diagrams descends to a map
$\bar{\iota}: \calA^w\to\calA^{wt}$, which is a circuit
algebra isomorphism. Furthermore, the $\aAS$
and $\aIHX$ relations of Figure~\ref{fig:aIHX} hold in $\calA^{wt}$.
Consequently, it is also true that $\calA^{sw}\cong\calA^{swt}$.
\end{theorem}
\begin{proof} In the proof of
\cite[Theorem~\ref{1-thm:BracketRise}]{Bar-NatanDancso:WKO1} we showed
this for long w-knots (i.e., tangles whose skeleton is a single long
line). That proof applies here verbatim, noting that it does not make
use of the connectivity of the skeleton.
In short, to check that $\bar{\iota}$ is well-defined, we need to show that the $\aSTU$ relations
imply the $\aft$ relation. This is shown in Figure \ref{fig:STU4T}. To show that $\bar{\iota}$ is an isomorphism,
we construct an inverse $\calA^{wt} \to \calA^w$, which ``eliminates all internal vertices'' using
a sequence of $\aSTU$ relations. Checking that this is well-defined requires some case analysis;
the fact that it is an inverse to $\bar{\iota}$ is obvious. Verifying that the $\aAS$
and $\aIHX$ relations hold in $\calA^{wt}$ is an easy exercise.
\begin{figure}
\input{figs/STUto4T.pstex_t}
\caption{Applying $\protect\aSTU_1$ and $\protect\aSTU_2$ to the diagram on the left, we get the two sides of $\protect\aft$.}
\label{fig:STU4T}
\end{figure}
\qed
\end{proof}
Given the above theorem, we no longer keep the distinction between
$\calA^w$ and $\calA^{wt}$ and between $\calA^{sw}$ and $\calA^{swt}$.
We recall from \cite{Bar-NatanDancso:WKO1} that a ``$k$-wheel'', sometimes denoted $w_k$, is
an arrow diagram consisting of an oriented cycle of arrows with $k$
incoming ``spokes'', the tails of which rest on the skeleton. An example
is shown in Figure \ref{fig:wheels}. In this language, the RI relation
can be rephrased using the $\aSTU$ relation to say that all one-wheels are 0,
or $w_1=0$.
\begin{figure}
\input{figs/wkl.pstex_t}
\caption{A $4$-wheel and the RI relation re-phrased.}
\label{fig:wheels}
\end{figure}
\begin{remark} \label{rem:HeadInvariance}
Note that if $T$ is an arbitrary $w$ tangle, then the equality on the
left side of the figure below always holds, while the one on the right
generally doesn't:
\begin{equation} \label{eq:TangleLassoMove}
\begin{array}{c}\input{figs/TangleLassoMove.pstex_t}\end{array}
\end{equation}
The
arrow diagram version of this statement is that if $D$ is an arbitrary
arrow diagram in $\calA^w$, then the left side equality in the
figure below always holds (we will sometimes refer to this as the
``head-invariance'' of arrow diagrams), while the right side equality
(``tail-invariance'') generally fails.
\begin{equation} \label{eq:HeadInvariance}
\begin{array}{c}\input{figs/HeadInvariance.pstex_t}\end{array}
\end{equation}
We leave it to the reader to ascertain that
Equation~\eqref{eq:TangleLassoMove} implies
Equation~\eqref{eq:HeadInvariance}. There is also a direct
proof of Equation~\eqref{eq:HeadInvariance} which we also leave
to the reader, though see an analogous statement and proof in
\cite[Lemma~3.4]{Bar-Natan:NAT}. Finally note that a restricted version of
tail-invariance does hold --- see Section~\ref{subsec:sder}.
\end{remark}
\draftcut
\subsection{$\calA^w(\uparrow_n)$ and the Alekseev-Torossian Spaces}
\label{subsec:ATSpaces}
\begin{definition} Let $\glos{\calA^v(\uparrow_n)}$ be the part of $\calA^v$ in
which the skeleton is the disjoint union of $n$ directed lines,
with similar definitions for $\glos{\calA^w(\uparrow_n)}$,
$\glos{\calA^{sv}(\uparrow_n)}$, and $\glos{\calA^{sw}(\uparrow_n)}$.
\end{definition}
\begin{theorem}
{\em (Diagrammatic PBW Theorem.)} Let $\glos{\calB^w_n}$ denote the space of uni-trivalent diagrams\footnote{
Oriented graphs with vertex degrees either 1 or 3, where trivalent vertices must have two edges incoming and
one edge outgoing and are cyclically oriented.}
with symmetrized ends coloured with colours in some $n$-element set
(say $\{x_1,\ldots,x_n\}$), modulo the $\aAS$ and $\aIHX$ relations of Figure \ref{fig:aIHX}.
Then there is an isomorphism $\calA^w(\uparrow_n)\cong \calB^w_n$.
\end{theorem}
{\it Proof sketch.} Readers familiar with the diagrammatic PBW theorem \cite[Theorem 8]{Bar-Natan:OnVassiliev}
will note that the proof carries through almost verbatim. There is a map $\chi: \calB^w_n \to \calA^w(\uparrow_n)$,
which sends each uni-trivalent diagram to the average of all ways of attaching its univalent ends
to the skeleton of $n$ lines, so that ends of colour $x_i$ are attached to the strand numbered $i$.
I.e., a diagram with $k_i$ uni-valent vertices of colour $x_i$ is sent to a sum of $\prod_i k_i!$ terms,
divided by $\prod_i k_i!$.
The goal is to show that $\chi$ is an isomorphism by constructing an inverse for it. The image of
$\chi$ consists of {\em symmetric} sums of diagrams, that is, sums of diagrams that are invariant under
permuting the arrow endings on the same skeleton component.
One can show that in fact any arrow diagram $D$ in $\calA^w(\uparrow_n)$ is equivalent via $\aSTU$ and $TC$ relations
to a symmetric sum. The obvious candidate is its ``symmetrization'' $Sym(D)$: the average of all ways of permuting the
arrow endings on each skeleton component of $D$. It is not true that each diagram is equivalent to its symmetrization
(hence, the ``simply delete the skeleton'' map is not an inverse for $\chi$),
but it is true that $D-Sym(D)$ has fewer skeleton vertices (lower degree) than $D$, hence we can
construct $\chi^{-1}$ inductively.
The fact that this inductive procedure is well-defined requires a proof; that proof is essentially the same as the proof
of the corresponding fact in \cite[Theorem 8]{Bar-Natan:OnVassiliev}.
\qed
Both $\calA^w(\uparrow_n)$ and $\calB^w_n$ have a natural bi-algebra structure. In $\calA^w(\uparrow_n)$
multiplication is given by stacking. For a diagram $D\in \calA^w(\uparrow_n)$, the co-product $\glos{\Delta}(D)$
is given by the sum of all ways of dividing $D$ between a ``left co-factor'' and a ``right co-factor'' so
that the connected components of $D-S$ are kept intact, where $S$ is the skeleton of $D$. In $\calB^w_n$
multiplication is given by disjoint union, and $\Delta$ is the sum of all ways of dividing the connected
components of a diagram between two co-factors (here there is no skeleton).
Note that the isomorphism $\chi$ above is a co-algebra isomorphism, but not an algebra homomorphism.
The primitives $\glos{\calP^w_n}$ of $\calB^w_n$ are the
connected diagrams (and hence the primitives of $\calA^w(\uparrow_n)$
are the diagrams that remain connected even when the skeleton is
removed). Given the ``two in one out'' rule for internal vertices,
the diagrams in $\calP^w_n$ can only be trees (diagrams with no cycles) or wheels (a
single oriented cycle with a number of ``spokes'', or leaves, attached to it). ``Wheels of
trees'' can be reduced to simple wheels by repeatedly using $\aIHX$,
as in Figure~\ref{fig:WheelOfTreesAndPrince}.
\begin{figure}
\input{figs/WheelOfTrees.pstex_t}
\caption{A wheel of trees can be reduced to a combination of wheels, and a wheel of trees with
a Little Prince.}\label{fig:WheelOfTreesAndPrince}
\end{figure}
Thus as a vector space $\calP^w_n$ is easy to identify. It is a direct sum
$\calP^w_n=\langle\text{trees}\rangle\oplus\langle\text{wheels}\rangle$.
The wheels part is simply the graded vector space generated by
all cyclic words in the letters $x_1,\ldots,x_n$. Alekseev and
Torossian~\cite{AlekseevTorossian:KashiwaraVergne} denote the
space of cyclic words by $\glos{\attr_n}$, and so shall we. The trees in
$\calP^w_n$ have leaves coloured $x_1,\ldots,x_n$. Modulo $\aAS$ and
$\aIHX$, they correspond to elements of the free Lie algebra $\glos{\lie_n}$
on the generators $x_1,\ldots,x_n$. But the root of each such tree
also carries a label in $\{x_1,\ldots,x_n\}$, hence there are $n$
types of such trees as separated by their roots, and so $\calP^w_n$
is isomorphic to the direct sum $\attr_n\oplus\bigoplus_{i=1}^n\lie_n$.
Note that with $\calB_n^{sw}$ and $\calP_n^{sw}$ defined in the analogous manner
(i.e., factoring out by one-wheels, as in the RI relation),
we can also conclude that
$\calP^{sw}_n\cong\attr_n/(\text{deg }1)\oplus\bigoplus_{i=1}^n\lie_n$.
By the Milnor-Moore theorem~\cite{MilnorMoore:Hopf}, $\calA^w(\uparrow_n)$
is isomorphic to the universal enveloping algebra $\calU(\calP^w_n)$,
with $\calP^w_n$ identified as the subspace $\glos{\calP^w(\uparrow_n)}$
of primitives of $\calA^w(\uparrow_n)$ using the PBW symmetrization
map $\chi\colon \calB^w_n\to\calA^w(\uparrow_n)$. Thus in order to
understand $\calA^w(\uparrow_n)$ as an associative algebra, it is enough
to understand the Lie algebra structure induced on $\calP^w_n$ via the
commutator bracket of $\calA^w(\uparrow_n)$.
Our goal is to identify $\calP^w(\uparrow_n)$ as the Lie algebra
$\attr_n\rtimes(\fraka_n\oplus\tder_n)$,
which in itself is a combination of the Lie algebras
$\fraka_n$, $\tder_n$ and $\attr_n$ studied by Alekseev and
Torossian~\cite{AlekseevTorossian:KashiwaraVergne}. Here are the relevant
definitions:
\begin{definition} Let $\glos{\fraka_n}$ denote the vector space with basis
$x_1,\ldots,x_n$, also regarded as an Abelian Lie algebra of dimension $n$.
As before, let $\lie_n=\lie(\fraka_n)$ denote the free Lie algebra on $n$
generators, now identified as the basis elements of $\fraka_n$. Let
$\glos{\der_n}=\der(\lie_n)$ be the (graded) Lie algebra of derivations
acting on $\lie_n$, and let
\[ \glos{\tder_n}=\left\{D\in\der_n\colon \forall i\ \exists a_i\text{ s.t.{}
}D(x_i)=[x_i,a_i]\right\}
\]
denote the subalgebra of ``tangential derivations''. A tangential
derivation $D$ is determined by the $a_i$'s for which $D(x_i)=[x_i,a_i]$,
and determines them up to the ambiguity $a_i\mapsto a_i+\alpha_ix_i$, where
the $\alpha_i$'s are scalars. Thus as vector spaces,
$\fraka_n\oplus\tder_n\cong\bigoplus_{i=1}^n\lie_n$.
\end{definition}
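For a concrete example, take $n=2$ and let $D\in\tder_2$ be determined by
\[ D(x_1)=[x_1,x_2], \qquad D(x_2)=0. \]
Then $D$ is represented by the pair $(a_1,a_2)=(x_2,0)$, and equally well by $(x_2+\alpha_1x_1,\ \alpha_2x_2)$ for any scalars $\alpha_1,\alpha_2$, in line with the ambiguity just described.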
\begin{definition} Let $\glos{\Ass_n}=\calU(\lie_n)$ be the free associative
algebra ``of words'', and let $\glos{\Ass_n^+}$ be the degree $>0$ part of
$\Ass_n$. As before, we let $\attr_n=\Ass^+_n/(x_{i_1}x_{i_2}\cdots
x_{i_m}=x_{i_2}\cdots x_{i_m}x_{i_1})$ denote ``cyclic words'' or
``(coloured) wheels''. $\Ass_n$, $\Ass_n^+$, and $\attr_n$ are
$\tder_n$-modules and there is an obvious equivariant ``trace''
$\tr\colon \Ass^+_n\to\attr_n$.
\end{definition}
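For instance, with $n=2$ the words $x_1x_2$ and $x_2x_1$ in $\Ass_2^+$ have the same trace, namely the cyclic word $x_1x_2\in\attr_2$. As a sample computation with the module structure, let $D\in\tder_2$ be given by $D(x_1)=[x_1,x_2]$ and $D(x_2)=0$ (so $D$ acts on $\Ass_2$ as an algebra derivation); then
\[ \tr\bigl(D(x_1x_2)\bigr)=\tr\bigl([x_1,x_2]x_2\bigr)=\tr(x_1x_2x_2-x_2x_1x_2)=0, \]
as $x_1x_2x_2$ and $x_2x_1x_2$ are cyclic rotations of one another.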
\begin{proposition}\label{prop:Pnses}
There
is a split short exact sequence of Lie algebras
\[ 0 \longrightarrow \attr_n
\stackrel{\glos{\iota}}{\longrightarrow} \calP^w(\uparrow_n)
\stackrel{\glos{\pi}}{\longrightarrow} \fraka_n \oplus \tder_n
\longrightarrow 0.
\]
\end{proposition}
\begin{proof}
The inclusion $\iota$ is defined in the natural way: $\attr_n$ is
spanned by coloured ``floating'' wheels, and such a wheel is mapped
into $\calP^w(\uparrow_n)$ by attaching its ends to their assigned strands in
arbitrary order. Note that this is well-defined: wheels have only tails,
and tails commute.
At the level of vector spaces, the statement has already been proven: $\calP^w(\uparrow_n)$
is generated by trees
and wheels (with all arrow endings fixed on the $n$ strands). When factoring out by the wheels,
only trees remain. Trees have one head and many tails. All the tails commute with
each other, and commuting a tail with a head on a strand costs a wheel (by $\aSTU$),
thus in the quotient the head also commutes with the tails. Therefore, the quotient
is the space of floating (coloured) trees, which we have previously identified with
$\bigoplus_{i=1}^{n} \lie_n \cong \fraka_n\oplus\tder_n$.
It remains to show that the maps $\iota$ and $\pi$ are Lie algebra maps as well. For $\iota$ this
is easy: the Lie algebra $\attr_n$ is commutative, and is mapped to the commutative
(due to $TC$)
subalgebra of $\calP^w(\uparrow_n)$ generated by wheels.
To show that $\pi$ is a map of Lie algebras we give two proofs,
first a ``hands-on'' one, then a ``conceptual'' one.
{\bf Hands-on argument.} $\fraka_n$ is the image of single arrows on one strand.
These commute with everything in $\calP^w(\uparrow_n)$, and so does $\fraka_n$
in the direct sum $\fraka_n \oplus \tder_n$.
It remains to show that the bracket of $\tder_n$ works the same way as
commuting trees in $\calP^w(\uparrow_n)$. Let $D$ and $D'$ be elements of
$\tder_n$ represented by $(a_1,\ldots ,a_n)$ and $(a_1',\ldots ,a_n')$, meaning
that $D(x_i)=[x_i,a_i]$ and $D'(x_i)=[x_i,a_i']$ for $i=1,\ldots ,n$. Let
us compute the commutator of these elements:
\begin{multline*}
[D,D'](x_i)=(DD'-D'D)(x_i)=D[x_i,a_i']-D'[x_i,a_i]= \\
=[[x_i,a_i],a_i']+[x_i,Da_i']-[[x_i,a_i'],a_i]-[x_i,D'a_i]
= [x_i,Da_i'-D'a_i+[a_i,a_i']].
\end{multline*}
Now let $T$ and $T'$ be two trees in $\calP^w(\uparrow_n)/\attr_n$,
their heads on strands $i$ and $j$, respectively ($i$ may or may not
equal $j$). Let us denote by $a_i$ (resp.\ $a_j'$) the element in $\lie_n$ given by forming
the appropriate commutator of the colours of the tails of $T$ (resp.\ $T'$).
In $\tder_n$, let $D=\pi(T)$ and
$D'=\pi(T')$. $D$ and $D'$ are determined by $(0,\ldots,a_i,\ldots,0)$,
and $(0,\ldots,a_j',\ldots0)$, respectively. (In each case, the $i$-th or
the $j$-th is the only non-zero component.) The commutator of these
elements is given by $[D,D'](x_i)=[x_i,Da_i'-D'a_i+[a_i,a_i']]$, and
$[D,D'](x_j)=[x_j,Da_j'-D'a_j+[a_j,a_j']]$. Note that unless $i=j$,
$a_j=a_i'=0$.
In $\calP^w(\uparrow_n)/\attr_n$, all tails commute, as well as a head of a tree with its
own tails. Therefore, commuting two trees only incurs a cost when commuting a head of
one tree over the tails of the other on the same strand, and the two heads over each other,
if they are on the same strand.
If $i \neq j$, then commuting the head of $T$ over the tails of $T'$ by $\aSTU$
costs a sum of trees given by $Da_j'$, with heads on strand $j$, while moving
the head of $T'$ over the tails of $T$ costs exactly $-D'a_i$, with heads on strand $i$,
as needed.
If $i=j$, then everything happens on strand $i$, and the cost is
$(Da_i'-D'a_i+[a_i,a_i'])$, where the last term happens when
the two heads cross each other.
{\bf Conceptual argument.}
There is an action of $\calP^w(\uparrow_n)$ on $\lie_n$, as follows: introduce
an extra strand on the right. An element $L$ of $\lie_n$ corresponds to a tree with
its head on the extra strand. Its commutator with an element of $\calP^w(\uparrow_n)$
(considered as an element of $\calP^w(\uparrow_{n+1})$ by the obvious inclusion)
is again a tree with head on strand $(n+1)$, defined to be the result of the action.
Since $L$ has only tails on the first $n$ strands,
elements of $\attr_n$, which
also only have tails, act trivially. So do single (local) arrows on one strand
($\fraka_n$). It remains to show that trees act as $\tder_n$, and it is enough
to check this on the generators of $\lie_n$ (as the Leibniz rule is obviously
satisfied). The generators of $\lie_n$ are arrows pointing from one of the first
$n$ strands, say strand $i$, to strand $(n+1)$. A tree with head on strand $i$
acts on this element, according to $\aSTU$, by forming the commutator, which
is exactly the action of $\tder_n$.
\end{proof}
To identify $\calP^w(\uparrow_n)$ as the semidirect product
$\attr_n\rtimes(\fraka_n\oplus\tder_n)$, it remains to show that
the short exact sequence of the Proposition splits. This is indeed the case,
although not canonically. Two of the many splitting maps
$\glos{u},\glos{l}\colon \tder_n\oplus\fraka_n \to \calP^w(\uparrow_n)$
are described as follows: $\tder_n\oplus\fraka_n$ is identified with
$\bigoplus_{i=1}^n\lie_n$, which in turn is identified with floating
(coloured) trees. A map to $\calP^w(\uparrow_n)$ can
be given by specifying how to place the legs on their specified strands.
A tree may have many tails but has only one head, and due to $TC$, only
the positioning of the head matters. Let $u$ (for {\it upper}) be the map
placing the head of each tree above all its tails on the same strand,
while $l$ (for {\it lower}) places the head below all the tails. It is
obvious that these are both Lie algebra maps and that $\pi \circ u$ and
$\pi \circ l$ are both the identity on $\tder_n \oplus \fraka_n$. This
makes $\calP^w(\uparrow_n)$ a semidirect product. \qed
\begin{remark} Let $\glos{\attr_n^s}$ denote $\attr_n$ modded out by its
degree one part (one-wheels). Since the RI relation is in the kernel of
$\pi$, there is a similar split exact sequence
\[ 0\to \attr_n^s \stackrel{\overline{\iota}}{\longrightarrow} \calP^{sw}(\uparrow_n)
\stackrel{\overline{\pi}}{\longrightarrow} \fraka_n \oplus \tder_n \to 0.
\]
\end{remark}
\begin{definition}\label{div}
For any $D \in \tder_n$, $(l-u)D$ is in the kernel of $\pi$ and is therefore
in the image of $\iota$, so $\iota^{-1}(l-u)D$ makes sense. We call
this element $\glos{\divop}D$.
\end{definition}
\begin{definition}
In \cite{AlekseevTorossian:KashiwaraVergne},
div is defined as follows: ${\rm div}(a_1,\ldots,a_n):=\sum_{k=1}^n \tr((\partial_k a_k)x_k)$,
where $\partial_k$ picks out the words of a sum which end in $x_k$ and deletes their last letter
$x_k$, while deleting all other words (those which do not end in $x_k$).
\end{definition}
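For a quick illustration of this formula, take $n=2$ and $D=(a_1,0)$ with $a_1=[x_2,x_1]=x_2x_1-x_1x_2$. Only the word $x_2x_1$ ends in $x_1$, so $\partial_1a_1=x_2$ and
\[ {\rm div}(a_1,0)=\tr\bigl((\partial_1a_1)x_1\bigr)=\tr(x_2x_1), \]
the two-letter cyclic word in $x_1$ and $x_2$.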
\begin{proposition}
The div of Definition \ref{div} and the div of \cite{AlekseevTorossian:KashiwaraVergne} are
the same.
\end{proposition}
\parpic[r]{\input{figs/combtree.pstex_t}}
{\it Proof.}
It is enough to verify the claim for the linear generators of $\tder_n$, namely, elements
of the form $(0,\ldots,a_j,\ldots,0)$, where $a_j \in \lie_n$ or equivalently, single (floating,
coloured) trees, where the colour of
the head is $j$. By the Jacobi identity, each $a_j$ can be written
in the form $a_j=[x_{i_1},[x_{i_2},[\ldots,x_{i_k}]\ldots]]$ (or as a linear combination of such terms).
Equivalently, by $\aIHX$, each tree has a
standard ``comb'' form, as shown on the picture on the right.
For an associative word $Y=y_1y_2\ldots y_l \in \Ass_n^+$,
we introduce the notation $[Y]:=[y_1,[y_2,[\ldots,y_l]\ldots]]$.
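For readers who wish to manipulate such expressions directly, the comb expansion $[Y]$ can be carried out mechanically. The following Python sketch is our own illustration; it assumes the commutator convention $[a,b]=ab-ba$ and integer-labelled generators, with sums of associative words stored as dictionaries from word-tuples to coefficients.

```python
def concat(a, b):
    """Product of two sums of associative words."""
    out = {}
    for w1, c1 in a.items():
        for w2, c2 in b.items():
            out[w1 + w2] = out.get(w1 + w2, 0) + c1 * c2
    return {w: c for w, c in out.items() if c}

def bracket(a, b):
    """[a, b] = ab - ba, expanded into associative words."""
    out = dict(concat(a, b))
    for w, c in concat(b, a).items():
        out[w] = out.get(w, 0) - c
    return {w: c for w, c in out.items() if c}

def comb(Y):
    """[Y] := [y_1, [y_2, [..., y_l]...]] expanded into associative words."""
    words = [{(y,): 1} for y in Y]
    out = words[-1]
    for g in reversed(words[:-1]):
        out = bracket(g, out)
    return out

print(comb((1, 2)))      # x1x2 - x2x1
print(comb((1, 2, 3)))   # x1x2x3 - x1x3x2 - x2x3x1 + x3x2x1
```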
The div of \cite{AlekseevTorossian:KashiwaraVergne} picks out the
words that end in $x_j$, forgets the rest, and considers these as
cyclic words. Therefore, by interpreting the Lie brackets as commutators,
one can easily check that for $a_j$ written as above,
\begin{equation}\label{divformula}
{\rm div}((0,\ldots,a_j,\ldots,0))=\sum_{\alpha\colon i_{\alpha}=j}
-x_{i_1}\ldots x_{i_{\alpha-1}}[x_{i_{\alpha+1}}\ldots x_{i_k}]x_j.
\end{equation}
\parpic[r]{\input{figs/divproof.pstex_t}}
In Definition \ref{div}, div of a tree is the difference between attaching its
head on the appropriate strand (here, strand $j$) below all of its tails and above.
As shown in the figure on the right, moving the head across each of the tails on
strand $j$ requires an $\aSTU$ relation,
which ``costs'' a wheel (of trees, which is equivalent to a sum of honest wheels).
Namely, the head gets connected to the tail in question.
So div of the tree represented by $a_j$ is given by
\begin{center}
$\sum_{\alpha\colon i_{\alpha}=j}$ ``connect the head to the $\alpha$-th tail''.
\end{center}
\noindent
This in turn gets mapped to the formula above via the correspondence between
wheels and cyclic words. \qed
\parpic[r]{\input{figs/treeactonwheel.pstex_t}}
\begin{remark}\label{rem:tderontr}
There is an action of $\tder_n$ on $\attr_n$ as
follows. Represent a cyclic word $w \in \attr_n$ as a
wheel in $\calP^w(\uparrow_n)$ via the map $\iota$. Given
an element $D \in \tder_n$, $u(D)$, as defined above, is a tree
in $\calP^w(\uparrow_n)$ whose head is above all of its tails. We
define $D \cdot w:=\iota^{-1}(u(D)\iota(w)-\iota(w)u(D))$. Note that
$u(D)\iota(w)-\iota(w)u(D)$ is in the image of $\iota$, i.e., a linear
combination of wheels, for the following reason. The wheel $\iota(w)$ has only tails. As we commute
the tree $u(D)$ across the wheel, the head of the tree is commuted
across tails of the wheel on the same strand. Each time this happens
the cost, by the $\aSTU$ relation, is a wheel with the tree attached
to it, as shown on the right, which in turn (by $\aIHX$ relations,
as Figure~\ref{fig:WheelOfTreesAndPrince} shows) is a sum of wheels.
Once the head of the tree has been moved to the top, the tails of the
tree commute up for free by $TC$. Note that the alternative definition,
$D \cdot w:=\iota^{-1}(l(D)\iota(w)-\iota(w)l(D))$ is in fact equal to
the definition above.
\end{remark}
\begin{definition}
In \cite{AlekseevTorossian:KashiwaraVergne}, the group $\glos{\TAut_n}$
is defined as $\exp(\tder_n)$. Note that $\tder_n$ is positively
graded, hence it integrates to a group. Note also that $\TAut_n$ is
the group of ``basis-conjugating'' automorphisms of $\lie_n$, i.e.,
for $g \in \TAut_n$ and each generator $x_i$, $i=1,\ldots,n$, of
$\lie_n$, there exists an element $g_i \in \exp(\lie_n)$ such that
$g(x_i)=g_i^{-1}x_ig_i$.
\end{definition}
The action of $\tder_n$ on $\attr_n$ lifts to an action of $\TAut_n$ on
$\attr_n$ by interpreting exponentials formally; in other words, $e^D$ acts as
$\sum_{k=0}^\infty\frac{D^k}{k!}$. The lifted action is by conjugation:
for $w \in \attr_n$ and $e^D \in \TAut_n$,
$e^D \cdot w=\iota^{-1}(e^{uD} \iota(w) e^{-uD})$.
Recall that in Section 5.1 of \cite{AlekseevTorossian:KashiwaraVergne}
Alekseev and Torossian construct a map $\glos{j}\colon \TAut_n \to
\attr_n$ which is characterized by two properties: the cocycle property
\begin{equation}\label{eq:jcocycle}
j(gh)=j(g)+g\cdot j(h),
\end{equation}
where in the second term multiplication by $g$ denotes the action described above;
and the condition
\begin{equation}\label{eq:jderiv}
\frac{d}{ds}j(\exp(sD))|_{s=0}=\divop(D).
\end{equation}
Now let us interpret $j$ in our context.
\begin{definition}\label{def:Adjoint}
The adjoint map $\glos{*}\colon \calA^w(\uparrow_n) \to
\calA^w(\uparrow_n)$ acts by ``flipping over diagrams and negating arrow
heads on the skeleton''. In other words, for an arrow diagram $D$,
\[ D^*:=(-1)^{\#\{\text{tails on skeleton}\}}S(D), \]
where $S$ denotes the map which switches the orientation of the skeleton
strands (i.e. flips the diagram over), and multiplies by $(-1)^{\#
\text{skeleton vertices}}$.
\end{definition}
\begin{proposition}\label{prop:Jandj}For $D \in \tder_n$,
define a map $\glos{J}\colon \TAut_n \to \exp(\attr_n)$ by
$J(e^D):=e^{uD}(e^{uD})^*$. Then
$$\exp(j(e^D))=J(e^D).$$
\end{proposition}
\begin{proof}
Note that $(e^{uD})^*=e^{-lD}$, due to ``Tails Commute'' and the fact that a
tree has only one head.
Let us check that $\log J$ satisfies properties \eqref{eq:jcocycle} and
\eqref{eq:jderiv}. Namely, with $g=e^{D_1}$ and $h=e^{D_2}$, and
using that $\attr_n$ is commutative, we need to show that
\begin{equation}
J(e^{D_1}e^{D_2})=J(e^{D_1})\big(e^{uD_1}\cdot J(e^{D_2})\big),
\end{equation}
where $\cdot$ denotes the action of $\tder_n$ on $\attr_n$; and that
\begin{equation}
\frac{d}{ds}J(e^{sD})|_{s=0}=\divop D.
\end{equation}
Indeed, with $\operatorname{BCH}(D_1,D_2)=\log e^{D_1}e^{D_2}$ being the
standard Baker--Campbell--Hausdorff formula,
\begin{multline*}
J(e^{D_1}e^{D_2})=J(e^{\operatorname{BCH}(D_1,D_2)})
=e^{u(\operatorname{BCH}(D_1,D_2))}
e^{-l(\operatorname{BCH}(D_1,D_2))}=
e^{\operatorname{BCH}(uD_1,uD_2)}
e^{-\operatorname{BCH}(lD_1,lD_2)} \\
=e^{uD_1}e^{uD_2}e^{-lD_2}e^{-lD_1}=
e^{uD_1}(e^{uD_2}e^{-lD_2})e^{-uD_1}e^{uD_1}e^{-lD_1}
=(e^{uD_1}\cdot J(e^{D_2}))J(e^{D_1}),
\end{multline*}
as needed.
As for condition~\eqref{eq:jderiv}, a direct computation of the derivative
yields
$$\frac{d}{ds}J(e^{sD})|_{s=0}=uD-lD=\divop D,$$
as desired. \qed
\end{proof}
\draftcut
\subsection{The Relationship with u-Tangles} \label{subsec:sder} Let
$\glos{\uT}$ be the planar algebra of classical, or ``{\it u}sual''
tangles. There is a map $a\colon \uT \to \wT$ of $u$-tangles into
$w$-tangles: algebraically, it is defined in the obvious way on the planar
algebra generators of $\uT$. (It can also be interpreted topologically
as Satoh's tubing map, see
\cite[Section~\ref{1-subsubsec:TopTube}]{Bar-NatanDancso:WKO1},
where a u-tangle is a tangle drawn on a sphere. However, it is only
conjectured that the circuit algebra presented here is a Reidemeister
theory for ``tangled ribbon tubes in $\bbR^4$''.) The map $a$ induces a
corresponding map $\alpha\colon \calA^u \to \calA^{sw}$, which maps an
ordinary Jacobi diagram (i.e., unoriented chords with internal trivalent
vertices modulo the usual $AS$, $IHX$ and $STU$ relations) to the sum
of all possible orientations of its chords (many of which are zero in
$\calA^{sw}$ due to the ``two in one out'' rule).
\parpic[l]{$\xymatrix{
\uT \ar@{.>}[r]^{Z^u} \ar[d]^a & \calA^u \ar[d]^\alpha \\
\wT \ar[r]^{Z^w} & \calA^{sw}
}$}
It is tempting to ask whether the square on the left
commutes. Unfortunately, this question hardly makes sense, as there
is no canonical choice for the dotted line in it. Similarly to the
braid case of \cite[Section~\ref{1-subsubsec:RelWithu}]{Bar-NatanDancso:WKO1}, the definition of the
homomorphic expansion (Kontsevich integral) for $u$-tangles typically depends on various choices
of ``parenthesizations''. Choosing parenthesizations, this square becomes
commutative up to some fixed corrections. The details are in
Proposition~\ref{prop:uwBT}.
Yet already at this point we can recover something from the existence of
the map $a\colon\uT\to\wT$, namely an interpretation of the
Alekseev-Torossian~\cite{AlekseevTorossian:KashiwaraVergne} space of
special derivations, $$\glos{\sder_n}:=\{ D\in\tder_n\colon D(\sum_{i=1}^n
x_i)=0\}.$$ Recall from Remark \ref{rem:HeadInvariance} that
in general it is not possible to slide a strand under an arbitrary $w$-tangle.
However, it is possible to slide strands freely under
tangles {\em in the image of $a$}, and thus by reasoning similar to the
reasoning in Remark~\ref{rem:HeadInvariance}, diagrams $D$ in the image
of $\alpha$ respect ``tail-invariance'':
\begin{equation} \label{eq:TailInvariance}
\begin{array}{c}\input{figs/TailInvariance.pstex_t}\end{array}
\end{equation}
Let $\calP^u(\uparrow_n)$ denote the primitives of $\calA^u(\uparrow_n)$,
that is, Jacobi diagrams that remain connected when the skeleton is
removed. Remember that $\calP^{w}(\uparrow_n)$ stands for the primitives
of $\calA^{w}(\uparrow_n)$. Equation~\eqref{eq:TailInvariance} readily
implies that the image of the composition
\[ \xymatrix{
\calP^u(\uparrow_n) \ar[r]^(0.48){\alpha}
& \calP^w(\uparrow_n) \ar[r]^(0.45)\pi
& \fraka_n \oplus \tder_n
} \]
is contained in $\fraka_n \oplus \sder_n$. In fact, more is true.
\begin{theorem}\label{thm:sder}
The image of $\pi\alpha$ is precisely $\fraka_n \oplus \sder_n$.
\end{theorem}
This theorem was first proven by Drinfel'd (Lemma after Proposition 6.1
in \cite{Drinfeld:GalQQ}), but the proof we give here is due to Levine
\cite{Levine:Addendum}.
\begin{proof}
Let $\lie_n^d$ denote the degree $d$ piece of $\lie_n$. Let $V_n$ be
the vector space with basis $x_1, x_2, \ldots , x_n$. Note that
$$V_n \otimes \lie_n^d \cong \bigoplus_{i=1}^n \lie_n^d \cong
(\tder_n \oplus \fraka_n)^d,$$
where $\tder_n$ is graded by the number of tails of a tree, and $\fraka_n$
is contained in degree 1.
The bracket defines a map $\beta\colon V_n \otimes \lie_n^d \to \lie_n^{d+1}$:
for $a_i \in \lie_n^d$ where $i=1,\ldots ,n$, the ``tree''
$D=(a_1,a_2,\ldots ,a_n) \in (\tder_n \oplus \fraka_n)^d$ is mapped to
$$\beta(D)=\sum_{i=1}^n[x_i,a_i]=D\left(\sum_{i=1}^n x_i\right),$$
where the first equality is by the definition of tensor product and the bracket,
and the second is by the definition of the action of $\tder_n$ on $\lie_n$.
Since $\fraka_n$ is contained in degree 1, by definition
$\sder_n^d=(\operatorname{ker}\beta)^d$ for $d\geq2$. In degree
1, $\fraka_n$ is obviously in the kernel, hence
$(\operatorname{ker}\beta)^1= \fraka_n \oplus \sder_n^1$. So overall,
$\operatorname{ker}\beta=\fraka_n\oplus\sder_n$.
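The bracket map $\beta$ is easy to experiment with in low degrees. The following Python sketch is our own illustration, with Lie elements expanded into sums of associative words under the convention $[a,b]=ab-ba$; it checks, for instance, that the degree 1 element $D=(x_2,x_1)$ satisfies $\beta(D)=0$, while $(x_2,-x_1)$ does not.

```python
def mul(a, b):
    """Product of two sums of associative words (dicts: word-tuple -> coeff)."""
    out = {}
    for w1, c1 in a.items():
        for w2, c2 in b.items():
            out[w1 + w2] = out.get(w1 + w2, 0) + c1 * c2
    return out

def bracket(a, b):
    """[a, b] = ab - ba, expanded into associative words."""
    out = mul(a, b)
    for w, c in mul(b, a).items():
        out[w] = out.get(w, 0) - c
    return {w: c for w, c in out.items() if c}

def beta(components):
    """β(a_1,...,a_n) = sum_i [x_i, a_i]; `components` maps i to the
    expansion of a_i as a sum of associative words."""
    out = {}
    for i, a_i in components.items():
        for w, c in bracket({(i,): 1}, a_i).items():
            out[w] = out.get(w, 0) + c
    return {w: c for w, c in out.items() if c}

x1, x2 = {(1,): 1}, {(2,): 1}
print(beta({1: x2, 2: x1}))          # {} : (x_2, x_1) is in ker β
print(beta({1: x2, 2: {(1,): -1}}))  # 2[x_1, x_2] expanded: not in ker β
```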
We want to study the image of the map $\calP^u(\uparrow_n)
\stackrel{\pi\alpha}{\longrightarrow} \fraka_n \oplus \tder_n$.
Under $\alpha$, all connected Jacobi diagrams that are not trees or
wheels go to zero, and under $\pi$ so do all wheels. Furthermore, $\pi$
maps trees that live on $n$ strands to ``floating'' trees with univalent
vertices coloured by the strand they used to end on. So for determining
the image, we may replace $\calP^u(\uparrow_n)$ by the space $\calT_n$
of connected {\em un}oriented ``floating trees'' (uni-trivalent graphs), the ends (univalent vertices)
of which are coloured by the $\{x_i\}_{i=1,\ldots,n}$. We denote the degree
$d$ piece of $\calT_n$, i.e., the space of trees with $d+1$ ends,
by $\calT_n^{d}$. Abusing notation, we shall denote the map induced by
$\pi\alpha$ on $\calT_n$ by $\alpha\colon \calT_n \to \fraka_n \oplus
\tder_n$. Since choosing a ``head'' determines the entire orientation of
a tree by the two-in-one-out rule, $\alpha$ maps a tree in $\calT_n^d$
to the sum of $d+1$ ways of choosing one of the ends to be the ``head''.
We want to show that $\operatorname{ker}\beta=\operatorname{im}\alpha$.
This is equivalent to saying that $\bar{\beta}$ is injective, where
$\bar{\beta}\colon V_n\otimes\lie_n/\operatorname{im}\alpha
\to \lie_n$ is the map induced by $\beta$ on the quotient by
$\operatorname{im}\alpha$.
\parpic[r]{\input{figs/beta.pstex_t}}
The degree $d$ piece of $V_n \otimes \lie_n$, in the pictorial
description, is generated by floating trees with $d$ tails and one head,
all coloured by $x_i$, $i=1,\ldots ,n$. This is mapped to $\lie_n^{d+1}$,
which is isomorphic to the space of floating trees with $d+1$ tails and
one head, where only the tails are coloured by the $x_i$. The map $\beta$
acts as shown on the picture on the right.
\parpic[r]{\input{figs/taudef.pstex_t}}
We show that $\bar{\beta}$ is injective by exhibiting a map $\tau\colon
\lie_n^{d+1} \to V_n\otimes\lie_n^d/\operatorname{im}\alpha$ so that
$\tau\bar{\beta}=I$. The map $\tau$ is defined as follows: given a tree with
one head and $d+1$ tails, $\tau$ acts by deleting the head and the
arc connecting it to the rest of the tree, and summing over all ways of
choosing a new head from one of the tails on the left half of the tree relative to the
original placement of the head (see the
picture on the right). As long as we show that $\tau$ is well-defined,
it follows from the definition and the pictorial description of $\beta$
that $\tau\bar{\beta}=I$.
For well-definedness we need to check that the images of $\aAS$ and
$\aIHX$ relations under $\tau$ are in the image of $\alpha$. This we do
in the picture below. In both cases it is enough to check the
case when the ``head'' of the relation is the head of the tree
itself, as otherwise an $\aAS$ or $\aIHX$ relation in the domain is mapped
to an $\aAS$ or $\aIHX$ relation, thus zero, in the image.
\[ \input figs/tauproof.pstex_t \]
\[ \input figs/tauproof2.pstex_t \]
In the $\aIHX$ picture, in higher degrees $A$, $B$ and $C$ may denote
an entire tree. In this case, the arrow at $A$ (for example) means the
sum of all head choices from the tree $A$.
\qed
\end{proof}
\begin{comment} In view of the relation between the right half of
Equation~\eqref{eq:TailInvariance} and the special derivations $\sder$,
it makes sense to call w-tangles that satisfy the condition in the left
half of Equation~\eqref{eq:TailInvariance} ``special''. The $a$ images
of u-tangles are thus special. We do not know if the global version of
Theorem~\ref{thm:sder} holds true. Namely, we do not know whether every
special w-tangle is the $a$-image of a u-tangle.
\end{comment}
\draftcut
\subsection{The local topology of w-tangles}\label{subsec:TangleTopology}
So far throughout this section we have presented $w$-tangles as a Reidemeister theory:
a circuit algebra given by generators and relations. There is a topological intuition behind
this definition: we can interpret
the strings of a w-tangle diagram as oriented tubes in $\bbR^4$, as shown in Figure \ref{fig:CrossingTubes}.
Each tube has a 3-dimensional ``filling'', and
each crossing represents a ribbon intersection between the tubes, where the tube corresponding to
the under-strand intersects the filling of the over-strand. (For an explanation of ribbon intersections see
\cite[Section~\ref{1-subsubsec:ribbon}]{Bar-NatanDancso:WKO1}.)
In Figure~\ref{fig:CrossingTubes} we use the drawing conventions of \cite{CarterSaito:KnottedSurfaces}: we draw surfaces as if projected
from $\bbR^4$ to $\bbR^3$, and cut them open when they are ``hidden'' by something with a higher fourth coordinate.
Note that w-braids can also be thought of in terms of
flying rings, with ``time'' being the fourth dimension; this is equivalent to the tube interpretation in the obvious way. In this language a crossing
represents a ring (the under strand), flying through another (the over strand). This is described in detail in
\cite[Section~\ref{1-subsubsec:FlyingRings}]{Bar-NatanDancso:WKO1}.
\begin{figure}
\input{figs/RibbonTubes.pstex_t}
\caption{A virtual crossing corresponds to non-interacting tubes, while a crossing means
that the tube corresponding to the under strand ``goes through'' the tube corresponding to the
over strand.}\label{fig:CrossingTubes}
\end{figure}
The assignment of tangled ribbon tubes in $\bbR^4$ to w-tangles is
well-defined (the Reidemeister and OC relations are satisfied), and after
Satoh \cite{Satoh:RibbonTorusKnots} we call it the tubing map and denote
it by $\glos{\delta}\colon\{\text{w-tangles}\} \to \{\text{Ribbon tubes in
} \bbR^4\}$. It is natural to expect that $\delta$ is an isomorphism; it is indeed a
surjection, but its injectivity remains unproven even for long w-knots. Nonetheless, ribbon
tubes in $\bbR^4$ will serve as the topological motivation and local
topological interpretation behind the circuit algebras presented in
this paper. In
\cite[Section~\ref{1-subsubsec:TopTube}]{Bar-NatanDancso:WKO1} we present
a topological construction for $\delta$. We will mention that construction
occasionally in this paper, but only for motivational purposes.
\parpic[r]{\input{figs/TubeOrientation.pstex_t}}
We observe that the ribbon tubes in the image of $\delta$ are endowed with two orientations,
which we call the 1- and 2-dimensional orientations. The 1-dimensional
orientation is the direction of the tube as a ``strand'' of the tangle. In other
words, each tube has a ``core''\footnote{The core of Lord Voldemort's wand was made of a phoenix feather.}:
a distinguished line along the tube,
which is oriented as a 1-dimensional manifold. Furthermore, the tube as a
2-dimensional surface is oriented as given by $\delta$. An example is shown on the right.
Next we wish to understand the topological meaning of crossing
signs. Recall that a tube in $\bbR^4$ has a ``filling'': a solid
(3-dimensional) cylinder embedded in $\bbR^4$, with boundary the tube, and
the 2D orientation of the tube induces an orientation of its filling as a
3-dimensional manifold. At a (non-virtual) crossing the core of one tube
intersects the filling of another transversely. Due to the complementary
dimensions, the intersection is a single point, and the 1D orientation of
the core along with the 3D orientation of the filling it passes through
determines an orientation of the ambient space. We say that the crossing
is positive if this agrees with the standard orientation of $\bbR^4$,
and negative otherwise. Hence, there are four types of crossings, given
by whether the core of tube A intersects the filling of B or vice versa,
and two possible signs in each case. In the flying ring interpretation,
the 1D orientation of the tube is the direction of the flow of time. The
2D and 1D orientations of the tube together induce an orientation of the
flying ring which is a cross-section of the tube at each moment. Hence,
saying ``below'' and ``above'' the ring makes sense, and there are
four types of crossings: ring A flies through ring B from below or
from above; and ring B flies through ring A from below or from above
(cf.~\cite[Exercise~\ref{1-ex:swBn}]{Bar-NatanDancso:WKO1}). A crossing
is positive if the inner ring comes from below, and negative otherwise.
\parpic[r]{\input{figs/PushMembranes.pstex_t}}
We take the opportunity here to introduce another notation,
to be called the ``band notation'', which is more suggestive of the 4D topology than the strand notation we have been using so far.
We represent a tube in $\bbR^4$
by a picture of an oriented band in $\bbR^3$.
By ``oriented band'' we mean that it has two orientations: a 1D direction (for example an orientation of one of the edges),
and a 2D orientation as a surface. To interpret the 3D picture
of a band as a tube in $\bbR^4$, we add an extra coordinate. Let us refer to the $\bbR^3$ coordinates as $x, y$ and $t$,
and to the extra coordinate as $z$. Think of $\bbR^3$ as being embedded in $\bbR^4$ as the hyperplane $z=0$, and think of the
band as being made of a thin double membrane. Push the membrane up and down
in the $z$ direction at each point as far as the distance of that point from the boundary of the band, as shown on the right.
Furthermore, keep the 2D orientation of the top membrane (the one being pushed up), but reverse it on the bottom. This produces
an oriented tube embedded in $\bbR^4$.
In band notation, the four possible crossings appear as in Figure~\ref{fig:BandCrossings}, where underneath each crossing we indicate the corresponding
strand picture.
The signs for each type of crossing are also shown. Note that the sign of a crossing depends on the 2D orientation of the
over-strand, as well as the 1D direction of the under-strand. Hence, switching only
the direction (1D orientation) of a strand changes the sign of the crossing if and only if the strand involved is the under
strand. However, fully changing the orientation (both 1D and 2D) always switches the
sign of the crossing. Note that switching the strand direction in the strand notation corresponds to the complete (both 1D and 2D)
orientation switch.
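The sign rules just stated can be packaged as a tiny toy model; the encoding below is our own, not notation from the text. It records only the two pieces of data the sign depends on, namely the 2D orientation of the over-strand and the 1D direction of the under-strand, each as $\pm 1$.

```python
def crossing_sign(over_2d, under_1d):
    """Toy model of the crossing sign: it depends only on the 2D
    orientation of the over-strand and the 1D direction of the
    under-strand (each recorded as +1 or -1)."""
    return over_2d * under_1d

# Switching only the 1D direction of the under-strand flips the sign,
# while the 1D direction of the over-strand does not enter at all:
assert crossing_sign(+1, -1) == -crossing_sign(+1, +1)
# Fully reversing either strand (both 1D and 2D) always flips the sign:
assert crossing_sign(-1, +1) == -crossing_sign(+1, +1)  # over-strand reversed
assert crossing_sign(+1, -1) == -crossing_sign(+1, +1)  # under-strand reversed
```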
\begin{figure}
\input{figs/BandCrossings.pstex_t}
\caption{Crossings and crossing signs in band notation.}\label{fig:BandCrossings}
\end{figure}
\draftcut
\subsection{Good properties and uniqueness of the homomorphic expansion}
\label{subsec:UniquenessForTangles}
In much the same way as in the case of braids
\cite[Section~\ref{1-subsubsec:BraidCompatibility}]{Bar-NatanDancso:WKO1},
$Z$ has a number of good properties with respect to various tangle
operations: it is group-like\footnote{In practice this simply means that
the value of the crossing is an exponential.}; it commutes with adding
an inert strand (note that this is a circuit algebra operation, hence it
doesn't add anything beyond homomorphicity); and it commutes with deleting
a strand and with strand orientation reversals. All but the last of
these were explained in the context of braids and the explanations still
hold. Orientation reversal $\glos{S_k}\colon\wT\to\wT$ is the operation
which reverses the orientation of the $k$-th component. Note that in the
world of topology (via Satoh's tubing map) this means reversing both the
1D and the 2D orientations. The induced diagrammatic operation $S_k\colon
\calA^w(T) \to \calA^w(S_k(T))$, where $T$ denotes the skeleton of a
given w-tangle, acts by multiplying each arrow diagram by $(-1)$ raised
to the power the number of arrow endings (both heads and tails) on the
$k$-th strand, as well as reversing the strand orientation. Saying that
``$Z$ commutes with $S_k$'' means that the appropriate square commutes.
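The sign part of the induced operation $S_k$ is a simple count. Below is a minimal Python sketch of that count only, ignoring the strand-reversal part of $S_k$; the representation of a diagram by the list of strand labels of its skeleton legs is our own device.

```python
def S_k_sign(leg_strands, k):
    """Sign of the induced S_k on an arrow diagram: (-1) raised to the
    number of arrow endings (heads and tails) on the k-th strand.
    `leg_strands` lists the strand label of each skeleton leg."""
    return (-1) ** leg_strands.count(k)

assert S_k_sign([1, 2], 1) == -1       # a single arrow from strand 1 to 2
assert S_k_sign([1, 1, 1, 1], 1) == 1  # a 4-wheel with all tails on strand 1
```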
The following theorem asserts that a well-behaved homomorphic expansion of
$w$-tangles is unique:
\begin{theorem}\label{thm:Tangleuniqueness}
The only homomorphic expansion satisfying the good properties described
above is the $Z$ defined in Section \ref{subsec:vw-tangles}.
\end{theorem}
\parpic[r]{\input{figs/rho.pstex_t}}
\begin{proof}
We first prove the following claim: Assume, by contradiction, that $Z'$ is a different
homomorphic expansion
of $w$-tangles with the good properties described above. Let $R'=Z'(\overcrossing)$ and
$R=Z(\overcrossing)$, and denote by $\rho$ the lowest degree homogeneous
non-vanishing term of $R'-R$. (Note that $R'$ determines $Z'$, so if $Z'\neq Z$, then
$R' \neq R$.) Suppose $\rho$ is of degree $k$.
Then we claim that $\rho=\alpha_1 w_k^1+\alpha_2 w_k^2$ is a linear combination of $w_k^1$ and $w_k^2$,
where $w_k^i$ denotes a $k$-wheel
living on strand $i$, as shown on the right.
Before proving the claim, note that it leads to a contradiction.
Let $d_i$ denote the operation ``delete strand $i$''.
Then up to degree $k$, we have $d_1(R')=\alpha_2 w_k^2$ and $d_2(R')=\alpha_1 w_k^1$, but
$Z'$ is compatible with strand deletions, so $\alpha_1=\alpha_2=0$. Hence
$Z$ is unique, as stated.
On to the proof of the claim, note that $Z'$ being an expansion determines the degree 1 term of $R'$
(namely, the single arrow
$a^{12}$ from strand 1 to strand 2, with coefficient 1). So we can assume that $k \geq 2$. Note also that since both $R'$ and $R$ are
group-like, $\rho$ is primitive. Hence $\rho$ is a linear combination of connected diagrams,
namely trees and wheels.
Both $R$ and $R'$ satisfy the Reidemeister 3 relation:
$$R^{12}R^{13}R^{23}=R^{23}R^{13}R^{12}, \qquad R'^{12}R'^{13}R'^{23}=R'^{23}R'^{13}R'^{12}$$
where the superscripts denote the strands on which $R$ is placed
(compare with the proof of Theorem \ref{thm:ExpansionForTangles}).
We focus our attention on the degree $k+1$ part of the equation for $R'$,
using that, up to degree $k+1$, we can write $R'=R+\rho+\mu$, where $\mu$ denotes the degree
$k+1$ homogeneous part of $R'-R$. Thus, up to degree $k+1$, we have
$$(R^{12}\!+\!\rho^{12}\!+\!\mu^{12})(R^{13}\!+\!\rho^{13}\!+\!\mu^{13})(R^{23}\!+\!\rho^{23}\!+\!\mu^{23})=
(R^{23}\!+\!\rho^{23}\!+\!\mu^{23})(R^{13}\!+\!\rho^{13}\!+\!\mu^{13})(R^{12}\!+\!\rho^{12}\!+\!\mu^{12}).$$
The homogeneous degree $k+1$ part of this equation is a sum of some terms which contain $\rho$
and some which don't. The diligent reader can check that those which don't involve $\rho$
cancel on both sides, either due to the
fact that $R$ satisfies the Reidemeister 3 relation, or by simple degree counting.
Rearranging all the terms which do involve $\rho$ to the left side, we get the following equation,
where $a^{ij}$ denotes an arrow pointing from strand $i$ to strand $j$:
\begin{equation}\label{eq:Reid3forrho}
[a^{12}, \rho^{13}]+[\rho^{12},a^{13}]+[a^{12},\rho^{23}]+[\rho^{12},a^{23}]+
[a^{13},\rho^{23}]+[\rho^{13},a^{23}]=0.
\end{equation}
The third and fifth terms sum to $[a^{12}+a^{13},\rho^{23}]$,
which is zero due to the ``head-invariance'' of diagrams, as in Remark
\ref{rem:HeadInvariance}.
We treat the tree and wheel components of $\rho$ separately.
Let us first assume that $\rho$ is a linear combination of trees. Recall that the
space of trees on two strands is isomorphic to $\lie_2 \oplus \lie_2$, the
first component given by trees whose head is on the first strand, and the second
component by trees with their head on the second strand.
Let $\rho=\rho_1 +\rho_2$, where $\rho_i$ is the projection to the $i$-th component
for $i=1,2$.
Note that due to $TC$, we have $[a^{12}, \rho^{13}_2]=[\rho^{12}_2,a^{13}]=
[\rho^{12}_1,a^{23}]=0$. So Equation (\ref{eq:Reid3forrho}) reduces to
$$[a^{12},\rho^{13}_1]+[\rho^{12}_1,a^{13}]+[\rho^{12}_2,a^{23}]+[\rho^{13}_1,a^{23}]+[\rho^{13}_2,a^{23}]=0.$$
The left side of this equation lives in $\bigoplus_{i=1}^3 \lie_3$. Notice that only the
first term lies in the second direct sum component, while the second, third and last terms live in the third one,
and the fourth term lives in the first.
This in particular means that the first term is itself zero. By $\aSTU$, this implies
$$0=[a^{12},\rho^{13}_1]=-[\rho_1, x_1]^{13}_2,$$
where $[\rho_1, x_1]^{13}_2$ means the tree defined by the element $[\rho_1,x_1] \in \lie_2$,
with its tails on strands 1 and 3, and head on strand 2. Hence, $[\rho_1, x_1]=0$, so $\rho_1$
is a multiple of $x_1$. The tree given by $\rho_1=x_1$ is a degree 1 element, a possibility we have eliminated, so
$\rho_1=0$.
Equation (\ref{eq:Reid3forrho}) is now reduced to
$$[\rho^{12}_2,a^{23}]+[\rho^{13}_2,a^{23}]=0.$$
Both terms are words in $\lie_3$, but notice that the first term does not involve
the letter $x_3$. This means that if the second term involves $x_3$ at all, i.e., if
$\rho_2$ has tails on the second strand, then both terms have to be zero individually.
Assuming this and looking at the first term, $\rho^{12}_2$ is a Lie word in $x_1$ and $x_2$,
which does involve $x_2$ by assumption. We have
$[\rho^{12}_2,a^{23}]=[x_2, \rho^{12}_2]=0$, which implies $\rho^{12}_2$ is a multiple of $x_2$, in
other words, $\rho$ is a single arrow on the second strand. This is ruled out by the
assumption that $k \geq 2$.
On the other hand if the second term does not involve $x_3$ at all, then $\rho_2$ has no tails on the second
strand, hence it is of degree 1, but again $k \geq 2$. We have proven that the ``tree part''
of $\rho$ is zero.
So $\rho$ is a linear combination of wheels.
Wheels have only tails, so the
first, second and fourth terms of (\ref{eq:Reid3forrho}) are zero due to the tails commute relation.
What remains is $[\rho^{13}, a^{23}]=0$. We assert that this is true if and only if each
linear component of $\rho$ has all of its tails on one strand.
To prove this, recall that each wheel of $\rho^{13}$ represents a cyclic word in the letters $x_1$ and $x_3$.
The map $r\colon \rho^{13} \mapsto [\rho^{13}, a^{23}]$ is a map $\attr_2 \to \attr_3$, which sends each
cyclic word in letters $x_1$ and $x_3$ to the sum of all ways of substituting $[x_2,x_3]$ for one
of the $x_3$'s in the word.
Note that if we expand the commutators, then all terms that have $x_2$
between two $x_3$'s cancel. Hence all remaining terms will be cyclic words in $x_1$ and $x_3$ with
a single occurrence of $x_2$ in between an $x_1$ and an $x_3$.
We construct an almost-inverse $r'$ to $r$: for a cyclic word $w$ in $\attr_3$ with one occurrence of $x_2$,
let $r'$ be the map that deletes $x_2$ from $w$ and maps it to the resulting word in
$\attr_2$ if $x_2$ is followed by $x_3$ in $w$, and maps it to 0 otherwise. On the rest of $\attr_3$
the map $r'$ may be defined to be 0.
The composition $r'r$ takes a cyclic word in $x_1$ and $x_3$ to itself multiplied by the number of times
a letter $x_3$ follows a letter $x_1$ in it. The kernel of this map can consist only of cyclic words
that do not contain the sub-word $x_3x_1$, namely, these are the words of the form $x_3^k$ or $x_1^k$.
Such words are indeed in the kernel of $r$, so they make up exactly the kernel of $r$. This is exactly what
we needed to prove: all wheels in $\rho$ have all their tails on one strand.
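The maps $r$ and $r'$ are concrete enough to check by machine in examples. The following Python sketch is our own illustration; cyclic words are tuples of integer generator labels, normalized to their lexicographically least rotation, and it confirms the behaviour used above on a few small words.

```python
def cyc(w):
    """Normal form of a cyclic word: the lexicographically least rotation."""
    return min(w[i:] + w[:i] for i in range(len(w))) if w else w

def add(out, w, c):
    out[w] = out.get(w, 0) + c
    if out[w] == 0:
        del out[w]

def r(word):
    """[w, a^23]: substitute [x2, x3] = x2 x3 - x3 x2 for one x3, in all ways."""
    out = {}
    for i, letter in enumerate(word):
        if letter == 3:
            add(out, cyc(word[:i] + (2, 3) + word[i+1:]), 1)
            add(out, cyc(word[:i] + (3, 2) + word[i+1:]), -1)
    return out

def r_prime(words):
    """On cyclic words with a single x2: delete the x2 if it is (cyclically)
    followed by x3, and map the word to 0 otherwise."""
    out = {}
    for w, c in words.items():
        i = w.index(2)   # each output word of r contains exactly one x2
        if w[(i + 1) % len(w)] == 3:
            add(out, cyc(w[:i] + w[i+1:]), c)
    return out

w = cyc((1, 3))
print(r_prime(r(w)))         # w times the number of times x3 follows x1
print(r((3, 3)), r((1, 1)))  # both empty: x3^k and x1^k span ker r
```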
This concludes the proof of the claim, and the proof of the theorem. \qed
\end{proof}