
\draftcut
\section{w-Knots} \label{sec:w-knots}

\begin{quote} \small {\bf Section Summary. }
  \summaryknots
\end{quote}

{\bf Knots are the wrong objects for study in knot theory,}
v-knots are the wrong objects for study in the theory of v-knotted
objects and w-knots are the wrong objects for study in the theory of
w-knotted objects. Studying uvw-knots on their own is parallel to
studying cakes, cookies and pastries as they come out of the bakery --- we sure
want to make them our own, but the theory of desserts is more about the
ingredients and how they are put together than about the end products. In
algebraic knot theory this is reflected in the fact that knots are
not finitely generated in any sense (hence, they must be made of some
more basic ingredients), and through the fact that there are very few
operations defined on knots (connected sums and satellite operations
being the main exceptions), and thus, most interesting properties of
knots are transcendental, or non-algebraic, when viewed from within the
algebra of knots and operations on knots~\cite{Bar-Natan:AKT-CFA}.

The right objects for study in knot theory, or v-knot theory or w-knot
theory, are thus the ingredients that make up knots and that permit a
richer algebraic structure. These are braids, studied in the previous
section, and even more so tangles and tangled graphs, studied in 
\cite{Bar-NatanDancso:WKO2}.  Yet tradition has its place and the sweets are
tempting, and we can introduce and apply some of the tools we will
use in the deeper and healthier study of w-tangles and w-tangled foams
in the limited, but tasty, arena of the baked goods of knot theory,
the knots themselves.

\draftcut \subsection{v-Knots and w-Knots} \label{subsec:VirtualKnots}
v-Knots may be understood either as knots drawn on surfaces modulo the
addition or removal of empty handles~\cite{Kuperberg:VirtualLink} or as
``Gauss diagrams'' (see Remark~\ref{rem:GD}), or simply ``unembedded
but wired together'' crossings modulo the Reidemeister moves
(\cite{Kauffman:VirtualKnotTheory, ManturovIlyutko:VirtualKnotsBook,
Roukema:GPV} and Section 2 of \cite{Bar-NatanDancso:WKO2}). But right now
we forgo the topological and the abstract and give only the ``planar''
(and somewhat less philosophically satisfying) definition of v-knots.

\begin{figure}[h]
\[ \pstex{VKnot} \]
\caption{
  A long v-knot diagram with 2 virtual crossings, 2 positive crossings and
  2 negative crossings. A positive-negative pair can easily be cancelled
  using R2, and then a virtual crossing can be cancelled using VR1, and it
  seems that the rest cannot be simplified any further.
} \label{fig:VKnot}
\end{figure}

\begin{definition} A ``long v-knot diagram'' is an arc smoothly
drawn in the plane from $-\infty$ to $+\infty$, with finitely many
self-intersections, divided into ``virtual crossings'' $\virtualcrossing$,
overcrossings $\overcrossing$ (a.k.a.~positive crossings), and undercrossings $\undercrossing$ (a.k.a.~negative crossings);
and regarded up to planar isotopy. A picture is worth more than a more
formal definition, and one appears in Figure~\ref{fig:VKnot}. A ``long
v-knot'' is an equivalence class of long v-knot diagrams, modulo the
equivalence generated by the Reidemeister $1^{\!s}$, 2 and 3 moves
(\glost{\Rs}, \glost{R2} and \glost{R3})\footnote{
\Rs\ is the ``spun'' version of R1 --- kinks can
be spun around, but not removed outright. See
Figure~\ref{fig:VKnotRels}.}, the virtual Reidemeister 1 through 3 moves
(\glost{VR1}, \glost{VR2}, \glost{VR3}), and by the mixed relations
(\glost{M}); all these are shown in Figure~\ref{fig:VKnotRels}. Finally,
``long w-knots'' are obtained from long v-knots by also dividing
by the overcrossings commute (OC) relations, also shown in
Figure~\ref{fig:VKnotRels}.  Note that we never mod out by the
Reidemeister 1 (\glost{R1}) move nor by the undercrossings commute
relation (UC).
\end{definition}

\begin{figure}
\[ \pstex{VKnotRels} \]
\caption{
  The relations defining v-knots and w-knots, along with two relations that
  are {\em not} imposed.
} \label{fig:VKnotRels}
\end{figure}

\begin{defwarn} A ``circular v-knot'' is like a long v-knot, except
parametrized by a circle rather than by a long line. Unlike the case of
usual knots, circular v-knots are {\bf not} equivalent to long v-knots
\cite{ManturovIlyutko:VirtualKnotsBook}.  The same applies to w-knots.
\end{defwarn}

\begin{defwarn} Long v-knots form a monoid using the concatenation
operation $\#$. Unlike the case of usual knots, the resulting monoid
is {\bf not} abelian \cite{ManturovIlyutko:VirtualKnotsBook}.
%  The same applies to w-knots.
\end{defwarn}

\begin{remark} \label{rem:GD} A
``Gauss diagram'' is a straight ``skeleton
line'' along with signed directed chords (signed ``arrows'') marked along
it (more at~\cite{Kauffman:VirtualKnotTheory, ManturovIlyutko:VirtualKnotsBook,
GoussarovPolyakViro:VirtualKnots}). Gauss diagrams are in obvious
bijection with long v-knot diagrams; the skeleton line of a Gauss diagram
corresponds to the parameter space of the v-knot, and the arrows
correspond to the crossings, with each arrow heading from the upper strand
to the lower strand, marked by the sign of the relevant crossing:
\[ \pstex{GDExample} \]
One may also describe the relations in Figure~\ref{fig:VKnotRels} as well
as circular v-knots and other types of v-knots (as we will encounter later)
in terms of Gauss diagrams with varying skeletons.
\end{remark}

\begin{figure}
\[ \pstex{Kinks} \]
\caption{
  The positive and negative under-then-over kinks (left), and the positive
  and negative over-then-under kinks (right). In each pair the
  negative kink is the $\#$-inverse of the positive kink, where $\#$
  denotes the concatenation operation.
\label{fig:Kinks}}
\end{figure}

\begin{remark}\label{rem:Framing} 
Since we do not mod out by R1, it is perhaps more
appropriate to call our class of v/w-knots ``framed long v/w-knots'',
but since we care more about framed v/w-knots than about unframed ones,
we reserve the unqualified name for the framed case, and when we do wish to
mod out by R1 we will explicitly write ``unframed long v/w-knots''.

Recall that in the case of ``usual knots'', or u-knots, dropping the R1
relation altogether also results in a $\bbZ^2$-extension of unframed
knot theory, where the two factors of $\bbZ$ are framing and rotation
number. If one wants to talk about ``true'' framed knots, one mods out
by the spun Reidemeister 1 relation (\Rs\ of Figure~\ref{fig:VKnotRels}),
which preserves the blackboard framing but does not preserve the rotation
number. We take the analogous approach here, including the \Rs\ relation
--- but not R1 --- in the v and w cases.

This said, note that the monoid of long v-knots is just a central extension
by $\bbZ$ of the monoid of unframed long v-knots, and so studying the
framed case is not very different from studying the unframed case. Indeed
the four ``kinks'' of Figure~\ref{fig:Kinks} generate a central $\bbZ$ within
long v-knots, and it is not hard to show that the sequence
\begin{equation} \label{eq:FramedAndUnframed}
   1\longrightarrow
   \bbZ \longrightarrow
   \{\text{long v-knots}\} \longrightarrow
   \{\text{unframed long v-knots}\} \longrightarrow 1
\end{equation}
is split and exact. The same can be said for w-knots.
\end{remark}

\begin{exercise} \label{ex:sl} Show that a splitting of the
sequence~\eqref{eq:FramedAndUnframed} is given by the ``self-linking''
invariant $\glos{\sl}\colon \{\text{long v-knots}\}\to\bbZ$ defined by
\[
  \sl(K):=\sum_{\text{crossings}\atop x\text{ in }K}\sign x ,
\]
where $K$ is a v-knot diagram, and the sign of a crossing $x$ is defined
so as to agree with the signs in Figure~\ref{fig:Kinks}.
\end{exercise}
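
As a hint towards Exercise~\ref{ex:sl}, note that $\sl$ is a morphism of
monoids: the crossings of a concatenation $K_1\# K_2$ are precisely the
crossings of $K_1$ together with those of $K_2$, and hence
\[ \sl(K_1\# K_2)=\sl(K_1)+\sl(K_2). \]
As $\sl$ takes the values $\pm 1$ on the kinks of Figure~\ref{fig:Kinks},
which generate the central $\bbZ$ of Remark~\ref{rem:Framing}, most of the
exercise follows.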

\begin{remark} Note that w-knots are strictly weaker than v-knots --- a
notorious example is the Kishino knot (e.g.~\cite{Dye:Kishinos}), which is
non-trivial as a v-knot, yet both it and its mirror are trivial as
w-knots. Nevertheless, ordinary
knots inject even into w-knots, as the Wirtinger presentation makes sense
for w-knots and therefore w-knots have a ``fundamental quandle'' which
generalizes the fundamental quandle of ordinary
knots~\cite{Kauffman:VirtualKnotTheory, ManturovIlyutko:VirtualKnotsBook},
and as the fundamental quandle of ordinary knots separates ordinary
knots~\cite[Corollary~16.3]{Joyce:TheKnotQuandle}.
\end{remark}


\subsubsection{A topological construction of Satoh's tubing map}
\label{subsubsec:TopTube}
Following Satoh~\cite{Satoh:RibbonTorusKnots}
and using the same constructions as in Section~\ref{subsubsec:ribbon}, we
can map w-knots to (``long'') ribbon tubes in $\bbR^4$ (and the relations
in Figure~\ref{fig:VKnotRels} still hold). It is natural to expect that
this ``tubing'' map is an isomorphism; in other words, that the theory
of w-knots provides a ``Reidemeister framework'' for long ribbon tubes
in $\bbR^4$ --- that every long ribbon tube is in the image of this map
and that two ``w-knot diagrams'' represent the same long ribbon tube iff
they differ by a sequence of moves as in Figure~\ref{fig:VKnotRels}. This
remains unproven.

Let $\glos{\delta}\colon\{\text{v-knots}\} \to \{\text{ribbon
tori in } \bbR^4\}$ denote the tubing map; in~\cite{Satoh:RibbonTorusKnots},
Satoh calls $\delta$ ``Tube''. It is worthwhile to give a completely
``topological'' definition of $\delta$. To do this we must start with
a topological interpretation of v-knots.

The standard topological interpretation of v-knots
(see e.g.~\cite{Kuperberg:VirtualLink}) is that they are oriented framed knots
drawn\footnote{Here and below, ``drawn on $\Sigma$'' means ``embedded in
$\Sigma\times[-\epsilon,\epsilon]$''.} on an oriented surface $\Sigma$,
modulo ``stabilization'', which is the addition and/or removal of empty
handles (handles that do not intersect the knot). We prefer an
equivalent, yet even more bare-bones approach. For us, a virtual knot is an
oriented framed knot $\gamma$ drawn on a ``virtual surface $\glos{\Sigma}$ for
$\gamma$''. More precisely, $\Sigma$ is an oriented surface that may have
a boundary, $\gamma$ is drawn on $\Sigma$, and the pair $(\Sigma,\gamma)$
is taken modulo the following relations:
\begin{itemize}
\item Isotopies of $\gamma$ on $\Sigma$ (meaning, in
  $\Sigma\times[-\epsilon,\epsilon]$).
\item Tearing and puncturing parts of $\Sigma$ away from $\gamma$:
\end{itemize}
\[ \input{figs/TearingAndPuncturing.pstex_t} \]
(We call $\Sigma$ a ``virtual surface'' because tearing and puncturing
imply that we only care about it in the immediate vicinity of $\gamma$).

We can now define\footnote{Following a private discussion with Dylan
Thurston.} a map $\delta$ from v-knots to ribbon tori
in $\bbR^4$: given $(\Sigma,\gamma)$, embed $\Sigma$
arbitrarily in $\bbR^3_{xzt}\subset\bbR^4$. Note that the unit normal
bundle of $\Sigma$ in $\bbR^4$ is a trivial circle bundle and it has a
distinguished trivialization, constructed using its positive-$y$-direction
section and the orientation that gives each fibre a linking number
$+1$ with the base $\Sigma$.  We say that a normal vector to $\Sigma$
in $\bbR^4$ is ``near unit'' if its norm is between $1-\epsilon$ and
$1+\epsilon$. The near-unit normal bundle of $\Sigma$ has as fibre
an annulus that can be identified with $[-\epsilon,\epsilon]\times
S^1$ (identifying the radial direction $[1-\epsilon,1+\epsilon]$
with $[-\epsilon,\epsilon]$ in an orientation-preserving manner), and
hence, the near-unit normal bundle of $\Sigma$ defines an embedding
of $\Sigma\times[-\epsilon,\epsilon]\times S^1$ into $\bbR^4$. On the
other hand, $\gamma$ is embedded in $\Sigma\times[-\epsilon,\epsilon]$ so
$\gamma\times S^1$ is embedded in $\Sigma\times[-\epsilon,\epsilon]\times
S^1$, and we can let $\delta(\gamma)$ be the composition
\[ \gamma\times S^1
  \hookrightarrow\Sigma\times[-\epsilon,\epsilon]\times S^1
  \hookrightarrow\bbR^4,
\]
which is a torus in $\bbR^4$, oriented using the given orientation of
$\gamma$ and the standard orientation of $S^1$.

A framing of a knot (or a v-knot) $\gamma$ can be thought of as a
``nearby companion'' to $\gamma$. Applying the above procedure to a knot
and a nearby companion simultaneously, we find that $\delta$ takes framed
v-knots to framed ribbon tori in $\bbR^4$, where a framing of a tube in
$\bbR^4$ is a continuous up-to-homotopy choice of unit normal vector at
every point of the tube. Note that from the perspective of flying rings as
in Section~\ref{subsubsec:FlyingRings}, a framing is a ``companion ring''
to a flying ring. In the framing of $\delta(\gamma)$ the companion ring
is never linked with the main ring, but can fly parallel inside, outside,
above or below it and change these positions, as shown in Figure~\ref{fig:CompanionRing}.

\begin{figure}
 \input figs/CompanionRing.pstex_t
\caption{Framing as companion rings.}\label{fig:CompanionRing}
\end{figure}

We leave it to the reader to verify that $\delta(\gamma)$ is ribbon, that
it is independent of the choices made within its construction, that it is
invariant under isotopies of $\gamma$ and under tearing and puncturing of
$\Sigma$, that it is also invariant under the OC
relation of Figure~\ref{fig:VKnotRels} and hence, the true domain of
$\delta$ is w-knots, and that it is equivalent to Satoh's tubing map.

\draftcut
\subsection{Finite Type Invariants of v-Knots and w-Knots}
\label{subsec:FTforvwKnots}

Much as for v-braids and w-braids (see Section~\ref{subsec:FT4Braids}) and
much as for ordinary knots (e.g.~\cite{Bar-Natan:OnVassiliev}) we define
finite type invariants for v-knots and for w-knots using an alternation
scheme with $\semivirtualover\to\overcrossing-\virtualcrossing$
and $\semivirtualunder\to\virtualcrossing-\undercrossing$. That is,
given any invariant of v- or w-knots taking values in an abelian group, 
we extend the invariant to v- or
w-knots also containing ``semi-virtual crossings'' like $\semivirtualover$
and $\semivirtualunder$ using the above assignments, and we declare an
invariant to be ``of type $m$'' if it vanishes on v- or w-knots with more
than $m$ semi-virtuals. As for v- and w-braids and as for ordinary knots,
such invariants have an ``$m$th derivative'', their ``weight system'',
which is a linear functional on the space $\calA^{sv}(\uparrow)$ (for
v-knots) or $\calA^{sw}(\uparrow)$ (for w-knots). We turn to the definitions
of these spaces, following~\cite{GoussarovPolyakViro:VirtualKnots,
Bar-NatanHalachevaLeungRoukema:v-Dims}:

\begin{definition} \label{def:ArrowDiagrams} An ``arrow diagram''
is a chord diagram along a long line (called ``the skeleton''),
in which the chords are oriented (hence ``arrows''). An example is
given in Figure~\ref{fig:ADand6T}. Let $\glos{\calD^v(\uparrow)}$ be the space of
formal linear combinations of arrow diagrams.  Let $\glos{\calA^v(\uparrow)}$
be $\calD^v(\uparrow)$ modulo all ``6T relations''. Here a 6T relation is
any (signed) combination of arrow diagrams obtained from the diagrams in
Figure~\ref{fig:6T} by placing the 3 vertical strands there along a long
line skeleton in any order, and possibly adding some further arrows in between, 
as shown in Figure~\ref{fig:ADand6T}. Let $\glos{\calA^{sv}(\uparrow)}$
be the further quotient of $\calA^v(\uparrow)$ by the \glost{RI} relation,
where the RI (for rotation number independence) relation asserts that an
isolated arrow pointing to the right equals an isolated arrow pointing
to the left\footnote{
  The XII relation of~\cite{Bar-NatanHalachevaLeungRoukema:v-Dims} follows
  from RI and need not be imposed.
}, as shown in Figure~\ref{fig:ADand6T}.
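
In the notation of Figure~\ref{fig:AwGenerators} below (with $D_L$ and $D_R$
the two single-arrow diagrams appearing there), the RI relation simply reads
\[ D_L = D_R. \]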

\begin{figure}
\[ \pstex{ADand6T} \]
\caption{
  An arrow diagram of degree 6, a 6T relation, and an RI relation. The dotted parts indicate 
  that there may be more arrows on other parts of the skeleton, however these remain the 
  same throughout the relation.
} \label{fig:ADand6T}
\end{figure}

Let $\glos{\calA^w(\uparrow)}$ be the further quotient of $\calA^v(\uparrow)$
by the TC relation, first displayed
in Figure~\ref{fig:TCand4T} and reproduced for the case of a
long line skeleton in Figure~\ref{fig:TCand4TForKnots}. Likewise, let
$\glos{\calA^{sw}(\uparrow)}:=\calA^{sv}(\uparrow)/TC=\calA^w(\uparrow)/RI$.
Alternatively, noting that given TC two of the terms in 6T drop out,
$\calA^w(\uparrow)$ is the space of formal linear combinations
of arrow diagrams modulo TC and $\aft$ relations, displayed in
Figures~\ref{fig:TCand4T} and~\ref{fig:TCand4TForKnots}. Likewise,
$\calA^{sw}=\calD^v/TC,\aft,RI$. Finally, grade $\calD^v(\uparrow)$ and
all of its quotients by declaring that the degree of an arrow diagram
is the number of arrows in it.

\begin{figure}
\[ \pstex{TCand4TForKnots} \]
\caption{The TC and the $\protect\aft$ relations for knots.}
\label{fig:TCand4TForKnots}
\end{figure}

\end{definition}

As an example, the spaces $\calA^{v,sv,w,sw}(\uparrow)$ (that is, any of the spaces above)
restricted to degrees up to 2 are studied in detail in
Section~\ref{subsec:ToTwo}.
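
For orientation, the four spaces fit into a commutative square of quotient
maps, in which the horizontal arrows divide by the RI relation and the
vertical arrows divide by TC:
\[ \begin{array}{ccc}
  \calA^v(\uparrow) & \longrightarrow & \calA^{sv}(\uparrow) \\
  \big\downarrow & & \big\downarrow \\
  \calA^w(\uparrow) & \longrightarrow & \calA^{sw}(\uparrow)
\end{array} \]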

In the same manner as in the theory of finite type invariants of ordinary
knots (see especially~\cite[Section~3]{Bar-Natan:OnVassiliev}), the spaces
$\calA^{\star}(\uparrow)$ (meaning, all of the spaces above) carry much algebraic structure.  The
juxtaposition product makes them into graded algebras. The product of two
finite type invariants is a finite type invariant (whose type is the sum
of the types of the factors); this induces a product on weight systems,
and therefore a co-product $\Delta$ on arrow diagrams. In brief (and much
the same as in the usual finite type story), the co-product $\Delta D$
of an arrow diagram $D$ is the sum of all ways of dividing the arrows
in $D$ between a ``left co-factor'' and a ``right co-factor''. In summary:

\begin{proposition} \label{prop:CoarseStructure} $\calA^v(\uparrow)$,
$\calA^{sv}(\uparrow)$, $\calA^w(\uparrow)$, and
$\calA^{sw}(\uparrow)$ are co-commutative graded bialgebras.
\end{proposition}
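
For example, if $D_a$ denotes the degree $1$ arrow diagram consisting of a
single arrow $a$, then the only ways of dividing the arrows of $D_a$ between
the two co-factors yield
\[ \Delta(D_a) = D_a\otimes 1 + 1\otimes D_a, \]
so single-arrow diagrams are primitive in the sense of the next paragraph.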

By the Milnor-Moore theorem~\cite[Theorem 6.11]{MilnorMoore:Hopf} we find that
$\calA^{v,sv,w,sw}(\uparrow)$ are the
universal enveloping algebras of their Lie algebras of primitive
elements (that is, elements $D$ such that $\Delta(D)=1\otimes D + D \otimes 1$). Denote these (graded) Lie algebras by
$\glos{\calP^{v,sv,w,sw}(\uparrow)}$, respectively.

When we grow up we'd like to understand $\calA^v(\uparrow)$ and
$\calA^{sv}(\uparrow)$. At the moment we know only very little about these
spaces beyond the generalities of Proposition~\ref{prop:CoarseStructure}.
In Section \ref{subsec:SomeDimensions} some dimensions of low degree parts of
$\calA^{v,sv}(\uparrow)$ are discussed. Also, given a finite dimensional
Lie bialgebra and a finite dimensional representation thereof, we know
how to construct linear functionals on $\calA^v(\uparrow)$ (one in each
degree~\cite{Haviv:DiagrammaticAnalogue, Leung:CombinatorialFormulas}),
but not on $\calA^{sv}(\uparrow)$. But we don't even know which degree
$m$ linear functionals on $\calA^{sv}(\uparrow)$ are the weight systems
of degree $m$ invariants of v-knots (that is, we have not solved the
``Fundamental Problem''~\cite{Bar-NatanStoimenow:Fundamental} for
v-knots).

As we shall see below, the situation is much brighter for
$\calA^{w,sw}(\uparrow)$.


\draftcut
\subsection{Expansions for w-Knots} \label{subsec:Z4Knots}
The notion of ``an expansion'' (or ``a universal finite type invariant'')
for w-knots (or v-knots) is defined in complete analogy with the
parallel notion for usual knots (see e.g.~\cite{Bar-Natan:OnVassiliev}),
except replacing double points $\doublepoint$ with semi-virtual
crossings $\semivirtualover$ and $\semivirtualunder$, and replacing
chord diagrams by arrow diagrams. Alternatively, it is the same as
an expansion for w-braids (as in Definition~\ref{def:vwbraidexpansion}),
simply replacing w-braids by w-knots. Just as in the
cases of u-knots (i.e., ordinary knots) and/or w-braids, the existence of an expansion
$Z\colon \{\text{w-knots}\}\to\calA^{sw}(\uparrow)$ is equivalent to the
statement ``every weight system integrates'', i.e., ``every degree $m$
linear functional on $\calA^{sw}(\uparrow)$ is the $m$th derivative of
a type $m$ invariant of long w-knots''.

\begin{theorem} \label{thm:ExpansionForKnots}
There exists an expansion $Z\colon \{\text{w-knots}\}\to\calA^{sw}(\uparrow)$.
\end{theorem}

\begin{proof} It is best to define $Z$ by an example, and it is best to
display the example only as a picture:
\[ \pstex{ZwKnotsExample} \]
It is clear how to define $Z(K)$ in the general case --- for every crossing
in $K$ place an exponential reservoir of arrows (compare
with Equation~\eqref{eq:reservoir}) next to that crossing, with
the arrows heading from the upper strand to the lower strand, taking
positive reservoirs ($e^a$, with $a$ symbolizing the arrow) for positive
crossings and negative reservoirs ($e^{-a}$) for negative crossings, and
then tug the skeleton until it looks like a straight line. Note that the
TC relation in $\calA^{sw}$ is used to show that all reasonable
ways of placing an arrow reservoir at a crossing (with its heading and sign
fixed) are equivalent:
\[ \pstex{FourWays} \]
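
Here, as in the braid case, the exponential reservoir is shorthand for a
power series of arrow diagrams: writing $a^k$ for $k$ parallel copies of the
arrow $a$, we set
\[ e^{\pm a} = \sum_{k\geq 0}\frac{(\pm a)^k}{k!}
   = 1 \pm a + \frac{a^2}{2} \pm \cdots, \]
with the sum understood degree by degree in $\calA^{sw}(\uparrow)$.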

The same proof that shows the invariance of $Z$ in the braid case
(see Theorem~\ref{thm:RInvariance}) works here as well\footnote{A tiny bit of
extra care is required for invariance under \Rs: it easily follows from
RI.}, and the same argument as in the braid case shows the universality
of $Z$. \qed
\end{proof}

\begin{remark} \label{rem:ZwForGD} Using the language of Gauss diagrams
(Remark~\ref{rem:GD}) the definition of $Z$ is even simpler. Simply map
every positive arrow in a Gauss diagram to a positive ($e^a$) reservoir,
and every negative one to a negative ($e^{-a}$) reservoir:
\[ \pstex{ZwForGD} \]
\end{remark}

An expansion (a universal finite type invariant) is as interesting as its
target space, for it is just a tool that takes linear functionals on the
target space to finite type invariants on its domain space. The purpose
of the next section is to find out how interesting our present target
space, $\calA^{sw}(\uparrow)$, and its ``parent'', $\calA^w(\uparrow)$,
really are.

\draftcut
\subsection{Jacobi Diagrams, Trees and Wheels} \label{subsec:Jacobi}

In studying $\calA^w(\uparrow)$ we again follow the model set
by usual knots: we introduce the space $\calA^{wt}$ of ``w-Jacobi diagrams'' and show that
it is isomorphic to $\calA^w$. Major advantages of working with $\calA^{wt}$
are that the co-product, the primitives, and the relationship with Lie algebras are
much more natural and easy to describe. Compare the following definitions and theorem
with~\cite[Section~3]{Bar-Natan:OnVassiliev}.

\begin{definition} \label{def:wJac} A ``w-Jacobi diagram on a long line
skeleton''\footnote{What a mouthful! We usually shorten this to
``w-Jacobi diagram'', or sometimes ``arrow diagram'' or just ``diagram''.}
is a connected graph made of the following ingredients:
\begin{itemize}
\item A ``long'' oriented ``skeleton'' line. We usually draw the skeleton
  line a bit thicker for emphasis.
\item Other directed edges, usually called ``arrows''.
\item Trivalent ``skeleton vertices'' in which an arrow starts or ends on
  the skeleton line.
\item Trivalent ``internal vertices'' in which {\em two arrows end and one arrow
  begins} (this will be important in Section \ref{subsec:LieAlgebras} where
  we relate these diagrams to Lie algebras). 
  The internal vertices are ``oriented'' --- of the two arrows that
  end in an internal vertex, one is marked as ``left'' and the other is
  marked as ``right''. In reality when a diagram is drawn in the plane, we
  almost never mark ``left'' and ``right'', but instead assume the
  ``left'' and ``right'' inherited from the plane, as seen from the
  outgoing arrow from the given vertex.
\end{itemize}
Note that we allow multiple arrows connecting the same two vertices
(though at most two are possible, given connectedness and trivalence)
and we allow ``bubbles'' --- arrows that begin and end in the same
vertex. Also keep in mind that for the purpose of determining equality of diagrams
the skeleton line is distinguished.
The ``degree'' of a w-Jacobi diagram is half the number of
trivalent vertices in it, including both internal and skeleton vertices.
An example of a w-Jacobi diagram is in Figure~\ref{fig:wJacDiag}.
\end{definition}

\begin{figure}
\[ \pstex{wJacDiag} \]
\caption{A degree 11 w-Jacobi diagram on a long line skeleton. It has a
skeleton line at the bottom, 13 vertices along the skeleton (of which 2 are
incoming and 11 are outgoing), 9 internal vertices (with only one
explicitly marked with ``left'' ($l$) and ``right'' ($r$)) and one
bubble. The five quadrivalent vertices that seem to appear in the diagram
are just projection artifacts and graph-theoretically, they don't exist.}
\label{fig:wJacDiag}
\end{figure}

\begin{definition}
Let $\glos{\calD^{wt}}(\uparrow)$ be the graded vector space of formal
linear combinations\footnote{$\bbQ$-linear, or any other field of 
characteristic 0.} of w-Jacobi diagrams on a long line skeleton,
and let $\glos{\calA^{wt}}(\uparrow)$ be $\calD^{wt}(\uparrow)$
modulo the $\glos{\aSTU_1}$, $\glos{\aSTU_2}$, and TC relations of
Figure~\ref{fig:aSTU}. Note that each diagram appearing in each
$\aSTU$ relation has a ``central edge'' $e$ which can serve as an
``identifying name'' for that $\aSTU$. Thus, given a diagram $D$ with
a marked edge $e$ which is either on the skeleton or which contacts
the skeleton, there is an unambiguous $\aSTU$ relation ``around'' or
``along'' the edge $e$.
\end{definition}

\begin{figure}
\[ \pstex{aSTU} \]
\caption{The $\protect\aSTU_{1,2}$ and TC relations with
their ``central edges'' marked $e$.}
\label{fig:aSTU}
\end{figure}

\begin{figure}
\[ \pstex{aIHX} \]
\caption{The $\protect\aAS$ and $\protect\aIHX$ relations.}
\label{fig:aIHX}
\end{figure}

We like to call the following theorem ``the bracket-rise theorem'',
for it justifies the introduction of internal vertices, and as
should be clear from the $\aSTU$ relations and as will become even
clearer in Section~\ref{subsec:LieAlgebras}, internal vertices can be
viewed as ``brackets''. Two other bracket-rise theorems are Theorem~6
of~\cite{Bar-Natan:OnVassiliev} and Ohtsuki's theorem, i.e., Theorem~4.9
of~\cite{Polyak:ArrowDiagrams}.

\begin{theorem}[Bracket-rise] \label{thm:BracketRise} The obvious inclusion
$\iota\colon \calD^v(\uparrow)\to\calD^{wt}(\uparrow)$ of arrow diagrams
(see Definition~\ref{def:ArrowDiagrams}) into w-Jacobi diagrams descends
to the quotient $\calA^w(\uparrow)$ and induces an isomorphism\footnote{At 
this point a vector space isomorphism, but we'll soon define a bialgebra 
structure on $\calA^{wt}$ to make it into an isomorphism of bialgebras.}
$\bar\iota\colon \calA^w(\uparrow)\stackrel{\sim}{\longrightarrow}
\calA^{wt}(\uparrow)$.  Furthermore, the $\glos{\aAS}$ and $\glos{\aIHX}$
relations of Figure~\ref{fig:aIHX} hold in $\calA^{wt}(\uparrow)$.
\end{theorem}

\begin{proof} The proof, joint with D.~Thurston, is modelled after
the proof of Theorem~6 of~\cite{Bar-Natan:OnVassiliev}. To show that
$\iota$ descends to $\calA^w(\uparrow)$ we just need to show that in
$\calA^{wt}(\uparrow)$, $\aft$ follows from $\aSTU_{1,2}$. Indeed,
applying $\aSTU_1$ along the edge $e_1$ and $\aSTU_2$ along $e_2$ in
the picture below, we get the two sides of $\aft$:
\begin{equation} \label{eq:STUto4T}
  \pstex{STUto4T}
\end{equation}

The fact that $\bar\iota$ is surjective is easy: indeed, for diagrams
in $\calA^{wt}(\uparrow)$ that have no internal vertices there is
nothing to show, for they are really in $\calA^w(\uparrow)$. Further,
by repeated use of $\aSTU_{1,2}$ relations, all internal vertices in
any diagram in $\calA^{wt}(\uparrow)$ can be removed (remember that
the diagrams in $\calA^{wt}(\uparrow)$ are always connected, and in
particular, if they have an internal vertex they must have an internal
vertex connected by an edge to the long line skeleton, and the latter 
vertex can be removed first).

To complete the proof that $\bar\iota$ is an isomorphism it is enough
to show that the ``elimination of internal vertices'' procedure of
the last paragraph is well-defined --- that its output is independent
of the order in which the $\aSTU_{1,2}$ relations are applied to
eliminate internal vertices. Indeed, once this is done, the elimination
map would by definition satisfy the $\aSTU_{1,2}$ relations and thus
descend to a well-defined inverse for $\bar\iota$.

On diagrams with just one internal vertex, Equation~\eqref{eq:STUto4T}
shows that all ways of eliminating that vertex are equivalent modulo $\aft$
relations, and hence, the elimination map is well-defined on such diagrams.

We proceed by induction on the number of internal vertices. We have shown
that the elimination map is well-defined on diagrams with just one internal
vertex. Now assume that the elimination map is well-defined on all diagrams
with at most $k$ internal vertices for some integer $k\geq 1$, and let $D$
be a diagram with $(k+1)$ internal
vertices. Let $e$ and $e'$ be edges in $D$ that connect the skeleton of
$D$ to an internal vertex. We need to show that any elimination process
that begins with eliminating $e$ yields the same answer, modulo $\aft$,
as any elimination process that begins with eliminating $e'$. There are
several cases to consider.

\parpic[r]{$\pstex{CaseI}$}
{\bf Case I.} The edges $e$ and $e'$ connect the skeleton to {\em different} internal
vertices of $D$. In this case, after eliminating $e$ we get a signed sum
of two diagrams with exactly $k$ internal vertices each, and since the
elimination process is well-defined on such diagrams, we may as well
continue by eliminating $e'$ in each of those, getting a signed sum of 4
diagrams with $k-1$ internal vertices each. On the other hand, if we start
by eliminating $e'$ we can continue by eliminating $e$, and we get the
{\em same} signed sum of 4 diagrams with $k-1$ internal vertices.

\parpic[r]{$\pstex{CaseII}$}
{\bf Case II.} The edges $e$ and $e'$ are connected to the same internal vertex $v$
of $D$, yet some other edge $e''$ exists in $D$ that connects the skeleton
of $D$ to some other internal vertex $v'$ in $D$. In that case, use the
previous case and the transitivity of equality: (elimination starting with
$e$)=(elimination starting with $e''$)=(elimination starting with $e'$).

\parpic[r]{$\pstex{CaseIII}$}
{\bf Case III.} This is what remains if neither Case I nor Case II
hold. In that case, $D$ must have a schematic form as on the right,
with the ``blob'' not connected to the skeleton other than via $e$
or $e'$, yet further arrows may exist outside of the blob. Let $f$
denote the edge connecting the blob to $e$ and $e'$. The ``two in one
out'' rule for vertices implies that any part of a diagram must have
an excess of incoming edges over outgoing edges, equal to the total
number of vertices in that diagram part. Applying this principle to
the blob, we find that it must contain exactly one vertex, as shown on
the right below. Then by the ``two in one out'' rule $f$ must be oriented 
upwards, and hence, by the ``two in one out'' rule again, 
$e$ and $e'$ must be oriented upwards as well.

\parpic[r]{$\pstex{CaseIIIa}$}
We leave it to the reader to verify that in this case the two ways of
applying the elimination procedure, $e$ and then $f$ or $e'$ and then $f$,
yield the same answer modulo $\aft$ (in fact, that answer is $0$).

We also leave it to the reader to verify that $\aSTU_1$ implies $\aAS$
and $\aIHX$.  In Section \ref{subsec:LieAlgebras} we'll describe
the relationship between $\calA^{wt}$ and Lie algebras. 
Algebraically, the relations $\aSTU_1$, $\aAS$
and $\aIHX$ are restatements of the anti-symmetry
of the bracket and of Jacobi's identity: if $[x,y]:=xy-yx$, then
$0=[x,y]+[y,x]$ and $[x,[y,z]]=[[x,y],z]-[[x,z],y]$. \qed
\end{proof}
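
For the reader's convenience, here is the direct verification of the second
of these identities: with $[x,y]=xy-yx$, both sides of
$[x,[y,z]]=[[x,y],z]-[[x,z],y]$ expand to the same four monomials,
\begin{align*}
  [x,[y,z]] &= xyz-xzy-yzx+zyx, \\
  [[x,y],z]-[[x,z],y]
    &= (xyz-yxz-zxy+zyx)-(xzy-zxy-yxz+yzx) \\
    &= xyz-xzy-yzx+zyx.
\end{align*}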

Note that $\calA^{wt}(\uparrow)$ inherits algebraic structure from
$\calA^w(\uparrow)$: it is an algebra by concatenation of diagrams,
and a co-algebra with $\Delta(D)$, for $D\in\calD^{wt}(\uparrow)$,
being the sum of all ways of dividing $D$ between a ``left co-factor''
and a ``right co-factor'' so that connected components of $D-S$
are kept intact, where $S$ is the skeleton line of $D$ (compare
with~\cite[Definition~3.7]{Bar-Natan:OnVassiliev}).

As $\calA^w(\uparrow)$ and $\calA^{wt}(\uparrow)$ are canonically
isomorphic, from this point on we will not keep the distinction between
the two spaces. 
One may add the RI relation to the definition of $\calA^{wt}(\uparrow)$
to get a space $\calA^{swt}(\uparrow)$. For an unframed version one may 
add the stronger framing independence (FI) relation, setting $D_L=D_R=0$, with $D_L$
and $D_R$ the single arrows as in Figure~\ref{fig:AwGenerators}. The resulting space is 
called $\calA^{rwt}(\uparrow)$. The statement and proof of the
bracket rise theorem adapt with no difficulty, and we find that
$\calA^{sw}(\uparrow)\cong\calA^{swt}(\uparrow)$ and
$\calA^{rw}(\uparrow)\cong\calA^{rwt}(\uparrow)$. In the
future we'll drop the $t$ from all superscripts.

The advantages of allowing internal trivalent vertices are
already apparent (for example, note that there is a nice description of
primitive elements: they are the arrow diagrams which remain connected 
if the skeleton is removed). Further advantages will emerge in Section
\ref{subsec:LieAlgebras}.

\begin{figure}
\[ \pstex{AwGenerators} \]
\caption{The left-arrow diagram $D_L$, the right-arrow diagram $D_R$ and
  the $k$-wheel $w_k$.}
\label{fig:AwGenerators}
\end{figure}

\begin{theorem} \label{thm:Aw}
The bialgebra $\calA^w(\uparrow)$ is the bialgebra of polynomials
in the diagrams $\glos{D_L}$, $\glos{D_R}$ and $\glos{w_k}$
(for $k\geq 1$) shown in Figure~\ref{fig:AwGenerators}, where
$\deg D_L=\deg D_R=1$ and $\deg w_k=k$, subject to the one relation
$w_1=D_L-D_R$. Thus, $\calA^w(\uparrow)$ has two generators in degree
1 and one generator in every degree greater than 1, as stated in
Section~\ref{subsec:SomeDimensions}.
\end{theorem}

\begin{proof} (sketch). Readers familiar with the diagrammatic PBW
theorem~\cite[Theorem~8]{Bar-Natan:OnVassiliev} will note that it has
a direct analogue for the $\calA^w(\uparrow)$ case, and that the proof
in~\cite{Bar-Natan:OnVassiliev} carries through almost verbatim. Namely,
the space $\calA^w(\uparrow)$ is isomorphic to a space $\glos{\calB^w}$
of ``unitrivalent diagrams'' with symmetrized univalent ends modulo
$\aAS$ and $\aIHX$. Given the ``two in one out'' rule for arrow
diagrams in $\calA^w(\uparrow)$ (and hence, in $\calB^w$)
the connected components of diagrams in $\calB^w$ can only be
``trees'' or ``wheels''. A tree is a unitrivalent diagram with no
cycles (oriented or not). A wheel is a single oriented cycle with some
number of incoming ``spokes'' (see $w_k$ in Figure \ref{fig:AwGenerators} and remove the skeleton line).
The reader might object that there are also ``wheels of trees'': trees 
attached to an oriented cycle, but these can be reduced to linear
combinations of wheels using the $\aIHX$ relation.

Trees vanish if they have more than one leaf, as their
leafs are symmetric while their internal vertices are anti-symmetric,
so $\calB^w$ is generated by wheels and by the one-leaf-one-root tree, which is simply
a single arrow. Wheels map to the $w_k$ in
$\calA^w(\uparrow)$ under the isomorphism, and  the arrow maps to the average of 
$D_L$ and $D_R$. The relation $w_1=D_L-D_R$ is then easily verified using $\aSTU_2$.

One may also argue directly, without using $\calB^w$. In
short, let $D$ be a diagram in $\calA^w(\uparrow)$ and let $S$ be its
skeleton. Then $D-S$ may have several connected components, whose ``legs''
are intermingled along $S$. Using the $\aSTU$ relations these legs can
be sorted (at a cost of diagrams with fewer connected components, which
could have been treated earlier in an inductive proof). At the end of the
sorting procedure one can see that the only diagrams that remain are our
declared generators. It remains to show that our generators are linearly
independent (apart from the relation $w_1=D_L-D_R$). For the generators
in degree 1, simply write everything out explicitly in the spirit of
Section~\ref{subsubsec:DegreeOne}. In higher degrees there is only one
primitive diagram in each degree, so it is enough to show that $w_k\neq
0$ for every $k$. This can be done ``by hand'', but it is more easily
done using Lie algebraic tools in Section~\ref{subsec:LieAlgebras}. \qed
\end{proof}

\begin{exercise} \label{exe:Asw} Show that the bialgebra
$\calA^{rw}(\uparrow)$ (see Section~\ref{subsec:SomeDimensions}) is
the bialgebra of polynomials in the wheel diagrams $w_k$ ($k\geq 2$),
and that $\calA^{sw}(\uparrow)$ is the bialgebra of polynomials in the
same wheel diagrams and an additional generator $\glos{D_A}:=D_L=D_R$.
\end{exercise}

\begin{proposition} \label{prop:AwCirc} In $\calA^w(\bigcirc)$ all wheels
vanish, and hence, the bialgebra $\calA^w(\bigcirc)$ is the bialgebra
of polynomials in a single variable $D_L=D_R$.
\end{proposition}

\begin{proof} This is Lemma~2.7 of~\cite{Naot:BF}. In short, a wheel in
$\calA^w(\bigcirc)$ can be reduced using $\aSTU_2$ to a difference of
trees, as shown in Figure \ref{fig:WheelInCircle}. 
One of these trees has two adjoining leafs, and hence, it is $0$ by TC and
$\aAS$. In the other, two of the leafs can be commuted ``around the circle''
using TC until they are adjoining, and hence, that tree also vanishes by
TC and $\aAS$.
\qed
\end{proof}

\begin{figure}
 \input{figs/WheelInCircle.pstex_t}
\caption{Wheels in a circle vanish.}\label{fig:WheelInCircle}
 \end{figure}


\begin{exercise} Show that $\calA^{sw}(\bigcirc)\cong\calA^w(\bigcirc)$
yet $\calA^{rw}(\bigcirc)$ vanishes except in degree $0$.
\end{exercise}

The following two exercises may help the reader to develop a better
``feel'' for $\calA^w(\uparrow)$, and will be needed within the discussion
of the Alexander polynomial (especially within
Definition~\ref{def:InterpretationMap}).

\parpic[r]{\raisebox{-12mm}{$\pstex{CC}$}}
\begin{exercise} Show
that the ``commutators commute'' (\glost{CC}) relation,
shown on the right, holds in $\calA^w(\uparrow)$. (Interpreted in
terms of Lie algebras as in the next section, this relation becomes $[[x,y],
[z,w]]=0$, and hence the name ``commutators commute''). Note that the
proof of CC depends on the skeleton having a single component; later,
when we will work with $\calA^w$-spaces with more complicated skeleta,
the CC relation will not hold.
\end{exercise}

\parpic[r]{\raisebox{-2mm}{$\pstex{Hair}$}}
\begin{exercise} \label{ex:Hair} Show that ``detached wheels'' and
``hairy $Y$'s'' make sense in $\calA^w(\uparrow)$. As on the right, a
detached wheel is a wheel with a number of spokes, and a hairy $Y$ is a
combinatorial $Y$ shape (three arrows meeting at a single internal vertex) 
with further ``hair'' on its trunk (its outgoing
arrow). It is specified where the trunk and the leafs of the $Y$ connect
to the skeleton, but it is not specified where the spokes of the wheel
and where the hair on the $Y$ connect to the skeleton. The content of the
exercise is to show that modulo the relations of $\calA^w(\uparrow)$,
it is not necessary to specify this further information: all ways of
connecting the spokes and the hair to the skeleton are equivalent. Like
the previous exercise, this result depends on the skeleton having a
single component.
\end{exercise}

\begin{remark} In the case of usual knots and usual chord diagrams,
Jacobi diagrams have a topological interpretation using the
Goussarov-Habiro calculus of claspers~\cite{Goussarov:3Manifolds,
Habiro:Claspers}. In the w case a similar such calculus was developed by 
Watanabe in~\cite{Watanabe:ClasperMoves}. Various related results are 
at~\cite{HabiroKanenobuShima:R2K, HabiroShima:R2KII}.
\end{remark}

\draftcut
\subsection{The Relation with Lie Algebras} \label{subsec:LieAlgebras}
The
theory of finite type invariants of knots is related to the theory
of metrized Lie algebras via the space $\calA$ of chord diagrams, as
explained in~\cite[Theorem~4 and Exercise~5.1]{Bar-Natan:OnVassiliev}. In
a similar manner the theory of finite type invariants of w-knots is
related to arbitrary finite-dimensional Lie algebras (or equivalently, to
doubles of co-commutative Lie bialgebras, as explained below) via the space $\calA^w(\uparrow)$
of arrow diagrams.

\subsubsection{Preliminaries} Given a finite dimensional Lie 
algebra\footnote{Over $\bbQ$, or another field of characteristic 0.}
$\glos{\frakg}$ let $\glos{I\frakg}:=\frakg^\ast\rtimes\frakg$ be the
semi-direct product of the dual $\frakg^\ast$ of $\frakg$ with $\frakg$,
with $\frakg^\ast$ taken as an abelian algebra and with $\frakg$ acting
on $\frakg^\ast$ by the usual coadjoint action. In formulae,
\[ I\frakg=\{(\varphi, x)\colon \,\varphi\in\frakg^\ast,\,x\in\frakg\}, \]
\[ [(\varphi_1,x_1), (\varphi_2,x_2)]
  = (x_1\varphi_2-x_2\varphi_1, [x_1,x_2]).
\]

In the case where $\frakg$ is the algebra $\mathfrak{so}(3)$ of infinitesimal
symmetries of $\bbR^3$, its dual $\frakg^\ast$ is $\bbR^3$ itself with the
usual action of $\mathfrak{so}(3)$ on it, and $I\frakg$ is the algebra $\bbR^3\rtimes
\mathfrak{so}(3)$ of infinitesimal affine isometries of $\bbR^3$. This is the
Lie algebra of the Euclidean group of isometries of $\bbR^3$, which is
often denoted $ISO(3)$. This explains our choice of the name $I\frakg$.

Note that, if $\frakg$ is a co-commutative Lie bialgebra, then $I\frakg$
is the ``double'' of $\frakg$~\cite{Drinfeld:QuantumGroups}. This
is a significant observation, for it is a part of the relationship
between this paper and the Etingof-Kazhdan theory of quantization of
Lie bialgebras~\cite{EtingofKazhdan:BialgebrasI}. Yet we will make no
explicit use of this observation below.

In the construction that follows we are going to define a map
from $\calA^w$ to $\glos{\calU}(I\frakg)$, the universal enveloping algebra of
$I\frakg$.  Note that a map ${\calA}^w
\to \calU(I\frakg)$ is ``almost the same'' as a map $\calA^{sw} \to
\calU(I\frakg)$, in the following sense.  The quotient
map $p\colon {\calA}^w \to \calA^{sw}$ has a one-sided inverse
$F\colon  \calA^{sw} \to {\calA}^w$ defined by
\[ F(D)= \sum_{k=0}^\infty \frac{(-1)^k}{k!} S_L^k(D)\cdot w_1^k. \]
Here $S_L$ denotes the map that sends an arrow diagram to the sum of
all ways of deleting a left-going arrow, $S_L^k$ is $S_L$ applied $k$ times, 
and $w_1$ denotes the 1-wheel,
as shown in Figure~\ref{fig:AwGenerators}.  The reader can verify that
$F$ is well-defined, an algebra- and co-algebra homomorphism, and that
$p\circ F= id_{\calA^{sw}}$.

\subsubsection{The Construction} Fixing a finite dimensional Lie algebra
$\frakg$, we construct a map $\glos{\calT^w_\frakg}\colon
{\calA}^w\to\calU(I\frakg)$ which assigns to every arrow diagram $D$
an element of the universal enveloping algebra $\calU(I\frakg)$. As is
often the case in our subject, a picture of a typical example is worth
more than a formal definition:
\[
  \def\I{{$I$}}
  \def\B{{$B$}}
  \def\g{{$\frakg$}}
  \def\d{{$\frakg^\ast$}}
  \def\p{{$\frakg^\ast\otimes\frakg^\ast\otimes\frakg\otimes\frakg
    \otimes\frakg^\ast\otimes\frakg^\ast$}}
  \def\u{{$\calU(I\frakg)$}}
  \pstex{Twg}
\]

In short, we break up the diagram $D$ into its constituent
pieces and assign a copy of the structure constants tensor
$B\in\frakg^\ast\otimes\frakg^\ast\otimes\frakg$ to each internal
vertex $v$ of $D$ (keeping an association between the tensor factors
in $\frakg^\ast\otimes\frakg^\ast\otimes\frakg$ and the edges emanating
from $v$, as dictated by the orientations of the edges and of the vertex
$v$ itself). We assign the identity tensor in
$\frakg^\ast\otimes\frakg$ to every arrow in $D$ that is not connected to an
internal vertex, and contract any pair of factors connected by a fully
internal arrow. The remaining tensor factors
($\frakg^\ast\otimes\frakg^\ast\otimes\frakg\otimes\frakg
\otimes\frakg^\ast\otimes\frakg^\ast$ in our examples) are all along the
skeleton and can thus be ordered by the skeleton. We then multiply these
factors to get an output $\calT^w_\frakg(D)$ in $\calU(I\frakg)$.

It is also useful to restate this construction given a choice of a basis.
Let $\glos{(x_j)}$ be a basis of $\frakg$ and let $\glos{(\varphi^i)}$
be the dual basis of $\frakg^\ast$, so that $\varphi^i(x_j)=\delta^i_j$,
and let $\glos{b_{ij}^k}$ denote the structure constants of $\frakg$ in
the chosen basis: $[x_i,x_j]=\sum b_{ij}^kx_k$. Mark every arrow in $D$
with a lower case Latin letter from within\footnote{The
supply of these can be made inexhaustible by the addition of numerical
subscripts.} $\{i,j,k,\dots\}$. Form a product $P_D$ by taking one $b_{\alpha\beta}^\gamma$
factor for each internal vertex $v$ of $D$ using the letters marking the
edges around $v$ for $\alpha$, $\beta$ and $\gamma$ and by taking one
$x_\alpha$ or $\varphi^\beta$ factor for each skeleton vertex of $D$,
taken in the order that they appear along the long line skeleton, with the indices
$\alpha$ and $\beta$ dictated by the edge markings and with the choice
between factors in $\frakg$ and factors in $\frakg^\ast$ dictated by the
orientations of the edges. Finally let $\calT^w_\frakg(D)$ be the sum
of $P_D$ over the indices $i,j,k,\dots$ running from $1$ to $\dim\frakg$:

\begin{equation} \label{eq:Twb}
  \def\P{{$\displaystyle
    \sum_{i,j,k,l,m,n=1}^{\dim\frakg}
    \hspace{-4mm} b_{ij}^kb_{kl}^m
    \varphi^i\varphi^jx_nx_m\varphi^l\in\calU(I\frakg)
  $}}
  \pstex{Twb}
\end{equation}

The next proposition is easy to verify (compare with~\cite[Theorem~4 and
Exercise~5.1]{Bar-Natan:OnVassiliev}):

\begin{proposition} The above two definitions of $\calT^w_\frakg$ agree, are
independent of the choices made within them, and respect all the relations
defining ${\calA}^w$. \qed
\end{proposition}

While we do not provide a proof of this proposition here, it is worthwhile
to state the correspondence between the relations defining ${\calA}^w$
and the Lie algebraic information in $\calU(I\frakg)$: $\aAS$ is
the antisymmetry of the bracket of $\frakg$, $\aIHX$ is the Jacobi
identity of $\frakg$, $\aSTU_1$ and  $\aSTU_2$ are the relations
$[x_i,x_j]=x_ix_j-x_jx_i$ and $[\varphi^i,x_j]=\varphi^ix_j-x_j\varphi^i$
in $\calU(I\frakg)$, $TC$ is the fact that $\frakg^\ast$ is taken as an
abelian algebra, and $\aft$ is the fact that the identity tensor in
$\frakg^\ast\otimes\frakg$ is $\frakg$-invariant.

\subsubsection{Example: The 2 Dimensional Non-Abelian Lie Algebra}
Let $\frakg$ be the Lie algebra with two generators $x_{1,2}$
satisfying $[x_1,x_2]=x_2$, so that the only non-vanishing structure
constants $b_{ij}^k$ of $\frakg$ are $b_{12}^2=-b_{21}^2=1$. Let
$\varphi^i\in\frakg^\ast$ be the dual basis of $x_i$; by an easy
calculation, we find that in $I\frakg$ the element $\varphi^1$ is
central, while $[x_1,\varphi^2]=-\varphi^2$ and
$[x_2,\varphi^2]=\varphi^1$. We calculate $\calT^w_\frakg(D_L)$,
$\calT^w_\frakg(D_R)$ and $\calT^w_\frakg(w_k)$ using the ``in basis''
technique of Equation~\eqref{eq:Twb}. The outputs of these calculations lie
in $\calU(I\frakg)$; we display these results in a PBW basis in which the
elements of $\frakg^\ast$ precede the elements of $\frakg$:

\begin{eqnarray} 
  \calT^w_\frakg(D_L)
    &=& x_1\varphi^1+x_2\varphi^2 =
      \varphi^1x_1+\varphi^2x_2+[x_2,\varphi^2]
      = \varphi^1x_1+\varphi^2x_2+\varphi^1, \notag \\
  \calT^w_\frakg(D_R) &=& \varphi^1x_1+\varphi^2x_2, \label{eq:2DExample} \\
  \calT^w_\frakg(w_k) &=& (\varphi^1)^k. \notag
\end{eqnarray}

\parpic[r]{$\pstex{4wheel}$}
For the last assertion above, note that all non-vanishing structure
constants $b_{ij}^k$ in our case have $k=2$, and therefore all indices
corresponding to edges that exit an internal vertex must be set equal to
$2$. This forces the ``hub'' of $w_k$ to be marked $2$ and therefore the
legs to be marked $1$, and therefore $w_k$ is mapped to $(\varphi^1)^k$.

Note that the calculations in~\eqref{eq:2DExample} are consistent with the
relation $D_L-D_R=w_1$ of Theorem~\ref{thm:Aw} and that they show that
other than that relation, the generators of ${\calA}^w$ are linearly
independent.
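The bracket relations of $I\frakg$ used in this example can be checked mechanically. Below is a minimal sketch (ours; all helper names are our own, and the sign convention $(x\cdot\varphi)(y)=-\varphi([x,y])$ for the coadjoint action is the one that reproduces the relations quoted above) of the semidirect product bracket on coefficient vectors:

```python
# Sketch: the bracket of Ig = g* x| g on coefficient vectors, for the
# 2-dimensional non-abelian Lie algebra with [x_1, x_2] = x_2.
# (Our illustration; coadjoint convention (x.phi)(y) = -phi([x, y]).)

# structure constants, 0-indexed: [x_i, x_j] = sum_k b[i][j][k] x_k
b = [[[0, 0], [0, 1]],
     [[0, -1], [0, 0]]]

def bracket_g(x, y):
    """Bracket in g on coefficient vectors in the basis (x_1, x_2)."""
    return [sum(x[i] * y[j] * b[i][j][k] for i in range(2) for j in range(2))
            for k in range(2)]

def coad(x, phi):
    """Coadjoint action of x on phi in g*: (x . phi)(y) = -phi([x, y])."""
    return [-sum(x[i] * b[i][j][k] * phi[k] for i in range(2) for k in range(2))
            for j in range(2)]

def bracket_Ig(a, c):
    """[(p1, x1), (p2, x2)] = (x1 . p2 - x2 . p1, [x1, x2])."""
    (p1, x1), (p2, x2) = a, c
    return ([u - v for u, v in zip(coad(x1, p2), coad(x2, p1))],
            bracket_g(x1, x2))

x1, x2 = ([0, 0], [1, 0]), ([0, 0], [0, 1])
phi1, phi2 = ([1, 0], [0, 0]), ([0, 1], [0, 0])

assert bracket_Ig(x1, phi1) == ([0, 0], [0, 0])   # phi^1 is central
assert bracket_Ig(x2, phi1) == ([0, 0], [0, 0])
assert bracket_Ig(x1, phi2) == ([0, -1], [0, 0])  # [x_1, phi^2] = -phi^2
assert bracket_Ig(x2, phi2) == ([1, 0], [0, 0])   # [x_2, phi^2] = phi^1
```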

\draftcut \subsection{The Alexander Polynomial} \label{subsec:Alexander}
Let $K$ be a long w-knot, and let $Z(K)$ be the invariant of
Theorem~\ref{thm:ExpansionForKnots}. Theorem~\ref{thm:Alexander} below
asserts that apart from self-linking, $Z(K)$ contains precisely the
same information as the Alexander polynomial $A(K)$ of $K$ (recalled
below). But we have to start with some definitions.

\begin{figure}
\begin{center}
  $\pstex{8-17}$
  \qquad
  \raisebox{-18mm}{\includegraphics[height=40mm]{figs/SandersonsGarden_640.ps}}
\end{center}
\caption{
  A long $8_{17}$, with the span of crossing $\#3$
  marked.  The projection is as in Brian Sanderson's garden.
  See~\cite{WKO}/\href{http://www.math.toronto.edu/~drorbn/papers/WKO/SandersonsGarden.html}{\tt
  SandersonsGarden.html}.
} \label{fig:817}
\end{figure}

\begin{definition} \label{def:STA} Enumerate the crossings of $K$
from $1$ to $n$ in some arbitrary order. For \linebreak $1\leq i\leq n$, the
``span'' of crossing $\#i$ is the connected open interval along the line
parametrizing $K$ between the two times $K$ ``visits'' crossing $\#i$
(see Figure~\ref{fig:817}). Form a matrix $T=\glos{T(K)}$ with $T_{ij}$
the indicator function of ``the lower strand of crossing $\#j$ is within
the span of crossing $\#i$'' (so $T_{ij}$ is $1$ if for a given $i,j$
the quoted statement is true, and $0$ otherwise). Let $\glos{s_i}$ be the
sign of crossing $\#i$ (recall that $\overcrossing$ is positive, 
$\undercrossing$ is negative; $(-,-,-,-,+,+,+,+)$ for Figure~\ref{fig:817}),
let $\glos{d_i}$ be $+1$ if $K$ visits the ``over'' strand of crossing
$\#i$ before visiting the ``under'' strand of that crossing, and let
$d_i=-1$ otherwise ($(-,+,-,+,-,+,-,+$) for Figure~\ref{fig:817}). Let
$S=\glos{S(K)}$ be the diagonal matrix with $S_{ii}=s_id_i$, and for
an indeterminate $\glos{X}$, let $X^{-S}$ denote the diagonal matrix
with diagonal entries $X^{-s_id_i}$.  Finally, let $\glos{A(K)}$ be the
Laurent polynomial in $\bbZ[X,X^{-1}]$ given by
\begin{equation} \label{eq:AKDef}
  A(K)(X) := \det\left(I+T(I-X^{-S})\right).
\end{equation}
\end{definition}

\begin{example} For the knot diagram in Figure~\ref{fig:817},
\[ \scriptstyle
  T = \left(\begin{smallmatrix}
    0 & 1 & 1 & 1 & 1 & 0 & 1 & 0 \\
    0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\
    0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 \\
    0 & 1 & 0 & 1 & 0 & 1 & 1 & 1 \\
    0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\
    0 & 0 & 0 & 1 & 0 & 1 & 0 & 0
  \end{smallmatrix}\right),
  \quad
  S = \left(\begin{smallmatrix}
    1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
    0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
    0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
    0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\
    0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\
    0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
    0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\
    0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
  \end{smallmatrix}\right),
  \quad\text{and}\quad
  A = \left|\begin{smallmatrix}
    1 & 1-X & 1-X^{-1} & 1-X & 1-X & 0 & 1-X & 0 \\
    0 & 1 & 1-X^{-1} & 0 & 1-X & 0 & 0 & 0 \\
    0 & 1-X & 1 & 0 & 1-X & 0 & 0 & 0 \\
    0 & 1-X & 0 & 1 & 1-X & 0 & 1-X & 0 \\
    0 & 1-X & 0 & 1-X & 1 & 1-X^{-1} & 1-X & 1-X^{-1} \\
    0 & 1-X & 0 & 1-X & 0 & 1 & 1-X & 0 \\
    0 & 0 & 0 & 1-X & 0 & 1-X^{-1} & 1 & 0 \\
    0 & 0 & 0 & 1-X & 0 & 1-X^{-1} & 0 & 1
  \end{smallmatrix}\right|.
\]
The last determinant equals $-X^3+4X^2-8X+11-8X^{-1}+4X^{-2}-X^{-3}$,
the Alexander polynomial of the knot $8_{17}$
(see e.g.~\cite{Rolfsen:KnotsAndLinks}).
\end{example}
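The example is easy to recompute by machine directly from Equation~\eqref{eq:AKDef}. The sketch below (ours; variable names are our own) rebuilds $A(K)(X)=\det\left(I+T(I-X^{-S})\right)$ from the matrices $T$ and $S$ above and checks the stated determinant:

```python
# Recomputing A(K) for the long 8_17 diagram from A(K)(X) = det(I + T(I - X^{-S})),
# using the matrices T and S of the example above.
import sympy as sp

X = sp.symbols('X')
T = sp.Matrix([
    [0, 1, 1, 1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1, 0, 0, 0],
    [0, 1, 0, 0, 1, 0, 0, 0],
    [0, 1, 0, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1, 1, 1],
    [0, 1, 0, 1, 0, 0, 1, 0],
    [0, 0, 0, 1, 0, 1, 0, 0],
    [0, 0, 0, 1, 0, 1, 0, 0],
])
s = [1, -1, 1, -1, -1, 1, -1, 1]          # diagonal of S, i.e. the s_i * d_i
XmS = sp.diag(*[X**(-si) for si in s])    # the diagonal matrix X^{-S}
A = (sp.eye(8) + T * (sp.eye(8) - XmS)).det()

expected = -X**3 + 4*X**2 - 8*X + 11 - 8/X + 4/X**2 - 1/X**3
assert sp.simplify(A - expected) == 0     # the Alexander polynomial of 8_17
assert A.subs(X, 1) == 1                  # A(K)(1) = det(I) = 1
```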

\begin{theorem} \label{thm:AlexanderFormula}
(Lee,~\cite[Theorem~1]{Lee:AlexanderInvariant}) For any (classical)
knot $K$, $A(K)$ is equal to the normalized Alexander
polynomial~\cite{Rolfsen:KnotsAndLinks} of $K$. \qed
\end{theorem}

The Mathematica notebook~\cite[``wA'']{WKO} verifies
Theorem~\ref{thm:AlexanderFormula} for all prime knots with up to 11
crossings.

The following theorem asserts that $Z(K)$ can be computed from $A(K)$
(see Equation~\eqref{eq:AtoZ}) and that modulo a certain additional relation
and with the appropriate identifications in place, $Z(K)$ {\em is} $A(K)$
(see Equation~\eqref{eq:ZisA}).

\begin{theorem} \label{thm:Alexander} (Proof in
Section~\ref{subsec:AlexanderProof}). Let $x$ be an indeterminate, let $\sl$
be self-linking as in Exercise~\ref{ex:sl}, let $D_A:=D_L=D_R$ and $w_k$
be as in Figure~\ref{fig:AwGenerators}, and let $\glos{w}\colon
\bbQ\llbracket x\rrbracket \to\calA^w$ be the linear map defined by
$x^k\mapsto w_k$. Then for a long w-knot $K$,
\begin{equation} \label{eq:AtoZ}
  Z(K) = 
    \underbrace{
      \exp_{\calA^{sw}}\left(\sl(K)D_A\right)
    }_\text{$\sl$ coded in arrows} \cdot
    \underbrace{
      \exp_{\calA^{sw}}\left(-w\left(\log_{\bbQ\llbracket x\rrbracket}
        A(K)(e^x)
      \right)\right)
    }_\text{main part: Alexander coded in wheels},
\end{equation}
where the logarithm and inner exponentiation are computed by formal power
series in $\bbQ\llbracket x\rrbracket$ and the outer exponentiations
are likewise computed in $\calA^{sw}$.
\end{theorem}

\parpic[r]{$\pstex{wkl}$}
Let $\calA^\text{reduced}$ be $\calA^{sw}$ modulo the additional
relations $D_A=0$ and $w_kw_l=w_{k+l}$ for $k,l\neq 1$. The quotient
$\calA^\text{reduced}$ can be identified with the vector space of (infinite)
linear combinations of the $w_k$ (with $k\neq 1$).  Identifying the
$k$-wheel $w_k$ with $x^k$, we see that $\calA^\text{reduced}$ is
the space of power series in $x$ having no linear terms. Note by
inspecting Equation~\eqref{eq:AKDef} that $A(K)(e^x)$ never has a term linear
in $x$, and that modulo $w_kw_l=w_{k+l}$, the exponential and the
logarithm in Equation~\eqref{eq:AtoZ} cancel each other out. Hence, within
$\calA^\text{reduced}$,

\begin{equation} \label{eq:ZisA} Z(K) = A^{-1}(K)(e^x). \end{equation}

\begin{remark} In~\cite{HabiroKanenobuShima:R2K} Habiro, Kanenobu,
and Shima show that all coefficients of the Alexander polynomial are
finite type invariants of w-knots, and in~\cite{HabiroShima:R2KII}
Habiro and Shima show that all finite type invariants of w-knots are
polynomials in the coefficients of the Alexander polynomial. Thus,
Theorem~\ref{thm:Alexander} is merely an ``explicit form'' of these earlier
results.
\end{remark}

\draftcut
\subsection{Proof of Theorem~\ref{thm:Alexander}}
\label{subsec:AlexanderProof}

We start with a sketch. The proof of Theorem~\ref{thm:Alexander} can be
divided into three parts: differentiation, bulk management, and computation.

\noindent{\bf Differentiation.} 
Both sides of our goal, that is,
Equation~\eqref{eq:AtoZ}, are exponential in nature. When seeking to
show an equality of exponentials it is often beneficial to compare
their derivatives\footnote{Thanks, Dylan.}. In our case the useful
``derivatives'' to use are the ``Euler operator'' $\glos{E}$ (``multiply
every term by its degree'', an analogue of $f\mapsto xf'$, defined
in Section~\ref{subsubsec:Euler}), and the ``normalized Euler
operator'' $Z\mapsto\glos{\tilE} Z:=Z^{-1}EZ$, which is a variant of the
logarithmic derivative $f\mapsto x(\log f)'=xf'/f$. Since $\tilE$
is one to one (see Section~\ref{subsubsec:Euler}) and since we know how
to apply $\tilE$ to the right hand side of Equation~\eqref{eq:AtoZ}
(see Section~\ref{subsubsec:Euler}), it is enough to show that with
$\glos{B}:=T(\exp(-xS)-I)$ and suppressing the fixed w-knot $K$ from the
notation,
\begin{equation} \label{eq:EofAtoZ}
  EZ = Z\cdot\left(
    \sl\cdot D_A-w\!\left[x\tr\left( (I-B)^{-1}TS\exp(-xS) \right)\right]
  \right) \qquad \text{ in }\calA^{sw}.
\end{equation}

\noindent{\bf Bulk Management.}
Next we seek to understand the left hand
side of Equation~\eqref{eq:EofAtoZ}. $Z$ is made up of ``quantities in bulk'':
arrows that come in exponential ``reservoirs''. As it turns out,
$EZ$ is made up of the same bulk quantities, but also allowing for a
single non-bulk ``excitation'', which we often highlight in red (compare
with $Ee^x={\red x}\cdot e^x$: the
``bulk'' $e^x$ remains, and a single ``excited red'' $\red x$ gets created). We
wish to manipulate and simplify that red excitation. This is best done by
introducing a certain module, $\glos{\IAM_K}$, the ``Infinitesimal Alexander
Module'' of $K$ (see Section~\ref{subsubsec:IAM}). The elements of $\IAM_K$
can be thought of as names for ``bulk objects with a red excitation'',
and hence, there is an ``interpretation map'' $\glos{\iota}\colon
\IAM_K\to\calA^{sw}$, which maps every ``name'' into the object it
represents. There are three special elements in $\IAM_K$: an element
$\glos{\lambda}$, which is the name of $EZ$ (that is, $\iota(\lambda)=EZ$),
the element $\glos{\delta_A}$ which is the name of $D_A\cdot Z$
(so $\iota(\delta_A)=D_A\cdot Z$), and an element $\glos{\omega_1}$
which is the name of a ``detached'' 1-wheel that is appended to
$Z$. The latter can take a coefficient which is a power of $x$,
with $\iota(x^k\omega_1)=w(x^{k+1})\cdot Z=(Z\text{ times a }
(k+1)\text{-wheel})$. Thus, it is enough to show that in $\IAM_K$,
\begin{equation} \label{eq:GoalInIAM}
  \lambda = \sl\cdot\delta_A
    - \tr\left((I-B)^{-1}TSX^{-S}\right)\omega_1,
  \quad\text{with}\quad X=e^x.
\end{equation}
Indeed, applying $\iota$ to both sides of the above equation, we get
Equation~\eqref{eq:EofAtoZ} back again.

\noindent{\bf Computation.} Last, we show in
Section~\ref{sec:ComputeLambda} that Equation~\eqref{eq:GoalInIAM} holds true. This
is a computation that happens entirely in $\IAM_K$ and does not mention
finite type invariants, expansions or arrow diagrams in any way.

\subsubsection{The Euler Operator} \label{subsubsec:Euler} Let $A$ be
a completed, graded algebra with unit, in which all degrees are $\geq
0$. Define a continuous linear operator $E\colon A\to A$ by setting $Ea=(\deg
a)a$ for homogeneous $a\in A$. In the case $A=\bbQ\llbracket x\rrbracket$,
we have $Ef=xf'$, the standard ``Euler operator'': indeed, for each $n$, 
$Ex^n=nx^n=x\cdot(x^n)'$. Hence, we adopt
the name $E$ for this operator in general.

We say that $Z\in A$ is a ``perturbation of the identity'' if its
degree 0 piece is 1. Such a $Z$ is always invertible. For such a $Z$,
set $\tilE Z:=Z^{-1}\cdot EZ$, and call the thus (partially) defined
operator $\tilE \colon A\to A$ the ``normalized Euler operator''. From this
point on when we write $\tilE Z$ for some $Z\in A$, we automatically
assume that $Z$ is a perturbation of the identity or that it is trivial
to show that $Z$ is a perturbation of the identity. Note that for
$f\in\bbQ\llbracket x\rrbracket$, we have $\tilE f=x(\log f)'$,
so $\tilE$ is a variant of the logarithmic derivative.

\begin{claim} $\tilE$ is one to one.
\end{claim}

\begin{proof} Assume $Z_1\neq Z_2$ and let $d$ be the smallest degree
in which they differ. Then $d>0$ and in degree $d$ the difference
$\tilE Z_1-\tilE Z_2$ is $d$ times the difference $Z_1-Z_2$, and
hence, $\tilE Z_1\neq\tilE Z_2$. \qed
\end{proof}
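For readers who like to experiment, here is a minimal sketch (ours, representing truncated power series in $\bbQ\llbracket x\rrbracket$ as coefficient lists) of $E$ and $\tilE$, illustrating the property $\tilE e^z=Ez$ on $z=3x$:

```python
# The Euler operator E and normalized Euler operator tilde-E on truncated
# power series in Q[[x]], as coefficient lists [a_0, a_1, ...].
# (Our illustration of the definitions above.)
from fractions import Fraction as F
from math import factorial

N = 8  # truncation order

def E(f):
    """Euler operator: multiply the degree-n term by n (Ef = x f')."""
    return [n * c for n, c in enumerate(f)]

def mul(f, g):
    """Product of truncated series."""
    h = [F(0)] * N
    for i, a in enumerate(f):
        for j, c in enumerate(g):
            if i + j < N:
                h[i + j] += a * c
    return h

def inverse(f):
    """Inverse of a perturbation of the identity (degree 0 piece is 1)."""
    assert f[0] == 1
    g = [F(0)] * N
    g[0] = F(1)
    for n in range(1, N):
        g[n] = -sum(f[k] * g[n - k] for k in range(1, n + 1))
    return g

def tilE(f):
    """Normalized Euler operator: tilde-E f = f^{-1} * E f = x (log f)'."""
    return mul(inverse(f), E(f))

f = [F(3**n, factorial(n)) for n in range(N)]       # exp(3x), truncated
assert tilE(f) == [F(0), F(3)] + [F(0)] * (N - 2)   # tilde-E exp(3x) = 3x
```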

Thus, in order to prove our goal, that is, Equation~\eqref{eq:AtoZ}, it is enough to
compute $\tilE$ of both sides and to show the equality then. We start
with the right hand side of Equation~\eqref{eq:AtoZ}; but first, we need some
simple properties of $E$ and $\tilE$. The proofs of these properties are
routine, and hence, they are omitted.

\begin{proposition} The following hold true:
\begin{enumerate}
\item $E$ is a derivation: $E(fg)=(Ef)g+f(Eg)$.
\item If $Z_1$ commutes with $Z_2$, then $\tilE(Z_1Z_2)=\tilE Z_1+\tilE Z_2$.
\item If $z$ commutes with $Ez$, then $Ee^z=e^z(Ez)$ and $\tilE e^z=Ez$.
\item If $w\colon A\to\calA$ is a morphism of graded algebras,
then it commutes with $E$ and $\tilE$. \qed
\end{enumerate}
\end{proposition}

Let us denote the right hand side of Equation~\eqref{eq:AtoZ} by $Z_1(K)$. Then, by
the above proposition, remembering (see Theorem~\ref{thm:Aw}) that $\calA^{sw}$ is
commutative and that $\deg D_A=1$, we have
\[ \tilE Z_1(K) = \sl\cdot D_A-w(E\log A(K)(e^x))
  = \sl\cdot D_A-w\left(x\frac{d}{dx}\log A(K)(e^x)\right).
\]
The rest is an exercise in matrices and
differentiation. $A(K)$ is a determinant, see Equation~\eqref{eq:AKDef}, and in general,
$\frac{d}{dx}\log\det(M) = \tr\left(M^{-1}\frac{d}{dx}M\right)$. So with
$B=T(e^{-xS}-I)$ (so $M=I-B$), we have
\[ \tilE Z_1(K) =
  \sl\cdot D_A + w\left(x\tr\left((I-B)^{-1}\frac{d}{dx}B\right)\right)
  = \sl\cdot D_A - w\left(x\tr\left((I-B)^{-1}TSe^{-xS}\right)\right),
\]
as promised in Equation~\eqref{eq:EofAtoZ}.
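The matrix identity $\frac{d}{dx}\log\det(M)=\tr\left(M^{-1}\frac{d}{dx}M\right)$ used above is easy to spot-check symbolically; the sketch below (ours, on an arbitrary small example matrix) does so:

```python
# Spot-checking d/dx log det(M) = tr(M^{-1} dM/dx) on a small symbolic matrix.
# (Our illustration; the choice of M is arbitrary.)
import sympy as sp

x = sp.symbols('x')
M = sp.Matrix([[1 + x, x**2], [x, 2]])
lhs = sp.diff(sp.log(M.det()), x)
rhs = (M.inv() * sp.diff(M, x)).trace()
assert sp.simplify(lhs - rhs) == 0
```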

\subsubsection{The Infinitesimal Alexander Module} \label{subsubsec:IAM}
Let $K$ be a w-knot diagram. The ``Infinitesimal Alexander Module'' $\IAM_K$
of $K$, which is defined in detail below, is a certain module made from
a certain space $\glos{\IAM^0_K}$ of pictures ``annotating'' $K$ with
``red excitations'' modulo some pictorial relations that indicate how
the red excitations can be moved around. The space $\IAM^0_K$ in itself
is made of three pieces, or ``sectors'': the ``A sector'', in which the
excitations are red arrows; the ``Y sector'', in which the excitations
are ``red hairy $Y$-diagrams''; and a rank 1 ``W sector'' for ``red
hairy wheels''. There is an ``interpretation map'' $\glos{\iota}\colon
\IAM^0_K\to\calA^w$ which descends to a well-defined (and homonymous)
$\iota\colon \IAM_K\to\calA^w$. Finally, there are some special elements
$\lambda$ and $\delta_A$ that live in the A sector of $\IAM^0_K$ and
$\omega_1$ that lives in the W sector.

In principle, the description of $\IAM^0_K$ and of $\IAM_K$ can be given
independently of the interpretation map $\iota$, and there are some good
questions to ask about $\IAM_K$ (and the special elements in it) that are
completely independent of the interpretation of the elements of $\IAM_K$ as
``perturbed bulk quantities'' within $\calA^{sw}$. Yet $\IAM_K$ is a
complicated object and we fear its definition will appear completely
artificial without its interpretation. Hence, below the two definitions will
be woven together.

$\IAM_K$ and $\iota$ may equally well be described in terms of $K$ or in
terms of the Gauss diagram of $K$ (see Remark~\ref{rem:GD}). For pictorial
simplicity, we choose to use the latter; so let $G=G(K)$ be the Gauss
diagram of $K$. It is best to read the following definition while at the
same time studying Figure~\ref{fig:IAM0Def}.

\begin{figure}
\[ \pstex{IAM0Def} \]
\caption{
  A sample w-knot $K$, its Gauss diagram $G$, and one generator from
  each of the A, Y, and W sectors of $\IAM^0_K$. Red parts are marked
  with the letter ``r''.
} \label{fig:IAM0Def}
\end{figure}

\begin{definition} Let $\glos{R}$ be the ring $\bbZ[X,X^{-1}]$ of Laurent
polynomials in a variable $X$ with integer coefficients\footnote{Later,
$X$ is interpreted in $\calA^w$ as a formal exponential $e^x$. So within $\IAM$
we can restrict to coefficients in $\bbZ$, yet in $\calA^w$ we
must allow coefficients in $\bbQ$.}, and let $\glos{R_1}$ be the subring
of polynomials that vanish at $X=1$ (i.e., whose sum of coefficients
is $0$)\footnote{$R_1$ is only very lightly needed, and only within
Definition~\ref{def:InterpretationMap}. In particular, all that we say
about $\IAM_K$ that does not concern the interpretation map $\iota$ is
equally valid with $R$ replacing $R_1$.}.  Let $\IAM^0_K$ be the direct
sum of the following three modules (which for the purpose of taking the
direct sum, are all regarded as $\bbZ$-modules):
\begin{enumerate}
\item The ``A sector'' is the free $\bbZ$-module generated by all diagrams
made from $G$ by the addition of a single unmarked ``red excitation''
arrow, whose endpoints are on the long line skeleton of $G$ and are distinct from
each other and from all other endpoints of arrows in $G$. Such diagrams
are considered combinatorially --- so two are equivalent iff they differ
only by an orientation preserving diffeomorphism of the skeleton. Let
us count: if $K$ has $n$ crossings, then $G$ has $n$ arrows and the
skeleton of $G$ gets subdivided into $m:=2n+1$ arcs. An A sector diagram
is specified by the choice of an arc for the tail of the red arrow and
an arc for the head ($m^2$ choices), except that if the head and the tail
fall within the same arc, their relative ordering has to be specified
as well ($m$ further choices). So the rank of the A sector over $\bbZ$
is $m(m+1)$.
\item The ``Y sector'' is the free $R_1$-module generated by all
diagrams made from $G$ by the addition of a single ``red excitation''
$Y$-shape single-vertex graph, with two incoming edges (``tails'') and
one outgoing (``head''), modulo anti-symmetry for the two incoming edges
(again, considered combinatorially). Counting is more elaborate: when
the three edges of the $Y$ end in distinct arcs in the skeleton of $G$,
we have $\frac12m(m-1)(m-2)$ possibilities ($\frac12$ for the
antisymmetry). When the two tails of the $Y$ lie on the same arc, we get $0$
by anti-symmetry, unless they are separated by the head of that $Y$ ($m$
possibilities). The remaining possibility is to have the head and
one tail on one arc (order matters!) and the other tail on another,
at $2m(m-1)$ possibilities. So the rank of the Y sector over $R_1$
is $\frac12m^2(m+1)$.
\item The ``W sector'' is the rank 1 free $R$-module with a single
generator $w_1$. It is not necessary for $w_1$ to have a pictorial
representation, yet one, involving a single ``red'' 1-wheel, is shown in
Figure~\ref{fig:IAM0Def}. This pictorial representation is consistent
with the interpretation in the definition below of $\omega_1$ as a
detached 1-wheel.
\end{enumerate}
\end{definition}
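The two rank counts above are easily verified by direct enumeration. The following brief sketch (our illustration, not part of the paper's own computer program; the function names are arbitrary) recounts the generators of the A and Y sectors for small $n$ and compares with the stated ranks $m(m+1)$ and $\frac12 m^2(m+1)$, where $m=2n+1$:

```python
def a_sector_rank(n):
    """Count A-sector generators: an arc for the tail and an arc for the
    head of the red arrow (m*m choices), doubled when both ends land on
    the same arc, since their relative order must then be specified."""
    m = 2 * n + 1  # the n arrows of G cut the long-line skeleton into m arcs
    return sum(2 if tail == head else 1 for tail in range(m) for head in range(m))

def y_sector_rank(n):
    """Count Y-sector generators over R_1, case by case as in the text."""
    m = 2 * n + 1
    distinct = m * (m - 1) * (m - 2) // 2  # three legs on three distinct arcs
    same_arc_tails = m                     # both tails on one arc, head between them
    mixed = 2 * m * (m - 1)                # head and one tail on one arc (2 orders)
    return distinct + same_arc_tails + mixed

for n in range(6):
    m = 2 * n + 1
    assert a_sector_rank(n) == m * (m + 1)
    assert y_sector_rank(n) == m * m * (m + 1) // 2
```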

\begin{definition} \label{def:InterpretationMap}
The ``interpretation map'' $\iota\colon \IAM^0_K\to\calA^w$
is defined by sending the arrows (marked $+$ or $-$) of a diagram in
$\IAM^0_K$ to $(e^{\pm a})$-exponential reservoirs of arrows, as in the
definition of $Z$ (see Remark~\ref{rem:ZwForGD}). In addition, the red
excitations of diagrams in $\IAM^0_K$ are interpreted as follows:
\begin{enumerate}
\item In the A sector, the red arrow is simply mapped to itself, with the
colour red suppressed.
\item In the Y sector, diagrams have red $Y$'s and coefficients $f\in
R_1$. Substitute $X=e^x$ in $f$, expand in powers of $x$,
and interpret $x^kY$ as a ``hairy $Y$ with $k-1$ hairs'' as in
Exercise~\ref{ex:Hair}. Note that $f(1)=0$, so only positive powers of $x$
occur, so we never need to worry about ``$Y$'s with $-1$ hairs''. This is
the only point where the condition $f\in R_1$ (as opposed to $f\in R$) is
needed.
\item In the W sector, treat the coefficients as above, but interpret
$x^kw_1$ as a detached $w_{k+1}$, i.e., as a detached wheel with $k+1$
spokes, as in Exercise~\ref{ex:Hair}.
\end{enumerate}
\end{definition}
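The role of the condition $f(1)=0$ in the Y sector can be seen concretely: substituting $X=e^x$ into a Laurent polynomial $f=\sum_e c_eX^e$, the coefficient of $x^k$ in $f(e^x)$ is $\sum_e c_e e^k/k!$, so in particular the constant term is $f(1)$. A minimal sketch (ours, with an arbitrarily chosen element of $R_1$):

```python
from fractions import Fraction
from math import factorial

def taylor_coeff(coeffs, k):
    """Coefficient of x^k in f(e^x), where f = sum_e c_e X^e is the Laurent
    polynomial given by the dict coeffs (exponent -> integer coefficient)."""
    return sum(Fraction(c * e**k, factorial(k)) for e, c in coeffs.items())

# an arbitrary element of R_1: f = X^2 - 3X + 5 - 2X^{-1} - X^{-3}; f(1) = 0
f = {2: 1, 1: -3, 0: 5, -1: -2, -3: -1}
assert sum(f.values()) == 0          # f(1) = 0, i.e. f lies in R_1
assert taylor_coeff(f, 0) == 0       # so f(e^x) has no constant term ...
assert taylor_coeff(f, 1) != 0       # ... and only positive powers of x occur
```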

As stated above, $\IAM_K$ is the quotient of $\IAM^0_K$ by some set of
relations. The best way to think of this set of relations is as
``everything that's obviously annihilated by $\iota$''. Here's the same
thing, in more formal language:

\begin{figure}
\[ \pstex{IAMRelations} \]
\caption{The relations $\calR$ making $\IAM_K$.} \label{fig:IAMRelations}
\end{figure}

\begin{definition} Let $\glos{\IAM_K}:=\IAM^0_K/\calR$, where
$\glos{\calR}$ is the linear span of the relations depicted in
Figure~\ref{fig:IAMRelations}. The top 8 relations are about moving
a leg of the red excitation across an arrow head or an arrow tail in
$G$. Since the red excitation may be either an arrow $A$ or a $Y$,
its leg in motion may be either a tail or a head, and it may be moving
either past a tail or past a head, there are 8 relations of that type. The
$A_w$ relation corresponds to $D_L-D_R=w_1=0$. The $Y_w$ relation indicates
the ``price'' (always a red $w_1$) of commuting a red head across a red
tail. As per custom, in each case only the changing part of the diagrams
involved is shown. Further, the red excitations are marked with the
letter ``r'' and the sign of an arrow in $G$ is marked $s$; so always
$s\in\{\pm 1\}$. The ``$A$'' relations in Figure~\ref{fig:IAMRelations}
($A_{tt}$, $A_{th}$, $A_{ht}$, $A_{hh}$, $A_w$) may be multiplied
by a scalar in $\bbZ$, while the ``$Y$'' relations may be
multiplied by a scalar in $R$. Hence, for example, $x^0w_1=0$ by $A_w$,
yet $x^kw_1\neq 0$ for $k>0$.
\end{definition}

\begin{proposition} The interpretation map $\iota$ indeed annihilates all
the relations in $\calR$.
\end{proposition}

\begin{proof} Both $\iota A_{tt}$ and $\iota Y_{tt}$ follow immediately from
the TC relation. The formal identity $e^{\ad b}(a)=e^bae^{-b}$ (here $\ad$ denotes 
the adjoint representation) implies
$e^{\ad b}(a)e^b=e^ba$, and hence, $ae^b-e^ba=(1-e^{\ad b})(a)e^b$. With
$a$ interpreted as ``red head'', $b$ as ``black head'', and $\ad b$
as ``hair'' (justified by the $\iota$-meaning of hair and by the
$\aSTU_1$ relation, see Figure~\ref{fig:aSTU}), the last equality becomes
a proof of $\iota Y_{hh}$.  Further pushing that same equality, we get
$ae^b-e^ba=\frac{1-e^{\ad b}}{\ad b}([b,a])$, where $\frac{1-e^{\ad
b}}{\ad b}$ is first interpreted as a power series $\frac{1-e^y}{y}$
involving only non-negative powers of $y$, and then the substitution
$y=\ad b$ is made. But that's $\iota A_{hh}$, when one remembers
that $\iota$ on the Y sector automatically contains a single
``$\frac{1}{\text{hair}}$'' factor. Similar arguments, though using
$\aSTU_2$ instead of $\aSTU_1$, prove that $Y_{ht}$, $Y_{th}$, $A_{ht}$,
and $A_{th}$ are all in $\ker\iota$. Finally, $\iota A_w$ is RI,
and $\iota Y_w$ is a direct consequence of $\aSTU_2$. \qed
\end{proof}
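The formal identity driving the proof above, $ae^b-e^ba=(1-e^{\ad b})(a)\,e^b$, can be spot-checked numerically. The following sketch (our addition; it models $a$ and $b$ as matrices, with $b$ nilpotent so that all the power series terminate) verifies it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
a = rng.standard_normal((n, n))
b = np.triu(rng.standard_normal((n, n)), k=1)  # strictly upper triangular, so nilpotent

def expm_nilpotent(M, terms=12):
    # matrix exponential by power series; exact (up to rounding) for nilpotent M
    E, P = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        P = P @ M / k
        E = E + P
    return E

eb = expm_nilpotent(b)
lhs = a @ eb - eb @ a                # a e^b - e^b a

# (1 - e^{ad b})(a), computed by summing the power series of e^{ad b} - 1
ad = lambda x: b @ x - x @ b
term, acc = a.copy(), np.zeros_like(a)
for k in range(1, 12):
    term = ad(term) / k              # term = (ad b)^k (a) / k!
    acc = acc + term                 # acc  -> (e^{ad b} - 1)(a)
rhs = -acc @ eb                      # (1 - e^{ad b})(a) e^b

assert np.allclose(lhs, rhs)
```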

Finally, we come to the special elements $\lambda$, $\delta_A$, and $\omega_1$.

\begin{figure}
\[ \pstex{SpecialElements} \]
\caption{The special elements $\omega_1$, $\delta_A$, and $\lambda$
  in $\IAM_G$, for a sample 3-arrow Gauss diagram $G$.
} \label{fig:SpecialElements}
\end{figure}

\begin{definition} Within $\IAM_G$, let $\omega_1$ be, as before, the
generator of the W sector. Let $\delta_A$ be a ``short''
red arrow, as in the $A_w$ relation (exercise:
modulo $\calR$, this is independent of the placement of the short
arrow within $G$). Finally, let $\lambda$ be the signed sum of exciting
each of the (black) arrows in $G$ in turn. The picture says all, and it is
Figure~\ref{fig:SpecialElements}.
\end{definition}

\begin{lemma} In $\calA^{sw}(\uparrow)$, the special elements of
$\IAM_G$ are interpreted as follows: \linebreak $\iota(\omega_1)=Zw_1$,
$\iota(\delta_A)=ZD_A$, and most interestingly, $\iota(\lambda)=EZ$.
Therefore, Equation~\eqref{eq:GoalInIAM} (if true) implies
Equation~\eqref{eq:EofAtoZ} and hence, it implies our goal,
Theorem~\ref{thm:Alexander}.
\end{lemma}

\begin{proof} For the proof of this lemma, the only thing that isn't
done yet and isn't trivial is the assertion $\iota(\lambda)=EZ$. But this
assertion is a consequence of $Ee^{\pm a}=\pm ae^{\pm a}$ and of a
Leibniz law for the derivation $E$, appropriately generalized to a
context where $Z$ can be thought of as a ``product'' of ``arrow
reservoirs''. The details are left to the reader. \qed
\end{proof}
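For the identity $Ee^{\pm a}=\pm ae^{\pm a}$: if, as for an Euler-type operator, $E$ scales a degree-$k$ term by $k$ (which also makes $E$ a derivation), then on the series $e^{\pm a}=\sum_k(\pm a)^k/k!$ the claim reduces to the coefficient identity $k/k!=1/(k-1)!$. A tiny sanity check of this reduction (our model of $E$, not the paper's formalism):

```python
from fractions import Fraction
from math import factorial

# Model: E scales the degree-k term of a series by k (an Euler-type operator).
# On e^t = sum_k t^k/k!, the claim E e^t = t e^t becomes k/k! == 1/(k-1)!,
# and the same coefficients (up to signs) settle E e^{-t} = -t e^{-t}.
for k in range(1, 15):
    coeff_E_exp = Fraction(k, factorial(k))      # coeff of t^k in E e^t
    coeff_t_exp = Fraction(1, factorial(k - 1))  # coeff of t^k in t e^t
    assert coeff_E_exp == coeff_t_exp
```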

\subsubsection{The Computation of $\lambda$} \label{sec:ComputeLambda}

Naturally, our next task is to prove Equation~\eqref{eq:GoalInIAM}. This is
done entirely algebraically within the finite rank module $\IAM_G$. To
read this section one need not know about $\calA^{sw}(\uparrow)$, or $\iota$,
or $Z$, but we do need to lay out some notation. Start by marking the arrows
of $G$ with $a_1$ through $a_n$ in some order.

Let $\epsilon$ stand for the informal yet useful quantity
``a little''. Let $\lambda_{ij}$ denote the difference
$\lambda'_{ij}-\lambda''_{ij}$ of red excitations in the A sector of
$\IAM_G$, where $\lambda'_{ij}$ is the diagram with a red arrow whose
tail is $\epsilon$ to the right of the left end of $a_i$ and whose head
is $\frac12\epsilon$ away from the head of $a_j$ in the direction of the
tail of $a_j$, and where $\lambda''_{ij}$ has a red arrow whose tail
is $\epsilon$ to the left of the right end of $a_i$ and whose head is
as before, $\frac12\epsilon$ away from the head of $a_j$ in the direction
of the tail of $a_j$.  Let $\Lambda=(\lambda_{ij})$ be the matrix whose
entries are the $\lambda_{ij}$, as shown in Figure~\ref{fig:LambdaAndY}.

\begin{figure}
\[ \pstex{LambdaAndY} \]
\caption{The matrices $\Lambda$ and $Y$ for a sample 2-arrow Gauss
  diagram (the signs on $a_1$ and $a_2$ are suppressed, and so are the $r$
  marks). The twists in $y_{11}$ and $y_{22}$ may be replaced by minus
  signs.
} \label{fig:LambdaAndY}
\end{figure}

Similarly, let $y_{ij}$ denote the element in the Y sector of $\IAM_G$
whose red Y has its head $\frac12\epsilon$ away from the head of $a_j$
in the direction of the tail of $a_j$, its right tail (as seen
from the head) $\epsilon$ to the left of the right end of $a_i$ and
its left tail $\epsilon$ to the right of the left end of $a_i$. Let
$Y=(y_{ij})$ be the matrix whose entries are the $y_{ij}$, as shown
in Figure~\ref{fig:LambdaAndY}.

\begin{lemma} \label{lem:IAMStructure}
With $S$ and $T$ as in Definition~\ref{def:STA}, and with $B=T(X^{-S}-I)$
and $\lambda$ as above, the following identities between elements
of $\IAM_G$ and matrices with entries in $\IAM_G$ hold true:
\begin{eqnarray}
  \lambda-\sl\cdot D_A &=& \tr S\Lambda \label{eq:lambda}, \\
  \Lambda &=& -BY-TX^{-S}w_1 \label{eq:Lambda}, \\
  Y &=& BY + TX^{-S}w_1 \label{eq:Y}.
\end{eqnarray}
\end{lemma}

\noindent{\em Proof of Equation~\eqref{eq:GoalInIAM} given
Lemma~\ref{lem:IAMStructure}.} The last of the equalities above
implies that $Y=(I-B)^{-1}TX^{-S}w_1$. Thus,
\begin{align*}
  \lambda-\sl\cdot D_A = \tr S\Lambda = -\tr S(BY+TX^{-S}w_1) &=
    -\tr S(B(I-B)^{-1}TX^{-S}+TX^{-S})w_1 \\
  &= -\tr\left((I-B)^{-1}TSX^{-S}\right)w_1,
\end{align*}
and this is exactly Equation~\eqref{eq:GoalInIAM}. \qed
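The purely matrix-algebra step in the last line uses $B(I-B)^{-1}+I=(I-B)^{-1}$, the cyclicity of the trace, and the fact that the diagonal matrices $S$ and $X^{-S}$ commute. A numerical spot-check with random matrices (our sketch; $X$ is given an arbitrary numerical value):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
X = 1.7                                   # an arbitrary numerical value for X
T = 0.3 * rng.standard_normal((n, n))     # scaled down to keep I - B well-conditioned
s = rng.choice([-1.0, 1.0], size=n)       # the diagonal of the sign matrix S
S = np.diag(s)
Xs = np.diag(X ** (-s))                   # the diagonal matrix X^{-S}
B = T @ (Xs - np.eye(n))                  # B = T(X^{-S} - I), as in the lemma
inv = np.linalg.inv(np.eye(n) - B)        # (I - B)^{-1}

# tr S(B(I-B)^{-1} T X^{-S} + T X^{-S})  ==  tr((I-B)^{-1} T S X^{-S})
lhs = np.trace(S @ (B @ inv @ T @ Xs + T @ Xs))
rhs = np.trace(inv @ T @ S @ Xs)
assert np.isclose(lhs, rhs)
```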

\noindent{\em Proof of Lemma~\ref{lem:IAMStructure}.}
Equation~\eqref{eq:lambda} is trivial. The proofs of
Equations~\eqref{eq:Lambda} and~\eqref{eq:Y} share the same simple
core, which has to be supplemented by highly unpleasant tracking of signs
and conventions and powers of $X$. Let us start with the core.

To prove Equation~\eqref{eq:Lambda} we wish to ``compute''
$\lambda_{ik}=\lambda'_{ik}-\lambda''_{ik}$. As $\lambda'_{ik}$ and
$\lambda''_{ik}$ have their heads in the same place, we can compute their
difference by gradually sliding the tail of $\lambda'_{ik}$ from its
original position near the left end of $a_i$ towards the right end of
$a_i$, where it would be cancelled by $\lambda''_{ik}$. As the tail slides
we pick up a $y_{jk}$ term each time it crosses a head of an $a_j$ (relation
$A_{th}$), we pick up a vanishing term each time it crosses a tail
(relation $A_{tt}$), and we pick up a $w_1$ term if the tail needs to
cross over its own head (relation $A_w$). Ignoring signs and $(X^{\pm
1}-1)$ factors, the sum of the $y_{jk}$-terms should be proportional
to $TY$, for indeed, the matrix $T$ has non-zero entries precisely when
the head of an $a_j$ falls within the span of an $a_i$. Unignoring these
signs and factors, we get $-BY$ (recall that $B=T(X^{-S}-I)$ is just $T$
with added $(X^{\pm 1}-1)$ factors). Similarly, a $w_1$ term arises
in this process when a tail has to cross over its own head, that is,
when the head of $a_k$ is within the span of $a_i$. Thus, the $w_1$
term should be proportional to $Tw_1$, and we claim it is $-TX^{-S}w_1$.

The core of the proof of Equation~\eqref{eq:Y} is more or less the
same. We wish to ``compute'' $y_{ik}$ by sliding its left leg, starting
near the left end of $a_i$, towards its right leg, which is stationary
near the right end of $a_i$. When the two legs come together, we get 0
because of the anti-symmetry of Y excitations. Along the way we pick up
further Y terms from the $Y_{th}$ relations, and sometimes a $w_1$ term
from the $Y_w$ relation. When all signs and $(X^{\pm 1}-1)$ factors are
accounted for, we get Equation~\eqref{eq:Y}.

We leave it to the reader to complete the details in the above proofs. It
is a major headache, and we would not have trusted ourselves had we not
written a computer program to manipulate quantities in $\IAM_G$ by a
brute force application of the relations in $\calR$. Everything checks;
see~\cite[``The Infinitesimal Alexander Module'']{WKO}. \qed

This concludes the proof of Theorem~\ref{thm:Alexander}. \qed

\begin{remark} We chose the name ``Infinitesimal Alexander Module'' as in
our mind there is some similarity between $\IAM_K$ and the ``Alexander
Module'' of $K$. Yet beyond the above, we did not embark on any serious
study of $\IAM_K$. In particular, we do not know if $\IAM_K$ in itself
is an invariant of $K$ (though we suspect it wouldn't be hard to show
that it is), we do not know if $\IAM_K$ contains any further information
beyond $\sl$ and the Alexander polynomial, and we do not know if there is any
formal relationship between $\IAM_K$ and the Alexander module of $K$.
\end{remark}

\begin{remark} The logarithmic derivative of the Alexander polynomial
also appears in Lescop's work, see~\cite{Lescop:EquivariantLinking, Lescop:Cube}. We
don't know if its appearances there are related to its appearance here.
\end{remark}

\draftcut
\subsection{The Relationship with u-Knots} \label{subsec:RelWithKont}
Unlike in the case of braids, there is a canonical universal finite type
invariant of $u$-knots: the Kontsevich integral $\glos{Z^u}$. So it
makes sense to ask how it is related to the expansion $Z^w$.

\parpic[l]{$\xymatrix{
  \calK^u(\uparrow) \ar[r]^{Z^u} \ar[d]^a
    & \glos{\calA^u}(\uparrow) \ar[d]^\alpha \\
  \calK^w(\uparrow) \ar[r]^{Z^{w}}
    & \calA^{sw}(\uparrow)
}$}
We claim that the square on the left commutes, where
$\glos{\calK^u}(\uparrow)$ stands for long $u$-knots (knottings of
an oriented line), and similarly $\calK^w(\uparrow)$ denotes long
$w$-knots. As before, $a$ is the composition of the maps $u$-knots
$\to$ $v$-knots $\to$ $w$-knots, and $\alpha$ is the induced map on
the associated graded spaces, mapping each chord to the sum of the two ways to
direct it.

Recall that $\alpha$ kills everything but wheels and arrows (hence $Z^w$ is much
weaker, but also easier to handle, than the Kontsevich integral).
We are going to use the formula for the ``wheel part'' of the Kontsevich integral
as stated in \cite{Kricker:Kontsevich}. 
Let $K$ be a 0-framed long knot, and let $A(K)$ denote its Alexander polynomial. Then by \cite{Kricker:Kontsevich},
$$Z^u(K)= \exp_{\calA^u}\left(-\frac{1}{2} \log A(K)(e^h)|_{h^{2n}\to w^u_{2n}}\right)+\text{ ``loopy terms''},$$
where $w^u_{2n}$ stands for the unoriented wheel with $2n$ spokes; and ``loopy terms'' means terms that contain 
diagrams with more than one loop, which
are killed by $\alpha$. Note that by the symmetry $A(z)=A(z^{-1})$ of the Alexander polynomial,
$A(K)(e^h)$ contains only even powers of $h$, as suggested by the formula.
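As a concrete instance of this parity observation, take the trefoil, whose Alexander polynomial is $A(z)=z-1+z^{-1}$; then $A(e^h)=e^h-1+e^{-h}$, whose coefficient of $h^k$ is $(1+(-1)^k)/k!$ minus $1$ when $k=0$, so all odd coefficients vanish. A stdlib-only check of this (our illustration):

```python
from fractions import Fraction
from math import factorial

def coeff(k):
    """Coefficient of h^k in A(e^h) = e^h - 1 + e^{-h}, where
    A(z) = z - 1 + 1/z is the trefoil's Alexander polynomial."""
    c = Fraction(1 + (-1) ** k, factorial(k))
    return c - 1 if k == 0 else c

assert all(coeff(k) == 0 for k in (1, 3, 5, 7, 9))  # only even powers of h survive
assert coeff(0) == 1 and coeff(2) == 1              # A(e^h) = 1 + h^2 + h^4/12 + ...
```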


We need to understand how $\alpha$ acts on wheels. Due to the two-in-one-out
rule, a wheel is zero unless all the ``spokes'' are oriented inward, and the cycle oriented in
one direction. In other words, there are two ways to orient an unoriented wheel:
clockwise or counterclockwise. Due to 
the anti-symmetry of chord vertices, we get that for odd wheels $\alpha(w^u_{2n+1})=0$ and
for even wheels $\alpha(w^u_{2n})=2w^w_{2n}$. As a result,
$$\alpha Z^u(K)=\exp_{\calA^{sw}}\left(-\frac{1}{2} \log A(K)(e^h)|_{h^{2n}\to 2w_{2n}}\right)
=\exp_{\calA^{sw}}\left(-\log A(K)(e^h)|_{h^{2n}\to w_{2n}}\right)$$
which agrees with Formula~\eqref{eq:AtoZ} of Theorem~\ref{thm:Alexander}.
Note that since $K$ is 0-framed, the first part
(``$\sl$ coded in arrows'') of Formula~(\ref{eq:AtoZ}) is trivial.
