Let $K$ be a symmetric $n\times n$ matrix with integer coefficients. The additive abelian group of integer vectors of size $n$ acquires a lattice structure from the (not necessarily definite) integral inner product $(x,y):=x^TKy$. It is conventional to call the integer $(x,x)$ the norm of $x$ (rather than its square root, as in Hilbert spaces). A standard reference for lattices is the book Sphere Packings, Lattices and Groups by Conway and Sloane. (It covers the definite case only. For the indefinite case see, e.g., the book An introduction to the theory of numbers by Cassels.)
If $K$ is positive semidefinite, one has a Euclidean lattice in which all vectors have nonnegative integral norm $(x,x)$. The vectors of zero norm are precisely the integral null vectors of $K$; they form a subgroup that can be factored out, leaving a definite lattice of smaller dimension in which all nonzero points have positive norm.
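This factoring-out step can be illustrated on a tiny example; the $2\times 2$ matrix below is my own, not one from the question:

```python
# Sketch: for a positive semidefinite K, the norm-zero vectors are exactly
# the integral null vectors of K; factoring them out leaves a definite
# Gram matrix of smaller dimension. Hypothetical rank-1 example.

K = [[1, 1],
     [1, 1]]          # positive semidefinite, rank 1

def inner(K, x, y):
    """Integral inner product (x, y) = x^T K y."""
    n = len(K)
    return sum(x[i] * K[i][j] * y[j] for i in range(n) for j in range(n))

v = (1, -1)                  # integral null vector of K, so of norm zero
assert inner(K, v, v) == 0
e1 = (1, 0)                  # representative generating the quotient lattice
print(inner(K, e1, e1))      # Gram matrix of the 1-dimensional quotient: [1]
```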
In a definite lattice, there are only finitely many lattice points of a given norm, coming in antipodal pairs; they can be found by a complete enumeration procedure (typically an application of the LLL algorithm followed by Schnorr-Euchner search, or more advanced variations). The collection of all vectors of small norm defines a graph whose edges are labelled by the nonzero inner products. Lattice isomorphism (corresponding to equivalence of the matrices $K$ under $GL(n,Z)$) can be tested efficiently by testing these labelled graphs for isomorphism, e.g., using the graph isomorphism package nauty. (Of course, one first checks whether the determinant of $K$, which is an invariant, is the same.) This makes checking for decomposability a finite procedure (recursive in the dimension). It is not very practical in higher dimensions unless the lattice decomposes into a large number of lattices generated by vectors of norm 1 and 2. However, if some of these graphs are disconnected, they suggest decompositions that can be used in a heuristic fashion.
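For small dimensions, the enumeration can be sketched by brute force over a coordinate box (a real implementation would use Fincke-Pohst/Schnorr-Euchner on an LLL-reduced basis, as mentioned above); the function name and the $A_2$ example are mine:

```python
from itertools import product

def short_vectors(K, bound, box=5):
    """Brute-force enumeration of nonzero x with 0 < x^T K x <= bound,
    searching the coordinate box [-box, box]^n. Only a sketch; a serious
    implementation enumerates over an LLL-reduced basis instead."""
    n = len(K)
    found = []
    for x in product(range(-box, box + 1), repeat=n):
        if all(c == 0 for c in x):
            continue
        norm = sum(x[i] * K[i][j] * x[j] for i in range(n) for j in range(n))
        if 0 < norm <= bound:
            found.append((x, norm))
    return found

# Hypothetical example: the root lattice A2 with Gram matrix [2,-1;-1,2].
K = [[2, -1], [-1, 2]]
vs = short_vectors(K, 2)
print(len(vs))   # 6 vectors of norm 2 (the roots of A2), in antipodal pairs
```

The labelled graph from the text would then take these vectors as nodes, with edges labelled by their nonzero pairwise inner products.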
In an indefinite lattice (i.e., when $K$ is indefinite) there are always vectors of norm zero that are not null vectors of $K$, and the norm-zero vectors no longer form a subgroup. Classification and isomorphism testing are instead done by working modulo various primes, giving genera. Again one has a finite procedure.
To solve the decomposition problem posed for given $K$, one shouldn't need to do a full classification of all lattices of dimension $n$. But I don't know a simpler systematic procedure that is guaranteed to work in general. In higher dimensions most lattices are indecomposable, though there are no simple criteria for indecomposability. The key to decomposition is to transform the basis in such a way that a subset of the basis vectors becomes orthogonal to the remaining ones, and to repeat this recursively as long as feasible.
This gives rise to the following heuristic, which works well for the specific matrix given. One transforms the basis so that it contains vectors $x$ of small nonzero absolute norm (reflected by corresponding diagonal entries) and subtracts integral multiples of $x$ from the other basis vectors in order to make the absolute values of the inner products as small as possible. [Finding these short vectors is often trivial by inspection, but if the original diagonal entries are large, lattice reduction methods (of which LLL is the simplest) must be used to find them or to show their nonexistence.] This is repeated as long as the sum of absolute values of the off-diagonal entries decreases. If a diagonal entry is $\pm1$, one can in this way make all off-diagonal entries in its row and column zero, and obtains a 1-dimensional sublattice that splits off the given lattice as a direct summand. (For diagonal entries of larger absolute value there is no such guarantee, but the case of norm $\pm2$ is usually tractable too, since one can use the structure theory of root lattices to handle the sublattice generated by the norm 2 vectors.)
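The $\pm1$ case of the heuristic can be sketched in a few lines of code; the helper name and the $2\times 2$ example matrix are my own, hypothetical choices:

```python
def split_off_unit(K, k):
    """If K[k][k] == +1 or -1, apply the unimodular basis change that makes
    all other entries in row/column k zero, splitting off a 1-dimensional
    direct summand. Returns the transformed Gram matrix E K E^T."""
    n = len(K)
    eps = K[k][k]
    assert eps in (1, -1)
    K = [row[:] for row in K]       # work on a copy
    for i in range(n):
        if i == k:
            continue
        c = eps * K[i][k]           # chosen so that (b_i - c*b_k, b_k) = 0
        for j in range(n):          # replace basis vector b_i by b_i - c*b_k:
            K[i][j] -= c * K[k][j]  # update row i,
        for j in range(n):
            K[j][i] -= c * K[j][k]  # then column i
    return K

K = [[1, 2], [2, 1]]                # hypothetical indefinite example
print(split_off_unit(K, 0))         # [[1, 0], [0, -3]]: splits as [1] + [-3]
```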
In the specific (indefinite) case given, the fourth unit vector $e^4$ has norm 1, and transforming the off-diagonal entries in the 4th column to zero produces the reduced matrix [2,-1,0;-1,-1,2;0,2,0]. Now the second unit vector has norm $-1$, and doing the same with column 2 gives the reduced matrix [3,-2;-2,4]. This matrix is definite, and one can enumerate all vectors of norm $\le 4$ to check that it is indecomposable. One can still improve the basis a little, replacing [3,-2;-2,4] by [3,1;1,3].
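The $3\times 3$ step can be checked mechanically; the script below is my own sketch of the row/column clearing just described:

```python
# The second basis vector of [2,-1,0;-1,-1,2;0,2,0] has norm -1; clearing
# its row and column leaves the definite 2x2 block [3,-2;-2,4].
K = [[2, -1, 0],
     [-1, -1, 2],
     [0, 2, 0]]
k, eps = 1, K[1][1]                          # diagonal entry -1
for i in (0, 2):
    c = eps * K[i][k]                        # so that (b_i - c*b_k, b_k) = 0
    for j in range(3):
        K[i][j] -= c * K[k][j]
    for j in range(3):
        K[j][i] -= c * K[j][k]
block = [[K[0][0], K[0][2]], [K[2][0], K[2][2]]]
print(block)                                 # [[3, -2], [-2, 4]]

# Improving the basis: replacing the second basis vector b2 by b1 + b2
# turns [3,-2;-2,4] into [3,1;1,3].
a, b, d = block[0][0], block[0][1], block[1][1]
print([[a, a + b], [a + b, a + 2*b + d]])    # [[3, 1], [1, 3]]
```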
Collecting the transformations performed, one finds a unimodular matrix that transforms $K$ into the direct sum of [3,-2;-2,4], [-1], and [1]; or of [3,1;1,3], [-1], and [1].