3 Algorithm Development
The preceding chapter introduced and appraised the various active contour model forms presented in the available literature. The purpose of this chapter is to describe the adaptation of the most appropriate of these models for the wound area measurement problem. Four different active contour models are developed and presented with the objective of comparing their performance as wound area measurement algorithms. The models may be divided into two categories according to the two methods of solution that they employ: (a) iterative solution of matrix equations as presented by
Kass et al. (1987) and (b) direct iterative energy-minimising search using a ‘greedy’ algorithm (Williams and Shah, 1992; Lai and Chin, 1993). All four models require reasonable initialisation in order to produce repeatable results. Common to all four models is the construction of the external potential energy image, based on edge strength. Additionally, all algorithms make use of a common scale-descent procedure that is described in the following section. The algorithms have been implemented in the C/C++ language and run under the DOS system environment using a 32-bit protected mode interface. The programs employ VESA direct video mode addressing for graphical output.
3.1 Producing the External Potential Energy Field
Multi-scale gradient operators and edge detection schemes were discussed in Chapter 2, where the use of gradient operators for generating the external energy term of equation (2.1) was considered. This section further considers the use of such operators for producing the external energy function and provides some examples of the results using images of leg-ulcers. These examples are used as a discussion point to highlight some of the problems that the proposed algorithms encounter in practice. Application of a smoothed gradient filter (e.g. derivative of Gaussian) to a gray level or intensity image produces two main side effects, which are discussed in turn below.
3.1.1 Depletion of the External Energy and Attractive Forces
The Derivative-of-Gaussian gradient operator provides regularisation by suppressing the effects of high-frequency image components that tend to be heavily noise-dominated. High frequencies responsible for image detail and fine texture (if present) are not considered to be essential requirements for accurate area measurements. Increasing the amount of regularisation by increasing the filter width parameter (scale) removes many local energy minima present in the noisy and detailed image and helps the active contour to converge to a more closely-packed set of results, i.e. increased precision. The final solution will be an optimised one, where both internal contour-regularising energies (controlling to some extent the contour shape) and external energies contribute. Therefore, it is necessary to understand the effect image regularisation has upon the magnitude of the external energy, since any overall shifts in this energy due to scale equate to a modification of the internal contour regularisation parameters. Consider therefore, the idealised 1-D step edge model shown in Figure 3.1(a) along with the profile of this edge after low-pass filtering with a Gaussian kernel at scales (i.e. standard deviation) σ = 0.25, 0.5 and 1.0. The derivatives of these smoothed edge profiles are plotted in Figure 3.1(b). The effect of increasing the filter scale (i.e. its width) increases the regularisation of the image and this serves to suppress the image gradient magnitude. The peak energy at the centre of the edge response is therefore depleted with increased scale, so that the relative magnitude of the active contour's internal energy increases, thus increasing regularisation of the contour itself. One would expect to increase contour regularisation when the image (external energy field) becomes more noisy, and not less noisy as is the case when image-regularisation is increased.
Figure 3.1 (a) Gaussian smoothing of a unit step edge, H(x). (b) Responses of derivative-of-Gaussian gradient operator to a unit step edge.
It is necessary to compensate for the reduction of peak energy by scaling the gradient by the filter scale. The gradient magnitude of a unit step edge convolved with a Gaussian derivative filter of scale σ (in one dimension only) is given by:
|d/dx (H ∗ Gσ)(x)| = Gσ(x) = (1/(σ√(2π))) exp(−x²/2σ²)    (3.1)
where Gσ(x) is a Gaussian smoothing filter. The peak gradient in (3.1) occurs at x = 0 and has the value:
Gσ(0) = 1/(σ√(2π))    (3.2)
Thus compensation for peak energy variation may be achieved by multiplying the gradient result by the factor σ√(2π). If the original external energy term suggested by Kass et al. (1987) is used (square of gradient magnitude) then one must multiply by the square of this factor to compensate.

Once the change in peak energy with scale is stabilised, the effect of increasing scale is simply to increase the spread of an edge's 'energy' within its locality in the image. This spreading, even though the normalised peak energy is the same at all scales, (a) reduces the peak edge-attractive force, because the energy-gradient is reduced (Figure 3.1), and (b) increases the distance of the peak forces from the edge location (for a Gaussian profile the derivative magnitude peaks at ±σ). For a given set of contour regularisation parameters, increasing the scale increases the ability of the contour to overcome weak edges by weakening the peak attractive force of the edge. This effect is implicitly overcome by using normalisation. The external force normalisation of Cohen (1991) is suitable for this process when the minimisation algorithm is based on gradient descent. This normalisation implies that the external energy profile is piecewise linear with unit slope and with short quadratic segments forming a smooth (C1 continuous) variation across gradient peaks. Algorithms that directly minimise the energy terms by iteratively searching through various control vertex states should use (3.2) to compensate for the scale-effect, unless a more dynamic normalisation scheme is used, e.g. Williams and Shah (1992). Normalisation is a key element to the successful application of the minimax principle (§2.2.14), as employed by Lai and Chin (1993).
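The compensation factor can be checked numerically. The following sketch (illustrative C++, not the thesis implementation; function names are assumptions) evaluates the peak derivative-of-Gaussian response of a unit step edge and confirms that multiplying by σ√(2π) yields a scale-independent value:

```cpp
#include <cmath>

// Analytic peak gradient of a unit step edge filtered with a
// derivative-of-Gaussian operator at scale sigma, equation (3.2).
double analytic_peak(double sigma) {
    const double PI = std::acos(-1.0);
    return 1.0 / (sigma * std::sqrt(2.0 * PI));
}

// Numerical check: the step response of the derivative filter at x = 0
// is the integral of G'_sigma over (-inf, 0], approximated here by a
// fine Riemann sum truncated at 8 sigma.
double numerical_peak(double sigma) {
    const double PI = std::acos(-1.0);
    double dx = 1e-3 * sigma, sum = 0.0;
    for (double u = -8.0 * sigma; u < 0.0; u += dx) {
        double g = std::exp(-u * u / (2.0 * sigma * sigma))
                 / (sigma * std::sqrt(2.0 * PI));
        sum += (-u / (sigma * sigma)) * g * dx;  // G'_sigma(u) du
    }
    return sum;
}

// Compensated peak: multiplying by sigma * sqrt(2*pi) gives 1 at every
// scale, so the external energy no longer shifts with regularisation.
double compensated_peak(double sigma) {
    const double PI = std::acos(-1.0);
    return analytic_peak(sigma) * sigma * std::sqrt(2.0 * PI);
}
```

Running the check at several scales shows the compensated peak is constant while the raw peak falls as 1/σ.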
Multi-scale edge detection schemes and their associated gradient operators, such as the Canny algorithm (using a derivative-of-Gaussian filter), allow one to observe the more dominant edges at high scales free from the clutter that exists at low scales. Blurring an image in this way has the effect of moving edges away from their original position in the unblurred image. Two modes of movement can occur when detecting edges at high scales: the detected contours are smoothed, and edges are translated away from their original positions.
Clearly, the fact that contours are smoothed by blurring is unsurprising. One could utilise this property to allow the use of fewer elements to define a contour at higher scales and also to suggest that the contour regularisation parameters could be relaxed. Figure 3.2 shows two examples of leg ulcer images where the edges detected after applying a derivative-of-Gaussian gradient operator and subsequently suppressing non-directionally-maximal gradients are translated as a result of image smoothing. The darker lines correspond to the edges at higher scales. This effect must be considered by any algorithm that uses coarse-scale filtering as part of its object boundary-detection scheme.
Figure 3.2 Movement of detected wound edges with different levels of filter scale.
The image used to produce the edge profiles on the left of Figure 3.2 is a well-defined wound (see 1.1 in Plate 1). The darkest contour is the edge profile that is detected with a filter scale of σ = 16 pixels. The effect of image smoothing is substantial in this image. The image used to produce the edge profiles on the right of Figure 3.2 is somewhat less well-defined (see 1.4 in Plate 1). The movement of edges appears to be somewhat less affected by the application of smoothing, although there is a double contour on the left-hand side of the wound which is evident at low scales. This can cause problems when manual delineation is used to initialise the contour.
Kass et al. (1987) and Leymarie and Levine (1993) have reported active contour model algorithms that first seek equilibrium at a high level of scale. The use of such a procedure allows the model to become less sensitive to distortion of the resulting contour caused by noise. In the case of measuring the area of a leg ulcer, there exists the additional problem of multiple local energy minima corresponding to edge segments positioned near to the boundary of the wound. These are caused by the contrast between epithelialisation tissue and blood and the surrounding skin. Such nearby edge segments that do not form part of the wound boundary are referred to as ‘ambiguous’. It will be shown in Chapter 4 that ambiguities in the image also lead to manual delineations that are biased. It is considered that initialisation of an active contour model is the most difficult aspect to resolve in the case of wound measurement, therefore manual delineation is used to initialise the algorithm, implying the possibility of biased results. Starting at a high scale merges multiple edges together and overcomes the problem of varied initial contours becoming trapped by different edges. Thus contours corresponding to biased delineations are converged before seeking contour equilibrium at a lower scale. In order to be specific about the manner in which multi-scale information is combined, the scale descent algorithm is defined overleaf.
Scale Descent Algorithm
Initialisation: Manually delineated contour (using colour image of wound)
(1) Set initial scale parameter, scale reduction factor and end scale
(2) Create potential energy image using scale factor to set filter scale
(3) Iterate initial contour to equilibrium*
(4) Lower scale by reduction factor
WHILE scale >= end scale
(5) Create potential energy image using scale factor to set filter scale
(6) Iterate contour to equilibrium*
(7) Lower scale by reduction factor
* The iteration procedure used in steps (3) and (6) is one of the four algorithms described in the following sections.
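The scale-descent procedure above can be sketched as follows (illustrative C++; the `equilibrate` callback stands in for any of the four iteration procedures, and all names are assumptions rather than the thesis code):

```cpp
#include <functional>
#include <vector>

// Sketch of the scale-descent wrapper of section 3.1. The callback
// builds the potential energy image at the given scale and iterates the
// contour to equilibrium; this sketch only records the scales visited.
std::vector<double> scale_descent(double start_scale, double factor,
                                  double end_scale,
                                  const std::function<void(double)>& equilibrate) {
    std::vector<double> visited;
    double scale = start_scale;
    equilibrate(scale);            // steps (2)-(3): initial equilibrium
    visited.push_back(scale);
    scale /= factor;               // step (4): lower the scale
    while (scale >= end_scale) {   // steps (5)-(7)
        equilibrate(scale);
        visited.push_back(scale);
        scale /= factor;
    }
    return visited;
}
```

For example, a start scale of 16, a reduction factor of 2 and an end scale of 2 visits the scales 16, 8, 4 and 2.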
The selection of the gradient filter’s start and end scale parameters for each algorithm is considered in Chapter 5. The active contour model algorithms proposed to fulfil steps (3) and (6) are described in the following sections.
3.2 The Ground-Attract (GA) Algorithm
This algorithm combines several contributions that have been made to the active contour paradigm. Firstly, it employs the 'ground state' initial model due to Radeva and Marti (1995) and a modification of the directional-gradient external energy term due to Lai and Chin (1993). The model is spatially discretised using the finite element method, a step previously reported by Karaolani et al. (1990) and Cohen and Cohen (1993). Before defining the energy functional and its associated Euler-Lagrange equation, it is necessary to specify some component terms:

Let n̂ be a unit vector oriented normally outwards from the contour.

Let v0(s) be a static reference contour given by the initial contour position (this defines the 'ground state').

Let P(v) be an external energy potential function, P(x,y), sampled along the contour v(s). P(x,y) is a smoothed gradient image given by:
P(x,y) = |∇(Gσ ∗ I)(x,y)|    (3.3)
where I(x,y) is the intensity plane of the wound image and Gσ(u,v) is a Gaussian smoothing kernel with standard deviation σ. The energy equation and the modified Euler-Lagrange equation for the complete model, with periodic boundary conditions, are respectively given by:
E(v) = ∫ { ½ [ α|vs − v0s|² + β|vss − v0ss|² ] − P(v)(n̂·ĝ) } ds    (3.4)

−α(v − v0)ss + β(v − v0)ssss = (n̂·ĝ) ∇P(v)    (3.5)
where ĝ is the unit vector in the direction of the potential gradient ∇P(v).
The first two terms in the integral of (3.4) are the terms proposed by Radeva and Marti (1995) which cause the contour to tend to maintain its initial size, shape and orientation. Note that the gradient term in (3.5) is not the exact variation of the energy term in (3.4), but rather an intuitive approximation. The direction of the gradient of the potential function in (3.4) is preserved whilst being gradually cosinusoidally weighted by the alignment of the normal vector to the contour and the direction of the potential gradient. Also note that it is possible to implement the minimisation of (3.4) by a direct search algorithm using both dynamic programming methods and a greedy algorithm (§2.2.9).
This active contour model may be spatially discretised by applying either the Galerkin weighted residual finite element method to (3.5) or the Rayleigh-Ritz method to (3.4) – see Stasa (1985) for a general description of these two methods. In general, the two methods can be shown to produce the same result given corresponding differential and variational formulations. However, the external term in (3.4) is not the exact equivalent of the external term in (3.5). The current implementation uses the Galerkin weighted residual method to produce the external forces. Appendix A provides the derivation of the finite element equations given the original energy functional (2.1) of Kass et al. (1987). The shape functions selected for this process are unit-spaced cubic B-Splines having the properties of (a) C2-continuity and (b) at least twice differentiable. The modifications proposed in this section do not require the process of deriving the finite element matrix equations again. The assemblage stiffness matrix, K, for the system of equations is unchanged by the introduction of the v0 term in the energy functional (3.4). However, the resultant matrix equation which is satisfied at equilibrium becomes:
KV = KV0 + f(V)    (3.6)
where V0 is a two-column matrix of vertices controlling v0(s) and f(V) is the assemblage of the elemental external force vectors given by

fe(V) = ∫0..1 N(s)ᵀ (n̂·ĝ) ∇P(v(s)) ds

where N(s) contains the cubic B-spline shape functions.
The external forces acting at each control vertex in V are integrated using the extended trapezoidal rule of closed type (Press et al., 1988). This basic quadrature formula amounts to an approximation of the external potential function, sampled along each element, by piecewise linear segments. At present, 25 equally spaced points are used. The accuracy could be improved and the number of points reduced (hence lower computational effort) by using a quadrature of higher order and Gaussian form, e.g. Gauss-Legendre, as proposed by Karaolani et al. (1992).

The equation to be solved at each iteration time-step becomes (using the semi-implicit scheme):
(K + (1/T) I) Vt = (1/T) Vt−1 + KV0 + f(Vt−1)    (3.7)
The internal forces acting at the first iteration, t = 0, are zero since V(t=0) = V0.
3.2.1 Stability of the Solution
The stability of the solution may be assessed by considering the fact that the KV0 term in (3.6) is a constant, given temporally-constant values for α and β. Thus, provided a scheme that varies these two parameters based on some merit function of the contour at each iteration is not used to iteratively tune them, the constant has no effect on the stability of the solution. The stability properties of (3.5) may therefore be considered identical to the case given in (§2.2.8). Thus the semi-implicit scheme is stable with respect to the internal parameters and may be stabilised with respect to the external forces by (a) normalisation and (b) bi-linear interpolation as used by Cohen (1991).

The acronym 'GA' stands for Ground-Attract, which expresses the two incorporated ideas due to Radeva and Marti, and Lai and Chin, of (a) the ground-state contour and (b) external forces that selectively attract contours based on mutual orientation, but do not repel an inversely oriented contour.
The inclusion of the v0 term in (3.4) weights the final solution towards the initial solution, since the internal energy is dependent upon the first and second order deviations of the active contour from its initial state and upon the internal parameters. A possible disadvantage of this internal energy term is related to the need for the contour to deform to fit the smoothed contour of the potential function at high scales. The purpose of initialisation at a high image-smoothing scale is to enable the contour solution to become less dependent upon its initial state by removing many local minima in the potential function. There are two factors to consider:
The initial filter scale used to create the potential function must be large enough to cause neighbouring ambiguous boundary edges to merge, which implies some movement of edges. It is evident from Figure 3.2 that the apparent displacement of wound boundaries at high smoothing scales can shrink the enclosed area. Increasing the regularisation of the contour rather than the image is not a viable alternative in this case since it causes the contour to maintain more of its initial delineation error. Thus it is expected that the performance of this algorithm will be optimised at a relatively low scale. This does not imply inferiority however, since the wounds encountered in practice are so varied that some algorithms should perform better on certain types of wound. One expects the shape preserving property of the GA algorithm to allow it to be less susceptible to catastrophic failure, i.e. collapse of the contour or for the contour to converge to a wildly different solution, far away from its initial position.
The matrix equation derived for the Ground-Attract model is given by (3.6). This section describes the algorithm used to iterate the model to convergence using a Cholesky solution of the semi-implicit equation (3.7). The algorithm implements a time-step reduction scheme once in the vicinity of the final solution, so that the displacements of the control vertices are refined and accuracy improved over the last few iterations. The time-step parameter starts at 1 and is reduced to 1/32 in octave steps. The time step is halved when the external energy of the contour increases (Leymarie and Levine, 1993); this adds the computational burden of having to calculate the external energy term in (3.4) at each iteration to check for near convergence. The algorithm is as follows:
Ground-Attract Convergence Algorithm
(1) Set T = 1
(2) Given the initial control vertices, V0, calculate the initial E*ext component with the last term in (3.4)
WHILE T > 1/64
(3) Calculate the new control vertex position vector using Cholesky decomposition of (3.7)
(4) Calculate the new external energy, E*ext, with the last term of (3.4)
WHILE E*ext decreases
(5) Update the current control vertices using the result of (3) or (6)
(6) Calculate the new control vertex position vector using Cholesky decomposition of (3.7)
(7) Calculate the new external energy, E*ext, with the last term of (3.4)
END while
Halve the time step, T
END while
This algorithm is implemented as a subroutine of the higher-level scale descent procedure. The scale descent algorithm is common to all four active contour models described in this chapter and is described in (§3.1).
3.2.4 Possible Improvement of the Model
The perceived disadvantage of the algorithm being unable to properly converge at high scales could possibly be overcome by allowing the static reference contour (i.e. the 'ground state') to move adaptively as the algorithm proceeds. Alternatively, one could compute the internal forces as a weighted function of the last m iterations. For instance, the ground state could be assigned the highest weight, with successive moves being given less weight to guard against the external forces pulling the contour too far away from its initial position. The general balance of the weights would probably need to be set empirically given the diverse nature of leg ulcer images. It is foreseeable that the weights would need to sum to unity, otherwise the effect of adding extra terms would be to increase the internal regularisation of the contour.
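One possible weighting scheme of this kind, sketched with illustrative names and an assumed geometric decay, gives the ground state the largest share and normalises the weights to unit sum so that the overall internal regularisation is not increased:

```cpp
#include <cmath>
#include <vector>

// Geometrically decaying weights over the last m reference contours:
// w[0] corresponds to the ground state (largest weight), and the
// weights are normalised so they sum to one.
std::vector<double> reference_weights(int m, double decay) {
    std::vector<double> w(m);
    double sum = 0.0;
    for (int i = 0; i < m; ++i) {
        w[i] = std::pow(decay, i);
        sum += w[i];
    }
    for (int i = 0; i < m; ++i) w[i] /= sum;  // normalise to unit sum
    return w;
}
```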
3.3 The Tangent-Normal (TN) Algorithm
The external forces that act upon the contour in the standard active contour model can cause the elements to collapse in cases where the potential gradient is non-uniform along the boundary (cf. §2.2.10). Additionally, the internal elastic energy term in the standard model is responsible for producing scale-varying forces that can cause the whole contour to collapse to a point (§2.2.11). This algorithm proposes two modifications to the standard active contour model to overcome some of the associated problems, although the modification to the elastic term will not remove the scale dependence.

The algorithm is referred to as the 'TN' (Tangent-Normal) algorithm due to its combination of tangential internal elastic forces and normal external forces. Considering these two modifications, the Euler-Lagrange equation for the model may be written as:
−α v∥ss + β vssss = (∇P(v))⊥    (3.8)
where v∥ss is the component of the elastic force vss tangential to v(s) and (∇P(v))⊥ is the component of the external force normal to v(s). Note that since (3.8) is formulated in terms of forces, an equivalent energy formulation is not specified. This prevents the equation from being implemented by either direct minimisation algorithms or the Rayleigh-Ritz finite element method. The two remaining options are finite differencing and the Galerkin weighted residual method.
3.3.1 External Force Made Normal to the Contour
In the standard active contour model the external force components resulting from the potential function are simple spatial derivatives of the function; thus the external forces ∇P(v) always act in the direction of maximum gradient of the potential function, irrespective of the relative orientation of the contour. The external force vector, derived from the potential function, is redefined here so that it is always oriented in a direction normal to the contour. This is achieved by simply taking the component of the potential gradient in the direction normal to the contour. Let
F⊥(v) = (n̂ n̂ᵀ) ∇P(v)    (3.9)
where n̂ n̂ᵀ is the outer product of the unit normal vector with itself.

Note that P(v) is a scalar quantity, and thus (3.9) does not consider the orientation of the gradient of the gray-level image used to produce the potential. This may be compared with the external force specification for the GA algorithm (3.5): in (3.9) the weighted forces act normally to the boundary, whereas in (3.5) the cosinusoidal weight is calculated on the basis of the alignment of the contour with the gradient of the potential function (a vector term), and the forces act in the direction of the maximum gradient of the magnitude of the potential function. In (3.9) the force direction is supplied by the contour normal, with only the magnitude and sense taken from the gradient of the potential function. Using the Galerkin weighted-residual method, (3.9) becomes:
fe = ∫0..1 N(s)ᵀ (n̂ n̂ᵀ) ∇P(v(s)) ds    (3.10)
This equation may be solved numerically for each element with a suitable quadrature scheme. Subsequently, the force vectors corresponding to each element are assembled to produce the overall force vector.
3.3.2 Elastic Force Made Tangential to the Contour
The elastic forces acting along the contour may be decomposed into any arbitrary pair of orthogonal component vectors. Thus the elastic force acting at any point on the contour may be expressed in terms of a component normal to the contour and a component tangential to it. The elastic force component acting normally to the contour is that component responsible for causing the entire contour to tend to collapse to a point in the absence of competing forces (at points of concavity the elastic forces cause the contour to expand; see Figure 2.1). The force component acting tangentially to the contour is responsible for promoting equal-length elements. Removal of the normal force component enables the contour to maintain its size and shape, rather than collapsing in the absence of external forces.

It should be noted that for stable, non-contractile behaviour under the elastic energy model, the energy term must be scale invariant. Scale invariance implies that the elastic energy remains unchanged after the contour or contour element has undergone a linear magnifying transformation. Any tendency for the energy to change with scale implies the production of an elastic force vector component which will tend to drive the contour towards that scale at which elastic energy is minimised. As already mentioned, this is the case with the classical elastic energy definition, which tends to drive the contour to complete collapse.
3.3.3 Implementation of Tangential-Only Elastic Force
It was shown in Figure 2.4 that the elastic forces acting along the contour have a large normal component responsible for causing the contour to collapse in the absence of sufficiently large competing forces. The standard elastic force is defined by (2.4) and may be decomposed into any two arbitrary orthogonal vectors. Re-modelling (2.4) in terms of a vector pair formed from the components of elastic force (a) normal and (b) tangential to the contour yields the following equations, which are no longer independent in terms of the x and y components of the contour:
α (vss)⊥ = α (n̂·vss) n̂    (3.11)

α (vss)∥ = α (t̂·vss) t̂    (3.12)
where n̂ is a unit normal vector to the contour and t̂ is a unit tangent vector.
It is easy to verify that the sum of (3.11) and (3.12) equals the original elastic force equation given by (2.4). Since this algorithm requires only the tangential component (3.12), one might consider attempting to find the Galerkin weighted residual formulation to obtain the associated stiffness matrix components. Performing this task analytically appears to be rather unwieldy and the forces would clearly be a non-linear function of the control vertices. Two alternative solutions are possible:
Figure 3.3 Comparison of Galerkin weighted residual forces (B-Spline basis) for standard elastic force (eqn 2.3) and tangential and normal elastic forces, respectively (3.12) and (3.11).
The current implementation makes use of the approximate solution. If the normal components of the vectors shown in Figure 3.3 are approximated (by calculating KVt with β = 0 and taking the components normal to the contour at the element end points) and subtracted from the external force vector, then an explicit step approximation is obtained. This may have implications for the stability of the system, since explicit components can generate unacceptably large displacements. Instead, an implicit method of approximating the normal components is used. The implicit approximation does not allow the displacements of the control vertices to be expressed as a sum of independent forces arising from the different energy terms (unlike a wholly explicit method). This leads to a two-stage iteration process that involves performing the standard semi-implicit iteration without removing any elastic force components, followed by a 'correction' step. The correction step contains the implicit approximation and attempts to remove the elastic normal component present in the displacements of the control vertices. Stage one solves the following iteration equation by Cholesky decomposition, where K is the standard stiffness matrix:
(K + (1/T) I) V* = (1/T) Vt−1 + f(Vt−1)    (3.13)
Stage two is defined by the following sequence of operations:

(a) Solve
(b) Calculate displacements
(c) Calculate the displacement components normal to the contour at the respective element end points, where the operation diag(diag(M)) simply removes the upper and lower triangular parts of any matrix M (Matlab® help menu)
(d) Finally, update the control vertices
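The diag(diag(M)) operation referred to in step (c) behaves as follows (illustrative C++ with Matlab-style semantics):

```cpp
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// diag(diag(M)): keep the main diagonal of a square matrix and zero
// everything else, matching the Matlab idiom cited in the text.
Matrix diag_diag(const Matrix& M) {
    std::size_t n = M.size();
    Matrix D(n, std::vector<double>(n, 0.0));
    for (std::size_t i = 0; i < n; ++i) D[i][i] = M[i][i];
    return D;
}
```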
3.3.4 Stability of the Solution
The stability of the solution is more difficult to assess than the previous model because of the nature of the non-linear correction step. The displacements of the control vertices caused by the correction, however, can be no greater in magnitude than the displacements caused by the standard elastic term. Use of the implicit method to calculate the standard displacements (in the absence of any other forces) guarantees stability of the standard term. The fact that the modified displacements must be smaller than the standard displacements does not prove that the algorithm will converge, however, it does provide some notion that the model should be stable.
This algorithm is similar to the Ground-Attract minimisation algorithm described earlier in this chapter. A necessary alteration is an extra step to remove the normal component of the elastic forces, as previously described. In order to check for near-convergence and thus refine the step-size parameter, the standard external energy is calculated by (3.15).
E*ext = Σe ∫0..1 P(v(s)) ds    (3.15)
When this energy rises, the contour should be in the vicinity of the final solution.
Tangent-Normal Convergence Algorithm
(1) Set T = 1
(2) Given the initial control vertices, V0, calculate the initial E*ext with (3.15)
WHILE T > 1/64
(3) Calculate the new control vertex position vector using Cholesky decomposition of (3.13) followed by the correction step
(4) Calculate the new external energy, E*ext, with (3.15)
WHILE E*ext decreases
(5) Update the current control vertices using the result of (3) or (6)
(6) Calculate the new control vertex position vector using Cholesky decomposition of (3.13) followed by the correction step
(7) Calculate the new external energy, E*ext, with (3.15)
END while
Halve the time step, T
END while
3.4 Normalisation of Parameters
The total energy of an active contour model in some arbitrary state and with some arbitrary external energy conditions is dependent not only upon the contour regularisation parameters but also upon two further parameters: the number of snaxels or elements, N, introduced by the discretisation, and the filter scale, σ, introduced by the smoothing of the external potential function.
Ivins and Porrill (1994) presented a suitable modification of the internal energy terms to maintain the energy balance of the active contour model when the number of snaxels or elements in the contour is changed:
α = N α0    (3.16)

β = N³ β0    (3.17)
Also, since the external energy in the current implementation is integrated over each element in the range s = [0..1], the external energy requires division by N to maintain the energy balance. In equations (3.16) and (3.17), N is the number of snaxels/elements and α0 and β0 are independent of N. The modification is necessary because the elements are based on unit spacing in the independent parameter (s). Altering the number of elements over which the contour is defined alters the magnitude of the contour's derivatives by altering the effective range of s, i.e. s = [1..N]. In practice, one effect of this modification is the tendency to require β0 << α0 when a reasonable number of snaxels or elements are used. Currently, the number of elements is fixed at 32, which appears to represent adequately the boundaries of the wound images encountered during this work. Erring on the side of caution, it is probably better to have more elements than necessary.
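The effect of the tension normalisation (3.16) may be checked numerically: with α = N α0, the discrete elastic energy of a fixed circular contour is approximately unchanged when N is doubled (illustrative C++, not the thesis implementation):

```cpp
#include <cmath>

// Discrete elastic energy of a circle of the given radius sampled with
// N unit-spaced elements, using the normalised tension alpha = N * alpha0
// of equation (3.16). The energy should depend on the contour, not on N.
double elastic_energy(int N, double alpha0, double radius) {
    const double PI = std::acos(-1.0);
    double alpha = alpha0 * N;  // equation (3.16)
    double energy = 0.0;
    for (int i = 0; i < N; ++i) {
        double a0 = 2.0 * PI * i / N, a1 = 2.0 * PI * (i + 1) / N;
        double dx = radius * (std::cos(a1) - std::cos(a0));
        double dy = radius * (std::sin(a1) - std::sin(a0));
        energy += alpha * (dx * dx + dy * dy);  // alpha * |v_s|^2, ds = 1
    }
    return energy;
}
```

Doubling N from 32 to 64 changes the result by well under one percent; without the factor N, the energy would halve instead.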
3.5 The Minimax Algorithms (MX and MG)
The previous two algorithms, the GA and TN finite-element implementations, have parameters that must be properly set in order for them to work effectively without over-regularising the contour. Making use of the minimax principle (Gennert and Yuille, 1988), Lai and Chin (1993) have shown how an active contour model may be implemented without regard to the explicit setting of what they term the 'nuisance' parameters, i.e. the tension and stiffness constants. Their implementation allows the minimax optimisation to apply locally across the contour so that regularisation is adaptive depending upon the conditions existing at any part of the contour, e.g. image noise and curvature of the contour being followed. The energy minimisation algorithm is implemented as a direct local search using a modification of the greedy algorithm (Williams and Shah, 1992) and is based upon a finite difference discretisation; i.e. the snaxels are single points rather than piecewise continuous curves. It is evident that the algorithm may alternatively be implemented using cubic elements to define the contour and thus base the energy equations on a piecewise-continuous model. This requires modification of the energy term expressions since Lai and Chin specify them in discrete form only. The following sections describe the implementation of the continuity (elastic), curvature (bending) and external energy terms using a piecewise-continuous contour definition.
This term is related to the elastic energy term of Kass et al. (1987), although its particular formulation promotes equality of spacing between snaxels. Lai and Chin's continuity energy term (3.18) is scale-invariant; hence it does not cause the contour to collapse.
(3.18)
where d∞ is the average inter-snaxel distance under the ||·||∞ vector norm and vi is a contour point (snaxel). An equivalent formulation of (3.18) in the continuous (piecewise) domain is given by:
(3.19)
where … is the average element length.
Following Ivins and Porrill (1994), the number of elements, N, used to discretise the contour is included in (3.19) to preserve the energy balance of the active contour model if and when the number of elements is altered.
3.5.2 Curvature Energy Term
Lai and Chin measure curvature at a contour point by calculating the angle formed between two vectors subtended from that point to the two neighbouring points. Their curvature term is given by:
(3.20)  Ecurv(vi) = ¼ || ui/||ui|| − ui−1/||ui−1|| ||²
where ui=vi+1−vi is a vector joining two neighbouring points. The curvature energy is 0.5 when the two vectors form a right angle and approaches a maximum of 1.0 as the angle becomes increasingly acute. This is a scale-invariant measure of curvature.
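As an illustration, the angle-based measure described above can be computed directly from three consecutive snaxels. Since equation (3.20) is not reproduced here, the closed-form expression below (0.5·(1 + cos θ), with θ the angle between the two vectors subtended to the neighbours) is an assumption chosen to be consistent with the stated values of 0 for a straight contour, 0.5 at a right angle and 1.0 in the limit of an acute angle; the function and type names are illustrative only.

```cpp
#include <cmath>

struct Pt { double x, y; };

// Discrete, scale-invariant curvature energy at snaxel v, consistent with the
// description of (3.20): 0 for a straight contour, 0.5 when the vectors to
// the neighbouring snaxels form a right angle, approaching 1.0 as the angle
// becomes increasingly acute.  The exact published formula is assumed.
double curvatureEnergy(const Pt& prev, const Pt& v, const Pt& next)
{
    double ax = prev.x - v.x, ay = prev.y - v.y;   // vector to previous snaxel
    double bx = next.x - v.x, by = next.y - v.y;   // vector to next snaxel
    double dot = ax * bx + ay * by;
    double na  = std::sqrt(ax * ax + ay * ay);
    double nb  = std::sqrt(bx * bx + by * by);
    return 0.5 * (1.0 + dot / (na * nb));          // scale-invariant, in [0,1]
}
```

Because only unit vectors enter the expression, magnifying the contour leaves the energy unchanged, matching the scale-invariance claimed in the text.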
Curvature at a point on a continuous function can be defined as the inverse of the radius of a circular arc that is tangential to the function at that point (see Stroud, 1987). A continuous representation of curvature for a parametric contour element is given by:
(3.21)  C(s) = (vss · n) / ||vs||²
where n is a unit-normal vector to the contour.
Thus curvature is equal to the magnitude of the normal component of the term vss=(xss,yss) divided by the square of incremental arc length. An interesting point is that when the parameter s represents the true contour arc length, then the boundary tangent and normal vectors (vs and n respectively) have unit magnitude, and vss is also normal to the boundary. Curvature is then equal to the magnitude of vss. Clearly, since C(s) is a function of radius, the measure is not scale-invariant and attempting to minimise an energy term that integrates (3.21) will cause the contour to expand, since minimising curvature equates to maximising the radius of curvature. A scale-invariant measure of curvature may be defined by multiplying the curvature by the incremental contour length:
(3.22)  Ecurv(s) = (1/π) |C(s)| ||vs||
The division by π causes (3.22) to integrate to 0.5 for a perfectly circular 90° arc (the integral is taken over the range s=[0..1]). The total energy for the entire contour is obtained by integrating (3.22) over each element in the contour and summing the resulting integrals. Integrating (3.22) with respect to s is equivalent to integrating |C(s)| with respect to incremental contour length. This means that a magnification of an element affects both the curvature term and the incremental length term, with the net result that the curvature energy depends upon shape only. Taking the magnitude of C(s) is necessary to avoid negative curvatures cancelling out positive curvatures.
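The curvature measure of (3.21) and the scale-invariant integrand of (3.22) can be sketched as follows, given the first and second parametric derivatives of a contour element at a sample point. The function name and argument layout are illustrative; the unit normal is obtained by rotating the unit tangent through 90 degrees.

```cpp
#include <cmath>

// Scale-invariant curvature energy density for a parametric contour element:
// C(s) = (vss . n) / ||vs||^2  as in (3.21), and the integrand of (3.22) is
// |C(s)| * ||vs|| / pi, which integrates to 0.5 over a 90-degree circular
// arc regardless of its radius.
double curvEnergyDensity(double xs, double ys,    // first derivatives, vs
                         double xss, double yss)  // second derivatives, vss
{
    const double PI = 3.14159265358979323846;
    double speed2 = xs * xs + ys * ys;            // ||vs||^2
    double speed  = std::sqrt(speed2);
    double nx = -ys / speed, ny = xs / speed;     // unit normal to the contour
    double C = (xss * nx + yss * ny) / speed2;    // curvature, equation (3.21)
    return std::fabs(C) * speed / PI;             // integrand of (3.22)
}
```

For a quarter circle of any radius parameterised over s=[0..1] the density is a constant 0.5, so the element energy integrates to 0.5 as stated in the text.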
3.5.3 Gradient External Energy Term
The external energy term of Lai and Chin is defined by the following equation:
(3.23)
where φ is the angle between the gradient vector, ∇P(v), and the normal to the contour.
Figure 3.4 Comparison of directional gradient weighting
This produces a much more directional gradient weighting term than the standard cosine weight (dot product) used in equation (3.4). Figure 3.4 shows a comparison of the two energy terms as functions of the angle φ. With regard to the gradient weighting proposal of Radeva and Marti (1995), it would be a simple matter to remove the magnitude operator and allow the energy function to repel inversely oriented edges. The gradient magnitude in (3.23) is normalised in the range [0..1], although it is not clear over which domain the normalisation takes place. Williams and Shah (1992) normalise the gradient magnitude locally by considering, for each point, the gradient magnitudes of the 3×3 pixel neighbourhood within which each snaxel can move at each iteration. This produces a strong tendency for a snaxel to remain at a point of locally maximum gradient, even if all gradients within the neighbourhood are relatively weak; they apply a minimum range constraint to the normalisation to counteract this effect. The normalisation model proposed in this section is to scale the data within the range of gradient values found along the whole contour at each iteration. Equation (3.24) expresses the proposed normalisation:
(3.24)  g(v) = (|∇P(v)| − Gmin) / (Gmax − Gmin)
where Gmin and Gmax are respectively the minimum and maximum gradient magnitudes found along the contour at the current iteration.
Combining this normalisation with a cosine-weighted gradient magnitude operator produces the definition of external energy used with this model. The energy is normalised within the range [0..1].
(3.25)
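A minimal sketch of the proposed contour-wide normalisation: at each iteration the gradient magnitudes sampled along the contour are rescaled into [0..1] using their minimum and maximum over the whole contour, rather than over a local pixel neighbourhood. The function name is illustrative.

```cpp
#include <vector>
#include <algorithm>
#include <cstddef>

// Normalise gradient magnitudes sampled along the whole contour into [0..1],
// as proposed for (3.24): the minimum and maximum are taken over the contour
// samples at the current iteration, not over a 3x3 pixel neighbourhood.
std::vector<double> normaliseAlongContour(const std::vector<double>& grad)
{
    auto mm = std::minmax_element(grad.begin(), grad.end());
    double lo = *mm.first, hi = *mm.second;
    double range = hi - lo;
    std::vector<double> out(grad.size());
    for (std::size_t i = 0; i < grad.size(); ++i)
        out[i] = (range > 0.0) ? (grad[i] - lo) / range : 0.0;
    return out;
}
```

The normalised magnitude would then be combined with the cosine weighting to form the external energy of (3.25); that combination is not shown here since the published form of (3.25) is not reproduced in the text.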
3.5.4 Gray Level External Energy Term
Early experiments have shown that ambiguous contours, as detected by a gradient operator, are likely to make the result dependent upon initialisation. The use of a scale-descent algorithm can help contours started from different initialisations to converge to the same result; however, this can sometimes produce a hybrid ‘solution’ combining sections of two different contours. In cases where the gray level is more consistent around the true boundary, an adaptive gray-level-seeking energy term may be used to add weight to the contour with the most consistent gray level:
(3.26)
where n is an arbitrary constant and I(v) is a gray level image function sampled along the contour v.
Clearly, the comments concerning homogeneity of region information in §2.1 are relevant to this proposal. A modification which could be employed, but which has not been extensively tested, is to allow the ‘reference’ gray level – the mean gray level in the case of (3.26) – to vary around the wound in a controlled fashion. Wounds on highly curved parts of the lower leg can exhibit a marked change in intensity due to somewhat directional illumination. An example of a suitable reference function is the sum of the average gray level and the first two harmonics of gray level, which allows for a maximum of two local minima and two local maxima of intensity along the contour.
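The suggested reference function can be sketched by projecting the sampled gray levels onto their mean plus the first two Fourier harmonics around the contour. The discrete-Fourier formulation and the function name are assumptions about how such a reference would be extracted; the text specifies only the mean-plus-two-harmonics form.

```cpp
#include <vector>
#include <cmath>
#include <cstddef>

// Reference gray level around a closed contour: the mean (DC term) plus the
// first two Fourier harmonics of the gray levels sampled along the contour.
// This permits at most two local minima and two local maxima of reference
// intensity, accommodating somewhat directional illumination.
std::vector<double> harmonicReference(const std::vector<double>& gray)
{
    const double PI = 3.14159265358979323846;
    std::size_t n = gray.size();
    std::vector<double> ref(n, 0.0);
    for (int k = 0; k <= 2; ++k) {                 // DC term plus 2 harmonics
        double a = 0.0, b = 0.0;                   // Fourier coefficients
        for (std::size_t i = 0; i < n; ++i) {
            double t = 2.0 * PI * k * i / n;
            a += gray[i] * std::cos(t);
            b += gray[i] * std::sin(t);
        }
        double scale = (k == 0) ? 1.0 / n : 2.0 / n;
        for (std::size_t i = 0; i < n; ++i) {      // add this harmonic back in
            double t = 2.0 * PI * k * i / n;
            ref[i] += scale * (a * std::cos(t) + b * std::sin(t));
        }
    }
    return ref;
}
```

A uniform wound intensity is reproduced exactly, while higher-frequency gray level variation along the contour is smoothed away, as intended for the reference.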
3.5.5 Continuous Domain Minimisation
The continuous domain application of the minimax principle to energy minimisation yields (3.27) for the 3-term minimax algorithm (MX) and (3.28) for the 4-term algorithm incorporating the gray level term (MG), which must be minimised among a set of local search patterns at each iteration:
(3.27)  E = Σ over elements of ∫ max(Econt(v), Ecurv(v), Egradient(v)) ds
(3.28)  E = Σ over elements of ∫ max(Econt(v), Ecurv(v), Egradient(v), Egray(v)) ds
Figure 3.5 Depiction of the maximum energy profile over a contour element.
The integration of the continuous maximum energy profile in (3.27) and (3.28) is implemented by numerical quadrature; at present the closed-form extended trapezoidal rule is used. In addition to the use of quadrature to perform integration of external energy along the contour, one could also follow the example of Cohen (1991) and use a 2-D image interpolation scheme to allow the external energy to vary smoothly across pixels, although this has not been implemented in the current algorithm. This process of defining contour energy by continuous-domain maximisation of functions is depicted in Figure 3.5 for one element of a complete contour. The area under the solid line, representing the function max(Econt(v),Ecurv(v),Egradient(v)), is given by the integral term in (3.27), and this quantity is summed over all elements to give the total contour energy. At each iteration, the greedy algorithm attempts to minimise (3.27) among the set of local contour deformations. The modified greedy algorithm of Lai and Chin is guaranteed to converge to a unique minimum when one snaxel (control vertex in the piecewise continuous case) is moved at a time. After moving a single vertex, all affected parameters, e.g. contour length, must be recalculated to avoid the production of limit-cycle oscillations.
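The quadrature step can be sketched as follows, assuming the three energy profiles have already been sampled at equal parameter steps h along one element; the extended (composite) trapezoidal rule is then applied to their pointwise maximum. Function and parameter names are illustrative.

```cpp
#include <vector>
#include <algorithm>
#include <cstddef>

// Extended trapezoidal rule applied to the pointwise maximum of the three
// energy profiles sampled at equal steps h along one contour element, as
// used to evaluate the integral term of (3.27).
double maxEnergyIntegral(const std::vector<double>& econt,
                         const std::vector<double>& ecurv,
                         const std::vector<double>& egrad,
                         double h)
{
    double sum  = 0.0;
    double prev = std::max({econt[0], ecurv[0], egrad[0]});
    for (std::size_t i = 1; i < econt.size(); ++i) {
        double cur = std::max({econt[i], ecurv[i], egrad[i]});
        sum += 0.5 * (prev + cur) * h;             // trapezoid between samples
        prev = cur;
    }
    return sum;
}
```

Summing this quantity over all elements gives the total contour energy of (3.27); for (3.28) the gray level profile would simply join the max().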
3.5.6 Iteration and Convergence of Direct Minimisation Algorithms
The MX and MG algorithms use the modified greedy algorithm applied to equations (3.27) and (3.28) respectively. The modified greedy algorithm for the case of cubic B-Spline contour elements solves the following equation for each control vertex defining the contour:
(3.29)  vm = arg min (over the local search pattern) Σ Ej, j = m−2 .. m+1
where Ej is the energy integral of (3.27) or (3.28) evaluated over element j, and the sum is taken over the four elements influenced by the control vertex vm.
This summation is taken because altering any single control vertex controlling v(s) will affect the associated four neighbouring contour elements, and thus their energies, when C2 cubic B-Splines are used as the basis functions (N(s)). The discrete point models of Williams and Shah (1992) and Lai and Chin (1993) constrain the snaxels to lie on a regular grid, so that the position of each snaxel corresponds to a particular image pixel. In the MX and MG algorithms described in this section, the control vertices are free to move anywhere in the continuous spatial domain. They are moved by single-pixel steps initially, since the contour is placed fairly close to the boundary by the delineator and the execution time of the algorithm is not at present considered paramount. However, constraining the steps to be equal to the pixel size can produce significant variance in the areas enclosed by a contour; clearly, the effect will be scale-dependent, with smaller and more compact contours being affected to a greater degree. Thus the algorithms use the same type of convergence algorithm as employed by the GA and TN algorithms. The algorithm is described here, where Δ is the spatial step size that is adjusted as the algorithm converges:
MX/MG Direct Energy-Minimisation Algorithm
(1) Set Δ = 1
(2) Label initial control vertex vector V0 as Vold
WHILE Δ > 1/64
    (3) FOR m = 1 to N
            Calculate new control vertex position vector, Vnew, using (3.29)
    WHILE Vold ≠ Vnew
        (4) Vold = Vnew
        (3) FOR m = 1 to N
                Calculate new control vertex position vector, Vnew, using (3.29)
    END WHILE
    Halve step-size, Δ
END WHILE
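The step-halving loop above can be rendered in C++ roughly as follows. Here bestMove is a hypothetical stand-in for the evaluation of (3.29) over the local search pattern at the current step size; any quantities it depends on (e.g. contour length) are assumed to be recalculated inside it after each move, as the text requires to avoid limit-cycle oscillations.

```cpp
#include <vector>
#include <functional>
#include <cstddef>

struct Pt { double x, y; };

// Sketch of the MX/MG direct energy-minimisation loop.  bestMove is a
// caller-supplied routine (hypothetical here) that evaluates (3.29) for
// vertex m over the local search pattern at step size delta and returns
// the energy-minimising position for that vertex.
void minimise(std::vector<Pt>& V,
              std::function<Pt(const std::vector<Pt>&, std::size_t, double)> bestMove)
{
    double delta = 1.0;                            // (1) initial step of one pixel
    while (delta > 1.0 / 64.0) {
        bool moved;
        do {                                       // inner WHILE Vold != Vnew
            moved = false;
            for (std::size_t m = 0; m < V.size(); ++m) {   // (3) every vertex
                Pt p = bestMove(V, m, delta);
                if (p.x != V[m].x || p.y != V[m].y) {
                    V[m] = p;                      // affected quantities must be
                    moved = true;                  // recalculated inside bestMove
                }
            }
        } while (moved);
        delta *= 0.5;                              // halve the step size
    }
}
```

Terminating the outer loop at Δ = 1/64 pixel follows the pseudocode above and limits the scale-dependent area variance caused by whole-pixel steps.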
The combination of the minimisation equation (3.29) and the gradient normalisation method proposed in §3.5.3 appears, in practice, to contribute to the element-shrinking problem. This occurs because the difference in energy between points of minimum and maximum gradient is always equal to 1 so that points of maximum external energy always have the highest possible value of 1. This means that the ‘max’ function in (3.29) can reduce the external energy by selecting displacements of the vertices controlling the local element that extend the ends of the element along the contour in order to incorporate points of stronger gradient (lower energy). The regularising terms in the contour do not oppose this action because the ‘max’ function effectively disables them until the gradient energy is sufficiently low.
Clearly, the four active contour models described in the foregoing sections have different properties, which will affect their performance when applied to the task of leg ulcer area measurement. The following statements summarise the perceived advantages and disadvantages of the algorithms:
GA Algorithm
TN Algorithm
MX Algorithm
MG Algorithm