**C. J. Wu and A. H. Sung**

*Department of Computer Science, New Mexico Tech, Socorro, NM 87801-4682*

**H. S. Soliman**

*Department of Computer Science, New Mexico Tech, Socorro, NM 87801-4682*

**ABSTRACT:**

In this paper an application of the ART1 (adaptive resonance theory)
network to image data compression is studied, and the simulation
results are compared with those of counterpropagation networks
(CPNs). The unique ability of ART1 to control the trade-off between
compression ratio and image quality makes ART1 networks suitable
for this application. Computer simulations indicate that ART1
networks can be an alternative for data compression.

**INTRODUCTION**

The large amount of time and storage required to transmit pictorial
data brings about the need for image data compression. The CPN (Hecht-Nielsen
1987), which is self-organizing and combines a Kohonen layer with
a Grossberg outstar layer, has been shown to be a good candidate for
this application (Chang, Soliman, and Sung 1992). In this work,
ART1, Adaptive Resonance Theory 1 (Carpenter and Grossberg 1987),
is studied and its performance is compared with that of CPNs.
ART1 shares CPN's advantages, such as fast learning and self-organization.
In addition, ART1 has the vigilance parameter to control directly
the trade-off between compression ratio and distortion ratio.
Another factor affecting our choice is that we are concerned only
with the quality of the reconstructed pictorial data and the length
of time needed to transmit data from source to destination, not
the topological relation among nodes, which is the major feature
of Kohonen networks (Kohonen 1982). One modification to the similarity
test of the ART1 learning algorithm has been made to achieve error-free
compression. The computer simulations indicate that ART1 networks
perform as well as CPNs and can therefore be an alternative for
data compression.

**ART1 LEARNING ALGORITHM**

In the process of compression, a picture is split into small frames.
Those subimages are fed into an ART1 network for self-organization,
and the outputs represent the category indices for those subimages.
After learning, those indices and their corresponding prototypes
are transmitted to the destination. The picture is then reconstructed
according to those indices and prototypes. The benefit received
from this operation is measured by the compression ratio ($Q$).
Taking one bit per pixel for the binary subimages and $\lceil \log_2 C \rceil$
bits per transmitted index, it is computed as follows:

$$Q = \frac{FN}{F \lceil \log_2 C \rceil + CN} \qquad (1)$$

where $N$ is the dimension of the subimages, $F$ is the total number
of frames, and $C$ is the total number of categories formed during
learning. On the other hand, the price we pay is measured by the
distortion ratio defined as follows:

$$E = \frac{1}{FN} \sum_{i} \left| p_i - \hat{p}_i \right| \qquad (2)$$

where $p_i$ is the pixel value in the original image
and $\hat{p}_i$ is the corresponding value in the
reconstructed image. For image data compression, the vigilance parameter
$\rho$ is used to control the trade-off between
$Q$ and $E$.
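As an illustration, the two ratios can be computed as in the following minimal Python sketch. It assumes one bit per pixel for the binary subimages and $\lceil \log_2 C \rceil$ bits per transmitted category index; the paper's exact bit accounting is not shown, so this formula is a reconstruction.

```python
import math

def compression_ratio(F, N, C):
    """Compression ratio Q: original bits (F * N, one bit per pixel of each
    binary subimage) over transmitted bits (one index of ceil(log2 C) bits
    per frame plus C prototypes of N bits each). Reconstruction, not the
    paper's exact equation."""
    index_bits = F * max(1, math.ceil(math.log2(C)))
    return (F * N) / (index_bits + C * N)

def distortion_ratio(original, reconstructed):
    """Distortion ratio E: fraction of pixels that differ between the
    original and reconstructed binary images (flat 0/1 sequences)."""
    diff = sum(abs(a - b) for a, b in zip(original, reconstructed))
    return diff / len(original)
```

Raising the vigilance drives $E$ toward 0 at the cost of more categories $C$, which lowers $Q$.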

**The ART1 algorithm (Carpenter and Grossberg 1987, Zurada 1992)**

Step 1. Enable all the nodes on the $F_2$ layer and initialize $\mathbf{W}$ (the bottom-up weight matrix from $F_1$ to $F_2$), $\mathbf{T}$ (the top-down prototype matrix from $F_2$ to $F_1$), and the vigilance parameter $\rho$ ($0 < \rho \le 1$), where $\mathbf{W}$ and $\mathbf{T}$ are $M \times n$ and $n \times M$ matrices, $n$ is the number of nodes on the $F_1$ layer, and $M$ is the number of nodes on the $F_2$ layer.

Step 2. Present the binary unipolar input vector (or subimage) $\mathbf{x}$ to $F_1$. The winning node on the $F_2$ layer is the one for which $y_j$ is the largest:

$$y_m = \max_{j} \sum_{i=1}^{n} w_{ji} x_i \qquad (3)$$

where $m$ is the winning node on the $F_2$ layer, and $\mathbf{x}$ is the input pattern. If all the committed nodes have been deactivated, an uncommitted node will be selected randomly unless there are no such nodes, in which case the operation terminates.

Step 3. Test the similarity between the prototype $\mathbf{t}_m$ of winner $m$
and the input pattern $\mathbf{x}$ by computing the following
ratio:

$$\frac{\sum_{i=1}^{n} t_{im} \wedge x_i}{\sum_{i=1}^{n} x_i} \qquad (4)$$

where $\wedge$ is the logical AND operator.

Step 4. If the ratio in (4) is at least $\rho$, go to Step 5; otherwise node $m$ is deactivated by setting $y_m = -1$, and go back to Step 2.

Step 5. Update only the weights associated with the winning node $m$
as follows:

$$t_{im}^{\,new} = t_{im}\, x_i \qquad (5)$$

$$w_{mi}^{\,new} = \frac{L\, t_{im}\, x_i}{L - 1 + \sum_{k=1}^{n} t_{km}\, x_k} \qquad (6)$$

where $L$ is equal to 1.5 in this work.

Step 6. If there are no more inputs, terminate the program; otherwise
get the next input vector and go back to Step 2.
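Steps 1 through 6 can be sketched in Python roughly as follows. This is a minimal reconstruction, not the authors' code: the variable names are ours, and the initialization and Weber-law weight update follow the standard Carpenter-Grossberg formulation with the factor $L$ set to 1.5 as above.

```python
import numpy as np

def art1_compress(frames, rho=0.9, L=1.5):
    """Cluster binary subimages with a minimal ART1 loop.
    frames: (F, N) array of 0/1 row vectors.
    Returns (category indices, list of prototype vectors)."""
    prototypes = []                  # top-down templates (one per category)
    bottom_up = []                   # bottom-up weight vectors
    indices = []
    for x in frames:
        active = list(range(len(prototypes)))
        winner = None
        while active:
            # Step 2: winner maximizes the bottom-up activation W_j . x
            j = max(active, key=lambda k: bottom_up[k] @ x)
            # Step 3: similarity test |T_j AND x| / |x|
            overlap = np.logical_and(prototypes[j], x).sum()
            if overlap / max(x.sum(), 1) >= rho:     # Step 4: resonance
                winner = j
                break
            active.remove(j)                         # deactivate node j
        if winner is None:
            # commit a fresh (uncommitted) node storing x directly
            prototypes.append(x.copy())
            bottom_up.append(L * x / (L - 1 + x.sum()))
            winner = len(prototypes) - 1
        else:
            # Step 5: fast learning -- intersect prototype with input,
            # then renormalize the bottom-up weights (Weber law rule)
            prototypes[winner] = np.logical_and(prototypes[winner], x).astype(int)
            s = prototypes[winner].sum()
            bottom_up[winner] = L * prototypes[winner] / (L - 1 + s)
        indices.append(winner)
    return indices, prototypes
```

With a high vigilance such as 0.9, identical frames share one category while dissimilar frames commit new nodes, which is exactly the trade-off equations (1) and (2) quantify.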

**EXPERIMENTAL RESULTS**

For some critical pictorial data, we might want to have error-free
compression, i.e., mapping each unique input pattern to a unique
category cell on the $F_2$ layer. Thus, an
identical picture can be reconstructed after the transmission.
Unfortunately, ART1 networks (Carpenter and Grossberg 1987)
cannot guarantee error-free compression even when we use the
strictest vigilance, $\rho = 1$. We depict this problem by the example
in Figure 1. In this example, an ART1 network cannot distinguish
pattern A from pattern B using vigilance 1 if we present pattern
B first and then pattern A. Since pattern B is presented first, an
uncommitted node of $F_2$ (category 1, or node
1) will store it immediately. Then, when pattern A is presented next,
node 1 on the $F_2$ layer will be the winning
node first according to the Weber Law Rule, since all the input
patterns or prototypes are subsets of those uncommitted nodes,
whose weights have initial value 1. Thus, the input pattern A will be
assigned to category 1 as well because, with the black pixels of A
a subset of those of B,

$$\frac{\sum_i t_{i1} \wedge a_i}{\sum_i a_i} = \frac{\sum_i a_i}{\sum_i a_i} = 1 \ge \rho.$$
Clearly, an ART1 network cannot distinguish a new input pattern from the stored category prototype of the winning node under equation (4) if the set of black pixels of the new input pattern is a subset of the black pixels of the stored prototype. Therefore, some distortion will be introduced during the process. The approach used in this work is to count directly the number of differing bits between the new input pattern and a stored category prototype, instead of using the inner-product (or logical AND) operation. In other words, we use equation (7) to replace equation (4):

$$\sum_{i=1}^{n} \left| x_i - t_{im} \right| \qquad (7)$$

where $\mathbf{x}$ and $\mathbf{T}$ are a binary unipolar vector and matrix, respectively.
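A minimal Python sketch of this modified test follows. The acceptance threshold of $(1 - \rho)\,n$ differing bits for vigilance below 1 is our assumption; the paper only requires that $\rho = 1$ force an exact match.

```python
import numpy as np

def differs_too_much(x, prototype, rho, n):
    """Modified similarity test (a reading of equation (7)): count the bits
    where the input and the stored prototype disagree. With rho = 1 only an
    exact match passes, so every distinct pattern gets its own category and
    the compression is error-free. The (1 - rho) * n threshold for rho < 1
    is an assumption, not spelled out in the paper."""
    hamming = int(np.abs(np.asarray(x) - np.asarray(prototype)).sum())
    return hamming > (1.0 - rho) * n
```

Unlike the AND-based ratio in (4), this count is symmetric, so an input whose black pixels are a strict subset of the prototype's is still rejected at $\rho = 1$.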

The experiments are performed on three black-and-white pictures.
Simulation results of the modified ART1 networks are listed in
Table 1, together with those using CPNs (Chang, Soliman, and Sung
1992). Also, the reconstructed images of "Bear" from four operations
(supervised CPN, CPN with a KFM (Kohonen Feature Map) layer, CPN
with an FSO (Frequency Sensitive Self-Organization) layer (Fang
1992), and ART1), along with the original picture, are shown in
Figure 2. As shown, ART1 networks perform as well as CPNs in terms
of compression ratio (Q) and distortion ratio (E). Also, although
the supervised CPN has a higher compression ratio and lower
distortion ratio than the other CPNs and ART1, the severe loss of
the object's edges makes recognition very difficult; therefore,
it may not be suitable for practical use. By adjusting the vigilance
parameter from 1 to near 0, ART1 ranges from error-free compression
to highly compressed operation. Moreover, picture-quality control
and parameter-value assignment are more direct in ART. This makes
ART1 networks very attractive for use in real-time environments.

**CONCLUSIONS AND DISCUSSIONS**

In this paper the application of ART1 networks to image data compression
has been studied. A modified similarity test that counts the number
of differing bits between an input pattern and the stored prototypes,
instead of using their inner products, gives ART1 networks the
ability to perform error-free compression. The computer simulation
results indicate that ART1 networks perform as well as CPNs in terms
of compression ratio and distortion ratio in the cases we analyzed.
Though more simulations are needed to draw a firm conclusion about
the comparison between ART and CPN, the initial simulation results
indicate that the use of ART in data/image compression is promising.

**ACKNOWLEDGMENTS**

The authors would like to thank W. Chang for his help during computer
simulations. Partial support for this work from BDM International
in Albuquerque, New Mexico, is also acknowledged.

**REFERENCES**

Carpenter, G.A., Grossberg, S. (1987). A Massively Parallel Architecture
for a Self-Organizing Neural Pattern Recognition Machine, *Computer
Vision, Graphics, and Image Processing*, Vol. 37, 54-115.

Chang, W., Soliman, H.S., Sung, A.H. (1992). Image Data Compression
Using Counterpropagation Network, *Proceedings of IEEE International
Conference on Systems, Man, and Cybernetics*, 405-409.

Chang, W., Soliman, H.S., Sung, A.H. (1993). Preserving Visual
Perception by Learning Natural Clustering, *Proceedings of IEEE
International Conference on Neural Networks*, 661-666.

Fang, W.C., Sheu, B.J., Chen, O.T.C., Choi, J. (1992). A VLSI Neural
Processor for Image Data Compression Using Self-Organization Networks,
*IEEE Trans. Neural Networks*, Vol. 3, 506-518.

Grossberg, S. (1976). Adaptive Pattern Classification and Universal
Recoding: I. Parallel Development and Coding of Neural Feature
Detectors, *Biological Cybernetics*, Vol. 23, 121-134.

Hecht-Nielsen, R. (1987). Counterpropagation Networks, *Applied
Optics*, Vol. 26, 4979-4984.

Kohonen, T. (1982). Self-Organized Formation of Topologically Correct
Feature Maps, *Biological Cybernetics*, Vol. 43, 59-69.

Zurada, J.M. (1992). *Introduction to Artificial Neural Systems*,
West Publishing Co., St. Paul.

C.J. Wu, A.H. Sung, and H.S. Soliman, "Image Data Compression Using
ART Networks," in *Intelligent Engineering Systems Through Artificial
Neural Networks* (St. Louis, Missouri), C.H. Dagli, L.I. Burke,
B.R. Fernandez, and J. Ghosh, Eds., vol. 3, pp. 417-422, Nov. 1993.