Laurene Fausett, Fundamentals of Neural Networks (ebook)


 


Unsupervised learning is also used for other tasks, in addition to clustering. Examples are included in Chapter 7.

Fixed-weight nets

Still other types of neural nets can solve constrained optimization problems.

Such nets may work well for problems that can cause difficulty for traditional techniques, such as problems with conflicting constraints. Often, in such cases, a nearly optimal solution (which the net can find) is satisfactory.

When these nets are designed, the weights are set to represent the constraints and the quantity to be maximized or minimized. The Boltzmann machine (without learning) and the continuous Hopfield net (Chapter 7) can be used for constrained optimization problems. Fixed weights are also used in contrast-enhancing nets (see Section 4).

As mentioned before, the basic operation of an artificial neuron involves summing its weighted input signal and applying an output, or activation, function.

For the input units, this function is the identity function (see Figure 1).

Typically, the same activation function is used for all neurons in any particular layer of a neural net, although this is not required. In most cases, a nonlinear activation function is used. In order to achieve the advantages of multilayer nets, compared with the limited capabilities of single-layer nets, nonlinear functions are required, since the result of feeding a signal through two or more layers of linear processing elements (i.e., elements with linear activation functions) is no more general than what can be obtained with a single layer.

Single-layer nets often use a step function to convert the net input, which is a continuously valued variable, to an output signal that is binary (1 or 0) or bipolar (1 or -1) (see Figure 1). The use of a threshold in this regard is discussed in Section 2. The binary step function is also known as the threshold function or Heaviside function.

Sigmoid functions S-shaped curves are useful activation functions. The logistic function and the hyperbolic tangent functions are the most common. They are especially advantageous for use in neural nets trained by backpropagation, because the simple relationship between the value of the function at a point and the value of the derivative at that point reduces the computational burden during training. To emphasize the range of the function, we will call it the binary sigmoid; it is also called the logistic sigmoid.

This function is illustrated in Figure 1. As is shown in Section 6, the logistic sigmoid can be scaled to have any desired range of output values. The most common range is from -1 to 1; we call this sigmoid the bipolar sigmoid. It is illustrated in Figure 1.

The bipolar sigmoid is closely related to the hyperbolic tangent function, which is also often used as the activation function when the desired range of output values is between -1 and 1. With steepness parameter 1, the binary sigmoid is f(x) = 1 / (1 + exp(-x)) and the bipolar sigmoid is

g(x) = 2 f(x) - 1 = (1 - exp(-x)) / (1 + exp(-x)),

while the hyperbolic tangent is

tanh(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x)) = (1 - exp(-2x)) / (1 + exp(-2x)),

so the two differ only in the scale of the argument: tanh(x) = g(2x). For binary data (rather than continuously valued data in the range from 0 to 1), it is usually preferable to convert to bipolar form and use the bipolar sigmoid or hyperbolic tangent.
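As a minimal illustration of these functions (a Python sketch, not code from the text; the steepness parameter sigma is named here only for convenience), the binary step, binary sigmoid, and bipolar sigmoid can be written as follows, together with a check of the relation tanh(x) = g(2x):

```python
import math

def binary_step(x, theta=0.0):
    """Heaviside / threshold function: 1 if the net input exceeds theta, else 0."""
    return 1 if x > theta else 0

def binary_sigmoid(x, sigma=1.0):
    """Logistic sigmoid with steepness sigma; range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-sigma * x))

def binary_sigmoid_deriv(x, sigma=1.0):
    """Derivative expressed through the function value itself,
    which is what makes training by backpropagation cheap."""
    f = binary_sigmoid(x, sigma)
    return sigma * f * (1.0 - f)

def bipolar_sigmoid(x, sigma=1.0):
    """Rescaled logistic sigmoid; range (-1, 1)."""
    return 2.0 * binary_sigmoid(x, sigma) - 1.0

# The bipolar sigmoid and tanh differ only in the scale of the argument.
for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(math.tanh(x) - bipolar_sigmoid(2.0 * x)) < 1e-12
```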

A more extensive discussion of the choice of activation functions and different forms of sigmoid functions is given in Section 6. The following notation will be used throughout the discussions of specific neural nets, unless indicated otherwise for a particular net (appropriate values for the parameters depend on the particular neural net model being used and will be discussed further for each model):

xi, yj: Activations of units Xi and Yj, respectively.

wij: Weight on the connection from unit Xi to unit Yj. (Some authors use the opposite convention, with wij denoting the weight from unit Yj to unit Xi.)

bj: Bias on unit Yj. A bias acts like a weight on a connection from a unit with a constant activation of 1 (see Figure 1).

w.j: Vector of weights into unit Yj; this is the jth column of the weight matrix.

θj: Threshold for unit Yj. A step activation function sets the activation of a neuron to 1 whenever its net input is greater than the specified threshold value θj; otherwise its activation is 0 (see Figure 1).

Learning rate: The learning rate is used to control the amount of weight adjustment at each step of training.

The bias is treated exactly like any other weight, i.e., like a weight on a connection from a unit whose output signal is always 1. The net input to unit Yj is then

y_inj = bj + Σi xi wij.
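A small sketch of this computation, with illustrative names of my own choosing:

```python
def net_input(x, w, b):
    """Net input to a single output unit: bias plus weighted sum of inputs.
    x and w are equal-length sequences; b is the bias (a weight from a
    constant +1 input)."""
    return b + sum(xi * wi for xi, wi in zip(x, w))

def step(net, theta=0.0):
    """Binary step activation with threshold theta."""
    return 1 if net > theta else 0

# Example: two inputs, both 1, with unit weights and zero bias.
print(step(net_input([1, 1], [1, 1], b=0.0), theta=1.5))  # -> 1
```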

This section presents a very brief summary of the history of neural networks, in terms of the development of architectures and algorithms that are widely used today. Results of a primarily biological nature are not included, due to space constraints. They have, however, served as the inspiration for a number of networks that are applicable to problems beyond the original ones studied.

Thus, the field is strongly interdisciplinary. These researchers recognized that combining many simple neurons into neural systems was the source of increased computational power.

The weights on a McCulloch-Pitts neuron are set so that the neuron performs a particular simple logic function, with different neurons performing different functions. The neurons can be arranged into a net to produce any output that can be represented as a combination of logic functions.

The flow of information through the net assumes a unit time step for a signal to travel from one neuron to the next. This time delay allows the net to model some physiological processes, such as the perception of hot and cold. The idea of a threshold such that if the net input to a neuron is greater than the threshold then the unit fires is one feature of a McCulloch-Pitts neuron that is used in many artificial neurons today.

Hebb learning

Donald Hebb, a psychologist at McGill University, designed the first learning law for artificial neural networks [Hebb, ]. His premise was that if two neurons were active simultaneously, then the strength of the connection between them should be increased.


The idea is closely related to the correlation matrix learning developed by Kohonen and Anderson, among others. Although today neural networks are often viewed as an alternative to or complement of traditional computing, it is interesting to note that John von Neumann, the "father of modern computing," was keenly interested in modeling the brain [von Neumann, ].

Johnson and Brown and Anderson and Rosenfeld discuss the interaction between von Neumann and early neural network researchers such as Warren McCulloch, and present further indication of von Neumann's views of the directions in which computers would develop.

The most typical perceptron consisted of an input layer (the "retina") connected by paths with fixed weights to associator neurons; the weights on the connection paths from the associator neurons to the output unit were adjustable. The perceptron learning rule uses an iterative weight adjustment that is more powerful than the Hebb rule. Perceptron learning can be proved to converge to the correct weights if there are weights that will solve the problem at hand (i.e., classify all of the training patterns correctly).

Rosenblatt's work describes many types of perceptrons. Like the neurons developed by McCulloch and Pitts and by Hebb, perceptrons use a threshold output function. The early successes with perceptrons led to enthusiastic claims.

The perceptron rule adjusts the connection weights to a unit whenever the response of the unit is incorrect. The response indicates a classification of the input pattern. The delta rule adjusts the weights to reduce the difference between the net input to the output unit and the desired output.

This results in the smallest mean squared error. The similarity of models developed in psychology by Rosenblatt to those developed in electrical engineering by Widrow and Hoff is evidence of the interdisciplinary nature of neural networks. The difference in learning rules, although slight, leads to an improved ability of the net to generalize (i.e., respond to input that is similar, but not identical, to the training data); the sketch below contrasts the two updates.
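The contrast between the two updates can be sketched in a few lines (illustrative only; alpha denotes the learning rate, and the function names are mine):

```python
def perceptron_update(w, b, x, target, output, alpha=1.0):
    """Perceptron rule: adjust weights only when the (thresholded)
    response is incorrect; the correction uses the target itself."""
    if output != target:
        w = [wi + alpha * target * xi for wi, xi in zip(w, x)]
        b = b + alpha * target
    return w, b

def delta_update(w, b, x, target, net_input, alpha=0.1):
    """Delta (Widrow-Hoff) rule: always adjust weights in proportion to the
    difference between the target and the net input, which reduces the
    mean squared error over the training set."""
    err = target - net_input
    w = [wi + alpha * err * xi for wi, xi in zip(w, x)]
    b = b + alpha * err
    return w, b
```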

The Widrow-Hoff learning rule for a single-layer network is a precursor of the backpropagation rule for multilayer nets. Work by Widrow and his students is sometimes reported as neural network research, sometimes as adaptive linear systems.

In spite of Minsky and Papert's demonstration of the limitations of perceptrons (i.e., of single-layer nets), research on neural networks continued. Many of the current leaders in the field began to publish their work during the 1970s. Widrow, of course, had started somewhat earlier and is still active.

Kohonen

The early work of Teuvo Kohonen, of Helsinki University of Technology, dealt with associative memory neural nets. His more recent work [Kohonen, ] has been the development of self-organizing feature maps that use a topological structure for the cluster units.

Anderson

James Anderson, of Brown University, also started his research in neural networks with associative memory nets [Anderson, , ]. Among the areas of application for these nets are medical diagnosis and learning multiplication tables.

Anderson and Rosenfeld and Anderson, Pellionisz, and Rosenfeld are collections of fundamental papers on neural network research.

The introductions to each are especially useful.


Grossberg

Stephen Grossberg, together with his many colleagues and coauthors, has had an extremely prolific and productive career. Klimasauskas lists publications by Grossberg from to . His work, which is very mathematical and very biological, is widely known [Grossberg, , , , , ]. Adaptive resonance theory nets for binary input patterns (ART1) are among the results of this work.

Renewed Enthusiasm

A method for propagating information about errors at the output units back to the hidden units had been discovered in the previous decade [Werbos, ], but had not gained wide publicity. This method was also discovered independently by David Parker and by LeCun before it became widely known.

These nets can serve as associative memory nets and can be used to solve constraint satisfaction problems such as the "Traveling Salesman Problem."

Neocognitron

Kunihiko Fukushima and his colleagues at NHK Laboratories in Tokyo have developed a series of specialized neural nets for character recognition.

One example of such a net, called a neocognitron, is described in Chapter 7. An earlier self-organizing network, called the cognitron [Fukushima, ], failed to recognize position- or rotation-distorted characters.

These nets incorporate such classical ideas as simulated annealing and Bayesian decision theory.

Hardware implementation

Another reason for renewed interest in neural networks (in addition to solving the problem of how to train a multilayer net) is improved computational capabilities. Carver Mead, of the California Institute of Technology, who also studies motion detection, is the coinventor of software to design microchips. He is also cofounder of Synaptics, Inc.

Nobel laureate Leon Cooper, of Brown University, introduced one of the first multilayer nets, the reduced Coulomb energy network. The DARPA report is a valuable summary of the state of the art in artificial neural networks, especially with regard to successful applications. To quote from the preface to his book, Neurocomputing, Hecht-Nielsen is "an industrialist, an adjunct academic, and a philanthropist without financial portfolio" [Hecht-Nielsen, ].

He is the founder of HNC, Inc.

The McCulloch-Pitts neuron displays several important features found in many neural networks. The requirements for McCulloch-Pitts neurons may be summarized as follows:

1. The activation of a McCulloch-Pitts neuron is binary. That is, at any time step, the neuron either fires (has an activation of 1) or does not fire (has an activation of 0).

2. McCulloch-Pitts neurons are connected by directed, weighted paths. A connection path is excitatory if the weight on the path is positive; otherwise it is inhibitory. All excitatory connections into a particular neuron have the same weights.

3. Each neuron has a fixed threshold such that if the net input to the neuron is greater than the threshold, the neuron fires.

4. The threshold is set so that inhibition is absolute. That is, any nonzero inhibitory input will prevent the neuron from firing.

5. It takes one time step for a signal to pass over one connection link.

A simple example of a McCulloch-Pitts neuron is shown in Figure 1. The connection from X1 to Y is excitatory, as is the connection from X2 to Y. These excitatory connections have the same positive weight because they are going into the same unit. The threshold for unit Y is 4; for the values of the excitatory and inhibitory weights shown, this is the only integer value of the threshold that will allow Y to fire sometimes, but will prevent it from firing if it receives a nonzero signal over the inhibitory connection.

It takes one time step for the signals to pass from the X units to Y; the activation of Y at time t is determined by the activations of X1, X2, and X3 at the previous time, t - 1. The use of discrete time steps enables a network of McCulloch-Pitts neurons to model physiological processes in which there is a time delay, such as the perception of hot and cold discussed later in this section. In general, a McCulloch-Pitts neuron Y may receive signals from any number of other neurons, over excitatory and inhibitory connections such as those shown in Figure 1. The activation function for unit Y is

f(y_in) = 1 if y_in >= θ, and f(y_in) = 0 if y_in < θ,

where y_in is the total input signal received and θ is the threshold. The weights for a McCulloch-Pitts neuron are set, together with the threshold for the neuron's activation function, so that the neuron will perform a simple logic function.
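A rough rendering of such a unit in Python follows. The excitatory weight of 2 and inhibitory weight of -1 are assumed values consistent with the description above (the figure itself is not reproduced here); with threshold 4 the unit then fires only when both excitatory inputs are on and the inhibitory input is off:

```python
def mcculloch_pitts(inputs, weights, threshold):
    """One McCulloch-Pitts unit: binary output, fixed weights,
    fires (returns 1) only if the weighted sum reaches the threshold."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

# Assumed weights: two excitatory inputs (weight 2 each),
# one inhibitory input (weight -1), threshold 4.
for x1 in (0, 1):
    for x2 in (0, 1):
        for x3 in (0, 1):
            y = mcculloch_pitts((x1, x2, x3), (2, 2, -1), threshold=4)
            print(x1, x2, x3, "->", y)
```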

Since analysis, rather than a training algorithm, is used to determine the values of the weights and threshold, several examples of simple McCulloch-Pitts neurons are presented in this section.

Using these simple neurons as building blocks, we can model any function or phenomenon that can be represented as a logic function.

Simple networks of McCulloch-Pitts neurons, each with a threshold of 2, are shown in Figures 1. The activation of unit Xi at time t is denoted xi(t).

The activation of a neuron Xi at time t is determined by the activations, at time t - 1, of the neurons from which it receives signals.

Logic functions will be used as simple examples for a number of neural nets. Each of these functions acts on two input values.

AND

The AND function gives the response "true" if both input values are "true"; otherwise the response is "false." Example 1. The threshold on unit Y is 2.

OR

The OR function gives the response "true" if either of the input values is "true"; otherwise the response is "false." The threshold on unit Y is 2.

AND NOT

The response is "true" if the first input value, x1, is "true" and the second input value, x2, is "false"; otherwise the response is "false."

In other words, neuron Y fires at time t if and only if unit X1 fires at time t - 1 and unit X2 does not fire at time t - 1. The threshold for unit Y is 2.

XOR

The XOR (exclusive or) function gives the response "true" if exactly one of the input values is "true"; otherwise the response is "false." XOR can be expressed as

x1 XOR x2 = (x1 AND NOT x2) OR (x2 AND NOT x1),

so it can be constructed in two layers from the units just described, as in the sketch below.
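A short sketch of these logic units (my own illustration, reusing the threshold idea above; the weights shown are one standard choice consistent with a threshold of 2):

```python
def mp_unit(inputs, weights, threshold=2):
    """A McCulloch-Pitts unit with the fixed threshold of 2 used in these examples."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

def AND(x1, x2):     return mp_unit((x1, x2), (1, 1))    # fires only if both inputs fire
def OR(x1, x2):      return mp_unit((x1, x2), (2, 2))    # fires if either input fires
def AND_NOT(x1, x2): return mp_unit((x1, x2), (2, -1))   # fires if x1 fires and x2 does not

def XOR(x1, x2):
    """Two-layer construction: z1 = x1 AND NOT x2, z2 = x2 AND NOT x1, y = z1 OR z2.
    (In a timed McCulloch-Pitts net the output appears two time steps after
    the inputs; the delay is ignored here.)"""
    z1 = AND_NOT(x1, x2)
    z2 = AND_NOT(x2, x1)
    return OR(z1, z2)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, AND(x1, x2), OR(x1, x2), AND_NOT(x1, x2), XOR(x1, x2))
```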

Hot and cold

Example 1. If a cold stimulus is applied to a person's skin for a very short period of time, the person will perceive heat. However, if the same stimulus is applied for a longer period, the person will perceive cold. The use of discrete time steps enables the network of McCulloch-Pitts neurons shown in Figure 1 to model this phenomenon. The example is an elaboration of one originally presented by McCulloch and Pitts []. The model is designed to give only the first perception of heat or cold that is received by the perceptor units.

The net uses input units for the hot and cold receptors, two auxiliary neurons, and two perceptor units for the sensations of heat and cold. Each neuron has a threshold of 2, i.e., it fires whenever its net input is at least 2. Input to the system will be (1, 0) if heat is applied and (0, 1) if cold is applied. The desired response of the system is that cold is perceived if a cold stimulus is applied for two time steps, i.e., on two successive time steps. In order to model the physical phenomenon described, it is also required that heat be perceived if either a hot stimulus is applied or a cold stimulus is applied briefly (for one time step) and then removed.

To see that the net shown in Figure 1 gives the desired responses, consider the units one at a time. The response of unit Z2 at time t - 1 is simply the value of X2 at the previous time step, t - 2:

z2(t - 1) = x2(t - 2).

The analysis for the response of neuron Y2 at time t proceeds in a similar manner. Case 1, a cold stimulus applied for one time step and then removed, is illustrated in Figure 1. Case 2, a cold stimulus applied for two time steps, is illustrated in Figure 1.

In each case, only the activations that are known at a particular time step are indicated. The weights on the connections are as in Figure 1.

Case 1: A cold stimulus applied for one time step. Although the responses of the perceptor units are determined, no perception of hot or cold has reached them yet.

Case 2: A cold stimulus applied for two time steps. Note that the activations of the input units are not specified, since the first response of the net to the cold stimulus being applied for two time steps is not influenced by whether or not the stimulus is removed after the two steps.

Case 3: A hot stimulus applied for one time step.
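Because the figure with the actual connections is not reproduced here, the following simulation uses one set of weights that produces the behavior described (hot receptor X1, cold receptor X2, auxiliary units Z1 and Z2, perceptor units Y1 for heat and Y2 for cold, every threshold equal to 2). The specific weights are an assumption, not the book's figure:

```python
def fires(net):
    # Every unit in this net has threshold 2.
    return 1 if net >= 2 else 0

def step_net(x1, x2, z1, z2):
    """One synchronous time step of the hot/cold net.
    Assumed weights (NOT taken from the book's figure, only consistent with it):
      Y1 (heat) <- 2*X1 + 2*Z1      Y2 (cold) <- 1*X2 + 1*Z2
      Z1        <- -1*X2 + 2*Z2     Z2        <- 2*X2"""
    y1 = fires(2 * x1 + 2 * z1)
    y2 = fires(1 * x2 + 1 * z2)
    new_z1 = fires(-1 * x2 + 2 * z2)
    new_z2 = fires(2 * x2)
    return y1, y2, new_z1, new_z2

def run(label, stimuli, extra_steps=3):
    """stimuli is a list of (hot, cold) receptor values over time.
    As the text notes, only the FIRST perception reported is meaningful."""
    print(label)
    z1 = z2 = 0
    for t, (x1, x2) in enumerate(stimuli + [(0, 0)] * extra_steps):
        y1, y2, z1, z2 = step_net(x1, x2, z1, z2)
        print(f"  t={t + 1}: heat={y1} cold={y2}")

run("Case 1: cold for one step, then removed (heat is perceived)", [(0, 1)])
run("Case 2: cold for two steps (cold is perceived)", [(0, 1), (0, 1)])
run("Case 3: hot for one step (heat is perceived)", [(1, 0)])
```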

Many of the applications and historical developments we have summarized in this chapter are described in more detail in two collections of original research: Neurocomputing: Foundations of Research and Neurocomputing 2: Directions for Research. These contain useful papers, along with concise and insightful introductions explaining the significance and key results of each paper.

Nontechnical introductions

Two very readable nontechnical introductions to neural networks, with an emphasis on the historical development and the personalities of the leaders in the field, are:

Inside the Neural Network Revolution [Allman, ].

Applications

Among the books dealing with neural networks for particular types of applications are:

History

The history of neural networks is a combination of progress in experimental work with biological neural systems, computer modeling of biological neural systems, the development of mathematical models and their applications to problems in a wide variety of fields, and hardware implementation of these models. In addition to the collections of original papers already mentioned, in which the introductions to each paper provide historical perspectives, Embodiments of Mind [McCulloch, ] is a wonderful selection of some of McCulloch's essays.

Biological neural networks

Introduction to Neural and Cognitive Modeling [Levine, ] provides extensive information on the history of neural networks from a mathematical and psychological perspective.

Exercises

Each neuron other than the input neurons, N. Define the response of neuron N5 at time t in terms of the activations of the input neurons, N. Show the activation of each neuron that results from an input signal of N.

One such example is presented in Section 1. Find another representation and the net for it. How do the two nets (yours and the one in Section 1) compare? Can you modify the net to require the cold stimulus to be applied for three time steps before cold is felt? State clearly any further assumptions as to what happens to the input stimuli that may be necessary or relevant. Use three input units to correspond to the three pitches, "do," "re," and "mi."

Use two output units to correspond to the perception of an "upscale segment" and a "downscale segment." You may wish to elaborate on this example, allowing for more than one input unit to be "on" at any instant of time, designing output units to detect chords, etc.

One of the simplest tasks that neural nets can be trained to perform is pattern classification. In pattern classification problems, each input vector (pattern) belongs, or does not belong, to a particular class or category. For a neural net approach, we assume we have a set of training patterns for which the correct classification is known. In the simplest case, we consider the question of membership in a single class.

The output unit represents membership in the class with a response of 1; a response of -1 (or 0 if binary representation is used) indicates that the pattern is not a member of the class. For the single-layer nets described in this chapter, extension to the more general case, in which each pattern may or may not belong to any of several classes, is immediate. In such a case, there is an output unit for each class.

Pattern classification is one type of pattern recognition; the associative recall of patterns, discussed in Chapter 3, is another. Pattern classification problems arise in many areas. Donald Specht, a student of Widrow, used neural networks to detect heart abnormalities with EKG-type data as input (46 measurements). In this chapter, we shall discuss three methods of training a simple single-layer neural net for pattern classification: the Hebb rule, the perceptron learning rule, and the delta rule. First, however, we discuss some issues that are common to all single-layer nets that perform pattern classification.

Many real-world examples need more sophisticated architectures and training rules, because the conditions for a single-layer net to be adequate (see Section 2) are not met.

However, if the conditions are met approximately, the results may be sufficiently accurate. Also, insight can be gained from the simpler nets, since the meaning of the weights may be easier to interpret. The basic architecture of the simplest possible neural networks that perform pattern classification consists of a layer of input units (as many units as the patterns to be classified have components) and a single output unit. Most neural nets we shall consider in this chapter use the single-layer architecture shown in Figure 2.

This allows classification of vectors, which are n-tuples, but considers membership in only one category. (Figure 2 shows the input units connected directly to the single output unit.)


An example of a net that classifies the input into several categories is considered in Section 2. A bias acts exactly as a weight on a connection from a unit whose activation is always 1. Increasing the bias increases the net input to the unit. Some authors prefer to use a fixed threshold for the activation function rather than an adjustable bias; however, as the next example will demonstrate, this is essentially equivalent to the use of an adjustable bias.

Example 2. To facilitate a graphical display of the relationships, we illustrate the ideas for an input with two components, while the output is a scalar (i.e., it has a single component). The architecture of these examples is given in Figure 2. The boundary between the values of x1 and x2 for which the net gives a positive response and the values for which it gives a negative response is the separating line.

During training, values of w1, w2, and b are determined so that the net will have the correct response for the training data.

This gives the equation of the line separating positive from negative output as

x2 = -(w1/w2) x1 - (b/w2),

assuming w2 is not zero. If a fixed threshold θ is used instead of a bias, then during training only the values of w1 and w2 are determined so that the net will have the correct response for the training data. In this case, the separating line cannot pass through the origin; its equation is

x2 = -(w1/w2) x1 + (θ/w2).

The form of the separating line found by using an adjustable bias and the form obtained by using a fixed threshold illustrate that there is no advantage to including both a bias and a nonzero threshold for a neuron that uses the step function as its activation function.
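The equivalence can be checked directly; in this small sketch (arbitrary example numbers of my own), a step unit with fixed threshold theta and no bias gives the same response at every point as a unit with zero threshold and bias b = -theta:

```python
def step_with_threshold(x1, x2, w1, w2, theta):
    """Step unit with a fixed threshold and no bias."""
    return 1 if w1 * x1 + w2 * x2 > theta else -1

def step_with_bias(x1, x2, w1, w2, b):
    """Step unit with zero threshold and an adjustable bias."""
    return 1 if b + w1 * x1 + w2 * x2 > 0 else -1

# Arbitrary weights and threshold; choosing b = -theta reproduces the same
# decision boundary, so the two formulations classify every point identically.
w1, w2, theta = 0.7, -1.3, 0.4
for x1 in (-1.0, 0.0, 1.0, 2.0):
    for x2 in (-1.0, 0.5, 1.0):
        assert step_with_threshold(x1, x2, w1, w2, theta) == \
               step_with_bias(x1, x2, w1, w2, b=-theta)
```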

On the other hand, including neither a bias nor a threshold is equivalent to requiring the separating line (or plane, or hyperplane for inputs with more components) to pass through the origin. This may or may not be appropriate for a particular problem. As an illustration of a pseudopsychological analogy to the use of a bias, consider a simple artificial neural net in which the activation of the neuron corresponds to a person's action, "Go to the ball game."

The weights on these input signals correspond to the importance the person places on each factor. Of course, the weights may change with time, but methods for modifying them are not considered in this illustration. A bias could represent a general inclination to "go" or "not go," based on past experiences.

Thus, the bias would be modifiable, but the signal to it would not correspond to information about the specific game in question or activities competing for the person's time. The threshold for this "decision neuron" indicates the total net input necessary to cause the person to "go." The threshold would be different for different people; however, for the sake of this simple example, it should be thought of as a quantity that remains fixed for each individual.

Since it is the relative values of the weights, rather than their actual magnitudes, that determine the response of the neuron, the model can cover all possibilities using either the fixed threshold or the adjustable bias. For each of the nets in this chapter, the intent is to train the net (i.e., adaptively determine its weights) so that it responds correctly to the training patterns. Before discussing the particular nets (which is to say, the particular styles of training), it is useful to discuss some issues common to all of the nets.

For a particular output unit, the desired response is a "yes" if the input pattern is a member of its class and a "no" if it is not. A "yes" response is represented by an output signal of 1, a "no" by an output signal of -1 (for bipolar signals).

Since we want one of two responses, the activation (or transfer, or output) function is taken to be a step function. The value of the function is 1 if the net input is positive and -1 if the net input is negative.

Since the sign of the net input determines the response, the boundary between the two response regions is given by the equation

b + Σi xi wi = 0.

Depending on the number of input units in the network, this equation represents a line, a plane, or a hyperplane. Furthermore, it is easy to extend this result to show that multilayer nets with linear activation functions are no more powerful than single-layer nets, since the composition of linear functions is linear. For two input units, the region where y is positive is separated from the region where it is negative by the line

x2 = -(w1/w2) x1 - (b/w2).

These two regions are often called decision regions for the net. Notice in the following examples that there are many different lines that will serve to separate the input points that have different target values.

However, for any particular line, there are also many choices of w1, w2, and b that give exactly the same line.

There are four different bipolar input patterns we can use to train a net with two input units. However, there are two possible responses for each input pattern, so there are 2^4 = 16 different functions that we might be able to train a very simple net to perform. Several of these functions are familiar from elementary logic, and we will use them for illustrations, for convenience.

The first question we consider is, For this very simple net, do weights exist so that the net will have the desired output for each of the training input vectors? The desired responses can be illustrated as shown in Figure 2.

One possible decision boundary for this function is shown in Figure 2. An example of weights that would give the decision boundary illustrated in the figure (the separating line) is given there.

Any point that is not on the decision boundary can be used to determine which side of the boundary is positive and which is negative; the origin is particularly convenient to use when it is not on the boundary.

The weights must be chosen to provide a separating line, as illustrated in Figure 2; one example of suitable weights is given there. The preceding two mappings, which can each be solved by a single-layer neural net, illustrate graphically the concept of linearly separable input.

The input points to be classified positive can be separated from the input points to be classified negative by a straight line. The equations of the decision boundaries are not unique. We will return to these examples to illustrate each of the learning rules in this chapter. Note that if a bias weight were not included in these examples, the decision boundary would be forced to go through the origin. In many cases, including the examples just given, this would change a problem that can be solved into one that cannot.

Not all simple two-input, single-output mappings can be solved by a single-layer net (even with a bias included), as is illustrated in Example 2. It is easy to see that no single straight line can separate the points for which a positive response is desired from those for which a negative response is desired; the sketch below makes this concrete.
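This can be verified numerically. The following sketch (mine; the AND weights shown are just one workable choice, and the XOR mapping is used as the standard example of a function that is not linearly separable) confirms that a biased step unit realizes the bipolar AND mapping, while a coarse search over weights finds none that realize XOR:

```python
import itertools

def response(x1, x2, w1, w2, b):
    """Bipolar step unit: +1 if the net input is positive, else -1."""
    return 1 if b + w1 * x1 + w2 * x2 > 0 else -1

points = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
and_targets = {(-1, -1): -1, (-1, 1): -1, (1, -1): -1, (1, 1): 1}
xor_targets = {(-1, -1): -1, (-1, 1): 1, (1, -1): 1, (1, 1): -1}

def solves(targets, w1, w2, b):
    return all(response(x1, x2, w1, w2, b) == targets[(x1, x2)] for x1, x2 in points)

# One suitable (not unique) choice of weights for bipolar AND.
print(solves(and_targets, w1=1, w2=1, b=-1))   # True

# A coarse grid search finds no weights at all for XOR: it is not linearly separable.
grid = [x / 2 for x in range(-8, 9)]
found = any(solves(xor_targets, w1, w2, b)
            for w1, w2, b in itertools.product(grid, repeat=3))
print(found)                                   # False
```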

The previous examples show the use of a bipolar (values 1 and -1) representation of the training data, rather than the binary representation used for the McCulloch-Pitts neurons in Chapter 1. Many early neural network models used binary representation, although in most cases it can be modified to bipolar form. The form of the data may change the problem from one that can be solved by a simple neural net to one that cannot, as is illustrated in Examples 2.

Binary representation is also not as good as bipolar if we want the net to generalize (i.e., respond to input data that are similar, but not identical, to the training data). Using bipolar input, missing data can be distinguished from mistaken data. We shall discuss some of the issues relating to the choice of binary versus bipolar representation further as they apply to particular neural nets. In general, bipolar representation is preferable. The remainder of this chapter focuses on three methods of training single-layer neural nets that are useful for pattern classification. The Hebb rule, or correlational learning, is extremely simple but limited (even for linearly separable problems); the training algorithms for the perceptron and for ADALINE (the adaptive linear neuron, trained by the delta rule) are closely related.

Both are iterative techniques that are guaranteed to converge under suitable circumstances. The earliest and simplest learning rule for a neural net is generally known as the Hebb rule. Hebb proposed that learning occurs by modification of the synapse strengths (weights) in a manner such that if two interconnected neurons are both "on" at the same time, then the weight between those neurons should be increased.

The original statement only talks about neurons firing at the same time and does not say anything about reinforcing neurons that do not fire at the same time. However, a stronger form of learning occurs if we also increase the weights if both neurons are "off" at the same time.

We shall refer to a single-layer feedforward neural net trained using the extended Hebb rule as a Hebb net. The Hebb rule is also used for training other specific nets that are discussed later. The training algorithm is:

Step 0. Initialize all weights: wi = 0 (i = 1, ..., n).
Step 1. For each input training vector and target output pair, s : t, do Steps 2-4.

Step 2. Set activations for input units: xi = si (i = 1, ..., n).
Step 3. Set activation for output unit: y = t.
Step 4. Adjust the weights: wi(new) = wi(old) + xi y (i = 1, ..., n). Adjust the bias: b(new) = b(old) + y.

Note that the bias is adjusted exactly like a weight from a "unit" whose output signal is always 1. The weight update can also be expressed in vector form as

w(new) = w(old) + x y.
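A compact Python rendering of this algorithm, with function and variable names of my own choosing:

```python
def hebb_train(samples):
    """Train a single-output Hebb net.
    samples: list of (s, t) pairs, where s is the input vector and t is the
    bipolar target.  Returns (weights, bias) after one pass through the data."""
    n = len(samples[0][0])
    w = [0.0] * n           # Step 0: initialize all weights (and bias) to zero
    b = 0.0
    for s, t in samples:    # Step 1: for each training pair s : t
        x = list(s)         # Step 2: set input activations
        y = t               # Step 3: set output activation to the target
        w = [wi + xi * y for wi, xi in zip(w, x)]   # Step 4: w_i += x_i * y
        b = b + y                                   #         b   += y
    return w, b

def classify(x, w, b):
    """Bipolar step response of the trained net."""
    net = b + sum(xi * wi for xi, wi in zip(x, w))
    return 1 if net > 0 else -1
```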

There are several methods of implementing the Hebb rule for learning. The foregoing algorithm requires only one pass through the training set; other equivalent methods of finding the weights are described in Section 3.

Bias types of inputs are not explicitly used in the original formulation of Hebb learning. However, they are included in the examples in this section (shown as a third input component that is always 1) because without them, the problems discussed cannot be solved. For each training input, the weight change is the input vector multiplied by the target value, and the new weights are the sum of the previous weights and the weight change.

Only one iteration through the training vectors is required. The weight updates for the first input are as follows. The graph is presented in Figure 2. Presenting the second, third, and fourth training inputs shows that, because the target value is 0, no learning occurs. Thus, using binary target values prevents the net from learning any pattern for which the target is "off." If bipolar targets are used instead, presenting the first input (including a value of 1 for the third component) yields the following:

Presenting the second, third, and fourth training patterns shows that learning continues for each of these patterns, since the target value is now -1 rather than 0, as in Example 2.

However, these weights do not provide the correct response for the first input pattern. The choice of training patterns can play a significant role in determining which problems can be solved using the Hebb rule. The next example shows that the AND function can be solved if we modify its representation to express the inputs as well as the targets in bipolar form. Bipolar representation of the inputs and targets allows modification of a weight when the input unit and the target value are both "on" at the same time and when they are both "off" at the same time.

The algorithm is the same as that just given, except that now all units will learn whenever there is an error in the output. The graph is shown in Figure 2. Presenting the second input vector and target, then the third, and finally the last, updates the weights further at each step; the short sketch below traces these updates.
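As an illustration (mine, not the book's worked table), the following lines apply the Hebb update step by step to the bipolar AND data, printing the weights and bias after each of the four training pairs:

```python
# Bipolar AND training pairs: inputs (x1, x2) with bipolar target t.
and_pairs = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]

w1 = w2 = b = 0                     # start from zero weights and bias
for (x1, x2), t in and_pairs:
    w1, w2, b = w1 + x1 * t, w2 + x2 * t, b + t   # Hebb update for each pair
    print(f"after ({x1:2d},{x2:2d}) -> {t:2d}:  w = ({w1}, {w2}), b = {b}")

# The resulting weights classify all four AND patterns correctly.
for (x1, x2), t in and_pairs:
    y = 1 if b + w1 * x1 + w2 * x2 > 0 else -1
    assert y == t
```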

Pattern 1 Pattern To treat this example as a pattern classification problem with one output class, we will designate that class "X" and take the pattern "0" to be an example of output that is not "X. That is easy to do by assigning each the value 1 and each ".

To convert from the two-dimensional pattern to an input vector, we simply concatenate the rows, i. Pattern 1 then becomes 1 - 1- 1- 11, - 11 - 11 - 1 , - 1- 11 - 1- 1 , - 11 - 11 - 1 , 1 - 1- 1- 11, and pattern 2 becomes - 11 1 - 11 1 1 - 1 , where a comma denotes the termination of a line of the original matrix. For computer simulations, the program can be written so that the vector is read in from the two- dimensional format.

The correct response for the second pattern is "off," or -1, so the weight change when the second pattern is presented is the negative of that pattern. In addition, the weight change for the bias weight is -1. Adding the weight change to the weights representing the first pattern gives the final weights; the bias weight is 0. Now, we compute the output of the net for each of the training patterns. The net input for any input pattern is the dot product of the input pattern with the weight vector.

For the first training vector, the net input is 42, so the response is positive, as desired. For the second training pattern, the net input is -42, so the response is clearly negative, also as desired. However, the net can also give reasonable responses to input patterns that are similar, but not identical, to the training patterns.
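The whole example fits in a few lines of Python. This is an illustrative sketch: the 5 x 5 grids are the reconstruction shown above, and the printed net inputs should come out as 42 and -42, matching the values stated in the text:

```python
X_GRID = ["#...#",
          ".#.#.",
          "..#..",
          ".#.#.",
          "#...#"]
O_GRID = [".###.",
          "#...#",
          "#...#",
          "#...#",
          ".###."]

def to_vector(grid):
    """Concatenate the rows and map '#' -> +1, '.' -> -1 (bipolar form)."""
    return [1 if ch == "#" else -1 for row in grid for ch in row]

x_vec, o_vec = to_vector(X_GRID), to_vector(O_GRID)

# Hebb training: target +1 for the "X" pattern, -1 for the "O" pattern.
w = [0] * 25
b = 0
for vec, t in [(x_vec, 1), (o_vec, -1)]:
    w = [wi + vi * t for wi, vi in zip(w, vec)]
    b += t                                  # +1 then -1, so the bias ends at 0

def net_input(vec):
    return b + sum(vi * wi for vi, wi in zip(vec, w))

print(net_input(x_vec))   # 42  -> positive response, classified as "X"
print(net_input(o_vec))   # -42 -> negative response, not "X"
```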

There are two types of changes that can be made to one of the input patterns that will generate a new input pattern for which it is reasonable to expect the net to give the correct response.
