What is a Neural Network?
A neuron is a cell in the brain whose principal function is the collection, processing, and broadcasting of electrical signals. The brain's information-processing capacity comes from networks of such neurons.
For this reason, some of the earliest AI work aimed to create such artificial networks. (Other names include connectionist systems, parallel distributed processing, neural computing, and neural nets.)
Natural neurons receive signals through synapses located on the dendrites or membrane of the neuron. When the signals received are strong enough (surpass a certain threshold), the neuron is activated and emits a signal through the axon. This signal might be sent to another synapse and might activate other neurons.
An Artificial Neural Network (ANN) is an information-processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system.
It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process.
Why use Neural Networks?
Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an “expert” in the category of information it has been given to analyze. Other advantages include:
- Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.
- Self-organization: An ANN can create its own organization or representation of the information it receives during learning time.
- Real-time operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
- Fault tolerance via redundant information coding: Partial destruction of a network leads to a corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.
Neural networks versus conventional computers
Neural networks take a different approach to problem-solving than that of conventional computers. Conventional computers use an algorithmic approach, i.e., the computer follows a set of instructions in order to solve a problem. Unless the specific steps that the computer needs to follow are known, the computer cannot solve the problem.
That restricts the problem-solving capability of conventional computers to problems that we already understand and know how to solve. But computers would be so much more useful if they could do things that we don’t exactly know how to do.
Neural networks process information in a similar way to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example.
They cannot be programmed to perform a specific task; instead, they learn from examples. The examples must be selected carefully; otherwise useful time is wasted or, even worse, the network might function incorrectly.
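As a concrete illustration of learning by example, the sketch below trains a single-unit network with the classic perceptron learning rule. The task (logical AND), the learning rate, and the number of passes are illustrative assumptions, not taken from the text above.

```python
# A minimal sketch of "learning by example": a single unit adjusts its
# weights from labelled examples using the perceptron rule.
# The task (logical AND) and learning rate are illustrative choices.

examples = [  # (inputs, desired output) pairs for logical AND
    ((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1),
]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1  # learning rate

for _ in range(20):  # repeatedly present the examples to the network
    for (x1, x2), target in examples:
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - output
        # Nudge each weight in the direction that reduces the error.
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

# After enough passes, the unit reproduces all four examples correctly.
for (x1, x2), target in examples:
    prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
    print((x1, x2), "->", prediction)
```

Note that no one wrote explicit rules for AND: the weights that solve the task emerge from the examples, which is exactly the contrast with the algorithmic approach described above.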
The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable. On the other hand, conventional computers use a cognitive approach to problem-solving; the way the problem is to be solved must be known and stated in small, unambiguous instructions.
These instructions are then converted into a high-level language program and then into machine code that the computer can understand. These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault.
Units of a Neural Network
- Nodes (units): A node represents a cell of the neural network.
- Links: Links are directed arrows that show the propagation of information from one node to another node.
- Activation: Activations are inputs to or outputs from a unit.
- Weight: Each link has a weight associated with it, which determines the strength and sign of the connection.
- Activation function: A function which is used to derive output activation from the input activations to a given node is called the activation function.
- Bias Weight: Bias weight is used to set the threshold for a unit. The unit is activated when the weighted sum of real inputs exceeds the bias weight.
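The pieces defined above can be put together in a short sketch of a single unit. The step activation function and the specific weights are illustrative assumptions (real networks typically use smooth activation functions, and weights are learned rather than chosen by hand).

```python
# A minimal sketch of a single artificial neuron (a "unit"),
# illustrating the terms defined above. The weights and bias weight
# here are hand-picked for illustration, not learned.

def step(x):
    """Activation function: the unit fires (1) when its input exceeds 0."""
    return 1 if x > 0 else 0

def unit_output(inputs, weights, bias_weight):
    # Each link carries a weight; the unit is activated when the
    # weighted sum of its inputs exceeds the bias weight.
    weighted_sum = sum(w * a for w, a in zip(weights, inputs))
    return step(weighted_sum - bias_weight)

# Example: two inputs with link weights 0.6 and 0.4, bias weight 0.5.
print(unit_output([1, 1], [0.6, 0.4], 0.5))  # weighted sum 1.0 > 0.5 -> 1
print(unit_output([1, 0], [0.6, 0.4], 0.5))  # weighted sum 0.6 > 0.5 -> 1
print(unit_output([0, 1], [0.6, 0.4], 0.5))  # weighted sum 0.4 < 0.5 -> 0
```

Here the bias weight acts as the threshold: the same unit can be made easier or harder to activate simply by lowering or raising it.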