
Neurons, nothing but light bulbs!

Updated: Oct 31

A Metaphor for Understanding Neural Networks


(Def.) An artificial neural network is a computational model designed to simulate how the human brain processes information.


Let’s explore this by imagining a neural network as a collection of light bulbs—each with dimmers and switches that control how brightly each one glows. Each light bulb represents a neuron—a small but essential unit responsible for processing information. Using this metaphor, we’ll walk through each component that makes a neuron function, exploring how each part contributes to the network’s ability to think and learn.

Step 1: Receiving the Input Signal

Imagine several light bulbs, each connected to a single bulb that we’ll call our neuron. Each of these connected bulbs represents an input signal—a bit of information flowing into the neuron. These signals provide the data the neuron will process, much like sensory inputs feed information to neurons in the human brain.


(Def.) Neurons are the fundamental units that process information in an artificial neural network.

Neural Network with neurons as bulbs
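
To make the metaphor concrete, here is a minimal Python sketch of Step 1. The input values are made-up numbers chosen purely for illustration: the signals arriving at a neuron are nothing more than a list of numbers.

# Hypothetical input signals: the glow of three bulbs feeding our neuron.
inputs = [0.5, 0.9, 0.2]
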
Step 2: Adjusting Signal Strength with Weights (Dimmers)

Each input connection is equipped with a dimmer—a device that can adjust how much current, or influence, each signal has. In neural networks, this dimmer represents the weight. Weights adjust the input signals to determine their impact on the neuron’s output.


  • Higher Weight (Brighter Dimmer): When a weight is high, it means the input signal is significant for the neuron, so the dimmer lets more “light” through, making it shine more brightly.

  • Lower Weight (Dimmer Light): If the weight is low, the dimmer reduces the signal’s strength, resulting in a dimmer glow. These weights (or dimmers) adjust over time, helping the network learn which inputs matter most for a specific outcome.


(Def.) Weights are parameters that scale the input signals to a neuron, determining the neuron’s output and thereby influencing the artificial neural network’s learning and decision-making processes.

Weights represent connections that control input strengths
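
In code, each dimmer is simply a multiplier applied to its input. Continuing the sketch above (the weight values are again illustrative assumptions):

# Hypothetical signals and dimmer settings, one weight per connection.
inputs  = [0.5, 0.9, 0.2]
weights = [0.8, 0.1, 0.5]

# Each dimmer scales its own incoming signal.
weighted = [w * x for w, x in zip(weights, inputs)]
print(weighted)  # ≈ [0.4, 0.09, 0.1]: the first input matters most here
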

Step 3: Summing the Signals and Adding Bias

After adjusting each input with its weight, the neuron combines all these weighted signals into a total. Think of this as merging all dimmed lights into one final combined light signal.


If this combined light signal is still below what’s needed for the neuron to fire (bulb to turn on), the bias acts like a minimum voltage guarantee, ensuring the neuron has enough input to activate. This is crucial for the neuron to function correctly, especially in cases where input signals are weak or insufficient on their own. You can picture the bias as a small battery providing baseline power, ensuring the light bulb has an initial glow even if some inputs are weak.


(Def.) Bias in neural networks is an additional constant added to the weighted sum of inputs that adjusts the threshold at which a neuron activates.

Battery giving a baseline threshold for firing the bulb
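
Continuing the sketch, summing the dimmed signals and adding the battery's contribution takes one line; the bias value here is, once more, just an illustrative assumption:

inputs  = [0.5, 0.9, 0.2]
weights = [0.8, 0.1, 0.5]
bias = 0.1  # the small battery's baseline contribution

# Merge the dimmed signals, then add the baseline power.
z = sum(w * x for w, x in zip(weights, inputs)) + bias
print(z)  # ≈ 0.69: the total signal handed to the smart switch
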

Step 4: Activating with the Smart Switch (Activation Function)

Once the combined, bias-adjusted signal is ready, the activation function comes into play, acting as a gatekeeper: a Smart Switch. This Smart Switch ultimately controls whether the neuron lights up (bulb ON or OFF) and by how much, for instance by capping the brightness or smoothly increasing the intensity as the input signal grows.


For instance, the switch might brighten the bulb gradually as the input grows, or it might only turn on once the input crosses a threshold. By choosing different types of Smart Switches, the network can respond to different patterns within the data, allowing it to solve various kinds of problems:


  • For a Sigmoid Activation, the switch gradually increases the brightness as the total input grows, levelling off at full brightness for very large inputs.

  • For ReLU Activation, this switch acts like a threshold switch that keeps the bulb off for any negative current but turns it on and adjusts brightness linearly with positive currents.

(Def.) Activation functions in neural networks compute a neuron’s output by applying a non-linear mathematical transformation to the sum of the weighted inputs and the bias. This transformation enables outputs that do not have a direct, proportional relationship with inputs, allowing the network to handle complex patterns and interactions within data.

Bulb, dimmers, battery and smart-switch together forming a neuron.
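
Both Smart Switches described above take only a few lines of Python. In this sketch, sigmoid and relu are my own helper names for the standard formulas:

import math

def sigmoid(z):
    # Smoothly raises the brightness from 0 toward 1 as z increases.
    return 1 / (1 + math.exp(-z))

def relu(z):
    # Keeps the bulb off for negative current; brightness then grows
    # linearly with positive current.
    return max(0.0, z)

print(sigmoid(0.69))           # ≈ 0.666
print(relu(-0.3), relu(0.69))  # 0.0 0.69
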
Step 5: Outputting the Signal

Finally, if the smart switch decides the light bulb should glow, the neuron is “activated” and passes this glow as a signal to another neuron in the network. The brightness level represents the strength of the neuron’s response, influencing how the network as a whole processes information and makes decisions.
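
Putting all five steps together, a complete neuron fits in a handful of lines. As before, this is a sketch with illustrative values and my own helper names, not a production implementation:

import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def neuron(inputs, weights, bias, activation):
    # Dimmers (weights) scale each signal, the battery (bias) adds its
    # baseline, and the smart switch (activation) sets the final glow.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

glow = neuron([0.5, 0.9, 0.2], [0.8, 0.1, 0.5], 0.1, sigmoid)
print(glow)  # ≈ 0.666: this brightness is passed on to the next neuron
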


In conclusion, neurons in a neural network work together like a network of interconnected light bulbs, each influenced by adjustable dimmers (weights), baseline power (bias), and responsive smart switches (activation functions). Each part plays a crucial role in shaping the network's behaviour, allowing it to solve complex tasks that mirror aspects of human learning and decision-making.

By Sadiah Z.

