Artificial neural networks (NNs) are traditionally designed with distinctly defined layers (an input layer, hidden layers, and an output layer), and network design techniques and training algorithms are accordingly built around this strict layering. In this paper, a new approach to designing neural networks is presented. The structure of the proposed NN is not strictly defined: each neuron may receive input from any other neuron. Instead of a fixed topology, the initial network structure is randomly generated, and traditional training methods such as back-propagation are replaced or augmented by a genetic algorithm (GA). The weights of each neuron's inputs are encoded as the genes for the GA. Using the training data supplied to the supervised network, the contribution of each neuron toward producing the desired output serves as the selection function. Each neuron is further modified to store and recall past weightings for possible future reuse. As a proof of concept, a simple network is trained to recognize vertical and horizontal lines.
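The scheme described above can be sketched in code. This is a minimal illustration under stated assumptions, not the paper's implementation: the 13-neuron topology, the number of settling steps, the truncation selection, and the use of a negated squared error as fitness are all choices made here for brevity. The key idea from the abstract is preserved: there are no layers, so every neuron may feed every other neuron through a full weight matrix, and the flattened matrix serves as the GA genome. Activations are allowed to settle over a few propagation steps, and elitism carries the best genomes forward unchanged.

```python
import math
import random

random.seed(0)

# Training data for the proof-of-concept task: 3x3 images flattened to
# 9 bits; label 1 = horizontal line, 0 = vertical line.
def line_images():
    data = []
    for r in range(3):                       # three horizontal lines
        img = [0] * 9
        for c in range(3):
            img[3 * r + c] = 1
        data.append((img, 1))
    for c in range(3):                       # three vertical lines
        img = [0] * 9
        for r in range(3):
            img[3 * r + c] = 1
        data.append((img, 0))
    return data

N_IN, N_TOTAL, STEPS = 9, 13, 3              # 9 clamped inputs, 4 free neurons
GENOME_LEN = N_TOTAL * N_TOTAL               # weight from every neuron to every neuron

def forward(genome, img):
    """Settle the fully connected (layer-free) network over STEPS updates."""
    act = [float(x) for x in img] + [0.0] * (N_TOTAL - N_IN)
    for _ in range(STEPS):
        new = list(act)
        for j in range(N_IN, N_TOTAL):       # input neurons stay clamped
            s = sum(genome[i * N_TOTAL + j] * act[i] for i in range(N_TOTAL))
            new[j] = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, s))))
        act = new
    return act[-1]                           # last neuron is the output

def fitness(genome, data):
    # Higher is better: negated sum of squared errors over the training set.
    return -sum((forward(genome, img) - y) ** 2 for img, y in data)

def evolve(data, pop_size=40, gens=30, mut_rate=0.1, mut_sd=0.5):
    pop = [[random.gauss(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    history = []                             # best fitness per generation
    for _ in range(gens):
        scored = sorted(pop, key=lambda g: fitness(g, data), reverse=True)
        history.append(fitness(scored[0], data))
        nxt = [scored[0], scored[1]]         # elitism: keep the two best intact
        while len(nxt) < pop_size:
            a, b = random.sample(scored[:10], 2)   # truncation selection
            child = [x if random.random() < 0.5 else y
                     for x, y in zip(a, b)]        # uniform crossover
            child = [w + random.gauss(0, mut_sd)
                     if random.random() < mut_rate else w
                     for w in child]               # Gaussian mutation
            nxt.append(child)
        pop = nxt
    best = max(pop, key=lambda g: fitness(g, data))
    return best, history
```

Because elitism copies the current best genome into the next generation unmodified and the fitness function is deterministic, the per-generation best fitness never decreases; the `history` list makes that easy to check. Weights feeding into the clamped input neurons are carried in the genome but never used, a small redundancy accepted here to keep the genome a plain square matrix.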