*neuron. Not neutron, just to clarify.
So, I'm currently going through the process of learning machine learning myself, but as far as I understand: if a neural network needs to solve a non-linear problem, an "activation function" such as sigmoid is applied at each neuron so the network can model a non-linear relationship between the input data and the expected output.
Activation functions themselves don't have a direct effect on the weights applied on each synapse. That's handled by "backpropagation", usually paired with an optimiser like "gradient descent". This calculates the error between the estimated output and the true output, then uses that error to update the weights, moving backward through the network layer by layer.
This process is repeated until the weights have been optimised to the point where the network's predicted outputs closely match the true outputs, as long as "over-fitting" hasn't occurred.
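If it helps, here's a minimal sketch of that loop in NumPy: a tiny one-hidden-layer network with sigmoid activations, trained on XOR with plain gradient descent. The network size, learning rate, and XOR task are just illustrative choices I picked, not anything canonical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(a):
    # derivative of sigmoid, written in terms of its output a = sigmoid(x)
    return a * (1.0 - a)

def forward(X, W1, b1, W2, b2):
    hidden = sigmoid(X @ W1 + b1)      # apply weights, then the activation
    output = sigmoid(hidden @ W2 + b2)
    return hidden, output

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)    # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)    # hidden -> output

_, out0 = forward(X, W1, b1, W2, b2)
loss_before = float(np.mean((out0 - y) ** 2))

lr = 1.0
for _ in range(5000):
    hidden, output = forward(X, W1, b1, W2, b2)
    error = output - y                            # estimated vs true output
    # backpropagation: push the error gradient backward through the network
    grad_out = error * sigmoid_deriv(output)
    grad_hidden = (grad_out @ W2.T) * sigmoid_deriv(hidden)
    # gradient descent: nudge each weight against its gradient
    W2 -= lr * hidden.T @ grad_out;  b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_hidden;    b1 -= lr * grad_hidden.sum(axis=0)

_, out1 = forward(X, W1, b1, W2, b2)
loss_after = float(np.mean((out1 - y) ** 2))
print(loss_before, "->", loss_after)
```

The loss after training should come out much lower than before, which is really all "optimising the weights" means here.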
Hope this helped :).
Check this guy out: youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A
He's really good.