It has been argued that there is no artificial or intelligence in A.I., but we do like to hang simple collective nouns on things, so we will stick with it. Machine Learning, Deep Learning, Synthetic Cognitive Systems and so on: basically, achieving a goal by using electronic computational systems to analyse data [insert your own description]. Like art, we recognise it when we see it. The use of algorithms can be a black art, and the secrecy around them sometimes exists to protect a competitive advantage and sometimes to hide the fact that they are flawed. Neural networks are particularly opaque, and there is legislation aimed at revealing what goes on inside those black boxes, as well as interpretability techniques such as LIME (Local Interpretable Model-Agnostic Explanations).
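The core idea behind LIME can be sketched without the library itself: probe the black box near one input, weight the probes by how close they are, and fit a simple linear model that mimics the black box locally. The `black_box` function below is a stand-in assumption (a real use would wrap a trained model); everything else is a minimal illustration of the local-surrogate idea.

```python
import math

# Hypothetical black box we want to explain locally. In practice this
# would be a trained classifier or regressor we cannot see inside.
def black_box(x):
    return x * x

def explain_locally(f, x0, half_width=1.0, n=41, kernel_width=0.75):
    """LIME-style sketch: sample around x0, weight samples by proximity,
    fit a weighted linear surrogate and return its slope."""
    xs = [x0 - half_width + 2 * half_width * i / (n - 1) for i in range(n)]
    ys = [f(x) for x in xs]
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2) for x in xs]
    # Weighted least squares for y ~ a + b*x (closed form).
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    b = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys)) \
        / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    return b

slope = explain_locally(black_box, x0=3.0)
print(round(slope, 2))  # near x0=3, x*x behaves like a line of slope ~ 6
```

The surrogate does not explain the whole model, only how it behaves around one point, which is exactly the "local" in LIME.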
Both organic and artificial intelligence look for patterns, turning data into information, which is what it is all about. Humans are particularly good at this and can interpret patterns remarkably fast. It is called survival, and we, like other animals, do it instinctively and subconsciously. Machine neural networks are also good, but they require a lot of data and are narrow in what they do. The young are masters of learning, with highly malleable neural networks. There are algorithms faster than neural networks that sometimes outcompete them, but as usual it depends on the data and what you are trying to do. Bayesian inference is a particularly fast and effective statistical technique. One of its strengths is reasoning about the relationships between entities: if it looks like a duck, quacks like a duck and walks like a duck, then it is probably a duck.
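The duck test maps neatly onto Bayes' theorem: start with a prior belief, multiply in the likelihood of each observed feature, and normalise. A minimal sketch, with all probabilities made up purely for illustration:

```python
# A toy "duck test" via naive Bayes. Every probability below is an
# invented illustrative number, not a measurement.
p_duck = 0.10                     # prior: 10% of things here are ducks
likelihood = {                    # P(feature | class), assumed independent
    "quacks": {"duck": 0.90, "not_duck": 0.05},
    "walks_like_duck": {"duck": 0.80, "not_duck": 0.10},
}

def posterior_duck(features):
    """P(duck | observed features) by Bayes' theorem."""
    p_if_duck, p_if_not = p_duck, 1.0 - p_duck
    for f in features:
        p_if_duck *= likelihood[f]["duck"]
        p_if_not *= likelihood[f]["not_duck"]
    return p_if_duck / (p_if_duck + p_if_not)

p = posterior_duck(["quacks", "walks_like_duck"])
print(f"{p:.3f}")  # quacks + walks like a duck -> probably a duck
```

Even with a low prior, two duck-like observations push the posterior above 90%, which is the "probably a duck" conclusion made quantitative.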
The aim of both natural and synthetic intelligence is to solve problems, and while there are some similarities in approach, there are more differences. The human brain uses massively parallel, highly specialised cell networks to do different jobs. Synthetic systems use very fast, somewhat parallel, fixed hardware. The artificial neurons on a chip are usually all identical and use weighted connections to produce different outputs. Backpropagation has been the common training tool, though it is being challenged by more brain-like components such as the IBM and Graphcore chips. Neural networks use training data to tweak their weights over many passes (epochs), with a serious amount of data. Humans, on receiving data, are constantly checking back to see whether they have seen the item before; they can short-circuit the cycle with a symbolic realisation that they already know what the thing is.
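The weight-tweaking loop described above can be shown at its smallest scale: a single artificial neuron learning the OR function by gradient descent over many epochs. This is a deliberately tiny sketch; real networks stack many such units and need far more data, but the mechanism (compute an output, measure the error, nudge each weight downhill) is the same.

```python
import math
import random

# One artificial neuron trained by backpropagation-style gradient descent
# on the OR function: inputs, weights, a bias, and a squashing function.
random.seed(0)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

lr = 0.5
for epoch in range(2000):              # one epoch = one pass over the data
    for x, target in data:
        out = predict(x)
        delta = (out - target) * out * (1.0 - out)   # error signal
        w[0] -= lr * delta * x[0]      # nudge each weight downhill
        w[1] -= lr * delta * x[1]
        b -= lr * delta

print([round(predict(x), 2) for x, _ in data])
```

After enough epochs the outputs settle near 0 for (0, 0) and near 1 for the rest; the "knowledge" lives entirely in three tweaked numbers.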