Deep learning is one of the hottest topics in computing: it has produced a large body of academic research as well as many practical applications in industry. This blog post introduces three basic deep learning architectures and briefly describes the principles behind deep learning.
Introduction
Machine learning technology plays a major role in contemporary society: from web search to content filtering in social networks to personalized recommendations on e-commerce sites, it is rapidly appearing in consumer products such as cameras and smartphones. Machine learning systems can identify objects in an image, transcribe speech into text, match news items, messages, and products to a user's interests, and select relevant search results. These applications increasingly rely on a technique called "deep learning."
Deep learning (also known as deep structured learning, hierarchical learning, or deep machine learning) is a family of algorithms within machine learning that attempts to model high-level abstractions in data. As a simple example, suppose you have two sets of neurons: one that accepts an input signal and one that sends an output signal. When the input layer receives a signal, it applies a simple transformation and passes the result on to the next layer. In a deep network there can be many layers between the input and the output (these layers need not literally be made of neurons, but it helps to think of them that way), allowing the algorithm to apply multiple processing layers, each performing a linear or non-linear transformation on the output of the previous one.
Translator's note: the idea behind deep learning is the same as that behind artificial neural networks. In general, a neural network is a machine learning architecture in which individual units are connected together by weights, and those weights are trained through the network; hence the name neural network algorithm. The idea of artificial neural networks comes from mimicking the way the human brain thinks: the brain receives input signals through the nervous system and responds accordingly, with neurons picking up electrical signals from nerve endings. By simulating this thinking with artificial neurons, we obtain an artificial neural network: artificial neurons form the computational units, and the network structure describes how those neurons are connected. We can organize neurons into layers and connect the layers to one another. Previously, many constraints prevented us from stacking many layers; now, with improved algorithms, larger data volumes, and the development of GPUs, we can build neural networks with many layers, which gives us deep neural networks. Deep learning is essentially a synonym for deep neural networks.
In recent years, deep learning has been revolutionizing machine learning through its excellent performance on certain tasks. Deep learning methods have shown remarkable accuracy in speech recognition, image recognition, and object detection, as well as in areas such as drug discovery and genomics. The term "deep learning" itself is old, however: it was introduced to machine learning by Dechter in 1986, and to artificial neural networks by Aizenberg et al. in 2000. It attracted wide attention after Alex Krizhevsky won the ImageNet competition in 2012 using a convolutional network.
Deep learning architectures
1. Generative deep architectures, mainly used to characterize the high-order correlations of observable data or the features of visible objects, chiefly for pattern analysis or synthesis, or to describe the joint distribution of the data and their categories. (In practice, this is similar to a generative model.)
2. Discriminative deep architectures, mainly used to provide discriminative power for pattern classification, often by describing the posterior probability of a category given the visible data. (Similar to a discriminative model.)
3. Hybrid deep architectures, where the goal is classification but a generative component is mixed in. For example, the output of a generative model may be incorporated through optimization or regularization, or discriminative labels may be used to learn the parameters of a generative model.
Although the classification of deep learning architectures above is fairly involved, the models that correspond to them in practice are deep feed-forward networks, convolutional networks, and recurrent networks.
Deep feed-forward networks
Deep feedforward networks, also known as feedforward neural networks or multilayer perceptrons (MLPs), are the quintessential deep learning models.
The goal of a feedforward network is to approximate some function. For example, a classifier y = f(x) maps an input x to a corresponding category y. The feedforward network defines a mapping y = f(x; θ) and learns the parameters θ that produce the best approximation of the function.
In short, a neural network can be defined as an input layer, a hidden layer, and an output layer: the input layer accepts the data, the hidden layer processes it, and the output layer produces the final result. Information flows in one direction, from the input x through the processing function f to the output y. Because the model has no feedback connections, it is called a feedforward network. The model is shown below:
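As a minimal sketch of the forward pass just described, here is a tiny two-layer network in plain Python. The layer sizes and weight values are hypothetical, chosen only for illustration; a real network would learn them from data.

```python
def dense(x, weights, biases):
    """One fully connected layer: each output is a weighted sum of
    the inputs plus a bias term."""
    return [sum(w * xi for w, xi in zip(ws, x)) + b
            for ws, b in zip(weights, biases)]

def relu(v):
    """Element-wise ReLU non-linearity."""
    return [max(0.0, a) for a in v]

def mlp_forward(x):
    """Forward pass of a 2-4-1 feedforward network with
    hand-picked (illustrative) weights."""
    W1 = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.8], [0.2, 0.2]]
    b1 = [0.0, 0.1, 0.0, -0.1]
    W2 = [[0.3, -0.1, 0.6, 0.2]]
    b2 = [0.05]
    h = relu(dense(x, W1, b1))   # hidden layer processes the data
    return dense(h, W2, b2)      # output layer produces the result
```

Calling `mlp_forward([1.0, 2.0])` pushes the input through the hidden layer and out the other side; there is no path by which the output feeds back into the network.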
Convolutional Neural Networks
In machine learning, a convolutional neural network (CNN or ConvNet for short) is a feedforward neural network whose pattern of connections between neurons is inspired by the animal visual cortex. A single cortical neuron responds to stimuli in a limited region of space known as its receptive field. The receptive fields of different neurons overlap so that together they tile the entire visual field. The response of a neuron to a stimulus within its receptive field can be approximated mathematically by a convolution operation. In other words, convolutional neural networks are biologically inspired variants of the multilayer perceptron designed to use minimal pre-processing.
Convolutional neural networks are widely used in image and video recognition, recommendation systems, and natural language processing.
LeNet was one of the early convolutional neural networks that propelled deep learning forward. This pioneering work by Yann LeCun, the product of many successful iterations since 1988, is called LeNet5. At the time, the LeNet architecture was used mainly for character recognition tasks such as reading zip codes and digits. A convolutional neural network consists mainly of four building blocks:
Convolutional Layer
Activation Function
Pooling Layer
Fully Connected Layer
Convolutional Layer
The convolutional layer takes its name from the convolution operation, a mathematical operation on two functions, written f*g, that produces a third function; it is very similar to cross-correlation. The input to a convolutional layer is an m × m × r image, where m is the height and width of the image and r is the number of channels; for an RGB image, for example, r = 3. The convolutional layer has k filters (or kernels) of size n × n × q, where n is smaller than the image dimension and q can be equal to or smaller than the number of channels, depending on the filter. The size of the filters gives rise to the locally connected structure: each filter is convolved with the image to produce a feature map.
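To make the operation concrete, here is a minimal sketch of a "valid" 2-D convolution for a single-channel square image and one filter. Like most deep learning libraries, it actually computes cross-correlation (the kernel is not flipped), which the text notes is very similar.

```python
def conv2d(image, kernel):
    """Slide an n x n kernel over an m x m single-channel image with
    stride 1 and no padding, producing an (m-n+1) x (m-n+1) feature map."""
    m, n = len(image), len(kernel)
    out = m - n + 1  # output size for "valid" convolution
    return [[sum(kernel[i][j] * image[r + i][c + j]
                 for i in range(n) for j in range(n))
             for c in range(out)]
            for r in range(out)]
```

For instance, convolving a 3 × 3 image of ones with a 2 × 2 kernel of ones yields a 2 × 2 feature map where every entry is 4, since each output is the sum over one 2 × 2 window.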
Activation Function
To implement complex mapping functions we need an activation function. It introduces non-linearity, and non-linearity lets the network fit a wide variety of functions. The activation function is also important for squashing the unbounded linearly weighted sums produced by the neurons, which prevents large values from accumulating in the higher layers. There are many activation functions; the most commonly used are sigmoid, tanh, and ReLU.
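The three common activation functions mentioned above can be sketched in a few lines; note how sigmoid and tanh squash any input into a bounded range, while ReLU simply clips negative values to zero.

```python
import math

def sigmoid(x):
    """Squashes any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Squashes any real input into (-1, 1)."""
    return math.tanh(x)

def relu(x):
    """Passes positive inputs through unchanged, zeroes out negatives."""
    return max(0.0, x)
```

Applied after a neuron's weighted sum, any of these keeps the value bounded (or at least non-negative) before it is passed to the next layer.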
Pooling Layer
Pooling is a sample-based discretization process. The goal is to down-sample the input representation (the input here can be an image, the output of a hidden layer, and so on), reducing its dimensionality and allowing assumptions to be made about the features contained in the sub-regions.
This helps control over-fitting by providing an abstracted form of the representation. It also reduces computational cost by reducing the number of parameters, and gives the internal representation a basic invariance to translation.
Currently the most commonly used pooling techniques are max-pooling, min-pooling, and average-pooling. The figure below is a schematic diagram of a max-pooling operation with a 2×2 filter.
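A 2×2 max-pooling step like the one in the schematic can be sketched as follows: each non-overlapping 2×2 window of the feature map is replaced by its maximum, halving the height and width.

```python
def max_pool_2x2(fmap):
    """Downsample a feature map by taking the maximum over each
    non-overlapping 2x2 window (stride 2)."""
    return [[max(fmap[r][c], fmap[r][c + 1],
                 fmap[r + 1][c], fmap[r + 1][c + 1])
             for c in range(0, len(fmap[0]), 2)]
            for r in range(0, len(fmap), 2)]
```

A 4×4 input therefore becomes a 2×2 output, with one quarter as many values for later layers to process, which is exactly the parameter reduction described above.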
Fully Connected Layer
"Fully connected" means that every neuron in the previous layer is connected to every neuron in the next layer. The fully connected layer is a traditional multilayer perceptron; at the output layer it typically uses the softmax activation function (or some other activation function).
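The softmax function mentioned for the output layer converts the raw scores of the final fully connected layer into a probability distribution over the classes. A minimal sketch:

```python
import math

def softmax(logits):
    """Turn raw output-layer scores into probabilities that sum to 1.
    Subtracting the max first keeps exp() numerically stable."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

With equal scores the classes get equal probability, and the largest score always receives the largest probability, which is why the predicted class is simply the argmax of the softmax output.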
Recurrent Neural Networks
In traditional neural networks we assume that all inputs are independent of each other, but for many tasks this is a poor assumption: if you want to predict the next word in a sentence, it helps to know the words that came before it. A recurrent neural network (RNN) is called recurrent because it performs the same task for every element of a sequence, with each output depending on the previous computations. Another way to think about an RNN is that it has a memory of everything computed so far.
An RNN contains loops that carry information forward from the input. In the figure below, x_t is an input, A is the body of the RNN, and h_t is the output. Essentially, you can feed in the words of a sentence, or even the individual characters of a string, as the x_t inputs, and obtain an h_t through the RNN. Some variants of the RNN are the LSTM, the bidirectional RNN, and the GRU.
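The loop just described can be sketched with a vanilla RNN whose state is a single scalar, updated as h_t = tanh(w_x·x_t + w_h·h_{t-1} + b). The weight values here are hypothetical; the point is that the same weights are reused at every step and the hidden state carries the "memory" forward.

```python
import math

def rnn_step(x_t, h_prev, w_x, w_h, b):
    """One step of a vanilla RNN with scalar state:
    h_t = tanh(w_x * x_t + w_h * h_prev + b)."""
    return math.tanh(w_x * x_t + w_h * h_prev + b)

def rnn_run(xs, w_x=0.5, w_h=0.8, b=0.0):
    """Scan an input sequence, applying the same step (same weights)
    to every element; each output depends on all previous inputs."""
    h = 0.0
    outputs = []
    for x in xs:
        h = rnn_step(x, h, w_x, w_h, b)
        outputs.append(h)
    return outputs
```

Feeding in `[1.0, 0.0, 0.0]` shows the memory at work: even though the later inputs are zero, the hidden state stays non-zero because the first input is still echoing through the recurrent connection.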
Because an RNN's inputs and outputs can be arranged one-to-one or many-to-many, RNNs can be used in natural language processing, machine translation, language modeling, image recognition, video analysis, image generation, CAPTCHA recognition, and more. The figure below shows possible RNN structures along with an explanation of each model.
Applications
Deep learning has many applications, and many special problems can also be solved with it. Some examples of deep learning applications are as follows:
Colorizing black-and-white images
Deep learning can be used to color images based on the objects in them and their context, producing results much like those of a human colorist. This approach uses large convolutional neural networks with supervised layers to recreate the color.
Machine translation
Deep learning can translate raw sequences of text, allowing the algorithm to learn the dependencies between words and map them into a new language. Large-scale LSTM RNNs are typically used for this kind of processing.
Object classification and detection in images
This task requires assigning an image to one of a set of previously known categories. The best results on this type of task are currently achieved with very large convolutional neural networks. The breakthrough was the AlexNet model used by Alex Krizhevsky and others in the ImageNet competition.
Automatically generate handwriting
This task starts from samples of handwritten text and tries to generate new handwriting in a similar style. First a person writes some text on paper with a pen; then a model is trained on that handwriting; finally the model learns to produce new content.
Automatic game playing
This task uses images of the computer screen to decide how to play a game. This difficult problem is the research domain of deep reinforcement learning, and the main breakthroughs have come from the DeepMind team.
Chatbots
A sequence-to-sequence model can be used to create a chatbot that answers certain questions, trained on large datasets of real conversations.
Conclusion
As this post shows, deep learning can be applied in many fields because it mimics the human brain, and many areas are currently studying how to use it to solve their problems. Although trust in these models is still an issue, it will eventually be resolved.