FPGA Implementation of Real-Time Adaptive Image Thresholding (excerpt)

The ability of the network is stored in the interlayer connection strengths, called weights, which are obtained by a process of learning from a set of training patterns. Each of the inputs to a node is multiplied by a connection weight. During recent decades, very diverse categories of ANN have been introduced by researchers. Each category is applicable to a specific domain, and proposing a general neural network to solve all problems seems to be impossible. One of the proposed solutions, which is applicable to classification and image segmentation problems, is Unsupervised Competitive Learning [19].

Let us assume every image consists of two distinct classes, i.e., two major groups of pixels describing two different subjective properties of the image. We can call the two pixel groups background and foreground. Obviously, this sort of image has a bimodal histogram. Consequently, in order to construct an ANN, we need only two weights to classify these two groups of pixels. The weights are updated by the input pixels. For every input pixel, the closest weight is selected to be updated: the difference between the input pixel and the closest weight is scaled and added to the closest weight, yielding the updated value of the winning (closest) weight. The update function is as follows:

$W_{new} = W_{old} + \alpha\,(I_i - W_{old})$  (2)

As equation (2) shows, the difference between the input and the old weight is scaled by a factor $\alpha$, also called the learning rate. This weight update is applied for every pixel of the image. At the end of the training mode, the weights are located at the centre of each cluster of pixels, namely background and foreground, and the threshold is calculated by taking the average of these two weights. Figure 1 shows the flowchart of the weight-updating and thresholding process.

Figure 1: Update process flow chart (initialize weights → read an input pixel → update the closest weight → repeat until the last input pixel → calculate the threshold).

It is necessary to analyze the convergence criteria of the artificial neural network. In order to obtain, at the end of the training process, a set of precise weights that yields an optimum threshold, the network has to converge. The rate of convergence is a critical parameter in most feedback neural networks, but it is not applicable to feedforward networks like the proposed one, because the training process has to terminate at the end of the data set (the image pixels in this application), whether the network has converged or not. Convergence depends entirely on two parameters: the learning rate and the initial values. With a constant learning rate, the network does not converge [19]. Initially the weights may not be near the actual centroids. If the learning-rate parameter is set to a small value, the learning process proceeds smoothly, but the network may not converge to a stable value within the available number of pixels, because the weight movements towards the actual centroids are too slow. On the other hand, if the learning-rate parameter is set to a large value, learning is accelerated, but there is now a risk that the network diverges and becomes unstable. Therefore the learning rate has to be decreased gradually as training proceeds. In practice, to guarantee the convergence of the network, the learning rate is taken as the reciprocal of the number of cases that have been assigned to the winning cluster.
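To make the update rule concrete, the following is a minimal sketch in Python of the two-weight competitive learning pass described above, applying equation (2) with a fixed learning rate. The image representation (an 8-bit grayscale NumPy array), the initial weight values, and the function name are illustrative assumptions, not part of the original design.

```python
import numpy as np

def competitive_threshold(image, w_bg=64.0, w_fg=192.0, alpha=0.05):
    """One training pass over all pixels using equation (2):
    W_new = W_old + alpha * (I - W_old), applied to the closest
    (winning) weight only. The threshold is the average of the
    two trained weights."""
    weights = np.array([w_bg, w_fg], dtype=np.float64)
    for pixel in image.ravel().astype(np.float64):
        winner = np.argmin(np.abs(weights - pixel))   # closest weight wins
        weights[winner] += alpha * (pixel - weights[winner])
    return weights.mean()  # midpoint of the two cluster centres
```

For a bimodal image, the two weights settle near the background and foreground cluster centres. As the text notes, however, a constant alpha does not strictly converge, which motivates the decreasing schedule of equation (3) below.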
If we assume that, for an input image, $CW_i$ pixels have previously been assigned to the $i$-th weight as the closest (winning) weight, and $W_i$ wins once more, it is updated as follows:

$W_i^{new} = W_i^{old} + \dfrac{1}{CW_i + 1}\,(I_i - W_i^{old})$  (3)

Reducing the learning rate causes every weight to approach the mean of all pixels assigned to the corresponding cluster [19] and guarantees convergence of the algorithm to an optimum value of the error function (the sum of squared Euclidean distances between inputs and weights). In other words, as the number of input pixels increases, the learning rate of every weight, and consequently the update applied to the winning weight, is reduced. Although this guarantees the convergence of the network, it can be risky when the initial values of the weights are trapped in a local minimum. The WCT method applies equation (3) to update the weights. The drawback of this method is its sensitivity to the initial values of the weights. Some modifications are necessary to make the learning rate less dependent on these initial values.

Equation (3) indicates that the learning rate decreases continuously in proportion to the number of times the weight has been updated. After a number of input pixels have been processed, the weight updates become very small. In some cases this is not desirable, especially when the gray-level image does not have a uniform distribution, for example in images with poor contrast. This problem arises when an initial value is far away from the centroid of its cluster and gets trapped in a local minimum. Alternatively, training can start with a constant learning rate, which is then reduced after a predetermined point. The starting point for decreasing the learning rate may be set to the ratio of object pixels to background pixels: before this point the network is learning the behaviour of the object and background pixels, and after it the rate of weight change needs to be reduced. This modification enhances the convergence of the network, but it makes the approach more application dependent.

In the proposed method, the network is trained with all the image pixels. The learning process starts with a constant learning rate. After a percentage of the image pixels has been processed, roughly equal to the ratio of the number of foreground pixels to the number of background pixels for a particular application, the learning rate is decreased to make the weight changes smaller.
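The sketch below, under the same illustrative assumptions as the previous one, combines the two schedules just described: a constant learning rate until a predetermined fraction of the pixels has been processed, then the count-based rate $1/(CW_i + 1)$ of equation (3). The switch fraction and default values are hypothetical placeholders for the application-dependent ratio mentioned above.

```python
import numpy as np

def adaptive_threshold(image, w_init=(64.0, 192.0),
                       alpha0=0.05, switch_frac=0.3):
    """Hybrid schedule: constant learning rate alpha0 for the first
    switch_frac of the pixels, then the count-based rate of
    equation (3): W_i_new = W_i_old + (I - W_i_old) / (CW_i + 1)."""
    weights = np.array(w_init, dtype=np.float64)
    counts = np.zeros(2, dtype=np.int64)           # CW_i: wins per weight
    pixels = image.ravel().astype(np.float64)
    switch_point = int(switch_frac * pixels.size)  # start decreasing here
    for k, pixel in enumerate(pixels):
        i = np.argmin(np.abs(weights - pixel))     # winning weight
        if k < switch_point:
            rate = alpha0                          # constant-rate phase
        else:
            rate = 1.0 / (counts[i] + 1)           # equation (3) phase
        weights[i] += rate * (pixel - weights[i])
        counts[i] += 1
    return weights.mean()
```

Whether $CW_i$ should also include the wins accumulated during the constant-rate phase is not specified in the excerpt; the sketch counts all wins, which makes the post-switch updates small immediately, in the spirit of "decreasing the learning rate" after the switch point.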