Road Traffic Engineering Graduation Project, Foreign Literature Translation: Link Flow Estimation Based on OD-Matrix Neural Networks and Principal Component Analysis (Traffic Routes) (revised draft). Content summary:

Alternatives have been experimented with for many years; they aim at building the OD matrix by using link flows, which are generally less costly to measure. In order to solve the problem of OD estimation, many approaches have been developed. Some are based on entropy maximization, that is, on the maximization of trip distribution dispersion over all available paths; in some cases the resulting model refers to an OD matrix objective without actually referring to estimation errors, statistical estimation indexes, or likelihood functions (Van Zuylen and Willumsen, 1980). This model was later extended to congested networks by formulating an optimization problem with variational inequality constraints, leading to a bilevel programme. The bilevel approach presents some difficulties in finding the optimal solution because of its nonconvex and nondifferentiable formulation. Florian and Chen (1995) formulated a heuristic approach (of Gauss-Seidel type) capable of converging on an optimal solution by limiting the objective to the correction of the OD matrix. Other approaches are based on models that use the statistical properties of the observed variables. For instance, Maher (1983) proposed a Bayesian estimation by means of a multivariate normal distribution both for the matrix distribution and for the link flows; Cascetta (1984) used an estimation based on generalized least squares (GLS). An overview of statistical methods for estimating the OD matrix can be found in Cascetta and Nguyen (1988); it covers generalized and constrained least squares as well as maximum likelihood and Bayesian estimation. Most of these studies assume a fixed percentage of link or path choice calculated by a deterministic user equilibrium assignment model. This can cause some inconsistency between flows and the OD matrix, especially when the network is highly congested. In order to overcome this limitation, Cascetta (1989) proposed interpreting link and path flows as stochastic variables and, therefore, the values obtained by a SUE assignment as the average values of these variables. Along the same line there are other papers, such as Yang et al. (1992) and a successive re-elaboration in Yang et al. (2020); Lo et al. (1996) proposed a unified statistical approach for the estimation of the OD matrix using simultaneously link flow data and information about link choice percentages. Gong (1998) uses a Hopfield neural network as a tool for solving an optimization problem, formulated however as an entropy maximization problem of the type cited before.

The aim of this paper is to solve OD matrix estimation by soft computing techniques, specifically neural networks, starting from the knowledge of flow measures on road network links. It works out an application of multilayer feedforward neural networks in order to estimate OD matrices by using the well-known approximation property typical of these models; without losing generality, the existence of a continuous relationship between the flows measured on links and the OD matrix that produces them is assumed. Because of the learning mechanism of feedforward neural networks, however, the simultaneous knowledge of the OD matrix and the related link flows is required for the training set. This requirement is met thanks to the laboratory of the transport group of the University of Naples (Bifulco, 2020), which produced all the necessary information, usually not easy to collect in the field with adequate time detail and with the necessary comprehensiveness.
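As a rough illustration of the approach just described, the following Python sketch trains a multilayer feedforward network to map measured link flows back to the OD demands that generated them. It is not the authors' code: the network size, the number of OD pairs and links, and the synthetic assignment map used to generate (OD matrix, link flow) training pairs are all illustrative assumptions standing in for the laboratory-generated data mentioned above.

```python
# Minimal sketch (assumptions, not the paper's setup): approximate the
# link-flow -> OD-demand mapping with a multilayer feedforward network.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_od_pairs = 12      # assumed number of OD pairs (flattened OD matrix)
n_links = 20         # assumed number of measured links
n_samples = 500      # assumed size of the training set

# Stand-in for the unknown assignment relationship: link flows are a noisy,
# saturating function of OD demands. In the paper these pairs come from
# laboratory simulations, not from a closed-form map like this one.
A = rng.uniform(0.0, 1.0, size=(n_links, n_od_pairs))      # link/OD incidence-like map
od = rng.uniform(50, 500, size=(n_samples, n_od_pairs))    # synthetic OD demands
flows = np.tanh(od @ A.T / 2000.0) * 2000.0                # congestion-like saturation
flows += rng.normal(0.0, 10.0, size=flows.shape)           # measurement noise

# Feedforward network: input = link flows, output = OD demands.
net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                   solver="adam", max_iter=2000, random_state=0)
net.fit(flows[:400], od[:400])               # training set
pred = net.predict(flows[400:])              # estimated OD matrices on held-out data
rmse = np.sqrt(np.mean((pred - od[400:]) ** 2))
print(f"held-out RMSE on OD demands: {rmse:.1f}")
```

The essential point carried over from the text is that such a network can only be trained because both sides of the mapping, the OD matrix and the corresponding link flows, are available for every training example.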
2. DATA PREPROCESSING

Signal cleaning and normalization are fundamental to obtain an easier training of neural networks. The basic idea is to reduce the problem to stationary conditions or, if this is not possible, to stabilize the relationship between the signal mean and its variance. In this paper we focus mainly on signal variance stabilization; there are no missing data or erroneous measurements. In the literature there are a few definitions of stationarity; in a strong sense we can state complete statistical stationarity for the observed process, but this is impractical since it requires specifying infinite constraints on the moments of the distribution (Bittanti, 1986). A simpler approach reduces the problem of stationarity to second-order statistics (i.e., the signal distribution can be completely described using the first two moments) by imposing a constant expected value $E(v(t)) = m, \forall t$, and a covariance function $r_v(\tau) = E[(v(t_1) - m)(v(t_2) - m)]$ independent from the specific time indexes and depending only on their difference $\tau = t_2 - t_1$. In the following we use this weaker definition of stationarity, assuming that the expected value and the covariance are invariant with respect to time shifts.

The interest in (weakly) stationary processes is related to the classical least squares minimization approach. Learning neural network weights by backpropagation also pertains to this framework, since the error function minimized by the training procedure, be it classical gradient descent or another training algorithm, is the sum of squared errors between the ($K$-dimensional) target value $t_n$ and the network output $y_n$:

$$\sum_{n=1}^{N} \left\| t_n - y_n(w) \right\|^2 = \sum_{n=1}^{N} \sum_{k=1}^{K} \left( t_n^k - y_n^k(w) \right)^2 \qquad (1)$$

Learning by using this error function can be seen as the maximum likelihood estimation of the neural network parameters (i.e., the weights) under the hypothesis of $t$ being a corrupted version of $y(x, w)$ (i.e., the neural network output), $t_n^k = y_n^k(x_n, w) + \varepsilon$, with Gaussian noise $\varepsilon \sim N(0, \sigma)$, so that $t_1, t_2, \ldots, t_N$ is an i.i.d. sample from a Gaussian process $t \sim N(y(x, w), \sigma)$ with mean $y(x, w)$. If we compute the maximum (log-)likelihood estimate of this mean (i.e., the maximum likelihood estimate of the neural network weights), it turns out to be equivalent to the classical least squares minimization. When the process is nonstationary this learning approach does not hold.
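A small numerical sketch of the equivalence stated above may help: for a fixed noise level, the Gaussian negative log-likelihood of the targets is an increasing affine function of the sum-of-squared-errors in Eq. (1), so the weights minimizing one also minimize the other. The data, the noise level and the two candidate outputs below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch, assuming a fixed Gaussian noise level sigma and synthetic data:
# the negative log-likelihood is SSE/(2*sigma^2) plus a constant, hence both
# criteria rank candidate network weights identically.
import numpy as np

rng = np.random.default_rng(1)

N, K = 200, 3                     # number of samples and output components, as in Eq. (1)
sigma = 0.5                       # assumed (known) noise standard deviation
t = rng.normal(size=(N, K))       # targets t_n

def sse(t, y):
    # Eq. (1): sum over samples n and components k of (t_n^k - y_n^k)^2
    return np.sum((t - y) ** 2)

def gaussian_nll(t, y, sigma):
    # -log p(t | y, sigma): sum of per-component Gaussian negative log densities
    z = (t - y) / sigma
    return np.sum(0.5 * z ** 2 + np.log(sigma * np.sqrt(2.0 * np.pi)))

# Two stand-ins for network outputs y_n(w) under different weight vectors w.
y_good = t + rng.normal(0.0, 0.1, size=(N, K))   # close to the targets
y_bad = t + rng.normal(0.0, 1.0, size=(N, K))    # far from the targets

# Identity: NLL = SSE / (2*sigma^2) + N*K*log(sigma*sqrt(2*pi))
const = N * K * np.log(sigma * np.sqrt(2.0 * np.pi))
assert np.isclose(gaussian_nll(t, y_good, sigma),
                  sse(t, y_good) / (2 * sigma ** 2) + const)

# Consequence: the SSE ordering of candidate weights matches the NLL ordering.
print(sse(t, y_good) < sse(t, y_bad),
      gaussian_nll(t, y_good, sigma) < gaussian_nll(t, y_bad, sigma))
```

This is why the text ties the least squares training criterion to the (weak) stationarity and Gaussian-noise hypotheses: once those assumptions fail, minimizing Eq. (1) no longer corresponds to maximum likelihood estimation of the network weights.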