The first stage of the study

    2018-11-05

    The first stage of the study involved assessing the entropy increase in the training set, which consisted of the data generated by the above-mentioned algorithms. The entropy was calculated by Shannon's formula [9]:

        $H(T) = -\sum_i p_i \log_2 p_i$,

    where $p_i$ is the probability of selecting an element from the $i$-th cluster. The link between the input and the output parameters is set by a nonlinear function

        $y = f(x_1, x_2, x_3, x_4)$,

    where $x_1, x_2, x_3, x_4$ are the corresponding input parameters and $y$ is the output parameter. Noise, described by a normally distributed random variable with a variance of 0.02, was introduced into the output signal vector.

    The neural network was trained using the functions of the Neural Network Toolbox (NNtoolbox) of the MATLAB package. The training parameters are given in Table 1. Fig. 1 shows the results of training the neural network on randomly generated data (without clustering), used to assess the effectiveness of the algorithms (i.e., the difference between the mean square errors of the training, validation and test sets). The training procedure was run ten times for each case. For each run, the weights were pre-initialized for the NNtoolbox functions by the Nguyen–Widrow algorithm [10]. The run that was most successful in terms of the minimal mean square training error is reported as the result. The mean square error of training is calculated by the following formula:

        $E = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2$,

    where $y_i$ and $\hat{y}_i$ are the actual and the expected training results for training vector $i$, respectively. Tables 2 and 3 list the results of computing the entropy with the SOM algorithm (Variant 1, $H(T) = 0$) and the neural network's training time $T_1$ for training on the data generated without clustering ($T_2 = 0$).
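    To illustrate the evaluation pipeline, here is a minimal Python sketch. It is not the authors' code: the study used MATLAB's NNtoolbox, while this sketch substitutes scikit-learn's MLPRegressor, omits the Nguyen–Widrow pre-initialization, and uses a placeholder function for the paper's unspecified $f(x_1, x_2, x_3, x_4)$, an arbitrary network size, and random sample data.

        import numpy as np
        from collections import Counter
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Hypothetical training data: four input parameters, one output.
        X = rng.uniform(-1.0, 1.0, size=(1000, 4))
        # Placeholder for the paper's (unspecified) nonlinear f(x1, x2, x3, x4).
        y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] - X[:, 3] ** 2
        # Normally distributed noise with variance 0.02 (std = sqrt(0.02)).
        y_noisy = y + rng.normal(0.0, np.sqrt(0.02), size=y.shape)

        def shannon_entropy(labels):
            # H(T) = -sum_i p_i * log2(p_i) over a cluster assignment.
            counts = np.array(list(Counter(labels).values()), dtype=float)
            p = counts / counts.sum()
            return float(-np.sum(p * np.log2(p)))

        def training_mse(actual, expected):
            # E = (1/N) * sum_i (y_i - y_hat_i)^2
            return float(np.mean((np.asarray(actual) - np.asarray(expected)) ** 2))

        # Ten training runs per case, keeping the best one, as in the study.
        errors = []
        for seed in range(10):
            net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                               random_state=seed)
            net.fit(X, y_noisy)
            errors.append(training_mse(net.predict(X), y_noisy))
        print("best training MSE of 10 runs:", min(errors))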
    Algorithm based on the construction of a hierarchical cluster tree

    This algorithm belongs to the hierarchical clustering class [8]. The Euclidean distance between the elements of the factor space is used as a metric, calculated by the following formula:

        $d(X_1, X_2) = \sqrt{\sum_k \left(x_{1k} - x_{2k}\right)^2}$,

    where $X_1, X_2$ are elements of the factor space with components $x_{1k} \in X_1$, $x_{2k} \in X_2$. The running time of the algorithm as a function of the size of the factor space is given in Table 3. It can be seen that the running time increases only insignificantly as the factor space grows. The results of training the MLP neural network are shown in Fig. 5. In this case, the value of the entropy increase is the lowest among all clustering algorithms considered. The mean square error of training is 0.22595; despite the low entropy value, this is less than in the case of the k-means algorithm, which yielded the highest entropy increase among all the methods. It should be noted that the hierarchical method has a considerable advantage in running time over all the other algorithms considered (see Table 3). The main results of the study for all clustering algorithms are summarized in Table 4.
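    For reference, building a hierarchical cluster tree over a Euclidean metric can be sketched with SciPy. This is only an illustration, not the authors' implementation: the sample data, the 'average' linkage method, and the cut into 8 clusters are all arbitrary choices.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(0)
        # Hypothetical factor space: 200 elements with 4 coordinates each.
        points = rng.uniform(-1.0, 1.0, size=(200, 4))

        # Pairwise Euclidean distances: d = sqrt(sum_k (x1k - x2k)^2).
        dists = pdist(points, metric='euclidean')

        # Build the hierarchical cluster tree from the condensed distances.
        tree = linkage(dists, method='average')

        # Cut the tree into a fixed number of clusters (8 chosen arbitrarily).
        labels = fcluster(tree, t=8, criterion='maxclust')

    The resulting labels could then be passed to a Shannon-entropy helper such as the one sketched above to estimate $H(T)$ for the clustered training set.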
    Conclusion

    The study we carried out leads us to conclude that condition (1), formulated in the problem setting, is satisfied for all clustering algorithms considered. In all experiments, we observed an increase in entropy and a decrease in the mean square error of the training set, as well as a decrease in the difference between the mean square errors of the validation/training and test sets, which indicates an improvement in the quality of training. The best result in satisfying condition (2) was obtained with the c-means algorithm. However, it should be borne in mind that for a substantially high dimension of the factor space, the gain in efficiency is significantly smaller in comparison with the increase in clustering time.
    Introduction

    The problem of controlling the vibrations of distributed mechanical objects such as beams and plates is peculiar in that their description involves an infinite number of eigenmodes, each of which can be regarded as a separate degree of freedom. The frequency range of external disturbances typically includes several natural frequencies of the object. We have previously established, theoretically and experimentally, that under such conditions a system for suppressing vibrations in a distributed object, constructed on the basis of a modal control algorithm, has significant advantages over a system of local regulators [1].