
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
import os
os.chdir('/content/drive/MyDrive/Colab Notebooks/is_lab2')

import numpy as np
import lab02_lib as lib
# generate the dataset
data = lib.datagen(10, 10, 1000, 2)

# print the data and its shape
print('Source data:')
print(data)
print('Data shape:')
print(data.shape)

Source data:
[[10.1127864   9.99999352]
 [10.05249217  9.87350749]
 [10.1316048  10.05250118]
 ...
 [10.03841171 10.0442026 ]
 [ 9.91528464 10.06201318]
 [10.09181138  9.92258731]]
Data shape:
(1000, 2)
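
Judging by the printout (1000 two-dimensional points tightly grouped around (10, 10)), lib.datagen seems to draw a compact random cluster around the given centre. A minimal stand-in, assuming a Gaussian cluster whose first two arguments are the centre and whose last two are the sample count and dimensionality; the spread value is a guess:

import numpy as np

def datagen_sketch(cx, cy, n, dim, scale=0.08, seed=None):
    # Hypothetical stand-in for lib.datagen: an (n, dim) Gaussian cluster
    # centred at (cx, cy); scale and the exact distribution are assumptions.
    rng = np.random.default_rng(seed)
    center = np.full(dim, cx, dtype=float)
    center[1:2] = cy                      # second coordinate, when dim >= 2
    return rng.normal(loc=center, scale=scale, size=(n, dim))

data_sketch = datagen_sketch(10, 10, 1000, 2, seed=0)
print(data_sketch.shape)                  # (1000, 2)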
# train AE1
patience = 300
ae1_trained, IRE1, IREth1 = lib.create_fit_save_ae(data, 'out/AE1.h5', 'out/AE1_ire_th.txt',
                                                   1000, False, patience)
Define the autoencoder architecture or use the default one? (1/2): 1
Enter the number of hidden layers (an odd number): 1
Enter the hidden-layer architecture of the autoencoder, e.g. 3 1 3: 1

Epoch 1000/1000
 - loss: 58.5663
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step
WARNING:absl:You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`. 


# plot the reconstruction error
lib.ire_plot('training', IRE1, IREth1, 'AE1')
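
The internals of create_fit_save_ae are not shown, but the architecture prompts, the patience argument, the HDF5 warning, and the returned IRE/IREth values suggest roughly the following shape. This is a sketch, not lab02_lib's actual code; the activations and the max-based threshold in particular are assumptions:

import numpy as np
from tensorflow import keras

def create_fit_save_ae_sketch(data, model_path, th_path, epochs, verbose, patience,
                              hidden=(1,), early_stopping_delta=0.0):
    # Hypothetical sketch of lib.create_fit_save_ae: a symmetric dense
    # autoencoder trained to reconstruct its input, with EarlyStopping,
    # an HDF5 save (hence the legacy-format warning above), and an IRE
    # threshold computed over the training set.
    inp = keras.Input(shape=(data.shape[1],))
    x = inp
    for units in hidden:                       # e.g. (1,) for AE1
        x = keras.layers.Dense(units, activation='tanh')(x)
    out = keras.layers.Dense(data.shape[1], activation='linear')(x)
    model = keras.Model(inp, out)
    model.compile(optimizer='adam', loss='mse')
    stop = keras.callbacks.EarlyStopping(monitor='loss', patience=patience,
                                         min_delta=early_stopping_delta)
    model.fit(data, data, epochs=epochs, verbose=verbose, callbacks=[stop])
    model.save(model_path)
    ire = np.linalg.norm(data - model.predict(data), axis=1)   # per-sample IRE
    ire_th = float(ire.max())                  # threshold choice is a guess
    np.savetxt(th_path, [ire_th])
    return model, ire, ire_th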

# train AE2
ae2_trained, IRE2, IREth2 = lib.create_fit_save_ae(data, 'out/AE2.h5', 'out/AE2_ire_th.txt',
                                                   3000, False, patience)
Define the autoencoder architecture or use the default one? (1/2): 1
Enter the number of hidden layers (an odd number): 5
Enter the hidden-layer architecture of the autoencoder, e.g. 3 1 3: 4 2 1 2 4

Epoch 1000/3000
 - loss: 11.4675

Epoch 2000/3000
 - loss: 0.6698

Epoch 3000/3000
 - loss: 0.0165
32/32 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step
WARNING:absl:You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`. 


# plot the reconstruction error
lib.ire_plot('training', IRE2, IREth2, 'AE2')
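
In terms of the sketch above, AE2's answers to the prompts (5 hidden layers, architecture 4 2 1 2 4) would correspond to a call like:

# Hypothetical call mirroring the AE2 prompt answers above
ae2_sketch, ire2_tr, ire2_th = create_fit_save_ae_sketch(
    data, 'out/AE2.h5', 'out/AE2_ire_th.txt',
    epochs=3000, verbose=False, patience=300,
    hidden=(4, 2, 1, 2, 4))

The deeper 4-2-1-2-4 bottleneck explains the far lower final loss (0.0165 versus 58.5663 for single-neuron AE1) and, as the coverage plots below show, a much tighter approximation region.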

numb_square = 20
xx, yy, Z1 = lib.square_calc(numb_square, data, ae1_trained, IREth1, '1', True)
219/219 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step

amount:  22
amount_ae:  308


Quality assessment of AE1
IDEAL = 0. Excess:  13.0
IDEAL = 0. Deficit:  0.0
IDEAL = 1. Coating:  1.0
summa:  1.0
IDEAL = 1. Extrapolation precision (Approx):  0.07142857142857142
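
The printed metrics are consistent with simple grid-cell counts: amount is the number of grid cells occupied by training data, and amount_ae is the number of cells the autoencoder accepts (IRE below threshold). A reconstruction under that reading, matching the values above:

# Hypothetical reconstruction of the quality metrics from the two cell counts;
# consistent with the printout: (308 - 22) / 22 = 13.0 and 22 / 308 ≈ 0.0714.
amount, amount_ae = 22, 308               # data cells, AE-accepted cells (AE1)
excess  = (amount_ae - amount) / amount   # accepted cells beyond the data: ideal 0
deficit = 0 / amount                      # data cells the AE rejects: ideal 0
coating = (amount - 0) / amount           # data cells the AE covers: ideal 1
approx  = amount / amount_ae              # extrapolation precision: ideal 1
print(excess, deficit, coating, approx)   # 13.0 0.0 1.0 0.0714...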


# plot coverage regions and class boundaries
# compute training quality metrics
numb_square = 20
xx, yy, Z2 = lib.square_calc(numb_square, data, ae2_trained, IREth2, '2', True)
219/219 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step

amount:  22
amount_ae:  39


Quality assessment of AE2
IDEAL = 0. Excess:  0.7727272727272727
IDEAL = 0. Deficit:  0.0
IDEAL = 1. Coating:  1.0
summa:  1.0
IDEAL = 1. Extrapolation precision (Approx):  0.5641025641025641


# compare training quality metrics and approximation regions
lib.plot2in1(data, xx, yy, Z1, Z2)
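
plot2in1's output is not reproduced here; a sketch of what such a side-by-side comparison presumably draws, reusing xx, yy, Z1, Z2 and data from the cells above and assuming Z1/Z2 are 0/1 acceptance masks on the grid:

import matplotlib.pyplot as plt

# Hypothetical sketch of the two-panel coverage comparison
fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharex=True, sharey=True)
for ax, Z, title in zip(axes, (Z1, Z2), ('AE1', 'AE2')):
    ax.contourf(xx, yy, Z, alpha=0.3)        # approximation (coverage) region
    ax.scatter(data[:, 0], data[:, 1], s=2)  # training points
    ax.set_title(title)
plt.show()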

# load the test set
data_test = np.loadtxt('data_test.txt', dtype=float)
print(data_test)
[[8.5 8.5]
 [8.2 8.2]
 [7.7 7.7]
 [9.3 8.8]]
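All four test points lie well off the training cluster around (10, 10), so a well-fitted autoencoder should flag them. predict_ae presumably computes per-sample reconstruction errors and compares them with the stored threshold; a hedged sketch, with the 1 = anomaly labelling convention being an assumption:

import numpy as np

def predict_ae_sketch(model, x, ire_th):
    # Hypothetical sketch of lib.predict_ae: reconstruct the batch, take the
    # per-sample reconstruction error (IRE), and label each sample against
    # the threshold.
    rec = model.predict(x)
    ire = np.linalg.norm(x - rec, axis=1)
    labels = (ire > ire_th).astype(float).reshape(-1, 1)
    return labels, np.round(ire, 2)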
# test AE1
predicted_labels1, ire1 = lib.predict_ae(ae1_trained, data_test, IREth1)
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
# anomaly detection for AE1
lib.anomaly_detection_ae(predicted_labels1, ire1, IREth1)
lib.ire_plot('test', ire1, IREth1, 'AE1')
No anomalies detected

# test AE2
predicted_labels2, ire2 = lib.predict_ae(ae2_trained, data_test, IREth2)
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 37ms/step
# anomaly detection for AE2
lib.anomaly_detection_ae(predicted_labels2, ire2, IREth2)
lib.ire_plot('test', ire2, IREth2, 'AE2')

i         Labels    IRE       IREth     
0         [1.]      [1.99]    0.47      
1         [1.]      [2.42]    0.47      
2         [1.]      [3.12]    0.47      
3         [1.]      [1.28]    0.47      
Detected  4.0  anomalies

# plot approximation regions and the test-set points
lib.plot2in1_anomaly(data, xx, yy, Z1, Z2, data_test)

# load the train and test samples
train = np.loadtxt('letter_train.txt', dtype=float)
test = np.loadtxt('letter_test.txt', dtype=float)
print('train:\n', train)
print('train.shape:', np.shape(train))
train:
 [[ 6. 10.  5. ... 10.  2.  7.]
 [ 0.  6.  0. ...  8.  1.  7.]
 [ 4.  7.  5. ...  8.  2.  8.]
 ...
 [ 7. 10. 10. ...  8.  5.  6.]
 [ 7.  7. 10. ...  6.  0.  8.]
 [ 3.  4.  5. ...  9.  5.  5.]]
train.shape: (1500, 32)
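
The next cell trains AE3 on this 32-feature letter data for up to 100000 epochs with patience = 5000 and early_stopping_delta = 0.001, i.e. training stops early only after 5000 consecutive epochs without a loss improvement of at least 0.001. A plausible mapping of those arguments onto a Keras callback; the exact wiring inside lab02_lib is an assumption:

from tensorflow import keras

stop = keras.callbacks.EarlyStopping(
    monitor='loss',    # training loss: no validation split is used here
    patience=5000,     # stop only after 5000 epochs without improvement
    min_delta=0.001)   # changes below 0.001 don't count as improvement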
from time import time

patience = 5000
start = time()
ae3_v1_trained, IRE3_v1, IREth3_v1 = lib.create_fit_save_ae(train, 'out/AE3_V1.h5', 'out/AE3_v1_ire_th.txt',
                                                            100000, False, patience, early_stopping_delta=0.001)
print("Training time: ", time() - start)

Epoch 1000/100000
 - loss: 4.1363

Epoch 2000/100000
 - loss: 2.1867

...

Epoch 5000/100000
 - loss: 0.9672

...

Epoch 10000/100000
 - loss: 0.5461

...

Epoch 20000/100000
 - loss: 0.3916

...

Epoch 50000/100000
 - loss: 0.2984

...

Epoch 90000/100000
 - loss: 0.2634

...

Epoch 100000/100000
 - loss: 0.2571
47/47 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step
WARNING:absl:You are saving your model as an HDF5 file via `model.save()` or `keras.saving.save_model(model)`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')` or `keras.saving.save_model(model, 'my_model.keras')`. 


Training time:  5561.245889663696
# plot the reconstruction error
lib.ire_plot('training', IRE3_v1, IREth3_v1, 'AE3_v1')

print('\n test:\n', test)
print('test.shape:', np.shape(test))

 test:
 [[ 8. 11.  8. ...  7.  4.  9.]
 [ 4.  5.  4. ... 13.  8.  8.]
 [ 3.  3.  5. ...  8.  3.  8.]
 ...
 [ 4.  9.  4. ...  8.  3.  8.]
 [ 6. 10.  6. ...  9.  8.  8.]
 [ 3.  1.  3. ...  9.  1.  7.]]
test.shape: (100, 32)
# test AE3
predicted_labels3_v1, ire3_v1 = lib.predict_ae(ae3_v1_trained, test, IREth3_v1)
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step 
# plot the reconstruction error
lib.ire_plot('test', ire3_v1, IREth3_v1, 'AE3_v1')

lib.anomaly_detection_ae(predicted_labels3_v1, ire3_v1, IREth3_v1)

i         Labels    IRE       IREth     
0         [1.]      4.04      6.97      
1         [1.]      1.6       6.97      
2         [1.]      2.7       6.97      
3         [1.]      1.96      6.97      
4         [1.]      1.58      6.97      
5         [1.]      2.51      6.97      
6         [1.]      3.47      6.97      
7         [1.]      2.81      6.97      
8         [1.]      2.35      6.97      
9         [1.]      1.88      6.97      
10        [1.]      3.03      6.97      
11        [1.]      3.34      6.97      
12        [1.]      1.76      6.97      
13        [1.]      3.25      6.97      
14        [1.]      2.13      6.97      
15        [1.]      1.42      6.97      
16        [1.]      1.24      6.97      
17        [1.]      2.75      6.97      
18        [0.]      3.06      6.97      
19        [1.]      2.72      6.97      
20        [0.]      1.27      6.97      
21        [0.]      1.95      6.97      
22        [1.]      1.61      6.97      
23        [1.]      2.22      6.97      
24        [1.]      1.77      6.97      
25        [1.]      1.96      6.97      
26        [1.]      1.94      6.97      
27        [1.]      3.6       6.97      
28        [1.]      2.83      6.97      
29        [1.]      2.77      6.97      
30        [1.]      1.33      6.97      
31        [1.]      1.64      6.97      
32        [1.]      1.91      6.97      
33        [1.]      3.29      6.97      
34        [1.]      1.68      6.97      
35        [1.]      1.75      6.97      
36        [1.]      1.83      6.97      
37        [1.]      3.97      6.97      
38        [1.]      2.09      6.97      
39        [1.]      3.3       6.97      
40        [1.]      2.06      6.97      
41        [1.]      1.99      6.97      
42        [1.]      3.51      6.97      
43        [1.]      1.99      6.97      
44        [0.]      1.77      6.97      
45        [0.]      2.72      6.97      
46        [0.]      1.41      6.97      
47        [1.]      3.47      6.97      
48        [1.]      1.61      6.97      
49        [1.]      1.72      6.97      
50        [1.]      1.6       6.97      
51        [1.]      1.99      6.97      
52        [0.]      1.72      6.97      
53        [1.]      1.72      6.97      
54        [1.]      2.44      6.97      
55        [1.]      1.44      6.97      
56        [1.]      3.6       6.97      
57        [1.]      1.43      6.97      
58        [1.]      2.23      6.97      
59        [1.]      1.78      6.97      
60        [1.]      3.26      6.97      
61        [1.]      2.05      6.97      
62        [1.]      1.58      6.97      
63        [1.]      1.35      6.97      
64        [1.]      3.47      6.97      
65        [1.]      3.81      6.97      
66        [1.]      2.3       6.97      
67        [0.]      1.9       6.97      
68        [1.]      1.57      6.97      
69        [1.]      4.03      6.97      
70        [1.]      4.2       6.97      
71        [1.]      2.22      6.97      
72        [1.]      3.36      6.97      
73        [1.]      2.01      6.97      
74        [1.]      1.63      6.97      
75        [1.]      1.4       6.97      
76        [0.]      2.2       6.97      
77        [0.]      4.04      6.97      
78        [1.]      3.43      6.97      
79        [1.]      2.3       6.97      
80        [1.]      1.35      6.97      
81        [0.]      3.13      6.97      
82        [1.]      2.59      6.97      
83        [0.]      3.29      6.97      
84        [0.]      2.53      6.97      
85        [1.]      2.91      6.97      
86        [1.]      2.0       6.97      
87        [1.]      2.24      6.97      
88        [1.]      1.52      6.97      
89        [1.]      1.68      6.97      
90        [0.]      2.42      6.97      
91        [0.]      2.97      6.97      
92        [0.]      2.51      6.97      
93        [1.]      4.03      6.97      
94        [1.]      1.22      6.97      
95        [1.]      2.43      6.97      
96        [1.]      2.94      6.97      
97        [1.]      2.05      6.97      
98        [1.]      2.7       6.97      
99        [1.]      2.04      6.97      
Detected  84.0  anomalies
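
Whatever convention predict_ae uses for the 0/1 labels, the final count is simply their sum; a minimal sketch of the counting step (the '84.0' float formatting suggests the library sums a float label array):

import numpy as np

def count_anomalies(labels):
    # Sum the (n, 1) float label column; 84 ones give the '84.0' seen above.
    n = float(np.sum(labels))
    if n == 0:
        print('No anomalies detected')
    else:
        print('Detected ', n, ' anomalies')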