{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "provenance": [], "gpuType": "T4" }, "kernelspec": { "name": "python3", "display_name": "Python 3" }, "language_info": { "name": "python" }, "accelerator": "GPU" }, "cells": [ { "cell_type": "markdown", "source": [ "## Задание 1" ], "metadata": { "id": "oZs0KGcz01BY" } }, { "cell_type": "markdown", "source": [ "### 1) В среде Google Colab создали новый блокнот (notebook). Импортировали необходимые для работы библиотеки и модули." ], "metadata": { "id": "gz18QPRz03Ec" } }, { "cell_type": "code", "source": [ "# импорт модулей\n", "import os\n", "os.chdir('/content/drive/MyDrive/Colab Notebooks/is_lab3')\n", "\n", "from tensorflow import keras\n", "from tensorflow.keras import layers\n", "from tensorflow.keras.models import Sequential\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "from sklearn.metrics import classification_report, confusion_matrix\n", "from sklearn.metrics import ConfusionMatrixDisplay" ], "metadata": { "id": "mr9IszuQ1ANG" }, "execution_count": 5, "outputs": [] }, { "cell_type": "markdown", "source": [ "### 2) Загрузили набор данных MNIST, содержащий размеченные изображения рукописных цифр. " ], "metadata": { "id": "FFRtE0TN1AiA" } }, { "cell_type": "code", "source": [ "# загрузка датасета\n", "from keras.datasets import mnist\n", "(X_train, y_train), (X_test, y_test) = mnist.load_data()" ], "metadata": { "id": "Ixw5Sp0_1A-w", "colab": { "base_uri": "https://localhost:8080/" }, "outputId": "ab0db71c-14bd-4d90-b103-de0f680bb148" }, "execution_count": 6, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n", "\u001b[1m11490434/11490434\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m1s\u001b[0m 0us/step\n" ] } ] }, { "cell_type": "markdown", "source": [ "### 3) Разбили набор данных на обучающие и тестовые данные в соотношении 60 000:10 000 элементов. Параметр random_state выбрали равным (4k – 1)=23, где k=6 –номер бригады. Вывели размерности полученных обучающих и тестовых массивов данных." ], "metadata": { "id": "aCo_lUXl1BPV" } }, { "cell_type": "code", "source": [ "# создание своего разбиения датасета\n", "from sklearn.model_selection import train_test_split\n", "\n", "# объединяем в один набор\n", "X = np.concatenate((X_train, X_test))\n", "y = np.concatenate((y_train, y_test))\n", "\n", "# разбиваем по вариантам\n", "X_train, X_test, y_train, y_test = train_test_split(X, y,\n", " test_size = 10000,\n", " train_size = 60000,\n", " random_state = 23)\n", "# вывод размерностей\n", "print('Shape of X train:', X_train.shape)\n", "print('Shape of y train:', y_train.shape)\n", "print('Shape of X test:', X_test.shape)\n", "print('Shape of y test:', y_test.shape)" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "BrSjcpEe1BeV", "outputId": "297e8485-c5bd-473a-96fd-a0459a264bd4" }, "execution_count": 7, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Shape of X train: (60000, 28, 28)\n", "Shape of y train: (60000,)\n", "Shape of X test: (10000, 28, 28)\n", "Shape of y test: (10000,)\n" ] } ] }, { "cell_type": "markdown", "source": [ "### 4) Провели предобработку данных: привели обучающие и тестовые данные к формату, пригодному для обучения сверточной нейронной сети. Входные данные принимают значения от 0 до 1, метки цифр закодированы по принципу «one-hot encoding». 
Вывели размерности предобработанных обучающих и тестовых массивов данных." ], "metadata": { "id": "4hclnNaD1BuB" } }, { "cell_type": "code", "source": [ "# Зададим параметры данных и модели\n", "num_classes = 10\n", "input_shape = (28, 28, 1)\n", "\n", "# Приведение входных данных к диапазону [0, 1]\n", "X_train = X_train / 255\n", "X_test = X_test / 255\n", "\n", "# Расширяем размерность входных данных, чтобы каждое изображение имело\n", "# размерность (высота, ширина, количество каналов)\n", "\n", "X_train = np.expand_dims(X_train, -1)\n", "X_test = np.expand_dims(X_test, -1)\n", "print('Shape of transformed X train:', X_train.shape)\n", "print('Shape of transformed X test:', X_test.shape)\n", "\n", "# переведем метки в one-hot\n", "y_train = keras.utils.to_categorical(y_train, num_classes)\n", "y_test = keras.utils.to_categorical(y_test, num_classes)\n", "print('Shape of transformed y train:', y_train.shape)\n", "print('Shape of transformed y test:', y_test.shape)" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "xJH87ISq1B9h", "outputId": "e01d8833-3b5d-4e63-a680-37a437ee81cc" }, "execution_count": 8, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Shape of transformed X train: (60000, 28, 28, 1)\n", "Shape of transformed X test: (10000, 28, 28, 1)\n", "Shape of transformed y train: (60000, 10)\n", "Shape of transformed y test: (10000, 10)\n" ] } ] }, { "cell_type": "markdown", "source": [ "### 5) Реализовали модель сверточной нейронной сети и обучили ее на обучающих данных с выделением части обучающих данных в качестве валидационных. Вывели информацию об архитектуре нейронной сети." ], "metadata": { "id": "7x99O8ig1CLh" } }, { "cell_type": "code", "source": [ "# создаем модель\n", "model = Sequential()\n", "model.add(layers.Conv2D(32, kernel_size=(3, 3), activation=\"relu\", input_shape=input_shape))\n", "model.add(layers.MaxPooling2D(pool_size=(2, 2)))\n", "model.add(layers.Conv2D(64, kernel_size=(3, 3), activation=\"relu\"))\n", "model.add(layers.MaxPooling2D(pool_size=(2, 2)))\n", "model.add(layers.Dropout(0.5))\n", "model.add(layers.Flatten())\n", "model.add(layers.Dense(num_classes, activation=\"softmax\"))\n", "\n", "model.summary()" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 413 }, "id": "Un561zSH1Cmv", "outputId": "fe8a1667-aa09-4b2c-ec6d-1c8606d80d1e" }, "execution_count": 9, "outputs": [ { "output_type": "stream", "name": "stderr", "text": [ "/usr/local/lib/python3.12/dist-packages/keras/src/layers/convolutional/base_conv.py:113: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.\n", " super().__init__(activity_regularizer=activity_regularizer, **kwargs)\n" ] }, { "output_type": "display_data", "data": { "text/plain": [ "\u001b[1mModel: \"sequential\"\u001b[0m\n" ], "text/html": [ "
Model: \"sequential\"\n",
"\n"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n",
"┃\u001b[1m \u001b[0m\u001b[1mLayer (type) \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mOutput Shape \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m Param #\u001b[0m\u001b[1m \u001b[0m┃\n",
"┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n",
"│ conv2d (\u001b[38;5;33mConv2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m26\u001b[0m, \u001b[38;5;34m26\u001b[0m, \u001b[38;5;34m32\u001b[0m) │ \u001b[38;5;34m320\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ max_pooling2d (\u001b[38;5;33mMaxPooling2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m13\u001b[0m, \u001b[38;5;34m13\u001b[0m, \u001b[38;5;34m32\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ conv2d_1 (\u001b[38;5;33mConv2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m11\u001b[0m, \u001b[38;5;34m11\u001b[0m, \u001b[38;5;34m64\u001b[0m) │ \u001b[38;5;34m18,496\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ max_pooling2d_1 (\u001b[38;5;33mMaxPooling2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m5\u001b[0m, \u001b[38;5;34m5\u001b[0m, \u001b[38;5;34m64\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ dropout (\u001b[38;5;33mDropout\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m5\u001b[0m, \u001b[38;5;34m5\u001b[0m, \u001b[38;5;34m64\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ flatten (\u001b[38;5;33mFlatten\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m1600\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ dense (\u001b[38;5;33mDense\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m10\u001b[0m) │ \u001b[38;5;34m16,010\u001b[0m │\n",
"└─────────────────────────────────┴────────────────────────┴───────────────┘\n"
],
"text/html": [
"┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n",
"┃ Layer (type) ┃ Output Shape ┃ Param # ┃\n",
"┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n",
"│ conv2d (Conv2D) │ (None, 26, 26, 32) │ 320 │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ max_pooling2d (MaxPooling2D) │ (None, 13, 13, 32) │ 0 │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ conv2d_1 (Conv2D) │ (None, 11, 11, 64) │ 18,496 │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ max_pooling2d_1 (MaxPooling2D) │ (None, 5, 5, 64) │ 0 │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ dropout (Dropout) │ (None, 5, 5, 64) │ 0 │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ flatten (Flatten) │ (None, 1600) │ 0 │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ dense (Dense) │ (None, 10) │ 16,010 │\n",
"└─────────────────────────────────┴────────────────────────┴───────────────┘\n",
"\n"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"\u001b[1m Total params: \u001b[0m\u001b[38;5;34m34,826\u001b[0m (136.04 KB)\n"
],
"text/html": [
"Total params: 34,826 (136.04 KB)\n", "\n" ] }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "\u001b[1m Trainable params: \u001b[0m\u001b[38;5;34m34,826\u001b[0m (136.04 KB)\n" ], "text/html": [ "
Trainable params: 34,826 (136.04 KB)\n", "\n" ] }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "\u001b[1m Non-trainable params: \u001b[0m\u001b[38;5;34m0\u001b[0m (0.00 B)\n" ], "text/html": [ "
Non-trainable params: 0 (0.00 B)\n", "\n" ] }, "metadata": {} } ] }, { "cell_type": "code", "source": [ "# компилируем и обучаем модель\n", "batch_size = 512\n", "epochs = 15\n", "model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"])\n", "model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "q_h8PxkN9m0v", "outputId": "6dc60b63-4778-4097-946b-d3e43c78ec73" }, "execution_count": 10, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Epoch 1/15\n", "\u001b[1m106/106\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m9s\u001b[0m 41ms/step - accuracy: 0.5999 - loss: 1.2914 - val_accuracy: 0.9450 - val_loss: 0.1909\n", "Epoch 2/15\n", "\u001b[1m106/106\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m1s\u001b[0m 11ms/step - accuracy: 0.9346 - loss: 0.2144 - val_accuracy: 0.9672 - val_loss: 0.1132\n", "Epoch 3/15\n", "\u001b[1m106/106\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m1s\u001b[0m 12ms/step - accuracy: 0.9569 - loss: 0.1385 - val_accuracy: 0.9738 - val_loss: 0.0877\n", "Epoch 4/15\n", "\u001b[1m106/106\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m1s\u001b[0m 10ms/step - accuracy: 0.9657 - loss: 0.1122 - val_accuracy: 0.9763 - val_loss: 0.0765\n", "Epoch 5/15\n", "\u001b[1m106/106\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m1s\u001b[0m 9ms/step - accuracy: 0.9699 - loss: 0.0973 - val_accuracy: 0.9795 - val_loss: 0.0701\n", "Epoch 6/15\n", "\u001b[1m106/106\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m1s\u001b[0m 10ms/step - accuracy: 0.9744 - loss: 0.0823 - val_accuracy: 0.9833 - val_loss: 0.0626\n", "Epoch 7/15\n", "\u001b[1m106/106\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m1s\u001b[0m 10ms/step - accuracy: 0.9775 - loss: 0.0757 - val_accuracy: 0.9832 - val_loss: 0.0588\n", "Epoch 8/15\n", "\u001b[1m106/106\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m1s\u001b[0m 10ms/step - accuracy: 0.9782 - loss: 0.0701 - val_accuracy: 0.9830 - val_loss: 0.0578\n", "Epoch 9/15\n", "\u001b[1m106/106\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m1s\u001b[0m 10ms/step - accuracy: 0.9798 - loss: 0.0651 - val_accuracy: 0.9848 - val_loss: 0.0537\n", "Epoch 10/15\n", "\u001b[1m106/106\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m1s\u001b[0m 10ms/step - accuracy: 0.9814 - loss: 0.0598 - val_accuracy: 0.9858 - val_loss: 0.0534\n", "Epoch 11/15\n", "\u001b[1m106/106\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m1s\u001b[0m 10ms/step - accuracy: 0.9832 - loss: 0.0567 - val_accuracy: 0.9858 - val_loss: 0.0526\n", "Epoch 12/15\n", "\u001b[1m106/106\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m1s\u001b[0m 10ms/step - accuracy: 0.9826 - loss: 0.0554 - val_accuracy: 0.9863 - val_loss: 0.0509\n", "Epoch 13/15\n", "\u001b[1m106/106\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m1s\u001b[0m 10ms/step - accuracy: 0.9844 - loss: 0.0490 - val_accuracy: 0.9862 - val_loss: 0.0486\n", "Epoch 14/15\n", "\u001b[1m106/106\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m1s\u001b[0m 11ms/step - accuracy: 0.9843 - loss: 0.0475 - 
val_accuracy: 0.9870 - val_loss: 0.0469\n", "Epoch 15/15\n", "\u001b[1m106/106\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m1s\u001b[0m 12ms/step - accuracy: 0.9850 - loss: 0.0491 - val_accuracy: 0.9875 - val_loss: 0.0458\n" ] }, { "output_type": "execute_result", "data": { "text/plain": [ "
Model: \"sequential_1\"\n",
"\n"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n",
"┃\u001b[1m \u001b[0m\u001b[1mLayer (type) \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mOutput Shape \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m Param #\u001b[0m\u001b[1m \u001b[0m┃\n",
"┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n",
"│ dense_1 (\u001b[38;5;33mDense\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m100\u001b[0m) │ \u001b[38;5;34m78,500\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ dense_2 (\u001b[38;5;33mDense\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m10\u001b[0m) │ \u001b[38;5;34m1,010\u001b[0m │\n",
"└─────────────────────────────────┴────────────────────────┴───────────────┘\n"
],
"text/html": [
"┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n",
"┃ Layer (type) ┃ Output Shape ┃ Param # ┃\n",
"┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n",
"│ dense_1 (Dense) │ (None, 100) │ 78,500 │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ dense_2 (Dense) │ (None, 10) │ 1,010 │\n",
"└─────────────────────────────────┴────────────────────────┴───────────────┘\n",
"\n"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"\u001b[1m Total params: \u001b[0m\u001b[38;5;34m79,512\u001b[0m (310.60 KB)\n"
],
"text/html": [
"Total params: 79,512 (310.60 KB)\n", "\n" ] }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "\u001b[1m Trainable params: \u001b[0m\u001b[38;5;34m79,510\u001b[0m (310.59 KB)\n" ], "text/html": [ "
Trainable params: 79,510 (310.59 KB)\n", "\n" ] }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "\u001b[1m Non-trainable params: \u001b[0m\u001b[38;5;34m0\u001b[0m (0.00 B)\n" ], "text/html": [ "
Non-trainable params: 0 (0.00 B)\n", "\n" ] }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "\u001b[1m Optimizer params: \u001b[0m\u001b[38;5;34m2\u001b[0m (12.00 B)\n" ], "text/html": [ "
Optimizer params: 2 (12.00 B)\n", "\n" ] }, "metadata": {} } ] }, { "cell_type": "code", "source": [ "# развернем каждое изображение 28*28 в вектор 784\n", "X_train, X_test, y_train, y_test = train_test_split(X, y,\n", " test_size = 10000,\n", " train_size = 60000,\n", " random_state = 23)\n", "num_pixels = X_train.shape[1] * X_train.shape[2]\n", "X_train = X_train.reshape(X_train.shape[0], num_pixels) / 255\n", "X_test = X_test.reshape(X_test.shape[0], num_pixels) / 255\n", "print('Shape of transformed X train:', X_train.shape)\n", "print('Shape of transformed X test:', X_test.shape)\n", "\n", "# переведем метки в one-hot\n", "y_train = keras.utils.to_categorical(y_train, num_classes)\n", "y_test = keras.utils.to_categorical(y_test, num_classes)\n", "print('Shape of transformed y train:', y_train.shape)\n", "print('Shape of transformed y test:', y_test.shape)" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "0ki8fhJrEyEt", "outputId": "aff9bef1-f9cd-4aa1-d424-cb909d07c692" }, "execution_count": 25, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Shape of transformed X train: (60000, 784)\n", "Shape of transformed X test: (10000, 784)\n", "Shape of transformed y train: (60000, 10)\n", "Shape of transformed y test: (10000, 10)\n" ] } ] }, { "cell_type": "code", "source": [ "# Оценка качества работы полносвязной модели (ЛР №1) на тестовых данных\n", "scores = model_lr1.evaluate(X_test, y_test)\n", "print('Loss on test data:', scores[0])\n", "print('Accuracy on test data:', scores[1])" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "0Yj0fzLNE12k", "outputId": "ab3054a0-47de-4cfc-da39-2d5e2c1f579b" }, "execution_count": 26, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "\u001b[1m313/313\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m2s\u001b[0m 3ms/step - accuracy: 0.9490 - loss: 0.1739\n", "Loss on test data: 0.18475718796253204\n", "Accuracy on test data: 0.9458000063896179\n" ] } ] }, { "cell_type": "markdown", "source": [ "### 11) Сравнили обученную модель сверточной сети и наилучшую модель полносвязной сети из лабораторной работы №1 по следующим показателям:\n", "### - количество настраиваемых параметров в сети\n", "### - количество эпох обучения\n", "### - качество классификации тестовой выборки.\n", "### Сделали выводы по результатам применения сверточной нейронной сети для распознавания изображений." ], "metadata": { "id": "MsM3ew3d1FYq" } }, { "cell_type": "markdown", "source": [ "Таблица 1:" ], "metadata": { "id": "xxFO4CXbIG88" } }, { "cell_type": "markdown", "source": [ "| Модель | Количество настраиваемых параметров | Количество эпох обучения | Качество классификации тестовой выборки |\n", "|----------|-------------------------------------|---------------------------|-----------------------------------------|\n", "| Сверточная | 34 826 | 15 | accuracy: 0.987; loss: 0.037 |\n", "| Полносвязная | 79 512 | 50 | accuracy: 0.946; loss: 0.185 |\n" ], "metadata": { "id": "xvoivjuNFlEf" } }, { "cell_type": "markdown", "source": [ "По результатам применения сверточной НС, а также по данным таблицы 1 делаем вывод, что сверточная НС заметно лучше справляется с задачей распознавания изображений, чем полносвязная: она имеет меньше настраиваемых параметров, быстрее обучается и показывает более высокое качество классификации."
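, "\n", "\n", "Для иллюстрации сравнения ниже приведён примерный набросок (в предположении, что сверточная модель `model` и полносвязная модель `model_lr1` из ЛР №1 уже обучены, а тестовые данные подготовлены в формате, ожидаемом каждой из моделей; имена `X_test_cnn` и `X_test_mlp` условные):\n", "\n", "```python\n", "# число настраиваемых параметров каждой модели\n", "print('Параметры сверточной сети:', model.count_params())\n", "print('Параметры полносвязной сети:', model_lr1.count_params())\n", "\n", "# качество классификации на тестовой выборке\n", "# X_test_cnn: изображения (10000, 28, 28, 1); X_test_mlp: векторы (10000, 784)\n", "cnn_loss, cnn_acc = model.evaluate(X_test_cnn, y_test, verbose=0)\n", "mlp_loss, mlp_acc = model_lr1.evaluate(X_test_mlp, y_test, verbose=0)\n", "print('Сверточная сеть: accuracy=%.3f, loss=%.3f' % (cnn_acc, cnn_loss))\n", "print('Полносвязная сеть: accuracy=%.3f, loss=%.3f' % (mlp_acc, mlp_loss))\n", "```"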
], "metadata": { "id": "YctF8h_sIB-P" } }, { "cell_type": "markdown", "source": [ "## Задание 2" ], "metadata": { "id": "wCLHZPGB1F1y" } }, { "cell_type": "markdown", "source": [ "### В новом блокноте выполнили п. 2–8 задания 1, изменив набор данных MNIST на CIFAR-10, содержащий размеченные цветные изображения объектов, разделенные на 10 классов. \n", "### При этом:\n", "### - в п. 3 разбиение данных на обучающие и тестовые произвели в соотношении 50 000:10 000\n", "### - после разбиения данных (между п. 3 и 4) вывели 25 изображений из обучающей выборки с подписями классов\n", "### - в п. 7 одно из тестовых изображений должно распознаваться корректно, а другое – ошибочно. " ], "metadata": { "id": "DUOYls124TT8" } }, { "cell_type": "markdown", "source": [ "### 1) Загрузили набор данных CIFAR-10, содержащий цветные изображения размеченные на 10 классов: самолет, автомобиль, птица, кошка, олень, собака, лягушка, лошадь, корабль, грузовик." ], "metadata": { "id": "XDStuSpEJa8o" } }, { "cell_type": "code", "source": [ "# загрузка датасета\n", "from keras.datasets import cifar10\n", "\n", "(X_train, y_train), (X_test, y_test) = cifar10.load_data()" ], "metadata": { "id": "y0qK7eKL4Tjy", "colab": { "base_uri": "https://localhost:8080/" }, "outputId": "b9dbc3c1-08ad-4fc5-83b0-9b384b0d3759" }, "execution_count": 27, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz\n", "\u001b[1m170498071/170498071\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m11s\u001b[0m 0us/step\n" ] } ] }, { "cell_type": "markdown", "source": [ "### 2) Разбили набор данных на обучающие и тестовые данные в соотношении 50 000:10 000 элементов. Параметр random_state выбрали равным (4k – 1)=23, где k=6 –номер бригады. Вывели размерности полученных обучающих и тестовых массивов данных." ], "metadata": { "id": "wTHiBy-ZJ5oh" } }, { "cell_type": "code", "source": [ "# создание своего разбиения датасета\n", "\n", "# объединяем в один набор\n", "X = np.concatenate((X_train, X_test))\n", "y = np.concatenate((y_train, y_test))\n", "\n", "# разбиваем по вариантам\n", "X_train, X_test, y_train, y_test = train_test_split(X, y,\n", " test_size = 10000,\n", " train_size = 50000,\n", " random_state = 23)\n", "# вывод размерностей\n", "print('Shape of X train:', X_train.shape)\n", "print('Shape of y train:', y_train.shape)\n", "print('Shape of X test:', X_test.shape)\n", "print('Shape of y test:', y_test.shape)" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "DlnFbQogKD2v", "outputId": "9d7a6710-eb51-45e3-b3e2-e93e57daee95" }, "execution_count": 28, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Shape of X train: (50000, 32, 32, 3)\n", "Shape of y train: (50000, 1)\n", "Shape of X test: (10000, 32, 32, 3)\n", "Shape of y test: (10000, 1)\n" ] } ] }, { "cell_type": "markdown", "source": [ "### Вывели 25 изображений из обучающей выборки с подписью классов." 
], "metadata": { "id": "pj3bMaz1KZ3a" } }, { "cell_type": "code", "source": [ "class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',\n", " 'dog', 'frog', 'horse', 'ship', 'truck']\n", "\n", "plt.figure(figsize=(10,10))\n", "for i in range(25):\n", " plt.subplot(5,5,i+1)\n", " plt.xticks([])\n", " plt.yticks([])\n", " plt.grid(False)\n", " plt.imshow(X_train[i])\n", " plt.xlabel(class_names[y_train[i][0]])\n", "plt.show()" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 710 }, "id": "TW8D67KEKhVE", "outputId": "0eaf0395-6883-4d49-979b-983d0a48ee21" }, "execution_count": 29, "outputs": [ { "output_type": "display_data", "data": { "text/plain": [ "
Model: \"sequential_1\"\n",
"\n"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n",
"┃\u001b[1m \u001b[0m\u001b[1mLayer (type) \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mOutput Shape \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m Param #\u001b[0m\u001b[1m \u001b[0m┃\n",
"┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n",
"│ conv2d_2 (\u001b[38;5;33mConv2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m30\u001b[0m, \u001b[38;5;34m30\u001b[0m, \u001b[38;5;34m32\u001b[0m) │ \u001b[38;5;34m896\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ max_pooling2d_2 (\u001b[38;5;33mMaxPooling2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m15\u001b[0m, \u001b[38;5;34m15\u001b[0m, \u001b[38;5;34m32\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ conv2d_3 (\u001b[38;5;33mConv2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m13\u001b[0m, \u001b[38;5;34m13\u001b[0m, \u001b[38;5;34m64\u001b[0m) │ \u001b[38;5;34m18,496\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ max_pooling2d_3 (\u001b[38;5;33mMaxPooling2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m6\u001b[0m, \u001b[38;5;34m6\u001b[0m, \u001b[38;5;34m64\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ conv2d_4 (\u001b[38;5;33mConv2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m4\u001b[0m, \u001b[38;5;34m4\u001b[0m, \u001b[38;5;34m128\u001b[0m) │ \u001b[38;5;34m73,856\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ max_pooling2d_4 (\u001b[38;5;33mMaxPooling2D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m2\u001b[0m, \u001b[38;5;34m2\u001b[0m, \u001b[38;5;34m128\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ flatten_1 (\u001b[38;5;33mFlatten\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m512\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ dense_1 (\u001b[38;5;33mDense\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m128\u001b[0m) │ \u001b[38;5;34m65,664\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ dropout_1 (\u001b[38;5;33mDropout\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m128\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ dense_2 (\u001b[38;5;33mDense\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m10\u001b[0m) │ \u001b[38;5;34m1,290\u001b[0m │\n",
"└─────────────────────────────────┴────────────────────────┴───────────────┘\n"
],
"text/html": [
"┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n",
"┃ Layer (type) ┃ Output Shape ┃ Param # ┃\n",
"┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n",
"│ conv2d_2 (Conv2D) │ (None, 30, 30, 32) │ 896 │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ max_pooling2d_2 (MaxPooling2D) │ (None, 15, 15, 32) │ 0 │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ conv2d_3 (Conv2D) │ (None, 13, 13, 64) │ 18,496 │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ max_pooling2d_3 (MaxPooling2D) │ (None, 6, 6, 64) │ 0 │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ conv2d_4 (Conv2D) │ (None, 4, 4, 128) │ 73,856 │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ max_pooling2d_4 (MaxPooling2D) │ (None, 2, 2, 128) │ 0 │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ flatten_1 (Flatten) │ (None, 512) │ 0 │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ dense_1 (Dense) │ (None, 128) │ 65,664 │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ dropout_1 (Dropout) │ (None, 128) │ 0 │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ dense_2 (Dense) │ (None, 10) │ 1,290 │\n",
"└─────────────────────────────────┴────────────────────────┴───────────────┘\n",
"\n"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"\u001b[1m Total params: \u001b[0m\u001b[38;5;34m160,202\u001b[0m (625.79 KB)\n"
],
"text/html": [
"Total params: 160,202 (625.79 KB)\n", "\n" ] }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "\u001b[1m Trainable params: \u001b[0m\u001b[38;5;34m160,202\u001b[0m (625.79 KB)\n" ], "text/html": [ "
Trainable params: 160,202 (625.79 KB)\n", "\n" ] }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "\u001b[1m Non-trainable params: \u001b[0m\u001b[38;5;34m0\u001b[0m (0.00 B)\n" ], "text/html": [ "
Non-trainable params: 0 (0.00 B)\n", "\n" ] }, "metadata": {} } ] }, { "cell_type": "code", "source": [ "# компилируем и обучаем модель\n", "batch_size = 64\n", "epochs = 50\n", "model.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"])\n", "model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "3otvqMjjOdq5", "outputId": "8051fa3f-3332-4a92-ae75-11c9985bc1d3" }, "execution_count": 32, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Epoch 1/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m11s\u001b[0m 10ms/step - accuracy: 0.2664 - loss: 1.9466 - val_accuracy: 0.4806 - val_loss: 1.4130\n", "Epoch 2/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.5057 - loss: 1.3726 - val_accuracy: 0.5646 - val_loss: 1.2276\n", "Epoch 3/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.5814 - loss: 1.1935 - val_accuracy: 0.5916 - val_loss: 1.1488\n", "Epoch 4/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.6156 - loss: 1.0997 - val_accuracy: 0.6424 - val_loss: 0.9974\n", "Epoch 5/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 5ms/step - accuracy: 0.6488 - loss: 1.0081 - val_accuracy: 0.6694 - val_loss: 0.9562\n", "Epoch 6/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 5ms/step - accuracy: 0.6746 - loss: 0.9450 - val_accuracy: 0.5854 - val_loss: 1.2591\n", "Epoch 7/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m5s\u001b[0m 7ms/step - accuracy: 0.6922 - loss: 0.8931 - val_accuracy: 0.6830 - val_loss: 0.8941\n", "Epoch 8/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 5ms/step - accuracy: 0.7087 - loss: 0.8355 - val_accuracy: 0.6966 - val_loss: 0.8782\n", "Epoch 9/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.7240 - loss: 0.8012 - val_accuracy: 0.6982 - val_loss: 0.8639\n", "Epoch 10/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.7408 - loss: 0.7496 - val_accuracy: 0.7090 - val_loss: 0.8516\n", "Epoch 11/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.7512 - loss: 0.7111 - val_accuracy: 0.7030 - val_loss: 0.8536\n", "Epoch 12/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.7594 - loss: 0.6925 - val_accuracy: 0.7074 - val_loss: 0.8410\n", "Epoch 13/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m5s\u001b[0m 4ms/step - accuracy: 0.7756 - loss: 0.6547 - val_accuracy: 0.7056 - val_loss: 0.8658\n", "Epoch 14/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.7751 - loss: 0.6324 - val_accuracy: 0.7150 
- val_loss: 0.8463\n", "Epoch 15/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.7858 - loss: 0.6145 - val_accuracy: 0.7090 - val_loss: 0.8894\n", "Epoch 16/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 5ms/step - accuracy: 0.7950 - loss: 0.5918 - val_accuracy: 0.7182 - val_loss: 0.8696\n", "Epoch 17/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.7974 - loss: 0.5649 - val_accuracy: 0.7014 - val_loss: 0.9135\n", "Epoch 18/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8055 - loss: 0.5557 - val_accuracy: 0.7252 - val_loss: 0.8748\n", "Epoch 19/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8142 - loss: 0.5281 - val_accuracy: 0.7068 - val_loss: 0.9660\n", "Epoch 20/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8159 - loss: 0.5160 - val_accuracy: 0.7296 - val_loss: 0.9005\n", "Epoch 21/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m4s\u001b[0m 5ms/step - accuracy: 0.8256 - loss: 0.4960 - val_accuracy: 0.7178 - val_loss: 0.9040\n", "Epoch 22/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8328 - loss: 0.4789 - val_accuracy: 0.7272 - val_loss: 0.9039\n", "Epoch 23/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8370 - loss: 0.4589 - val_accuracy: 0.7228 - val_loss: 0.9271\n", "Epoch 24/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8402 - loss: 0.4509 - val_accuracy: 0.7172 - val_loss: 0.9669\n", "Epoch 25/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 5ms/step - accuracy: 0.8411 - loss: 0.4476 - val_accuracy: 0.7210 - val_loss: 0.9331\n", "Epoch 26/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8509 - loss: 0.4210 - val_accuracy: 0.7186 - val_loss: 0.9691\n", "Epoch 27/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8477 - loss: 0.4171 - val_accuracy: 0.7214 - val_loss: 1.0069\n", "Epoch 28/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8545 - loss: 0.4089 - val_accuracy: 0.7204 - val_loss: 1.0157\n", "Epoch 29/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8588 - loss: 0.3994 - val_accuracy: 0.7152 - val_loss: 1.0545\n", "Epoch 30/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 5ms/step - accuracy: 0.8592 - loss: 0.3959 - val_accuracy: 0.7118 - val_loss: 1.1099\n", "Epoch 31/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 
4ms/step - accuracy: 0.8582 - loss: 0.3911 - val_accuracy: 0.7106 - val_loss: 1.1526\n", "Epoch 32/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8668 - loss: 0.3691 - val_accuracy: 0.7220 - val_loss: 1.0838\n", "Epoch 33/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8735 - loss: 0.3511 - val_accuracy: 0.7046 - val_loss: 1.1383\n", "Epoch 34/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 5ms/step - accuracy: 0.8707 - loss: 0.3548 - val_accuracy: 0.7258 - val_loss: 1.1460\n", "Epoch 35/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8704 - loss: 0.3593 - val_accuracy: 0.7208 - val_loss: 1.1223\n", "Epoch 36/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8772 - loss: 0.3411 - val_accuracy: 0.7264 - val_loss: 1.1060\n", "Epoch 37/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8840 - loss: 0.3180 - val_accuracy: 0.7236 - val_loss: 1.1325\n", "Epoch 38/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m6s\u001b[0m 6ms/step - accuracy: 0.8772 - loss: 0.3432 - val_accuracy: 0.7246 - val_loss: 1.1593\n", "Epoch 39/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8844 - loss: 0.3239 - val_accuracy: 0.7244 - val_loss: 1.1873\n", "Epoch 40/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8886 - loss: 0.3056 - val_accuracy: 0.7154 - val_loss: 1.2173\n", "Epoch 41/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8945 - loss: 0.2932 - val_accuracy: 0.7124 - val_loss: 1.2767\n", "Epoch 42/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m5s\u001b[0m 5ms/step - accuracy: 0.8900 - loss: 0.3043 - val_accuracy: 0.7230 - val_loss: 1.2550\n", "Epoch 43/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8941 - loss: 0.2936 - val_accuracy: 0.7208 - val_loss: 1.2914\n", "Epoch 44/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8964 - loss: 0.2842 - val_accuracy: 0.7248 - val_loss: 1.2318\n", "Epoch 45/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8933 - loss: 0.2893 - val_accuracy: 0.7212 - val_loss: 1.3048\n", "Epoch 46/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 5ms/step - accuracy: 0.8979 - loss: 0.2843 - val_accuracy: 0.7208 - val_loss: 1.3156\n", "Epoch 47/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.9005 - loss: 0.2698 - val_accuracy: 0.7052 - val_loss: 1.3691\n", "Epoch 48/50\n", "\u001b[1m704/704\u001b[0m 
\u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.9024 - loss: 0.2705 - val_accuracy: 0.7152 - val_loss: 1.3893\n", "Epoch 49/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 4ms/step - accuracy: 0.8964 - loss: 0.2817 - val_accuracy: 0.7234 - val_loss: 1.3403\n", "Epoch 50/50\n", "\u001b[1m704/704\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m3s\u001b[0m 5ms/step - accuracy: 0.8998 - loss: 0.2719 - val_accuracy: 0.7224 - val_loss: 1.2929\n" ] }, { "output_type": "execute_result", "data": { "text/plain": [ "