Machine Learning course as part of the Student Days, summer semester 2023
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# A simple neural network with one hidden layer in pure Python\n",
"\n",
"## Introduction\n",
"We consider a simple feed-forward neural network with one hidden layer:"
]
},
{
  14. "attachments": {
  15. "48b1ed6e-8e2b-4883-82ac-a2bbed6e2885.png": {
  16. "image/png": "iVBORw0KGgoAAAANSUhEUgAAASwAAAEsCAYAAAB5fY51AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjYuMCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy89olMNAAAACXBIWXMAAA9hAAAPYQGoP6dpAABEKklEQVR4nO2ddVxU6ffHD62EgYWBhd2K2F2767prrmLHioqBrWv3rh1rdxeIioGN3WAuusbaIqlISc39/P7gN8/XkYY7DHc879drXvta5855zjzc+5mnzjkGAEAMwzAKwFDXDjAMw6QVFiyGYRQDCxbDMIqBBYthGMXAgsUwjGJgwWIYRjGwYDEMoxhYsBiGUQwsWAzDKAYWLIZhFAMLFsMwioEFi2EYxcCCxTCMYmDBYhhGMbBgMQyjGFiwGIZRDCxYDMMoBhYshmEUAwsWwzCKgQWLYRjFwILFMIxiYMFiGEYxsGAxDKMYWLAYhlEMLFgMwygGFiyGYRQDCxbDMIqBBYthGMXAgsUwjGJgwWIYRjGwYDEMoxhYsBiGUQwsWAzDKAYWLIZhFAMLFsMwioEFi2EYxcCCxTCMYmDBYhhGMbBgMQyjGFiwGIZRDCxYDMMoBhYshmEUAwsWwzCKgQWLYRjFwILFMIxiMNa1A0zqxMTEkL+/P8XHx5OVlRUVKFCADAwMdO2WXsN9nj3hEVY25dGjRzRq1CiqWbMmWVpaUsmSJalMmTJUqFAhKly4MLVr14527NhB0dHRunZVb+A+z/4YAICunWD+x7Nnz2jYsGF05swZKliwILVr147s7e2pTJkyZGxsTB8/fqR79+7RlStX6OLFi2RtbU3Tp0+nESNGkKEh//5kBO5zBQEm27B69WrkzJkTpUuXxp49exATE5Pi9U+fPsWgQYNARGjUqBHevn2bRZ7qD9znyoIFK5swbdo0EBGGDh2KiIiIdH32woULsLW1RfHixfHixQsteah/cJ8rDxasbMD69etBRFiwYEGGbbx9+xZ2dnYoW7YswsPDZfROP+E+VyYsWDrmv//+g7m5OQYPHpxpW8+ePYO5uTmGDh0qg2f6C/e5cmHB0jHt2rVDyZIlZfuFXrlyJYgIt2/flsWePsJ9rlxYsHTIs2fPQETYsmWLbDbj4+NRokQJ9OnTRzab+gT3ubLhPVkdsmXLFsqbNy85OjrKZtPIyIicnZ1p//79FBYWJptdfYH7XNmwYOmQq1evUqtWrShnzpyy2m3Xrh3FxMSQj4+PrHb1Ae5zZcOCpSMkSaI7d+5Q7dq1ZbddoUIFMjc354fnG7jPlQ8Llo6IiIigiIgIKlGihOy2jYyMyNbWlj58+CC7bSXDfa58WLB0BP4/IkpbAbWGhoaiDSYB7nPlw4KlIywsLMjU1JQCAgJktw2AAgICyNraWnbbSob7XPmwYOkIY2Njqlq1qlbWPF69ekUfP36kmjVrym5byXCfKx8WLB1St25dOn/+PKlUKlntenl5kYGBgVYWl5UO97myYcHSIX369KE3b97QiRMnZLMJgP7++28qV66c7Fv3+oC2+nzNmjX0448/UqFChWSzyySG82HpEADk4OBAxsbGdPXqVTIyMsq0zePHj1O7du2IiChv3rw0ZswYcnFxoVy5cmXatj6gzT4/duwY/fzzzzJ4ySSLbg7YM2quXLkCAwMDLFq0KNO2Pn36hCJFiqB69eooV64ciAhEhDx58mDWrFkIDQ2VwWPlo40+//HHHyFJkgzeMSnBgpUNGDt2LExMTHD8+PEM24iKikLLli2RO3duvH37FvHx8dizZw8qVKigIVwzZ87Ep0+f5HNeoWijzxntw4KVDYiNjUX79u1hYmKCzZs3p/uX+t27d2jcuDFy5syJixcvarwXHx+PvXv3olKlSkK4cufOjenTp+Pjx49yfg1FIUefN2nSJMk+Z7QHC1Y2ITY2FgMGDAARoW3btvj3339T/Ux0dDTWr18PS0tL5MiRAxcuXEj2WpVKhf3796Ny5cpCuHLlyoVp06YhJCREzq+iGDLT57lz54aNjQ2uXLmSBZ4yaliwshlHjhxBkSJFQERo2bIlVq1ahevXryM4OBifP3/Gixcv4O7ujjFjxiBPnjwgIpiZmYGIMHfu3FTtq1QquLm5oUqVKkK4rKysMGXKFAQHB8v+fVQqFaKiohAbGyu7bblIT59bW1uDiFCqVKnvVuh1CQtWNuTLly/YuXMnGjduDBMTEyEsX7+MjY3FKGnz5s0gIpiYmOD+/ftpakOlUuHAgQOoVq2asGlpaYlJkyYhKCgow76rVCqcOXMGTk5OqFmzpob/NjY2aNu2LRYsWICAgIAMt6EN0tLnNjY26NGjB4gIBgYG8PX11bXb3x18rCGbExMTQw8fPqQ3b96QSqUiKysrCg0Npe7du5ORkRGpVCratm0bHTp0iDw8PKhmzZp08+ZNMjExSZN9SZLIw8ODZs2aRffv3yciIktLSxo+fDiNHTuW8ufPn2Zf3dzcaMqUKfTs2TMqV64cNWrUiGrUqEHW1tYUHx9Pz549Ix8fH7pw4QKpVCrq0aMHLVy4kAoWLJihvtEWSfV5tWrVqEiRIkRE1KlTJzp06BA5OjrS3r17deztd4auFZNJP9HR0TA3Nxe//A0aNMCHDx/EdGXWrFnptqlSqXDo0CHUrFlT2LWwsMCECRMQGBiY4mdDQkLQpUsXEBF++eUXXLp0KcVF7JCQECxZsgT58uVD/vz54e7unm5/dcn9+/fFKOuff/7RtTvfFSxYCuWXX34RDw0R4eHDh9i7d6+YLt69ezdDdiVJgoeHB2rVqiWEy9zcHOPGjUtyGhcQEIAqVarA2toarq6u6WorICAAHTt2BBFh9erVGfJXV3Tu3BlEhK5du+rale8KFiyFsmbNGhAR8uXLByLC8OHDIUmSeJCqVauWalHQlJAkCUePHoW9vb2GcI0dOxb+/v4AEkZ69vb2KFSoEB49epThdkaOHAkigpubW4b9zWoePHggfjAePnyoa3e+G1iwFMrLly9BRDA0NBRnqyIjIxEQEID8+fODiDBt2rRMtyNJEo4dOwYHBwchXDlz5sTo0aPh4uICExMT3LlzJ9NtdOnSBdbW1vjw4UOmfc4qfvvtNxARunTpomtXvhtYsBSM+jBowYIFNSrBuLq6gohgZGQEb29vWdqSJAmenp6oW7euEC4DAwPMmTNHFvtBQUEoWLAgunfvLou9rODhw4diSp7W3Vkmc7BgKZixY8eCiMR6U926dcV7Xbt2BRGhSpUqiI6Olq1NSZJw8uRJFCxYEAUKFMjUtPNbVq5cCSMjI7x79042m9pG3c+dOnXStSvfBSxYCubcuXMgIuTPnx9GRkYgIty7dw/A/0YsRITJkyfL2m5ERAQsLCwwY8YMWe1+/vwZFhYWmD17tqx2tYmvr68YZan7ntEenA9LwTRq1IgsLS0pODiYmjdvTkRE69evJyKi/Pnz09q1a4mIaP78+XT79m3Z2r179y5FRkZShw4dZLNJRJQrVy5q1aoVXbp
0SVa72qRSpUrUrVs3IiKaNWuWjr3Rf1iwFIypqSm1atWKiIiKFStGRES7du2iiIgIIko44Ni9e3eSJIn69u1L0dHRsrTr7e1NZmZmVLlyZVnsfY29vT15e3srqpjD9OnTycDAgA4dOkT37t3TtTt6DQuWwmnbti0RET158oTKlClD4eHhtG/fPvH+ypUrqVChQvT48WOaOXOmLG2+ffuWSpQokebT9OmhTJkyFBoaSpGRkbLb1hYVK1ak7t27ExHJ1sdM0rBgKZyffvqJiIhu3LhBPXv2JKL/TQuJiPLlyyf+f9GiRXTjxo1Mt6lSqcjY2DjTdpJCnQFU7pzr2mbatGlkaGhIHh4edOfOHV27o7ewYCmcYsWKUdWqVQkA2djYkKmpKXl7e2s8NO3bt6fevXuTJEnUr18/+vLlS6bazJMnDwUHB2fW9SQJCQkhY2NjMjc314p9bVGhQgUeZWUBLFh6gHpaePXqVerUqRMRaY6yiIhWrFhBhQsXpidPntC0adMy1V716tUpMDCQ/Pz8MmUnKe7c
  17. }
  18. },
  19. "cell_type": "markdown",
  20. "metadata": {},
  21. "source": [
  22. "![nn.png](attachment:48b1ed6e-8e2b-4883-82ac-a2bbed6e2885.png)"
  23. ]
  24. },
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example the input vector of the neural network has two features, i.e., the input is a two-dimensional vector:\n",
"\n",
"$$\n",
"\\mathbf x = (x_0, x_1).\n",
"$$\n",
"\n",
"We consider a set of $m$ vectors as training data. The training data can therefore be written as an $m \\times 2$ matrix where each row represents a feature vector:\n",
"\n",
"$$ \n",
"X = \n",
"\\begin{pmatrix}\n",
"x_{00} & x_{01} \\\\\n",
"x_{10} & x_{11} \\\\\n",
"\\vdots & \\vdots \\\\\n",
"x_{m-1\\,0} & x_{m-1\\,1} \n",
"\\end{pmatrix} $$\n",
"\n",
"The known labels (1 = 'signal', 0 = 'background') are stored in an $m$-dimensional column vector $\\mathbf y$.\n",
"\n",
"In the following, $n_1$ denotes the number of neurons in the hidden layer. The weights for the connections from the input layer (layer 0) to the hidden layer (layer 1) are given by the following matrix:\n",
"\n",
"$$\n",
"W^{(1)} = \n",
"\\begin{pmatrix}\n",
"w_{00}^{(1)} & \\dots & w_{0 \\, n_1-1}^{(1)} \\\\\n",
"w_{10}^{(1)} & \\dots & w_{1 \\, n_1-1}^{(1)} \n",
"\\end{pmatrix}\n",
"$$\n",
"\n",
"Each neuron in the hidden layer is assigned a bias $\\mathbf b^{(1)} = (b^{(1)}_0, \\ldots, b^{(1)}_{n_1-1})$. The neuron in the output layer has the bias $\\mathbf b^{(2)}$. With that, the output values of the network for the matrix $X$ of input feature vectors are given by\n",
"\n",
"$$\n",
"\\begin{align}\n",
"Z^{(1)} &= X W^{(1)} + \\mathbf b^{(1)} \\\\\n",
"A^{(1)} &= \\sigma(Z^{(1)}) \\\\\n",
"Z^{(2)} &= A^{(1)} W^{(2)} + \\mathbf b^{(2)} \\\\\n",
"A^{(2)} &= \\sigma(Z^{(2)})\n",
"\\end{align}\n",
"$$\n",
"\n",
"The loss function for a given set of weights is given by\n",
"\n",
"$$ L = \\sum_{k=0}^{m-1} (y_{\\mathrm{pred},k} - y_{\\mathrm{true},k})^2 $$\n",
"\n",
"We can now calculate the gradient of the loss function w.r.t. the weights. With the definition $\\tilde L = (y_\\mathrm{pred} - y_\\mathrm{true})^2$ for the contribution of a single training example $k$, the gradients for the weights from the hidden layer to the output layer are given by:\n",
"\n",
"$$ \\frac{\\partial \\tilde L}{\\partial w_i^{(2)}} = \\frac{\\partial \\tilde L}{\\partial a_k^{(2)}} \\frac{\\partial a_k^{(2)}}{\\partial w_i^{(2)}} = \\frac{\\partial \\tilde L}{ \\partial a_k^{(2)}} \\frac{\\partial a_k^{(2)}}{ \\partial z_k^{(2)}} \\frac{\\partial z_k^{(2)}}{\\partial w_i^{(2)}} = 2 (a_k^{(2)} - y_k) \\, a_k^{(2)} (1 - a_k^{(2)}) \\, a_{k,i}^{(1)} \\equiv \\delta^{(2)}_k a_{k,i}^{(1)}$$\n",
"\n",
"Note that it is assumed here that the activation function is a sigmoid, whose derivative is\n",
"\n",
"$$ \\sigma'(x) = \\sigma(x) \\cdot (1 - \\sigma(x)). $$\n",
"\n",
"Applying the chain rule further, we obtain the gradient for the weights from the input layer to the hidden layer:\n",
"\n",
"$$ \\frac{\\partial \\tilde L}{\\partial w_{ij}^{(1)}} = \\frac{\\partial \\tilde L}{\\partial a_k^{(2)}} \\frac{\\partial a_k^{(2)}}{\\partial z_k^{(2)}} \\frac{\\partial z_k^{(2)}}{\\partial a_{k,j}^{(1)}} \\frac{\\partial a_{k,j}^{(1)}}{\\partial z_{k,j}^{(1)}} \\frac{\\partial z_{k,j}^{(1)}}{\\partial w_{ij}^{(1)}} $$\n"
]
},
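{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of the derivation we can compare the analytic gradient $\\delta^{(2)}_k a_{k,i}^{(1)}$ with a finite-difference approximation of $\\partial \\tilde L / \\partial w_i^{(2)}$ for a single training example (a minimal sketch; the toy values and variable names below are illustrative only):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Numerical cross-check of the analytic gradient formula derived above\n",
"import numpy as np\n",
"\n",
"def sigmoid(x):\n",
"    return 1 / (1 + np.exp(-x))\n",
"\n",
"rng = np.random.default_rng(0)\n",
"x = rng.normal(size=2)   # one input vector with two features\n",
"y_true = 1.0             # its label\n",
"W1, b1 = rng.normal(size=(2, 3)), rng.normal(size=3)\n",
"W2, b2 = rng.normal(size=(3, 1)), rng.normal(size=1)\n",
"\n",
"def loss(W2_):\n",
"    a1 = sigmoid(x @ W1 + b1)\n",
"    a2 = sigmoid(a1 @ W2_ + b2)\n",
"    return ((a2 - y_true)**2).item()\n",
"\n",
"# analytic gradient: delta2 * a1_i\n",
"a1 = sigmoid(x @ W1 + b1)\n",
"a2 = sigmoid(a1 @ W2 + b2).item()\n",
"delta2 = 2 * (a2 - y_true) * a2 * (1 - a2)\n",
"grad_analytic = delta2 * a1\n",
"\n",
"# finite-difference approximation\n",
"eps = 1e-6\n",
"grad_numeric = np.zeros(3)\n",
"for i in range(3):\n",
"    W2p, W2m = W2.copy(), W2.copy()\n",
"    W2p[i, 0] += eps\n",
"    W2m[i, 0] -= eps\n",
"    grad_numeric[i] = (loss(W2p) - loss(W2m)) / (2 * eps)\n",
"\n",
"print(grad_analytic)\n",
"print(grad_numeric)"
]
},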
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## A simple neural network class"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A simple feed-forward neural network with one hidden layer\n",
"# see also https://towardsdatascience.com/how-to-build-your-own-neural-network-from-scratch-in-python-68998a08e4f6\n",
"\n",
"import numpy as np\n",
"\n",
"class NeuralNetwork:\n",
"    def __init__(self, x, y):\n",
"        n1 = 3  # number of neurons in the hidden layer\n",
"        self.input = x\n",
"        self.weights1 = np.random.rand(self.input.shape[1], n1)\n",
"        self.bias1 = np.random.rand(n1)\n",
"        self.weights2 = np.random.rand(n1, 1)\n",
"        self.bias2 = np.random.rand(1)\n",
"        self.y = y\n",
"        self.output = np.zeros(y.shape)\n",
"        self.learning_rate = 0.01\n",
"        self.n_train = 0\n",
"        self.loss_history = []\n",
"\n",
"    def sigmoid(self, x):\n",
"        return 1 / (1 + np.exp(-x))\n",
"\n",
"    def sigmoid_derivative(self, x):\n",
"        # x is assumed to be sigmoid(z), so this is sigma(z) * (1 - sigma(z))\n",
"        return x * (1 - x)\n",
"\n",
"    def feedforward(self):\n",
"        self.layer1 = self.sigmoid(self.input @ self.weights1 + self.bias1)\n",
"        self.output = self.sigmoid(self.layer1 @ self.weights2 + self.bias2)\n",
"\n",
"    def backprop(self):\n",
"        # delta2: [m, 1], m = number of training data;\n",
"        # (y - output) is minus the gradient, so the updates below are added\n",
"        delta2 = 2 * (self.y - self.output) * self.sigmoid_derivative(self.output)\n",
"\n",
"        # Gradient w.r.t. weights from hidden to output layer: [n1, 1] matrix,\n",
"        # n1 = # neurons in hidden layer; self.layer1.T: [n1, m] matrix\n",
"        d_weights2 = self.layer1.T @ delta2\n",
"        d_bias2 = np.sum(delta2)\n",
"\n",
"        # shape of delta1: [m, n1]\n",
"        delta1 = (delta2 @ self.weights2.T) * self.sigmoid_derivative(self.layer1)\n",
"        d_weights1 = self.input.T @ delta1\n",
"        d_bias1 = np.ones(delta1.shape[0]) @ delta1\n",
"\n",
"        # update weights and biases\n",
"        self.weights1 += self.learning_rate * d_weights1\n",
"        self.weights2 += self.learning_rate * d_weights2\n",
"\n",
"        self.bias1 += self.learning_rate * d_bias1\n",
"        self.bias2 += self.learning_rate * d_bias2\n",
"\n",
"    def train(self, X, y):\n",
"        self.input = X\n",
"        self.y = y\n",
"        self.feedforward()\n",
"        self.backprop()\n",
"        self.n_train += 1\n",
"        if self.n_train % 1000 == 0:\n",
"            loss = np.sum((self.y - self.output)**2)\n",
"            print(\"loss: \", loss)\n",
"            self.loss_history.append(loss)\n",
"\n",
"    def predict(self, X):\n",
"        self.input = X\n",
"        self.feedforward()\n",
"        return self.output\n"
]
},
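{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before training on a real dataset, a minimal smoke test confirms that all matrix shapes fit together (the XOR-like demo points are illustrative only):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Smoke test: four 2d points with labels, one feedforward/backprop step\n",
"X_demo = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])\n",
"y_demo = np.array([[0.], [1.], [1.], [0.]])\n",
"\n",
"nn_demo = NeuralNetwork(X_demo, y_demo)\n",
"nn_demo.feedforward()\n",
"print(nn_demo.output.shape)  # (4, 1): one prediction per input row\n",
"nn_demo.backprop()           # one gradient step updates all weights and biases"
]
},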
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create toy data\n",
"We create three toy data sets:\n",
"1. two moon-like distributions\n",
"2. circles\n",
"3. linearly separable data sets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html#sphx-glr-auto-examples-classification-plot-classifier-comparison-py\n",
"import numpy as np\n",
"from sklearn.datasets import make_moons, make_circles, make_classification\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"X, y = make_classification(\n",
"    n_features=2, n_redundant=0, n_informative=2, random_state=1, n_clusters_per_class=1\n",
")\n",
"rng = np.random.RandomState(2)\n",
"X += 2 * rng.uniform(size=X.shape)\n",
"linearly_separable = (X, y)\n",
"\n",
"datasets = [\n",
"    make_moons(n_samples=200, noise=0.1, random_state=0),\n",
"    make_circles(n_samples=200, noise=0.1, factor=0.5, random_state=1),\n",
"    linearly_separable,\n",
"]"
]
},
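{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick look at the three toy datasets before we pick one (the 1x3 plot layout is just one possible choice):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"fig, axs = plt.subplots(nrows=1, ncols=3, figsize=(12, 4))\n",
"for ax, (Xd, yd), title in zip(axs, datasets, [\"moons\", \"circles\", \"linearly separable\"]):\n",
"    ax.scatter(Xd[:, 0], Xd[:, 1], c=yd, cmap=\"bwr\", edgecolors=\"k\")\n",
"    ax.set_title(title)"
]
},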
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create training and test data set"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# datasets: 0 = moons, 1 = circles, 2 = linearly separable\n",
"X, y = datasets[1]\n",
"X_train, X_test, y_train, y_test = train_test_split(\n",
"    X, y, test_size=0.4, random_state=42\n",
")\n",
"\n",
"x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5\n",
"y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# the network expects the labels as an [m, 1] column vector\n",
"y_train = y_train.reshape(-1, 1)\n",
"\n",
"nn = NeuralNetwork(X_train, y_train)\n",
"\n",
"for i in range(100000):\n",
"    nn.train(X_train, y_train)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Plot the loss vs. the number of epochs"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"plt.plot(nn.loss_history)\n",
"plt.xlabel(\"# epochs / 1000\")\n",
"plt.ylabel(\"loss\")"
]
},
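{
"cell_type": "markdown",
"metadata": {},
"source": [
"One simple way to quantify the performance is the classification accuracy on the held-out test set (here with 0.5 as the decision threshold):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fraction of test points classified correctly (threshold at 0.5)\n",
"y_pred = nn.predict(X_test)\n",
"accuracy = np.mean((y_pred.ravel() > 0.5) == y_test)\n",
"print(\"test accuracy:\", accuracy)"
]
},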
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Plot the decision boundary"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"from matplotlib.colors import ListedColormap\n",
"\n",
"cm_bright = ListedColormap([\"#FF0000\", \"#0000FF\"])\n",
"\n",
"# evaluate the network on a grid covering the feature plane\n",
"xv = np.linspace(x_min, x_max, 10)\n",
"yv = np.linspace(y_min, y_max, 10)\n",
"Xv, Yv = np.meshgrid(xv, yv)\n",
"XYpairs = np.vstack([Xv.reshape(-1), Yv.reshape(-1)])\n",
"zv = nn.predict(XYpairs.T)\n",
"Zv = zv.reshape(Xv.shape)\n",
"\n",
"fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(9, 7))\n",
"ax.set_aspect(1)\n",
"cn = ax.contourf(Xv, Yv, Zv, cmap=\"coolwarm_r\", alpha=0.4)\n",
"\n",
"# Plot the training points\n",
"ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train.ravel(), cmap=cm_bright, edgecolors=\"k\")\n",
"\n",
"# Plot the testing points\n",
"ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.4, edgecolors=\"k\")\n",
"\n",
"ax.set_xlim(x_min, x_max)\n",
"ax.set_ylim(y_min, y_max)\n",
"# ax.set_xticks(())\n",
"# ax.set_yticks(())\n",
"\n",
"fig.colorbar(cn)\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.16"
},
"vscode": {
"interpreter": {
"hash": "b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}