ACTL3143 & ACTL5111 Deep Learning for Actuaries
Lecture Outline
Text Generation
Sampling strategy
Transformers
Image Generation
Neural style transfer
Autoencoders
Variational Autoencoders
Diffusion Models
Generating sequential data is the closest computers get to dreaming.
Source: Alex Graves (2013), Generating Sequences With Recurrent Neural Networks
Source: Marcus Lautier (2022).
Source: Tensorflow tutorial, Text generation with an RNN.
| RNN output | Decoded Transcription |
|---|---|
| what is the weather like in bostin right now | what is the weather like in boston right now |
| prime miniter nerenr modi | prime minister narendra modi |
| arther n tickets for the game | are there any tickets for the game |
Source: Hannun et al. (2014), Deep Speech: Scaling up end-to-end speech recognition, arXiv:1412.5567, Table 1.
ROMEO:
Why, sir, what think you, sir?
AUTOLYCUS:
A dozen; shall I be deceased.
The enemy is parting with your general,
As bias should still combit them offend
That Montague is as devotions that did satisfied;
But not they are put your pleasure.
Source: Tensorflow tutorial, Text generation with an RNN.
DUKE OF YORK:
Peace, sing! do you must be all the law;
And overmuting Mercutio slain;
And stand betide that blows which wretched shame;
Which, I, that have been complaints me older hours.
LUCENTIO:
What, marry, may shame, the forish priest-lay estimest you, sir,
Whom I will purchase with green limits o’ the commons’ ears!
Source: Tensorflow tutorial, Text generation with an RNN.
ANTIGONUS:
To be by oath enjoin’d to this. Farewell!
The day frowns more and more: thou’rt like to have
A lullaby too rough: I never saw
The heavens so dim by day. A savage clamour!
[Exit, pursued by a bear]
Lecture Outline
Text Generation
Sampling strategy
Transformers
Image Generation
Neural style transfer
Autoencoders
Variational Autoencoders
Diffusion Models
Idea inspired by Mehta (2023), The need for sampling temperature and differences between whisper, GPT-3, and probabilistic model’s temperature
In today’s lecture we will be different situation. So, next one is what they rective that each commit to be able to learn some relationships from the course, and that is part of the image that it’s very clese and black problems that you’re trying to fit the neural network to do there instead of like a specific though shef series of layers mean about full of the chosen the baseline of car was in the right, but that’s an important facts and it’s a very small summary with very scrort by the beginning of the sentence.
In today’s lecture we will decreas before model that we that we have to think about it, this mightsks better, for chattely the same project, because you might use the test set because it’s to be picked up the things that I wanted to heard of things that I like that even real you and you’re using the same thing again now because we need to understand what it’s doing the same thing but instead of putting it in particular week, and we can say that’s a thing I mainly link it’s three columns.
In today’s lecture we will probably the adw n wait lots of ngobs teulagedation to calculate the gradient and then I’ll be less than one layer the next slide will br input over and over the threshow you ampaigey the one that we want to apply them quickly. So, here this is the screen here the main top kecw onct three thing to told them, and the output is a vertical variables and Marceparase of things that you’re moving the blurring and that just data set is to maybe kind of categorical variants here but there’s more efficiently not basically replace that with respect to the best and be the same thing.
In today’s lecture we will put it different shates to touch on last week, so I want to ask what are you object frod current. They don’t have any zero into it, things like that which mistakes. 10 claims that the average version was relden distever ditgs and Python for the whole term wo long right to really. The name of these two options. There are in that seems to be modified version. If you look at when you’re putting numbers into your, that that’s over. And I went backwards, up, if they’rina functional pricing working with.
In today’s lecture we will put it could be bedinnth. Lowerstoriage nruron. So rochain the everything that I just sGiming. If there was a large. It’s gonua draltionation. Tow many, up, would that black and 53% that’s girter thankAty will get you jast typically stickK thing. But maybe. Anyway, I’m going to work on this libry two, past, at shit citcs jast pleming to memorize overcamples like pre pysing, why wareed to smart a one in this reportbryeccuriay.
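The five passages above continue the same prompt at increasing sampling temperatures, degrading from coherent to gibberish. A minimal sketch of temperature sampling, assuming we already have a vector of next-token logits from a trained language model:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_next_token(logits, temperature=1.0):
    # Low temperature sharpens the distribution (safer, more repetitive
    # text); high temperature flattens it (more surprising, more errors).
    scaled = np.asarray(logits) / temperature
    # Numerically stable softmax.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)
```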
Source: Payne (2021), What is beam search, Width.ai blog.
Source: Doshi (2021), Foundations of NLP Explained Visually: Beam Search, How It Works, towardsdatascience.com.
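Beam search, described in the sources above, keeps the few most probable partial sequences at each step instead of greedily committing to the single best token. A hedged sketch (the `next_log_probs` model call is an assumed stand-in):

```python
import numpy as np

def beam_search(next_log_probs, start_token, beam_width=3, max_len=10):
    # Each beam is a (sequence, cumulative log-probability) pair.
    beams = [([start_token], 0.0)]
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            log_probs = next_log_probs(seq)  # assumed model call
            # Extend each beam with its `beam_width` best next tokens.
            for token in np.argsort(log_probs)[-beam_width:]:
                candidates.append((seq + [int(token)], score + log_probs[token]))
        # Keep only the `beam_width` best sequences overall.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams
```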
Lecture Outline
Text Generation
Sampling strategy
Transformers
Image Generation
Neural style transfer
Autoencoders
Variational Autoencoders
Diffusion Models
GPT makes use of a mechanism known as attention, which removes the need for recurrent layers (e.g., LSTMs). It works like an information retrieval system, utilizing queries, keys, and values to decide how much information it wants to extract from each input token.
Attention heads can be grouped together to form what is known as a multihead attention layer. These are then wrapped up inside a Transformer block, which includes layer normalization and skip connections around the attention layer. Transformer blocks can be stacked to create very deep neural networks.
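A minimal NumPy sketch of the scaled dot-product attention just described, for a single head (all shapes and names are illustrative assumptions):

```python
import numpy as np

def attention(Q, K, V):
    # Q, K, V: (sequence length, depth) matrices produced by learned
    # linear projections of the input token embeddings.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax over the keys (row-wise, numerically stabilised).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output token is a weighted average of the values.
    return weights @ V
```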
Highly recommended viewing: Iulia Turk (2021), Transfer learning and Transformer models, ML Tech Talks.
Source: David Foster (2023), Generative Deep Learning, 2nd Edition, O’Reilly Media, Chapter 9.
context = """
StoryWall Formative Discussions: An initial StoryWall, worth 2%, is due by noon on June 3. The following StoryWalls are worth 4% each (taking the best 7 of 9) and are due at noon on the following dates:
The project will be submitted in stages: draft due at noon on July 1 (10%), recorded presentation due at noon on July 22 (15%), final report due at noon on August 1 (15%).
As a student at UNSW you are expected to display academic integrity in your work and interactions. Where a student breaches the UNSW Student Code with respect to academic integrity, the University may take disciplinary action under the Student Misconduct Procedure. To assure academic integrity, you may be required to demonstrate reasoning, research and the process of constructing work submitted for assessment.
To assist you in understanding what academic integrity means, and how to ensure that you do comply with the UNSW Student Code, it is strongly recommended that you complete the Working with Academic Integrity module before submitting your first assessment task. It is a free, online self-paced Moodle module that should take about one hour to complete.
StoryWall (30%)
The StoryWall format will be used for small weekly questions. Each week of questions will be released on a Monday, and most of them will be due the following Monday at midday (see assessment table for exact dates). Students will upload their responses to the question sets, and give comments on another student's submission. Each week will be worth 4%, and the grading is pass/fail, with the best 7 of 9 being counted. The first week's basic 'introduction' StoryWall post is counted separately and is worth 2%.
Project (40%)
Over the term, students will complete an individual project. There will be a selection of deep learning topics to choose from (this will be outlined during Week 1).
The deliverables for the project will include: a draft/progress report mid-way through the term, a presentation (recorded), a final report including a written summary of the project and the relevant Python code (Jupyter notebook).
Exam (30%)
The exam will test the concepts presented in the lectures. For example, students will be expected to: provide definitions for various deep learning terminology, suggest neural network designs to solve risk and actuarial problems, give advice to mock deep learning engineers whose projects have hit common roadblocks, find/explain common bugs in deep learning Python code.
"""
{'score': 0.5019668340682983, 'start': 2092, 'end': 2095, 'answer': '30%'}
{'score': 0.2127601057291031, 'start': 1778, 'end': 1791, 'answer': 'deep learning'}
{'score': 0.5296486020088196, 'start': 1319, 'end': 1335, 'answer': 'Monday at midday'}
At the time of writing, there is no official paper that describes how ChatGPT works in detail, but from the official blog post we know that it uses a technique called reinforcement learning from human feedback (RLHF) to fine-tune the GPT-3.5 model.
While ChatGPT still has many limitations (such as sometimes “hallucinating” factually incorrect information), it is a powerful example of how Transformers can be used to build generative models that can produce complex, long-ranging, and novel output that is often indistinguishable from human-generated text. The progress made thus far by models like ChatGPT serves as a testament to the potential of AI and its transformative impact on the world.
Source: David Foster (2023), Generative Deep Learning, 2nd Edition, O’Reilly Media, Chapter 9.
Source: OpenAI blog.
Lecture Outline
Text Generation
Sampling strategy
Transformers
Image Generation
Neural style transfer
Autoencoders
Variational Autoencoders
Diffusion Models
A CNN is a function f_{\boldsymbol{\theta}}(\mathbf{x}) that takes a vector (image) \mathbf{x} and returns a vector (distribution) \widehat{\mathbf{y}}.
Normally, we train it by modifying \boldsymbol{\theta} so that
\boldsymbol{\theta}^*\ =\ \underset{\boldsymbol{\theta}}{\mathrm{argmin}} \,\, \text{Loss} \bigl( f_{\boldsymbol{\theta}}(\mathbf{x}), \mathbf{y} \bigr).
However, it is also possible to leave the network's weights fixed and instead modify the input \mathbf{x}:
\mathbf{x}^*\ =\ \underset{\mathbf{x}}{\mathrm{argmin}} \,\, \text{Loss} \bigl( f_{\boldsymbol{\theta}}(\mathbf{x}), \mathbf{y} \bigr).
This is very slow, as it requires a full gradient-descent optimisation for every image we want to generate.
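A sketch of that input-optimisation loop using TensorFlow's gradient tape, assuming a trained `model`, a starting image `initial_image`, and a target distribution `y` (all three are assumptions):

```python
import tensorflow as tf

x = tf.Variable(initial_image[None, ...])  # optimise the image, not the weights
opt = tf.keras.optimizers.Adam(learning_rate=0.01)
loss_fn = tf.keras.losses.CategoricalCrossentropy()

for _ in range(100):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))
    grad = tape.gradient(loss, x)     # derivative of the loss w.r.t. the pixels
    opt.apply_gradients([(grad, x)])  # gradient step on x, with theta held fixed
```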
Source: Goodfellow et al. (2015), Explaining and Harnessing Adversarial Examples, ICLR.
Source: The Verge (2018), These stickers make computer vision software hallucinate things that aren’t there.
“TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP”
Source: Wikipedia, DeepDream page.
Generated by Keras’ Deep Dream tutorial.
Lecture Outline
Text Generation
Sampling strategy
Transformers
Image Generation
Neural style transfer
Autoencoders
Variational Autoencoders
Diffusion Models
Applying the style of a reference image to a target image while conserving the content of the target image.
Source: François Chollet (2021), Deep Learning with Python, Second Edition, Figure 12.9.
What the model does:
Preserve content by maintaining similar deeper layer activations between the original image and the generated image. The convnet should “see” both the original image and the generated image as containing the same things.
Preserve style by maintaining similar correlations within activations for both low-level and high-level layers. Feature correlations within a layer capture textures: the generated image and the style-reference image should share the same textures at different spatial scales (see the sketch below).
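The style side is usually implemented with Gram matrices of layer activations; a minimal sketch (layer choices and loss weights omitted):

```python
import numpy as np

def gram_matrix(features):
    # features: (height, width, channels) activations from one convnet layer.
    h, w, c = features.shape
    F = features.reshape(h * w, c)
    # Channel-by-channel correlations: this is what captures "texture".
    return F.T @ F / (h * w)

def style_loss(style_features, generated_features):
    # Match the Gram matrices of the style and generated images.
    S = gram_matrix(style_features)
    G = gram_matrix(generated_features)
    return np.sum((S - G) ** 2)
```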
Content
Style
Source: Laub (2018), On Neural Style Transfer, Blog post.
Question
How would you make this faster for one specific style image?
Source: Laub (2018), On Neural Style Transfer, Blog post.
Taking derivatives with respect to the input image can be a first step toward explainable AI for convolutional networks.
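For instance, a basic saliency map needs only one such derivative, here sketched with TensorFlow's gradient tape (assuming a trained `model` and a single `image`):

```python
import tensorflow as tf

x = tf.convert_to_tensor(image[None, ...])
with tf.GradientTape() as tape:
    tape.watch(x)  # x is a constant tensor, so ask the tape to track it
    top_class_score = tf.reduce_max(model(x), axis=-1)
grads = tape.gradient(top_class_score, x)
saliency = tf.abs(grads)[0]  # large values mark influential pixels
```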
Lecture Outline
Text Generation
Sampling strategy
Transformers
Image Generation
Neural style transfer
Autoencoders
Variational Autoencoders
Diffusion Models
An autoencoder takes a data/image, maps it to a latent space via an encoder module, then decodes it back to an output with the same dimensions via a decoder module.
Source: Marcus Lautier (2022).
# Download the dataset if it hasn't already been downloaded.
from pathlib import Path

if not Path("mandarin-split").exists():
    if not Path("mandarin").exists():
        !wget https://laub.au/data/mandarin.zip
        !unzip mandarin.zip

    import splitfolders

    splitfolders.ratio("mandarin", output="mandarin-split",
        seed=1337, ratio=(5/7, 1/7, 1/7))
from keras.utils import image_dataset_from_directory

data_dir = "mandarin-split"
batch_size = 32
img_height = 80
img_width = 80
img_size = (img_height, img_width)

train_ds = image_dataset_from_directory(
    data_dir + "/train",
    image_size=img_size,
    batch_size=batch_size,
    shuffle=False,
    color_mode="grayscale")

val_ds = image_dataset_from_directory(
    data_dir + "/val",
    image_size=img_size,
    batch_size=batch_size,
    shuffle=False,
    color_mode="grayscale")

test_ds = image_dataset_from_directory(
    data_dir + "/test",
    image_size=img_size,
    batch_size=batch_size,
    shuffle=False,
    color_mode="grayscale")
import numpy as np

# Extract the images and labels as arrays, rescaling pixels to [0, 1].
X_train = np.concatenate(list(train_ds.map(lambda x, y: x))) / 255.0
y_train = np.concatenate(list(train_ds.map(lambda x, y: y)))
X_val = np.concatenate(list(val_ds.map(lambda x, y: x))) / 255.0
y_val = np.concatenate(list(val_ds.map(lambda x, y: y)))
X_test = np.concatenate(list(test_ds.map(lambda x, y: x))) / 255.0
y_test = np.concatenate(list(test_ds.map(lambda x, y: y)))
num_hidden_layer = 400
print(f"Compress from {img_height * img_width} pixels to {num_hidden_layer} latent variables.")
Compress from 6400 pixels to 400 latent variables.
import random

import keras
from keras import layers

random.seed(123)

# A dense autoencoder: flatten the image, compress to 400 latent
# variables, then reconstruct the original 80x80 image.
model = keras.models.Sequential([
    layers.Input((img_height, img_width, 1)),
    layers.Flatten(),
    layers.Dense(num_hidden_layer, "relu"),
    layers.Dense(img_height * img_width, "sigmoid"),
    layers.Reshape((img_height, img_width, 1)),
])
model.compile("adam", "binary_crossentropy")

epochs = 1_000
es = keras.callbacks.EarlyStopping(patience=15, restore_best_weights=True)
model.fit(X_train, X_train, epochs=epochs, verbose=0,
    validation_data=(X_val, X_val), callbacks=es);
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ flatten (Flatten)               │ (None, 6400)           │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense (Dense)                   │ (None, 400)            │     2,560,400 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_1 (Dense)                 │ (None, 6400)           │     2,566,400 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ reshape (Reshape)               │ (None, 80, 80, 1)      │             0 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,380,402 (58.67 MB)
Trainable params: 5,126,800 (19.56 MB)
Non-trainable params: 0 (0.00 B)
Optimizer params: 10,253,602 (39.11 MB)
random.seed(123)

# Variant: max-pool the image down to 40x40 before the dense encoder.
model = keras.models.Sequential([
    layers.Input((img_height, img_width, 1)),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(num_hidden_layer, "relu"),
    layers.Dense(img_height * img_width, "sigmoid"),
    layers.Reshape((img_height, img_width, 1)),
])
model.compile("adam", "binary_crossentropy")

es = keras.callbacks.EarlyStopping(patience=15, restore_best_weights=True)
model.fit(X_train, X_train, epochs=epochs, verbose=0,
    validation_data=(X_val, X_val), callbacks=es);
random.seed(123)

# Variant: invert the pixel values (1 - x) so the mostly-white
# background becomes zeros, and invert back at the output.
model = keras.models.Sequential([
    layers.Input((img_height, img_width, 1)),
    layers.Lambda(lambda x: 1 - x),
    layers.Flatten(),
    layers.Dense(num_hidden_layer, "relu"),
    layers.Dense(img_height * img_width, "sigmoid"),
    layers.Lambda(lambda x: 1 - x),
    layers.Reshape((img_height, img_width, 1)),
])
model.compile("adam", "binary_crossentropy")

es = keras.callbacks.EarlyStopping(patience=15, restore_best_weights=True)
model.fit(X_train, X_train, epochs=epochs, verbose=0,
    validation_data=(X_val, X_val), callbacks=es);
Model: "sequential_3"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ lambda (Lambda)                 │ (None, 80, 80, 1)      │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ flatten_2 (Flatten)             │ (None, 6400)           │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_4 (Dense)                 │ (None, 400)            │     2,560,400 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_5 (Dense)                 │ (None, 6400)           │     2,566,400 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ lambda_1 (Lambda)               │ (None, 6400)           │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ reshape_2 (Reshape)             │ (None, 80, 80, 1)      │             0 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 15,380,402 (58.67 MB)
Trainable params: 5,126,800 (19.56 MB)
Non-trainable params: 0 (0.00 B)
Optimizer params: 10,253,602 (39.11 MB)
random.seed(123)

# A convolutional encoder: three conv/max-pool stages down to 10x10x64,
# then a dense layer into the 400-dimensional latent space.
encoder = keras.models.Sequential([
    layers.Input((img_height, img_width, 1)),
    layers.Lambda(lambda x: 1 - x),
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(num_hidden_layer, "relu")
])

# The matching decoder: a dense layer back up to 6400 units, then
# conv/upsampling stages to reconstruct the 80x80 image.
decoder = keras.models.Sequential([
    keras.Input(shape=(num_hidden_layer,)),
    layers.Dense(6400),
    layers.Reshape((20, 20, 16)),
    layers.Conv2D(256, 3, padding="same", activation="relu"),
    layers.UpSampling2D(),
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.UpSampling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.Conv2D(1, 1, padding="same", activation="relu"),
    layers.Lambda(lambda x: 1 - x),
])

model = keras.models.Sequential([encoder, decoder])
model.compile("adam", "binary_crossentropy")

es = keras.callbacks.EarlyStopping(patience=15, restore_best_weights=True)
model.fit(X_train, X_train, epochs=epochs, verbose=0,
    validation_data=(X_val, X_val), callbacks=es);
Model: "sequential_4"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ lambda_2 (Lambda)               │ (None, 80, 80, 1)      │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d (Conv2D)                 │ (None, 80, 80, 16)     │           160 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ max_pooling2d_2 (MaxPooling2D)  │ (None, 40, 40, 16)     │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_1 (Conv2D)               │ (None, 40, 40, 32)     │         4,640 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ max_pooling2d_3 (MaxPooling2D)  │ (None, 20, 20, 32)     │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_2 (Conv2D)               │ (None, 20, 20, 64)     │        18,496 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ max_pooling2d_4 (MaxPooling2D)  │ (None, 10, 10, 64)     │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ flatten_3 (Flatten)             │ (None, 6400)           │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_6 (Dense)                 │ (None, 400)            │     2,560,400 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 2,583,696 (9.86 MB)
Trainable params: 2,583,696 (9.86 MB)
Non-trainable params: 0 (0.00 B)
Model: "sequential_5"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ dense_7 (Dense)                 │ (None, 6400)           │     2,566,400 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ reshape_3 (Reshape)             │ (None, 20, 20, 16)     │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_3 (Conv2D)               │ (None, 20, 20, 256)    │        37,120 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ up_sampling2d (UpSampling2D)    │ (None, 40, 40, 256)    │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_4 (Conv2D)               │ (None, 40, 40, 128)    │       295,040 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ up_sampling2d_1 (UpSampling2D)  │ (None, 80, 80, 128)    │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_5 (Conv2D)               │ (None, 80, 80, 64)     │        73,792 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_6 (Conv2D)               │ (None, 80, 80, 1)      │            65 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ lambda_3 (Lambda)               │ (None, 80, 80, 1)      │             0 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 2,972,417 (11.34 MB)
Trainable params: 2,972,417 (11.34 MB)
Non-trainable params: 0 (0.00 B)
Can be used to do feature engineering for supervised learning problems
It is also possible to include input variables as outputs to infer missing values or just help the model “understand” the features – in fact the winning solution of a claims prediction Kaggle competition heavily used denoising autoencoders together with model stacking and ensembling – read more here.
Jacky Poon
Source: Poon (2021), Multitasking Risk Pricing Using Deep Learning, Actuaries’ Analytical Cookbook.
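A hedged sketch of that denoising idea on tabular features: corrupt the inputs, train the network to reconstruct the clean values, then reuse the learned representation as engineered features (all names and sizes below are illustrative):

```python
import keras
from keras import layers

def make_denoising_autoencoder(p, latent_dim=16, noise_sd=0.1):
    inputs = keras.Input(shape=(p,))
    # GaussianNoise corrupts the inputs during training only.
    corrupted = layers.GaussianNoise(noise_sd)(inputs)
    latent = layers.Dense(latent_dim, activation="relu")(corrupted)
    outputs = layers.Dense(p)(latent)  # reconstruct the clean features
    model = keras.Model(inputs, outputs)
    encoder = keras.Model(inputs, latent)
    model.compile("adam", "mse")
    return model, encoder

# Usage: fit the model with X as both input and target (X standardised),
# then feed encoder.predict(X) into a downstream supervised model.
```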
Lecture Outline
Text Generation
Sampling strategy
Transformers
Image Generation
Neural style transfer
Autoencoders
Variational Autoencoders
Diffusion Models
Source: François Chollet (2021), Deep Learning with Python, Second Edition, Figure 12.17.
Source: François Chollet (2021), Deep Learning with Python, Second Edition, Unnumbered listing in Chapter 12.
Source: François Chollet (2021), Deep Learning with Python, Second Edition, Figure 12.13.
Source: François Chollet (2021), Deep Learning with Python, Second Edition, Figure 12.18.
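The key moving part in these figures is the sampling step between the encoder and decoder. A sketch of a Keras sampling layer in the spirit of Chollet's Chapter 12 listing (the reparameterisation trick; names are assumptions):

```python
import keras
from keras import layers, ops

class Sampler(layers.Layer):
    def call(self, z_mean, z_log_var):
        # Draw z from N(z_mean, exp(z_log_var)) in a way that stays
        # differentiable: z = mean + sd * standard normal noise.
        epsilon = keras.random.normal(shape=ops.shape(z_mean))
        return z_mean + ops.exp(0.5 * z_log_var) * epsilon
```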
Lecture Outline
Text Generation
Sampling strategy
Transformers
Image Generation
Neural style transfer
Autoencoders
Variational Autoencoders
Diffusion Models
from watermark import watermark
print(watermark(python=True, packages="keras,matplotlib,numpy,pandas,seaborn,scipy,torch,tensorflow,tf_keras"))
Python implementation: CPython
Python version : 3.11.9
IPython version : 8.24.0
keras : 3.3.3
matplotlib: 3.9.0
numpy : 1.26.4
pandas : 2.2.2
seaborn : 0.13.2
scipy : 1.11.0
torch : 2.3.1
tensorflow: 2.16.1
tf_keras : 2.16.0