Adversarial Learning with Keras and TensorFlow (Part 1): Overview of Adversarial Learning
by: Shivam Chandhok




In this tutorial, you will learn about adversarial examples and how they affect the reliability of neural network-based computer vision systems. We will discuss the relationship between the robustness and reliability of deep learning models and understand how engineered noise samples, when added to input images, can change model predictions.

Furthermore, we will use Keras and TensorFlow to develop our adversarial examples and see how they change our model’s predictions even though they visibly look the same as the original inputs.

Specifically, we will discuss the following in detail:

  1. The relationship between robustness and reliability of deep learning-based models when deployed for various real-world applications
  2. Adversarial examples and their effects on neural network predictions
  3. Developing our adversarial examples using Keras and TensorFlow
  4. How we can make our models robust to adversarial examples and enhance their reliability

This lesson is the 1st of a 4-part series on Adversarial Learning:

  1. Adversarial Learning with Keras and TensorFlow (Part 1): Overview of Adversarial Learning (this tutorial)
  2. Adversarial Learning with Keras and TensorFlow (Part 2): Implementing the Neural Structured Learning (NSL) Framework and Building a Data Pipeline
  3. Adversarial Learning with Keras and TensorFlow (Part 3): Exploring Adversarial Attacks Using Neural Structured Learning (NSL)
  4. Adversarial Learning with Keras and TensorFlow (Part 4): Enhancing Adversarial Defense and Comparing Models Trained With and Without Neural Structured Learning (NSL)

In this 1st part of the tutorial series, we will develop a holistic understanding of adversarial examples, how they affect our network predictions, and how we can protect our models from such engineered examples.

In the 2nd part of this tutorial series, we will start building our adversarial attacks and defenses using the TensorFlow NSL (Neural Structured Learning) framework, which makes it very easy to implement such adversarial applications. We will understand the dataset and the data pipeline for our application and discuss the salient features of the NSL framework in detail.

Next, in the 3rd part of this tutorial series, we will discuss two types of adversarial attacks used to engineer adversarial examples. Furthermore, we will build our model architecture and related modules using Keras and TensorFlow.

Finally, in the 4th part of the tutorial series, we will look at our application’s training and inference pipeline and implement these routines using the Keras and TensorFlow libraries.

To learn how to create adversarial examples, just keep reading.


Adversarial Learning with Keras and TensorFlow (Part 1): Overview of Adversarial Learning

Recently, neural network-based systems have been used extensively for diverse applications due to their amazing ability to learn or approximate underlying functions from data directly. We have already seen the awesome capabilities of deep neural network-based computer vision models in image classification, object detection, image generation, and various other applications through different blogs and tutorials on PyImageSearch.

However, given their applicability in diverse domains and tasks, it is important to ask if these networks and their predictions are reliable. One way to understand and evaluate the reliability of these models is to quantify their robustness. A more robust model will be less sensitive to changes, have stable predictions, and be more reliable under diverse scenarios. Let us discuss what we exactly mean by robustness and how it relates to the reliability of a model.

The robustness of a model, in simplest terms, can be understood as the change in its predictions given a change in its inputs. There are different ways to look at robustness. For example, given a model trained to recognize cat and dog images from the real world, can it recognize sketches or cartoons of cats and dogs at test time? Or, given the same model, can it recognize images of cats and dogs perturbed with a small, imperceptible (i.e., not noticeable to the naked eye) amount of noise (imagine sprinkling a few grains of salt over the image)?

The first example refers to the case of a visible domain shift: the training data (i.e., cat and dog images from the real world) and the test data (i.e., sketches or cartoons of cats and dogs) look very different in appearance or, in other words, come from very different distributions. Hence, a network trained on one distribution may not be robust to this domain shift and might give incorrect predictions on the other distribution (i.e., sketches or cartoons).

The second example refers to the case where an imperceptible change is made to the images (say, adding a small amount of noise, as in this case). Both training and test distributions look the same to a human. Thus, we might expect that the network will be robust to the small added noise, and there will be no change in its predictions. Let us understand this with an example.

Figure 1 shows two images: one is taken from the real world, and the other has a small amount of imperceptible noise added. Can you tell, just by looking at them, which image contains the noise? No, right?!

Figure 1: Original image of a panda (left) and corresponding adversarial example (right) (source: image by the author).

If we pass the first image (i.e., original image) through the model, it predicts panda as the correct class, and if we do the same for the second image, it predicts badger as the correct class. Notice that even though the images look the same to the naked eye, the prediction of the model changes. We can interpret this fact as follows: our model is not robust to this imperceptible change or noise we added to our image.

This raises a serious concern about the reliability of our model. For example, consider a scenario where our model is used for security purposes: its task is to correctly recognize a person’s face and let them into an office only if the person is an employee.

When a face image of a non-employee is input, our model is trained to identify the face, predict that the person is not an employee, and deny access to the office building. However, what if we add a small amount of engineered noise (not visible to the naked eye) to the face image, which makes the network change its prediction and jeopardizes the office’s security? Is this system reliable?

Such an example, with imperceptible engineered noise added so that it changes the neural network’s prediction, is commonly referred to as an adversarial example.

Note that it is highly important to make our neural network-based systems robust to these adversarial examples so that the model does not drastically change its predictions when it is fed engineered inputs that, to a human, look the same as normal inputs.


Configuring Your Development Environment

To follow this guide, you need to have the TensorFlow and OpenCV libraries installed on your system.

Luckily, both TensorFlow and OpenCV are pip-installable:

$ pip install tensorflow
$ pip install opencv-contrib-python

If you need help configuring your development environment for OpenCV, we highly recommend that you read our pip install OpenCV guide — it will have you up and running in minutes.


Need Help Configuring Your Development Environment?

Need help configuring your dev environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch University — you’ll be up and running with this tutorial in minutes.

All that said, are you:

  • Short on time?
  • Learning on your employer’s administratively locked system?
  • Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?
  • Ready to run the code immediately on your Windows, macOS, or Linux system?

Then join PyImageSearch University today!

Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required.

And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux!


Project Structure

We first need to review our project directory structure.

Start by accessing this tutorial’s “Downloads” section to retrieve the source code and example images.

From there, take a look at the directory structure:

├── demo.py
├── inference.py
├── output
├── pyimagesearch
│   ├── __init__.py
│   ├── callbacks.py
│   ├── config.py
│   ├── data.py
│   ├── model.py
│   ├── robust.py
│   └── visualization.py
└── train.py

The project directory consists of the demo.py file, which gives a small overview of the various facets of the adversarial learning paradigm and generates adversarial examples.

The inference.py file implements the test-time routine for our adversarial learning application, and we will discuss this in detail in the 4th part of this series. The output folder saves our output results and visualizations, as we will see in the later parts of this series.

Furthermore, the pyimagesearch folder contains the following:

  • The implementations for our training callbacks (i.e., callbacks.py)
  • The parameter configurations (i.e., config.py)
  • The data pipeline (i.e., data.py)
  • The model architecture (i.e., model.py)

Additionally, the robust.py file implements the code to check robustness to different adversarial attacks, and the visualization.py file allows us to plot and visualize the results from our application.

Finally, the train.py file implements the code to train our adversarial learning application.

In this tutorial, we will discuss the demo.py file in detail.


Creating Adversarial Examples

In the previous section, we looked at an adversarial example. We discussed how such examples can change our system’s predictions even for images that look visually identical to the originals, rendering the system unreliable.

Let us now understand how we can engineer such an adversarial example that can fool our network and change its predictions.

From the previous section, we realize that any adversarial example must have two properties:

  1. It should look visibly the same as the original example
  2. It should ‘fool’ our model and change its prediction.

Let’s discuss these properties one by one.

First, we will add constant noise to our image and visualize when it starts looking different from our original input.

Figure 2 shows our image with different levels of added noise. When the noise level is within ±10, we do not notice a significant difference from the original image (i.e., the noise=0 case). When the noise is increased further, we notice that our image gets brighter (as positive noise values are added) or darker (as negative noise values are added).

Figure 2: Original image of a panda with different levels of constant noise (source: image by the author).

This helps us conclude that noise within the range of about ±10 does not cause perceptible (or visible) changes to our image.
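
As a quick illustration of how such a comparison could be generated, here is a minimal sketch (the filename panda.jpeg and the chosen noise levels are illustrative assumptions, not part of the tutorial’s code):

#### sketch: visualize a panda image with different levels of constant noise
#### (assumption: panda.jpeg is an RGB image on disk)
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.cvtColor(cv2.imread("panda.jpeg"), cv2.COLOR_BGR2RGB)

noise_levels = [-50, -10, 0, 10, 50]
plt.figure(figsize=(15, 3))

for i, level in enumerate(noise_levels):
    # add a constant offset and clip back to the valid pixel range
    noisy = np.clip(img.astype("int16") + level, 0, 255).astype("uint8")
    plt.subplot(1, len(noise_levels), i + 1)
    plt.imshow(noisy)
    plt.title("noise = {}".format(level))
    plt.axis("off")

plt.savefig("constant_noise_levels.png")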

Let us now look at the second point we discussed above. To change a model’s prediction, we first need to discuss the model’s output probabilities and the loss function.

Consider a classification model f pre-trained on a dataset using loss L, which can correctly classify a panda image. The loss takes as input the predicted output probabilities from the model p_model and the ground truth (one-hot) label p_gt. Here, p_model is the softmax output of our model f(x) (i.e., softmax(f(x))):

loss = L(p_model,p_gt)

loss = L(softmax(f(x)),p_gt)

Since this model has already been trained for classification on the ImageNet dataset, minimizing the loss for image-label pairs, this implies that for an input image of a panda, the output softmax(f(x)) aligns most closely with p_gt, and the loss is minimized.

Now, we want to add noise to our input image x such that it does not visibly change x but changes the model’s prediction. In terms of loss, this can be thought of as changing the input x to x+noise such that the loss L(p_model,p_gt) is no longer minimized for the panda class.

Thus, we want to maximize the following loss expression for our changed input x+noise:

loss = L(softmax(f(x+noise)),p_gt), where noise ∈ (-10, 10)

Note that the noise interval is kept from -10 to 10 as we verified above that this is the acceptable amount of noise we can add to avoid any perceptible or visible change to the image.

The equation above allows us to engineer an example x+noise such that it is visibly similar to the original panda image and is not correctly classified as panda by the model (i.e., model predictions change).

In the above equation, our job is to find an engineered noise tensor, lying in the range (-10, 10), that changes the model’s prediction from panda to another class.

We can use SGD (stochastic gradient descent) to optimize the above equation, keeping x and the weights of model f fixed and optimizing for the right noise tensor that will change the model’s prediction to another class.
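
Stated as a generic routine, this optimization could look like the following sketch (a hypothetical helper written only for illustration; the model, loss function, image, and one-hot label arguments mirror the setup we build step by step below):

#### sketch: optimize a noise tensor that maximizes the classification loss
#### (hypothetical helper for illustration; the full walkthrough follows below)
import tensorflow as tf

def find_adversarial_noise(model, loss_fn, image, label, epsilon, lr, iterations):
    # the noise tensor is the only trainable variable; the model weights and
    # the input image stay frozen throughout the optimization
    noise = tf.Variable(tf.zeros_like(image), trainable=True)
    opt = tf.keras.optimizers.SGD(learning_rate=lr)

    for _ in range(iterations):
        with tf.GradientTape() as tape:
            preds = model(image + noise)
            # negate the loss: maximizing it is the same as minimizing its negative
            loss = -loss_fn(label, preds)
        grads = tape.gradient(loss, noise)
        opt.apply_gradients([(grads, noise)])
        # project the noise back into the imperceptible range (-epsilon, epsilon)
        noise.assign(tf.clip_by_value(noise, -epsilon, epsilon))

    return noise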

Let us take an example and code this process in TensorFlow.

Let us consider the case of image classification and use ResNet50 pre-trained on ImageNet. The model is trained with cross-entropy loss as it performs image classification.

### import necessary packages
import tensorflow as tf
import cv2
import PIL
import os
import numpy as np
import matplotlib.pyplot as plt

#### define configs
IMG_PATH = "/content/panda.jpeg"
SAVE_PATH = "/content/panda_adversarial.jpeg"

IMG_SIZE = (224,224)
CLASS_INDEX = 388 
NUM_CLASSES = 1000

LEARNING_RATE = 1e1
EPSILON = 5
ITERATIONS = 40

#### define function to get predictions and loss
def get_prediction(input_image, labelGT):

    pred_model = model.predict(input_image)
    prediction = tf.keras.applications.resnet50.decode_predictions(pred_model)

    print('Loss for Panda Class=', loss_fn(pred_model, labelGT).numpy())
    print('Class Names and Probabilities ',prediction)

def visualize(im_list):

    image = PIL.Image.fromarray(im_list[0], 'RGB')
    noise = PIL.Image.fromarray(im_list[1], 'RGB')
    imageAdversarial = PIL.Image.fromarray(im_list[2], 'RGB')
    
    plt.subplot(131)
    plt.imshow(image)
    plt.axis('off')

    plt.subplot(132)
    plt.imshow(noise)
    plt.axis('off')

    plt.subplot(133)
    plt.imshow(imageAdversarial)
    plt.axis('off')

    plt.savefig(SAVE_PATH)

#### define function to undo preprocessing
def deprocess_img(image_norm):

  x = np.squeeze(image_norm, 0)
  
  x[:, :, 0] += 103.939
  x[:, :, 1] += 116.779
  x[:, :, 2] += 123.68

  x = x[:, :, ::-1]

  x = np.clip(x, 0, 255).astype('uint8')

  return x

We start by importing the necessary packages such as tensorflow (Line 2), cv2 (Line 3), and PIL (Line 4) for image processing functionality, the os module (Line 5), numpy (Line 6), and matplotlib for plotting visualizations (Line 7).

Next, we define the parameters that we will use for this tutorial. Lines 10 and 11 include a path to the input image (i.e., IMG_PATH) and a path to save the final adversarial example visualization (i.e., SAVE_PATH).

Furthermore, we define the input image and class-related configurations. On Lines 13-15, this includes the dimensions of the input image (i.e., IMG_SIZE), the class index for the panda class (i.e., CLASS_INDEX), and the total number of classes in the ImageNet dataset (i.e., NUM_CLASSES).

Finally, on Lines 17-19, we define the optimization-related parameters: the learning rate (i.e., LEARNING_RATE), the magnitude of the bound on our noise (i.e., EPSILON), and the number of iterations (i.e., ITERATIONS).

Now, we define the get_prediction function, which outputs the predicted class and corresponding probabilities from our model. It takes as arguments the input image and the ground-truth one-hot label of our class under consideration (Line 22).

On Line 24, we use the model.predict() function to get the prediction from our pre-trained model (i.e., pred_model), and on Line 25, we decode the prediction to get the corresponding predicted class names using the decode_predictions() function.

Finally, we print the value of the loss for the input image and the predictions (Lines 27 and 28).

Now that we have defined our get_prediction function, let us go ahead and implement our visualization routine.

The visualize function (Lines 30-48) takes as input the image list (i.e., im_list) with the input image array, noise array, and adversarial image array.

The elements in the im_list are then converted to an RGB PIL image using the fromarray() function, as shown on Lines 32-34.

Next, we use matplotlib to create a subplot with 1 row and 3 columns and plot the input image (Lines 36-38), the noise image (Lines 40-42), and the adversarial image (Lines 44-46).

Finally, we save our visualization using the savefig function at the SAVE_PATH location (Line 48).

Finally, we implement the deprocess_img function, which takes as input a normalized image and unnormalizes it (Line 51).

On Line 53, we first take the normalized image and remove the batch dimension using the np.squeeze function. Then, we add the channel mean to each channel (Lines 55-57) and reverse the channels to get the image into RGB channel format (Line 59).

Finally, we clip the image pixel values to be in the range (0, 255) (Line 61) and return the final image (Line 63).
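
As an optional sanity check, the following minimal sketch (reusing IMG_PATH, IMG_SIZE, and deprocess_img from the listing above) verifies that deprocess_img approximately inverts ResNet50’s preprocessing, which converts RGB to BGR and subtracts the ImageNet channel means:

#### optional sanity check: deprocess_img should invert preprocess_input
#### (reuses IMG_PATH, IMG_SIZE, and deprocess_img defined above)
rgb = cv2.cvtColor(cv2.imread(IMG_PATH), cv2.COLOR_BGR2RGB)
rgb = cv2.resize(rgb, IMG_SIZE)

# preprocess_input converts RGB to BGR and subtracts the ImageNet channel means
batch = np.expand_dims(
    tf.keras.applications.resnet50.preprocess_input(rgb.astype("float32")), 0)

# the round trip should recover the original pixels (up to rounding)
recovered = deprocess_img(batch)
print(np.abs(recovered.astype("int16") - rgb.astype("int16")).max())  # expect 0 or 1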

Now that we have defined our parameters and the helper functions, it is time to define our model and load our input image.

#### load resnet model
model = tf.keras.applications.resnet50.ResNet50(
    include_top=True,
    weights='imagenet',
    classes=1000,
    classifier_activation='softmax'
)

#### classification loss 
loss_fn = tf.keras.losses.CategoricalCrossentropy()

#### load image
img = cv2.imread(IMG_PATH)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, IMG_SIZE)

#### preprocess image
image_processed = tf.keras.applications.resnet50.preprocess_input(img)
image = np.expand_dims(image_processed,0)

#### predictions with image 
pred_gt = CLASS_INDEX
one_hot_gt = tf.one_hot([pred_gt], NUM_CLASSES)

get_prediction(image, one_hot_gt)
#1/1 [==============================] - 2s 2s/step
#Loss for Panda Class= 0.7695639
#Class Names and Probabilities  [[('n02510455', 'giant_panda', 0.9522547), ('n02447366', 'badger', 0.027984638), ('n02509815', 'lesser_panda', 0.010850465), ('n02132136', 'brown_bear', 0.0009186132), ('n02493509', 'titi', 0.00030894685)]]

We load the ImageNet pre-trained ResNet50 model from Keras, as shown on Lines 66-71. Note that we keep the final linear layers (i.e., include_top=True), load the ImageNet-trained weights (i.e., weights='imagenet'), define the total number of ImageNet classes (i.e., 1000), and set the final activation (i.e., softmax).

On Line 74, we define the loss function, which is CategoricalCrossentropy().

We are now ready to load our input image and make predictions with our pretrained model.

On Lines 77-79, we first use the cv2.imread() function to load our original panda image from the IMG_PATH. Then, we convert our image from BGR to RGB format and resize it to the desired IMG_SIZE using the cv2.resize function. Next, on Lines 82 and 83, we preprocess the image with ResNet50’s preprocess_input function and add a batch dimension using np.expand_dims.

The panda class corresponds to index 388 in the ImageNet labels list. We store this index as pred_gt and create the corresponding one-hot vector over the 1000 classes (i.e., one_hot_gt), as shown on Lines 86 and 87.

Next, on Line 89, we call our get_prediction function, which uses model.predict() to get the prediction from our ResNet model (i.e., pred_model) and decodes it with the decode_predictions() function to obtain the corresponding predicted class names.

Finally, we print the loss value for the panda image and the predictions.

Notice that the model predicted the ‘Giant-Panda’ class with a probability of 0.95 and a loss value of 0.76. This implies that our model is very confident that the image belongs to the ‘Giant-Panda’ class.

#### creating adversarial example

noise = tf.Variable(tf.zeros_like(image), trainable=True)

opt = tf.keras.optimizers.SGD(learning_rate=LEARNING_RATE)

for t in range(ITERATIONS):

    with tf.GradientTape() as tape:

      adversarial_img = image + noise
      pred_model = model(adversarial_img)
      loss = -loss_fn(pred_model, one_hot_gt)


    gradients = tape.gradient(loss, noise)
    opt.apply_gradients([(gradients, noise)])

    noise.assign(tf.clip_by_value(noise, -EPSILON, EPSILON))

We are now ready to create our adversarial example.

We start by defining the noise tensor, which is filled with zeros and has the same dimension as our input image.

Recall that we want to use SGD to optimize and find this noise tensor, which is why we wrap it in tf.Variable() and set the trainable=True flag (Line 93). Next, we define our SGD-based optimizer with the learning rate (i.e., LEARNING_RATE) (Line 95).

Now, we start our iterative optimization, and for each iteration, we get our adversarial_img by adding noise to our original image. Then we pass the adversarial_img through our pre-trained model to get predictions (i.e., pred_model). Note that in this process, the weights of the model stay frozen, as we discussed earlier.

Finally, we compute the loss between the prediction (i.e., pred_model) and the ground truth one-hot vector for panda (i.e., one_hot_gt). Note that since we want to maximize this loss, we take its negative as the loss to optimize (since maximizing a loss is the same as minimizing its negative value).

This computation is done inside tf.GradientTape() since we want TensorFlow to record the operations and compute gradients (Line 99).

Next, we compute the gradient of the loss w.r.t. noise (i.e., gradients) and use the apply_gradients function to take one SGD step to update noise (Lines 106 and 107).

To ensure our noise is always within the range (-EPSILON, EPSILON), we clamp its value at each iteration, as shown on Line 109.

Now that we have the optimized noise, let us see how the model predictions change when this noise is added to the original image.

#### predictions with adversarial example
noise = tf.clip_by_value(noise, -EPSILON, EPSILON)
adversarial_img = image+noise

get_prediction(adversarial_img, one_hot_gt)

#### visualize 
noise = np.squeeze(noise.numpy())
image = deprocess_img(image)
image_adversarial = deprocess_img(adversarial_img.numpy())

im_list = [image, noise, image_adversarial]
visualize(im_list)
#Loss for Panda Class= 14.488977
#Class Names and Probabilities  [[('n02447366', 'badger', 0.87211645), ('n02510455', 'giant_panda', 0.10107378), ('n02509815', 'lesser_panda', 0.0050036293), ('n02445715', 'skunk', 0.0024942914), ('n02056570', 'king_penguin', 0.0009352705)]]

Next, we create the adversarial image to see how our model’s predictions change.

To ensure our noise is always within the range (-EPSILON, EPSILON), we clamp its value (Line 112) and then create our adversarial example by adding the engineered noise to our image (Line 113).

We use our get_prediction function to get predictions from our model on our adversarial image (Line 115).

Notice that now our model predicts the input image to be a ‘badger’ with a probability of 0.87, and the loss for the ‘Giant-Panda’ class has increased from 0.76 earlier to 14.48 now.

Now that we have seen that our adversarial example changes the model’s prediction to another class, let us check if it visually looks the same as our original example.

We convert our noise tensor to a NumPy array and remove the batch dimension, as shown on Line 118.

Then we use the deprocess_img function to undo the preprocessing that we applied to our original input image and adversarial example (Lines 119 and 120) to get the unnormalized image and adversarial example (i.e., image, image_adversarial).

Next, we create a list that contains the original image, engineered noise, and our adversarial image (Line 122) and pass it to our visualize() function (Line 123).

Figure 3 shows the output visualization of the original image, engineered noise, and adversarial image.

Figure 3: Original image of a panda (left), engineered noise (middle), and corresponding adversarial example (right) (source: image by the author).

Robustness Toward Adversarial Examples

In the previous sections of this tutorial, we discussed how adversarial examples can change the prediction of our model and render our systems unreliable and prone to attacks.

Naturally, the first question that arises is how we can tackle this problem, make our systems robust to such engineered examples, and enhance their reliability in scenarios where such examples might be input to our network.

Let us go back to the case we discussed at the beginning of this tutorial, where our model had to tackle a domain-related distribution shift: it was trained on real-world images of cats and dogs, but we wanted to test it on sketches and cartoons of cats and dogs at inference.

How can we tackle this simple problem and enhance our model so that it not only correctly predicts categories on original images but also on sketches and cartoons of cats and dogs?

The simplest solution that comes to mind is to train and fine-tune our model using sketch and cartoon images of cats and dogs.

Adversarial examples are very similar: they simply differ in distribution from the original images. Thus, to make our models robust to adversarial examples, we can fine-tune our models on these examples.

Fine-tuning allows us to teach our system that the adversarial example of a panda should be given the same label as the original image of a panda, which in turn makes our model robust to such engineered examples and stops it from drastically changing its predictions for them.

Let us use TensorFlow and Keras to fine-tune our pre-trained ResNet50 model on our one adversarial example of a panda that we created.

### fine-tune on adversarial example
opt_fineTune = tf.keras.optimizers.SGD(learning_rate=1e-4)
model.compile(opt_fineTune,loss_fn)
model.fit(adversarial_img, one_hot_gt, epochs=50)

#### predictions with adversarial example after adversarial training
get_prediction(adversarial_img, one_hot_gt)
get_prediction(np.expand_dims(image_processed,0), one_hot_gt)

First, on Line 126, we define the SGD optimizer with a low learning rate of 1e-4 since we only want to fine-tune the model on our adversarial example.

Next, on Line 127, we compile the model using the model.compile() function, which takes as input the optimizer (i.e., opt_fineTune) and the cross-entropy loss function.

Finally, on Line 128, we call model.fit on our adversarial example and the corresponding one-hot vector for panda.

Once our model is fine-tuned, we use the get_prediction function on our adversarial image (i.e., adversarial_img) and our original image, as shown on Line 131.

You will notice that the loss for our adversarial example has now reduced, and our model predicts the correct ‘Giant-Panda’ class with a probability of 0.52 for the adversarial example.

Additionally, our model still classifies the original panda example correctly with a high probability of 0.96.


What's next? We recommend PyImageSearch University.

Course information:
83 total classes • 113+ hours of on-demand code walkthrough videos • Last updated: December 2023
★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled

I strongly believe that if you had the right teacher you could master computer vision and deep learning.

Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science?

That’s not the case.

All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.

If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.

Inside PyImageSearch University you'll find:

  • 83 courses on essential computer vision, deep learning, and OpenCV topics
  • 83 Certificates of Completion
  • 113+ hours of on-demand video
  • Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
  • Pre-configured Jupyter Notebooks in Google Colab
  • ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
  • ✓ Access to centralized code repos for all 532+ tutorials on PyImageSearch
  • Easy one-click downloads for code, datasets, pre-trained models, etc.
  • Access on mobile, laptop, desktop, etc.

Click here to join PyImageSearch University


Summary

In this tutorial, we learned about the relationship between robustness and reliability of deep learning models. We developed an understanding of how adversarial examples can change our models’ predictions and affect their reliability by making them prone to attacks.

Specifically, we discussed adversarial examples and how they can be created using Keras and TensorFlow. Furthermore, we discussed ways to make our models robust to such attacks and fine-tuned them on adversarial examples to make them robust and reliable.


Citation Information

Chandhok, S. “Adversarial Learning with Keras and TensorFlow (Part 1): Overview of Adversarial Learning,” PyImageSearch, P. Chugh, A. R. Gosthipaty, S. Huot, K. Kidriavsteva, and R. Raha, eds., 2024, https://pyimg.co/h35l2

@incollection{Chandhok_2024_ALwKTFpt1,
  author = {Shivam Chandhok},
  title = {Adversarial Learning with Keras and TensorFlow (Part 1): Overview of Adversarial Learning},
  booktitle = {PyImageSearch},
  editor = {Puneet Chugh and Aritra Roy Gosthipaty and Susan Huot and Kseniia Kidriavsteva and Ritwik Raha},
  year = {2024},
  url = {https://pyimg.co/h35l2},
}


Unleash the potential of computer vision with Roboflow - Free!

  • Step into the realm of the future by signing up or logging into your Roboflow account. Unlock a wealth of innovative dataset libraries and revolutionize your computer vision operations.
  • Jumpstart your journey by choosing from our broad array of datasets, or benefit from PyimageSearch’s comprehensive library, crafted to cater to a wide range of requirements.
  • Transfer your data to Roboflow in any of the 40+ compatible formats. Leverage cutting-edge model architectures for training, and deploy seamlessly across diverse platforms, including API, NVIDIA, browser, iOS, and beyond. Integrate our platform effortlessly with your applications or your favorite third-party tools.
  • Equip yourself with the ability to train a potent computer vision model in a mere afternoon. With a few images, you can import data from any source via API, annotate images using our superior cloud-hosted tool, kickstart model training with a single click, and deploy the model via a hosted API endpoint. Tailor your process by opting for a code-centric approach, leveraging our intuitive, cloud-based UI, or combining both to fit your unique needs.
  • Embark on your journey today with absolutely no credit card required. Step into the future with Roboflow.

Join Roboflow Now


To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below!

Download the Source Code and FREE 17-page Resource Guide

Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL!


