Hi, first of all, your work is so inspiring!
I want to work through how the receptive field is built up in the network:
In BagNet-9:
The first convolutional layer uses a kernel size of 1x1. Since the stride is 1 by default, each output neuron in this layer sees a receptive field of size 1x1, centered around a single pixel in the input image.
The second convolutional layer uses a 3x3 kernel with a stride of 2. Each output neuron in this layer therefore sees a 3x3 receptive field, but because of the stride of 2, the receptive fields of neighboring neurons are not directly adjacent: they overlap but are not contiguous.
The third and fourth convolutional layers also use 3x3 kernels with a stride of 2, so, as in the second layer, the receptive fields of their neurons overlap but are not directly adjacent.
Given the strides used in the convolutional layers, the 9x9 receptive field of BagNet-9 is not composed of directly adjacent pixels in the input image.
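For concreteness, here is the receptive-field arithmetic I have in mind (a minimal sketch, not taken from your code; the `(kernel_size, stride)` pairs below are just one illustrative stack that ends up with a 9x9 field and may not match the actual BagNet-9 layers):

```python
# Sketch of the standard receptive-field arithmetic: the field grows by
# (kernel_size - 1) * jump at every layer, while the stride only multiplies
# the "jump", i.e. the spacing between the centers of neighboring fields.
def receptive_field(layers):
    rf, jump = 1, 1  # field size and spacing between field centers, in input pixels
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf, jump

# Illustrative (kernel_size, stride) stack, assumed here only to reach 9x9:
print(receptive_field([(1, 1), (3, 1), (3, 2), (3, 2)]))  # -> (9, 4)
```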
In your heatmap visualization, you divide the input image into all possible 9x9 patches, which assumes the 9x9 pixels are direct neighbors in the input image.
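This is roughly how I picture that patch-wise evaluation (a minimal sketch of the idea, not your actual code; `bagnet`, `image`, and `target_class` are placeholder names, and I am assuming the model returns one logit vector per 9x9 patch):

```python
import torch
import torch.nn.functional as F

patch_size = 9
pad = patch_size // 2

# Pad so that there is one 9x9 patch centered on every input pixel, then slide
# a stride-1 window over the padded image to collect all patches.
padded = F.pad(image.unsqueeze(0), [pad, pad, pad, pad])             # 1 x 3 x (H+8) x (W+8)
patches = padded.unfold(2, patch_size, 1).unfold(3, patch_size, 1)   # 1 x 3 x H x W x 9 x 9
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, 3, patch_size, patch_size)

with torch.no_grad():
    # Assumed: (H*W) x num_classes logits, one row per patch
    # (in practice the patches would be fed in batches).
    logits = bagnet(patches)

heatmap = logits[:, target_class].reshape(image.shape[1], image.shape[2])
```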
What am I missing?
Thanks in advance,
Bar