This folder contains an experiment in which we ask a model a straightforward query and observe how consistent its answers are across the model's layers and decoding settings.
In this folder:

- `README.md` summarizes the experimental results.
- `expt.ipynb` contains the code to run the experiment.
- `utils.py` provides a class that modifies the model's attention mechanism to facilitate the experiment.
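The attention-modifying class itself lives in `utils.py` and is not reproduced here. As a rough, hypothetical sketch of the general approach, the example below registers PyTorch forward hooks on each attention block of a GPT-2-style Hugging Face model so that per-layer attention outputs can be captured (or, in principle, overridden). The class name, hook behaviour, and choice of GPT-2 are assumptions for illustration, not the actual implementation.

```python
# Illustrative sketch only: the real class in utils.py is not shown here.
# Assumes a GPT-2-style Hugging Face model; all names below are placeholders.
from transformers import GPT2LMHeadModel


class AttentionPatcher:
    """Registers forward hooks on every attention block so that each layer's
    attention output can be captured layer by layer (hypothetical example)."""

    def __init__(self, model: GPT2LMHeadModel):
        self.model = model
        self.handles = []
        self.captured = {}  # layer index -> attention output tensor

    def _make_hook(self, layer_idx):
        def hook(module, inputs, output):
            # For GPT-2's attention module, output[0] is the attention output.
            self.captured[layer_idx] = output[0].detach()
            return output  # returned unchanged; a real patcher might modify it
        return hook

    def attach(self):
        for i, block in enumerate(self.model.transformer.h):
            handle = block.attn.register_forward_hook(self._make_hook(i))
            self.handles.append(handle)

    def detach(self):
        for handle in self.handles:
            handle.remove()
        self.handles.clear()
```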
We gave the model the string ".!..!..!" and asked how many full stops it contains. We posed this question to observe the model's preferred answers in the different settings below.
Setting | Selected Token |
---|---|
Fix | 0 |
Fix | 0 |
Fix | 5 |
Fix | 4 |
Fix | 4 |
Fix | 4 |
Note: "Fix
Decoding Strategy | Selected Token |
---|---|
Greedy Decoding | 4 |
Beam Search Decoding (…) | 4 |
Sampling Decoding | 3 |
Top-k Sampling Decoding (…) | 4 |
Top-p Sampling Decoding (…) | 4 |
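The sketch below shows, in general terms, how these five decoding strategies could be invoked through Hugging Face's `generate` API. The model name, prompt wording, and hyperparameter values are placeholders chosen for illustration; the experiment's actual setup is in `expt.ipynb`.

```python
# Minimal sketch, assuming a Hugging Face causal LM. The model name, prompt
# wording, and every hyperparameter value below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = 'How many full stops are there in ".!..!..!"?'
inputs = tokenizer(prompt, return_tensors="pt")

strategies = {
    "greedy": dict(do_sample=False),
    "beam search": dict(do_sample=False, num_beams=5),  # beam width: placeholder
    "sampling": dict(do_sample=True),
    "top-k sampling": dict(do_sample=True, top_k=50),   # k: placeholder
    "top-p sampling": dict(do_sample=True, top_p=0.9),  # p: placeholder
}

for name, kwargs in strategies.items():
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=40, **kwargs)
    text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    print(f"[{name}] {text}")
```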
- Response 1:
There is a period after the first dot, then another period after the second dot, and so on. So, there are 5 periods in total.
- Response 2:
There is one full stop after the first "!", then another after the second "!", and another after the third "!". So, there are 3 full stops in total.
- Response 3:
There is one full stop after the first "!", then another after the second "!", and so on. So, there are 3 full stops.
- Response 4:
There is one period at the end of the first part, then another after the second part, and finally one more after the third part. So, there are 3 periods in total.
- Response 5:
There is 1 period, then another one, and another one... So, there are 3 full stops!