
Why quant QKMatMul as a block while it has no submodule? #26

Open
Sugar929 opened this issue Oct 18, 2023 · 6 comments

@Sugar929

(qkv_matmul): QuantQKMatMul(
              (act_quantizer): UniformAffineQuantizer(bit=8, scale_method=mse, symmetric=True, channel_wise=False, leaf_param=True)
              (activation_function): StraightThrough()
              (act_quantizer_q): UniformAffineQuantizer(bit=8, scale_method=mse, symmetric=True, channel_wise=False, leaf_param=True)
              (act_quantizer_k): UniformAffineQuantizer(bit=8, scale_method=mse, symmetric=True, channel_wise=False, leaf_param=True)
            )

qkv_matmul is replaced by QuantQKMatMul, which inherits from BaseQuantBlock, but it does not contain any QuantModule submodules.

In block_recon.py:

opt_params = []
for name, module in block.named_modules():
    if isinstance(module, QuantModule):
        opt_params += [module.weight_quantizer.alpha]
        if module.split != 0:
            opt_params += [module.weight_quantizer_0.alpha]
optimizer = torch.optim.Adam(opt_params)

opt_params stays empty, so the optimizer raises ValueError: optimizer got an empty parameter list.
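For reference, a minimal standalone reproduction of the failure (not the repo's code): torch.optim.Adam refuses an empty parameter list, which is exactly the state the loop above leaves opt_params in when the block has no QuantModule children.

import torch

opt_params = []  # nothing collected: QuantQKMatMul has no QuantModule submodules
try:
    torch.optim.Adam(opt_params)
except ValueError as e:
    print(e)  # -> optimizer got an empty parameter list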

@sihouzi21c

I ran into the same question during my experiments. Have you managed to fix it?

@yuzheyao22

It works fine for me if I simply skip this block during weight reconstruction.
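A minimal sketch of that skip (illustrative names, not the repo's exact code): check whether a block actually owns any QuantModule before running block reconstruction on it, and move on if it does not.

from torch import nn

def has_quant_module(block: nn.Module, quant_module_cls: type) -> bool:
    # True if the block owns at least one QuantModule whose rounding
    # parameters (alpha) block reconstruction could optimize.
    return any(isinstance(m, quant_module_cls) for m in block.modules())

# In the calibration driver (hypothetical structure), before reconstructing
# a BaseQuantBlock such as QuantQKMatMul:
#
#     if not has_quant_module(block, QuantModule):
#         continue  # nothing to optimize, skip weight reconstruction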

@yuzheyao22

By the way, has anyone succeeded in calibrating a bedroom model? With the given script it raises a DefaultCPUAllocator error: can't allocate memory: you tried to allocate 300647710720 bytes. Error code 12 (Cannot allocate memory). The memory needed is apparently too much...
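For scale, a back-of-envelope conversion of the figure in that error message (the number itself comes straight from the log above):

requested_bytes = 300_647_710_720   # from the DefaultCPUAllocator error
print(requested_bytes / 1024 ** 3)  # 280.0 -> exactly 280 GiB
# If the allocation scales with the calibration batch size, lowering the
# calibration batch size should shrink it accordingly.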

@sihouzi21c

@yuzheyao22 Thanks for your suggestion. I fixed it the same way, and I'm wondering whether there is another method. As for your question, I haven't calibrated a bedroom model, but would it work if you reduce 'batch_size=opt.cali_batch_size' in the kwargs at lines 498 and 548 of 'sample_diffusion_ldm.py'?

@yuzheyao22

> @yuzheyao22 Thanks for your suggestion. I fixed it the same way, and I'm wondering whether there is another method. As for your question, I haven't calibrated a bedroom model, but would it work if you reduce 'batch_size=opt.cali_batch_size' in the kwargs at lines 498 and 548 of 'sample_diffusion_ldm.py'?

Yes, thanks a lot! It did work in my case.

@stein-666

SMVMatMul doesn't have any submodules either. Have you also encountered this problem?
