The 4th order coefficient of FLUX does not show a clear relationship between the output_diff and the predicted_output_diff #20
There may be some difference between our implementations.
The relationship to check is between L1_rel(modulated input, previous modulated input) and L1_rel(residual output, previous residual output).
OK, now I get the point! My way of getting the coefficient is like below:
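The original attachment with the fitting code is not preserved in this thread, so the following is only a minimal sketch of how a 4th-order fit over the collected pairs might look, assuming NumPy's polyfit; the name fit_rescale_coefficients and the argument names are illustrative, not taken from the repository.

```python
import numpy as np

def fit_rescale_coefficients(modulated_input_diffs, output_diffs, order=4):
    """Fit a polynomial mapping rel-L1 input diffs to rel-L1 output diffs.

    Both inputs are assumed to be arrays of per-step relative L1 distances
    collected over many prompts (e.g. shape (400, 49), flattened to 1-D).
    """
    x = np.asarray(modulated_input_diffs).ravel()
    y = np.asarray(output_diffs).ravel()
    # np.polyfit returns coefficients from the highest-order term down to the
    # constant term, the same layout as the list in teacache_flux.py.
    return np.polyfit(x, y, order)
```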
70 prompts from here. Maybe you can try with 28 inference steps.
Closed due to inactivity. Feel free to reopen it if necessary.
The relative L1 distance should be `relative_l1_distance = (torch.abs(prev - cur).mean()) / torch.abs(prev).mean()`. Is it correct? @LiewFeng
@hkunzhe Yes.
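For reference, the formula confirmed above can be wrapped in a small helper (a sketch only; `prev` and `cur` are assumed to be tensors of the same shape, e.g. the modulated input or the residual output at consecutive steps):

```python
import torch

def relative_l1_distance(prev: torch.Tensor, cur: torch.Tensor) -> torch.Tensor:
    # Mean absolute change between consecutive steps, normalized by the
    # mean absolute value of the previous step's tensor.
    return torch.abs(prev - cur).mean() / torch.abs(prev).mean()
```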
I used 400 prompts from https://huggingface.co/datasets/k-mktr/improved-flux-prompts to generate 400 pairs of (modulated_input_diff, output_diff); the shape of each is 49 with the following hyperparameters.

The result is satisfying: the modulated_input_diff and output_diff from my 400 generated pairs always show a stable and close relationship across different prompts (Fig 2). However, I run into a problem when I use the 4th-order coefficients provided in ./TeaCache4FLUX/teacache_flux.py ([-34.84608751, -10.79323838, 16.39479138, -1.21976726, 0.12762022]): they show a bad relationship as well. The code is displayed below; I wonder if I am doing something wrong? (BTW, the TeaCache speed-up and performance are marvelous in both FLUX and HunyuanVideo!)
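The comparison code mentioned above did not survive extraction, so here is only a minimal sketch of one way such a check could look: evaluate the published coefficients on the measured modulated_input_diff values and compare the prediction with the measured output_diff. The helper compare_predictions and its arguments are illustrative, not the original code.

```python
import numpy as np

# 4th-order coefficients from ./TeaCache4FLUX/teacache_flux.py (highest order first).
coefficients = [-34.84608751, -10.79323838, 16.39479138, -1.21976726, 0.12762022]
rescale_fn = np.poly1d(coefficients)

def compare_predictions(modulated_input_diff, output_diff):
    """Compare predicted vs. measured output_diff for one prompt.

    Both arguments are assumed to be 1-D arrays of length 49 (one relative
    L1 distance per consecutive step pair, as described above).
    """
    predicted_output_diff = rescale_fn(np.asarray(modulated_input_diff))
    measured = np.asarray(output_diff)
    # Simple diagnostics: mean absolute error and Pearson correlation.
    mae = np.mean(np.abs(predicted_output_diff - measured))
    corr = np.corrcoef(predicted_output_diff, measured)[0, 1]
    return mae, corr
```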