I tried to load a local model and ran into the following issue.
Error:

```
    raise ValueError(
ValueError: rope_scaling must be a dictionary with with two fields, type and factor, got {'factor': 8.0, 'low_freq_factor': 1.0, 'high_freq_factor': 4.0, 'original_max_position_embeddings': 8192, 'rope_type': 'llama3'}
```
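For context, here is a paraphrased sketch of the rope_scaling check in transformers releases that predate Llama 3.1 support (simplified, not the exact library source). Only the legacy two-field dictionary passes, so the five-field llama3 config above fails before the model is even constructed:

```python
# Paraphrased sketch of the legacy rope_scaling validation in older
# transformers (assumption: simplified, not the exact library code).
def validate_rope_scaling(rope_scaling):
    if rope_scaling is None:
        return
    # Legacy check: exactly two fields, "type" and "factor".
    if not isinstance(rope_scaling, dict) or len(rope_scaling) != 2:
        raise ValueError(
            "rope_scaling must be a dictionary with two fields, type and factor, "
            f"got {rope_scaling}"
        )

# The Llama 3.1 checkpoint ships the newer five-field format:
validate_rope_scaling({
    "factor": 8.0,
    "low_freq_factor": 1.0,
    "high_freq_factor": 4.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3",
})  # raises the ValueError shown above
```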
Test code:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Choose any model available at https://health.petals.dev
model_name = "./ckpt/Meta-Llama-3.1-405B-Instruct"

# Connect to a distributed network hosting model layers
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Run the model as if it were on your computer
inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))  # A cat sat on a mat...
```
The underlying problem is the transformers version: transformers 4.43.3 (which understands the new rope_type: 'llama3' format) would solve this error, but 4.43.3 is not compatible with petals.

petals: 2.2.0.post1
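A possible workaround sketch (my assumption, not an official fix): rewrite the checkpoint's config.json into the legacy two-field rope_scaling format that older transformers accepts. Note that this discards the llama3-specific rope parameters, so quality beyond the original 8192-token context may degrade:

```python
# Workaround sketch (assumption, not an official fix): collapse the
# five-field llama3 rope_scaling entry in the local checkpoint's
# config.json into the legacy two-field format older transformers accepts.
# Warning: this drops the llama3-specific parameters and may hurt
# long-context quality beyond 8192 tokens.
import json
from pathlib import Path

config_path = Path("./ckpt/Meta-Llama-3.1-405B-Instruct/config.json")
config = json.loads(config_path.read_text())

factor = config.get("rope_scaling", {}).get("factor", 8.0)
config["rope_scaling"] = {"type": "dynamic", "factor": factor}

config_path.write_text(json.dumps(config, indent=2))
```

With that change the test code above should load without the ValueError, though the proper fix would be a petals release that supports transformers >= 4.43.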