## 🐛 Bug
I have a polynomial function whose output I want to steer into a target range. The polynomial produces values from -40 to 15, and I want the suggested points to land where it outputs between 7.5 and 9.5. To that end I defined a separate objective function that returns 1 when the polynomial value lies in that range and decreases linearly to 0 outside it, and I pass this objective to the acquisition function so that the suggested candidates should drive the polynomial into the range. However, the polynomial values at the suggested points still span the full range. Could you please point out where I am going wrong?
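For concreteness, the mapping described above is a trapezoid: 0 below 7.0, a linear ramp up on [7.0, 7.5], 1 on [7.5, 9.5], a linear ramp down on [9.5, 10.0], and 0 above (the 7.0 and 10.0 ramp endpoints are taken from the snippet below). A minimal standalone sketch of that shape:

```python
import torch

def range_utility(y: torch.Tensor) -> torch.Tensor:
    up = torch.clamp((y - 7.0) / 0.5, 0.0, 1.0)     # rising ramp on [7.0, 7.5]
    down = torch.clamp((10.0 - y) / 0.5, 0.0, 1.0)  # falling ramp on [9.5, 10.0]
    return torch.minimum(up, down)                  # plateau of 1 on [7.5, 9.5]

print(range_utility(torch.tensor([6.0, 7.25, 8.5, 9.75, 11.0])))
# tensor([0.0000, 0.5000, 1.0000, 0.5000, 0.0000])
```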
## To reproduce
**Code snippet to reproduce**

```python
import torch
from botorch.acquisition.objective import MCAcquisitionObjective


class RangeObjective(MCAcquisitionObjective):
    """Utility that is 1 on [7.5, 9.5] and ramps linearly to 0 on [7.0, 7.5] and [9.5, 10.0]."""

    def forward(self, samples: torch.Tensor, X=None) -> torch.Tensor:
        output = torch.zeros_like(samples)
        # Between 7.0 and 7.5, linearly scale from 0 to 1
        mask1 = (samples > 7.0) & (samples <= 7.5)
        output[mask1] = (samples[mask1] - 7.0) / 0.5
        # Between 7.5 and 9.5 (inclusive), return 1
        mask2 = (samples >= 7.5) & (samples <= 9.5)
        output[mask2] = 1
        # Between 9.5 and 10.0, linearly scale from 1 to 0
        mask3 = (samples > 9.5) & (samples < 10.0)
        output[mask3] = (10.0 - samples[mask3]) / 0.5
        return output.squeeze(-1)  # drop the m=1 output dim
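# Quick sanity check of the objective's shape contract (illustrative addition,
# not part of the original report): MC objectives receive samples of shape
# `sample_shape x batch_shape x q x m` and must return
# `sample_shape x batch_shape x q`, hence the squeeze(-1) above.
_samples = torch.tensor([[[6.0], [8.5], [9.75]]])  # sample_shape=1, q=3, m=1
assert RangeObjective()(_samples).shape == torch.Size([1, 3])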
# Number of initial random points and iterations for Bayesian Optimization
N_INIT = 2
N_ITER = 100
# Generate initial data within the per-feature bounds
# (lower_bound and upper_bound are defined elsewhere by the reporter
# and are not shown in the issue)
train_x = torch.zeros(N_INIT, 10)
for i in range(10):
    train_x[:, i] = torch.rand(N_INIT) * (upper_bound[i] - lower_bound[i]) + lower_bound[i]
# Rescale each row so that the features sum to 100
train_x *= 100 / train_x.sum(dim=1, keepdim=True)
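# Sanity check (illustrative addition): every row should now sum to 100,
# matching the equality constraint used for candidate generation below
assert torch.allclose(train_x.sum(dim=1), torch.full((N_INIT,), 100.0))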
# adh1_func is the reporter's 10-feature polynomial (not shown in the issue);
# SingleTaskGP below expects train_y with shape N_INIT x 1
train_y = adh1_func(train_x)
train_y.requires_grad_(False)  # observations do not require gradients
# Constraint: the 10 features of each candidate must sum to 100.
# optimize_acqf takes equality constraints as (indices, coefficients, rhs)
# tuples rather than a scipy LinearConstraint passed through `options`.
equality_constraints = [(torch.arange(10), torch.ones(10), 100.0)]
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood
from botorch.acquisition.monte_carlo import qExpectedImprovement
from botorch.sampling.normal import SobolQMCNormalSampler

# Range-based objective (one stateless instance is enough)
objective_func = RangeObjective()
for iteration in range(N_ITER):
    # Fit the GP model on all data observed so far
    gp = SingleTaskGP(train_x, train_y)
    mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
    fit_gpytorch_mll(mll)
    # Best transformed value observed so far (capped at 1 by construction)
    transformed_y = objective_func(train_y)
    current_max = transformed_y.max()
    # NOTE: a single MC sample makes the acquisition estimate very noisy
    sampler = SobolQMCNormalSampler(sample_shape=torch.Size([1]))
    # Initialize the acquisition function with the sampler and the objective
    qEI = qExpectedImprovement(
        model=gp,
        best_f=current_max,
        sampler=sampler,
        objective=objective_func,
    )
    # Optimize the acquisition function to find a new candidate
    candidate, acq_value = optimize_acqf(
        acq_function=qEI,
        bounds=bounds,  # 2 x 10 tensor of per-feature bounds (defined elsewhere by the reporter)
        q=1,  # number of candidates to generate
        num_restarts=5,
        raw_samples=512,  # number of raw samples for initialization
        equality_constraints=equality_constraints,
    )
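    # Illustrative check (added): the returned candidate should satisfy the
    # sum-to-100 equality constraint up to optimizer tolerance
    assert torch.allclose(candidate.sum(), torch.tensor(100.0), atol=1e-3)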
    # Evaluate the polynomial at the new candidate
    new_y = adh1_func(candidate)
    new_y_transformed = objective_func(new_y)
    # Update training data with the raw observation (the GP models the polynomial itself)
    train_x = torch.cat([train_x, candidate])
    train_y = torch.cat([train_y, new_y])
    print(f"Iteration {iteration + 1}, new point = {candidate.numpy()}, polynomial = {new_y.item()}, objective = {new_y_transformed.item()}")
```

Here you can see that the polynomial values at the suggested points still vary over quite a large range.

## Expected Behavior
I want the suggested candidates to produce polynomial values between 7.5 and 9.5.
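One detail that may matter here: because RangeObjective is capped at 1, `best_f` reaches 1 as soon as any observation lands on the plateau, after which qExpectedImprovement is zero everywhere and gives the optimizer no signal; the flat regions of the piecewise map also contribute no gradient. As an experiment (a minimal sketch of one possible alternative, not a confirmed fix, with an assumed temperature of 0.1), a smooth utility such as a product of sigmoids keeps the acquisition surface informative:

```python
import torch
from botorch.acquisition.objective import GenericMCObjective

# Hypothetical smooth "soft plateau" over [7.5, 9.5]: sigmoid ramps replace the
# hard 0/1 edges, so gradients remain nonzero outside the target range.
# The 0.1 temperature is an assumed value; smaller means sharper edges.
def soft_range(samples: torch.Tensor, X=None) -> torch.Tensor:
    y = samples.squeeze(-1)  # drop the m=1 output dimension
    return torch.sigmoid((y - 7.5) / 0.1) * torch.sigmoid((9.5 - y) / 0.1)

smooth_objective = GenericMCObjective(soft_range)  # drop-in for RangeObjective above
```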