Enabling stricter workflows for destructive migrations #27897
And this comment of mine probably best explains the motives behind this feature suggestion:
Also, I'm aware that it isn't advisable for EF to apply migrations at all; it's better for EF to generate SQL scripts which are reviewed and audited before being applied. I believe even in this scenario it should be possible for the SQL scripts that EF generates for destructive migrations to carry similar metadata indicating that they contain destructive changes, whether that be metadata supported by the underlying database, or a comment in the SQL script that could be parsed by tooling, similar to the proposed [Destructive] attribute on the C# migration class. A human auditing these scripts would not rely on this, of course; however, for teams trusting to automation, this metadata could be leveraged in supporting workflows.
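Purely as an illustration of the "comment in the SQL script" idea, a generated script might carry a marker like the one below. The marker format is invented here; no EF Core or database tooling emits or recognises anything like it today:

```sql
-- EF-MIGRATION-METADATA: destructive=true
-- (Hypothetical marker that deployment tooling could parse before applying)
BEGIN TRANSACTION;
ALTER TABLE [Users] DROP COLUMN [Name];
COMMIT;
```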
I'm not aware of any feature like this in either the database or in DB SQL tools; at most, this would be something for the EF Core tools that generate the script. Stepping back, I'm still unclear on the actual value this would bring, and what an overall workflow would look like here. Anyone doing zero-downtime migrations is already carefully crafting and applying their migrations; e.g. instead of renaming a column, they add a new column in one migration, and some time later they add a second migration to remove the old one. Sure, I guess this feature could prevent accidental premature application of that second migration; but at that point all migrations are blocked, since no migrations after the potentially destructive one can be applied. And again, as above, anyone with a critical-enough system to do zero-downtime migrations is already reviewing their migrations carefully and not blindly updating.
Some ideas of how we'd plan to utilise this feature:
These are just some ideas. Most of this would be possible with just the additional output of the attribute; however, the CLI changes to prevent generating or applying destructive changes without explicit opt-in would provide a safety net. In the default mode of operation, developers (and deployments) wishing to work "safely" within this model would simply not pass the opt-in flag, so generating or applying destructive changes through EF would require deliberate, cognizant action. A rough illustration follows.
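As a sketch only, the opt-in might look like this. The `--allow-destructive` flag is part of the proposal, not an existing EF Core CLI option; the `dotnet ef` commands themselves are real:

```shell
# Would fail (or warn) if the generated migration contains destructive operations
dotnet ef migrations add RemoveLegacyColumns

# Hypothetical flag: explicit opt-in acknowledges the destructive change
dotnet ef migrations add RemoveLegacyColumns --allow-destructive

# The same guard at apply time (again, a hypothetical flag)
dotnet ef database update --allow-destructive
```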
The keyword for me here is
I am thinking there is room here for tooling to assist these processes. If EF has the ability / smarts to know that a migration it is generating can be considered "destructive", it's in a prime position to help ensure the above 2 points are raised: in the mind of the developer introducing a destructive change (via the CLI commands), and in teams that practice peer review, where destructive changes would be more easily visible (via a [Destructive] attribute), making it less likely that they are released unplanned or undiscussed.
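For illustration, here is a sketch of what a flagged migration class might look like. The `DestructiveAttribute` marker is hypothetical (no such attribute exists in EF Core); the `Migration` and `MigrationBuilder` APIs are real:

```csharp
using System;
using Microsoft.EntityFrameworkCore.Migrations;

// Hypothetical marker attribute proposed in this issue; not part of EF Core today.
[AttributeUsage(AttributeTargets.Class)]
public sealed class DestructiveAttribute : Attribute { }

// A migration flagged as destructive, so it stands out in peer review
// and could be recognised by tooling.
[Destructive]
public partial class DropLegacyNameColumn : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Dropping a column loses data: a destructive operation.
        migrationBuilder.DropColumn(name: "Name", table: "Users");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        // Restores the column's shape, but not the lost data.
        migrationBuilder.AddColumn<string>(name: "Name", table: "Users", nullable: true);
    }
}
```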
Not necessarily true. Some teams rely on tests in dev and staging environments to gain confidence in a release, perhaps restoring staging from prod data. If the release works there and passes all tests, that is good enough; it does not necessarily need a human to review all database changes prior to a prod deployment, it just needs all tests to pass. I think the exact process here would very much depend on the organisation and the product involved.
Possible duplicate of #8932 (or very related).
We discussed this in triage, and this is not something we plan to implement.
Ok - thanks for considering |
It's very easy today for a developer working on a feature to generate a "destructive" migration, which can then go unnoticed or undiscussed all the way through to deployment in various environments.
In some situations it makes sense for teams to have more formal workflows (or tooling) around recognising destructive vs. non-destructive migrations.
One reason is that if a deployment pushed to a given environment contains no destructive database migrations, that can be a factor in choosing a deployment strategy in which not all replicas of the service need to be brought down while the migrations are applied, catering towards a zero-downtime deployment strategy. Whereas if the release does contain destructive changes, it may be more prudent to perform additional auditing checks before selecting the approach. If the team is following the "expand/contract" approach, it may still be safe to do a zero-downtime deployment, assuming the destructive changes are a planned "contraction" of the schema for objects that are no longer being used.
The "expand/contract" pattern has been documented as a formal pattern for rolling out significant schema changes to the database in a series of steps allowing services to transition onto an adjusted schema whilst still supporting the old schema in parallel for some period of time, before a release with a "contraction" of the database schema is made finally to deprecate the old schema once no parts of the system are using it any longer.
For teams that wish to enable these types of workflows, like expand/contract, it would be helpful for destructive migrations to be flagged as soon as they enter the code base as an asset.
The following is taken from another issue which talks about the mechanics of how this might work in more depth:
Originally posted by @dazinator in #19587 (comment)