feat: update L1 CloudFormation resource definitions (#31086)
Updates the L1 CloudFormation resource definitions with the latest changes from `@aws-cdk/aws-service-spec`

**L1 CloudFormation resource definition changes:**
```
├[~] service aws-acmpca
│ └ resources
│    └[~] resource AWS::ACMPCA::CertificateAuthority
│      └ types
│         └[~] type CrlConfiguration
│           └ properties
│              ├[+] CustomPath: string
│              ├[+] MaxPartitionSizeMB: integer
│              ├[+] PartitioningEnabled: boolean
│              └[+] RetainExpiredCertificates: boolean
├[~] service aws-auditmanager
│ └ resources
│    └[~] resource AWS::AuditManager::Assessment
│      └ types
│         ├[~] type AWSService
│         │ ├  - documentation: The `AWSService` property type specifies an AWS service such as Amazon S3 , AWS CloudTrail , and so on.
│         │ │  + documentation: The `AWSService` property type specifies an  such as Amazon S3 , AWS CloudTrail , and so on.
│         │ └ properties
│         │    └ ServiceName: (documentation changed)
│         └[~] type Scope
│           └ properties
│              └ AwsServices: (documentation changed)
├[~] service aws-chatbot
│ └ resources
│    └[~] resource AWS::Chatbot::SlackChannelConfiguration
│      └ properties
│         └ SlackChannelId: (documentation changed)
├[~] service aws-cloudtrail
│ └ resources
│    └[~] resource AWS::CloudTrail::Trail
│      └ types
│         └[~] type DataResource
│           ├  - documentation: You can configure the `DataResource` in an `EventSelector` to log data events for the following three resource types:
│           │  - `AWS::DynamoDB::Table`
│           │  - `AWS::Lambda::Function`
│           │  - `AWS::S3::Object`
│           │  To log data events for all other resource types including objects stored in [directory buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-overview.html) , you must use [AdvancedEventSelectors](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_AdvancedEventSelector.html) . You must also use `AdvancedEventSelectors` if you want to filter on the `eventName` field.
│           │  Configure the `DataResource` to specify the resource type and resource ARNs for which you want to log data events.
│           │  > The total number of allowed data resources is 250. This number can be distributed between 1 and 5 event selectors, but the total cannot exceed 250 across all selectors for the trail. 
│           │  The following example demonstrates how logging works when you configure logging of all data events for a general purpose bucket named `DOC-EXAMPLE-BUCKET1` . In this example, the CloudTrail user specified an empty prefix, and the option to log both `Read` and `Write` data events.
│           │  - A user uploads an image file to `DOC-EXAMPLE-BUCKET1` .
│           │  - The `PutObject` API operation is an Amazon S3 object-level API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified an S3 bucket with an empty prefix, events that occur on any object in that bucket are logged. The trail processes and logs the event.
│           │  - A user uploads an object to an Amazon S3 bucket named `arn:aws:s3:::DOC-EXAMPLE-BUCKET1` .
│           │  - The `PutObject` API operation occurred for an object in an S3 bucket that the CloudTrail user didn't specify for the trail. The trail doesn’t log the event.
│           │  The following example demonstrates how logging works when you configure logging of AWS Lambda data events for a Lambda function named *MyLambdaFunction* , but not for all Lambda functions.
│           │  - A user runs a script that includes a call to the *MyLambdaFunction* function and the *MyOtherLambdaFunction* function.
│           │  - The `Invoke` API operation on *MyLambdaFunction* is an Lambda API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified logging data events for *MyLambdaFunction* , any invocations of that function are logged. The trail processes and logs the event.
│           │  - The `Invoke` API operation on *MyOtherLambdaFunction* is an Lambda API. Because the CloudTrail user did not specify logging data events for all Lambda functions, the `Invoke` operation for *MyOtherLambdaFunction* does not match the function specified for the trail. The trail doesn’t log the event.
│           │  + documentation: You can configure the `DataResource` in an `EventSelector` to log data events for the following three resource types:
│           │  - `AWS::DynamoDB::Table`
│           │  - `AWS::Lambda::Function`
│           │  - `AWS::S3::Object`
│           │  To log data events for all other resource types including objects stored in [directory buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-overview.html) , you must use [AdvancedEventSelectors](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_AdvancedEventSelector.html) . You must also use `AdvancedEventSelectors` if you want to filter on the `eventName` field.
│           │  Configure the `DataResource` to specify the resource type and resource ARNs for which you want to log data events.
│           │  > The total number of allowed data resources is 250. This number can be distributed between 1 and 5 event selectors, but the total cannot exceed 250 across all selectors for the trail. 
│           │  The following example demonstrates how logging works when you configure logging of all data events for a general purpose bucket named `amzn-s3-demo-bucket1` . In this example, the CloudTrail user specified an empty prefix, and the option to log both `Read` and `Write` data events.
│           │  - A user uploads an image file to `amzn-s3-demo-bucket1` .
│           │  - The `PutObject` API operation is an Amazon S3 object-level API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified an S3 bucket with an empty prefix, events that occur on any object in that bucket are logged. The trail processes and logs the event.
│           │  - A user uploads an object to an Amazon S3 bucket named `arn:aws:s3:::amzn-s3-demo-bucket1` .
│           │  - The `PutObject` API operation occurred for an object in an S3 bucket that the CloudTrail user didn't specify for the trail. The trail doesn’t log the event.
│           │  The following example demonstrates how logging works when you configure logging of AWS Lambda data events for a Lambda function named *MyLambdaFunction* , but not for all Lambda functions.
│           │  - A user runs a script that includes a call to the *MyLambdaFunction* function and the *MyOtherLambdaFunction* function.
│           │  - The `Invoke` API operation on *MyLambdaFunction* is an Lambda API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified logging data events for *MyLambdaFunction* , any invocations of that function are logged. The trail processes and logs the event.
│           │  - The `Invoke` API operation on *MyOtherLambdaFunction* is an Lambda API. Because the CloudTrail user did not specify logging data events for all Lambda functions, the `Invoke` operation for *MyOtherLambdaFunction* does not match the function specified for the trail. The trail doesn’t log the event.
│           └ properties
│              └ Values: (documentation changed)
├[~] service aws-codecommit
│ └ resources
│    └[~] resource AWS::CodeCommit::Repository
│      └  - documentation: Creates a new, empty repository.
│         + documentation: Creates a new, empty repository.
│         > AWS CodeCommit is no longer available to new customers. Existing customers of AWS CodeCommit can continue to use the service as normal. [Learn more"](https://docs.aws.amazon.com/devops/how-to-migrate-your-aws-codecommit-repository-to-another-git-provider)
├[~] service aws-codeconnections
│ └ resources
│    └[~] resource AWS::CodeConnections::Connection
│      └ attributes
│         └ ConnectionArn: (documentation changed)
├[~] service aws-codepipeline
│ └ resources
│    ├[~] resource AWS::CodePipeline::Pipeline
│    │ └ types
│    │    ├[+] type BeforeEntryConditions
│    │    │ ├  documentation: The conditions for making checks for entry to a stage.
│    │    │ │  name: BeforeEntryConditions
│    │    │ └ properties
│    │    │    └Conditions: Array<Condition>
│    │    ├[+] type Condition
│    │    │ ├  documentation: The condition for the stage. A condition is made up of the rules and the result for the condition.
│    │    │ │  name: Condition
│    │    │ └ properties
│    │    │    ├Result: string
│    │    │    └Rules: Array<RuleDeclaration>
│    │    ├[~] type FailureConditions
│    │    │ └ properties
│    │    │    └[+] Conditions: Array<Condition>
│    │    ├[+] type RuleDeclaration
│    │    │ ├  documentation: Represents information about the rule to be created for an associated condition. An example would be creating a new rule for an entry condition, such as a rule that checks for a test result before allowing the run to enter the deployment stage.
│    │    │ │  name: RuleDeclaration
│    │    │ └ properties
│    │    │    ├RuleTypeId: RuleTypeId
│    │    │    ├Configuration: json
│    │    │    ├InputArtifacts: Array<InputArtifact>
│    │    │    ├Region: string
│    │    │    ├RoleArn: string
│    │    │    └Name: string
│    │    ├[+] type RuleTypeId
│    │    │ ├  documentation: The ID for the rule type, which is made up of the combined values for category, owner, provider, and version.
│    │    │ │  name: RuleTypeId
│    │    │ └ properties
│    │    │    ├Owner: string
│    │    │    ├Category: string
│    │    │    ├Version: string
│    │    │    └Provider: string
│    │    ├[~] type StageDeclaration
│    │    │ └ properties
│    │    │    ├[+] BeforeEntry: BeforeEntryConditions
│    │    │    └[+] OnSuccess: SuccessConditions
│    │    └[+] type SuccessConditions
│    │      ├  documentation: The conditions for making checks that, if met, succeed a stage.
│    │      │  name: SuccessConditions
│    │      └ properties
│    │         └Conditions: Array<Condition>
│    └[~] resource AWS::CodePipeline::Webhook
│      ├ properties
│      │  └ Authentication: (documentation changed)
│      └ types
│         └[~] type WebhookAuthConfiguration
│           └ properties
│              └ SecretToken: (documentation changed)
├[~] service aws-cognito
│ └ resources
│    ├[~] resource AWS::Cognito::LogDeliveryConfiguration
│    │ ├  - documentation: The logging parameters of a user pool.
│    │ │  + documentation: The logging parameters of a user pool returned in response to `GetLogDeliveryConfiguration` .
│    │ ├ properties
│    │ │  ├ LogConfigurations: (documentation changed)
│    │ │  └ UserPoolId: (documentation changed)
│    │ └ types
│    │    ├[~] type CloudWatchLogsConfiguration
│    │    │ └  - documentation: The CloudWatch logging destination of a user pool detailed activity logging configuration.
│    │    │    + documentation: Configuration for the CloudWatch log group destination of user pool detailed activity logging, or of user activity log export with advanced security features.
│    │    ├[+] type FirehoseConfiguration
│    │    │ ├  name: FirehoseConfiguration
│    │    │ └ properties
│    │    │    └StreamArn: string
│    │    ├[~] type LogConfiguration
│    │    │ └ properties
│    │    │    ├ CloudWatchLogsConfiguration: (documentation changed)
│    │    │    ├ EventSource: (documentation changed)
│    │    │    ├[+] FirehoseConfiguration: FirehoseConfiguration
│    │    │    ├ LogLevel: (documentation changed)
│    │    │    └[+] S3Configuration: S3Configuration
│    │    └[+] type S3Configuration
│    │      ├  name: S3Configuration
│    │      └ properties
│    │         └BucketArn: string
│    └[~] resource AWS::Cognito::UserPool
│      └ types
│         └[~] type PasswordPolicy
│           └ properties
│              └[+] PasswordHistorySize: integer
├[~] service aws-datapipeline
│ └ resources
│    └[~] resource AWS::DataPipeline::Pipeline
│      └  - documentation: The AWS::DataPipeline::Pipeline resource specifies a data pipeline that you can use to automate the movement and transformation of data. In each pipeline, you define pipeline objects, such as activities, schedules, data nodes, and resources. For information about pipeline objects and components that you can use, see [Pipeline Object Reference](https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-pipeline-objects.html) in the *AWS Data Pipeline Developer Guide* .
│         The `AWS::DataPipeline::Pipeline` resource adds tasks, schedules, and preconditions to the specified pipeline. You can use `PutPipelineDefinition` to populate a new pipeline.
│         `PutPipelineDefinition` also validates the configuration as it adds it to the pipeline. Changes to the pipeline are saved unless one of the following validation errors exist in the pipeline.
│         - An object is missing a name or identifier field.
│         - A string or reference field is empty.
│         - The number of objects in the pipeline exceeds the allowed maximum number of objects.
│         - The pipeline is in a FINISHED state.
│         Pipeline object definitions are passed to the [PutPipelineDefinition](https://docs.aws.amazon.com/datapipeline/latest/APIReference/API_PutPipelineDefinition.html) action and returned by the [GetPipelineDefinition](https://docs.aws.amazon.com/datapipeline/latest/APIReference/API_GetPipelineDefinition.html) action.
│         + documentation: The AWS::DataPipeline::Pipeline resource specifies a data pipeline that you can use to automate the movement and transformation of data.
│         > AWS Data Pipeline is no longer available to new customers. Existing customers of AWS Data Pipeline can continue to use the service as normal. [Learn more](https://docs.aws.amazon.com/big-data/migrate-workloads-from-aws-data-pipeline/) 
│         In each pipeline, you define pipeline objects, such as activities, schedules, data nodes, and resources.
│         The `AWS::DataPipeline::Pipeline` resource adds tasks, schedules, and preconditions to the specified pipeline. You can use `PutPipelineDefinition` to populate a new pipeline.
│         `PutPipelineDefinition` also validates the configuration as it adds it to the pipeline. Changes to the pipeline are saved unless one of the following validation errors exist in the pipeline.
│         - An object is missing a name or identifier field.
│         - A string or reference field is empty.
│         - The number of objects in the pipeline exceeds the allowed maximum number of objects.
│         - The pipeline is in a FINISHED state.
│         Pipeline object definitions are passed to the [PutPipelineDefinition](https://docs.aws.amazon.com/datapipeline/latest/APIReference/API_PutPipelineDefinition.html) action and returned by the [GetPipelineDefinition](https://docs.aws.amazon.com/datapipeline/latest/APIReference/API_GetPipelineDefinition.html) action.
├[~] service aws-ec2
│ └ resources
│    ├[~] resource AWS::EC2::LaunchTemplate
│    │ └ types
│    │    └[~] type LaunchTemplateData
│    │      └ properties
│    │         └ ImageId: (documentation changed)
│    ├[~] resource AWS::EC2::NetworkInsightsAnalysis
│    │ └ types
│    │    └[~] type AnalysisRouteTableRoute
│    │      └ properties
│    │         └ destinationPrefixListId: (documentation changed)
│    ├[~] resource AWS::EC2::TransitGatewayAttachment
│    │ └ types
│    │    └[~] type Options
│    │      └ properties
│    │         └[-] SecurityGroupReferencingSupport: string
│    ├[~] resource AWS::EC2::TransitGatewayMulticastGroupMember
│    │ └ attributes
│    │    └ SourceType: (documentation changed)
│    ├[~] resource AWS::EC2::TransitGatewayMulticastGroupSource
│    │ └ attributes
│    │    └ MemberType: (documentation changed)
│    └[~] resource AWS::EC2::VPCEndpoint
│      └  - documentation: Specifies a VPC endpoint. A VPC endpoint provides a private connection between your VPC and an endpoint service. You can use an endpoint service provided by AWS , an AWS Marketplace Partner, or another AWS accounts in your organization. For more information, see the [AWS PrivateLink User Guide](https://docs.aws.amazon.com/vpc/latest/privatelink/) .
│         An endpoint of type `Interface` establishes connections between the subnets in your VPC and an AWS service , your own service, or a service hosted by another AWS account . With an interface VPC endpoint, you specify the subnets in which to create the endpoint and the security groups to associate with the endpoint network interfaces.
│         An endpoint of type `gateway` serves as a target for a route in your route table for traffic destined for Amazon S3 or DynamoDB . You can specify an endpoint policy for the endpoint, which controls access to the service from your VPC. You can also specify the VPC route tables that use the endpoint. For more information about connectivity to Amazon S3 , see [Why can't I connect to an S3 bucket using a gateway VPC endpoint?](https://docs.aws.amazon.com/premiumsupport/knowledge-center/connect-s3-vpc-endpoint)
│         An endpoint of type `GatewayLoadBalancer` provides private connectivity between your VPC and virtual appliances from a service provider.
│         + documentation: Specifies a VPC endpoint. A VPC endpoint provides a private connection between your VPC and an endpoint service. You can use an endpoint service provided by AWS , an AWS Marketplace Partner, or another AWS accounts in your organization. For more information, see the [AWS PrivateLink User Guide](https://docs.aws.amazon.com/vpc/latest/privatelink/) .
│         An endpoint of type `Interface` establishes connections between the subnets in your VPC and an  , your own service, or a service hosted by another AWS account . With an interface VPC endpoint, you specify the subnets in which to create the endpoint and the security groups to associate with the endpoint network interfaces.
│         An endpoint of type `gateway` serves as a target for a route in your route table for traffic destined for Amazon S3 or DynamoDB . You can specify an endpoint policy for the endpoint, which controls access to the service from your VPC. You can also specify the VPC route tables that use the endpoint. For more information about connectivity to Amazon S3 , see [Why can't I connect to an S3 bucket using a gateway VPC endpoint?](https://docs.aws.amazon.com/premiumsupport/knowledge-center/connect-s3-vpc-endpoint)
│         An endpoint of type `GatewayLoadBalancer` provides private connectivity between your VPC and virtual appliances from a service provider.
├[~] service aws-ecs
│ └ resources
│    ├[~] resource AWS::ECS::Service
│    │ └ types
│    │    └[~] type AwsVpcConfiguration
│    │      └  - documentation: An object representing the networking details for a task or service. For example `awsvpcConfiguration={subnets=["subnet-12344321"],securityGroups=["sg-12344321"]}`
│    │         + documentation: An object representing the networking details for a task or service. For example `awsVpcConfiguration={subnets=["subnet-12344321"],securityGroups=["sg-12344321"]}` .
│    └[~] resource AWS::ECS::TaskSet
│      └ types
│         └[~] type AwsVpcConfiguration
│           └  - documentation: An object representing the networking details for a task or service. For example `awsvpcConfiguration={subnets=["subnet-12344321"],securityGroups=["sg-12344321"]}`
│              + documentation: An object representing the networking details for a task or service. For example `awsVpcConfiguration={subnets=["subnet-12344321"],securityGroups=["sg-12344321"]}` .
├[~] service aws-elasticloadbalancingv2
│ └ resources
│    └[~] resource AWS::ElasticLoadBalancingV2::TargetGroup
│      └ types
│         └[~] type TargetGroupAttribute
│           └ properties
│              └ Key: (documentation changed)
├[~] service aws-forecast
│ └ resources
│    ├[~] resource AWS::Forecast::Dataset
│    │ └  - documentation: Creates an Amazon Forecast dataset. The information about the dataset that you provide helps Forecast understand how to consume the data for model training. This includes the following:
│    │    - *`DataFrequency`* - How frequently your historical time-series data is collected.
│    │    - *`Domain`* and *`DatasetType`* - Each dataset has an associated dataset domain and a type within the domain. Amazon Forecast provides a list of predefined domains and types within each domain. For each unique dataset domain and type within the domain, Amazon Forecast requires your data to include a minimum set of predefined fields.
│    │    - *`Schema`* - A schema specifies the fields in the dataset, including the field name and data type.
│    │    After creating a dataset, you import your training data into it and add the dataset to a dataset group. You use the dataset group to create a predictor. For more information, see [Importing datasets](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-datasets-groups.html) .
│    │    To get a list of all your datasets, use the [ListDatasets](https://docs.aws.amazon.com/forecast/latest/dg/API_ListDatasets.html) operation.
│    │    For example Forecast datasets, see the [Amazon Forecast Sample GitHub repository](https://docs.aws.amazon.com/https://github.com/aws-samples/amazon-forecast-samples) .
│    │    > The `Status` of a dataset must be `ACTIVE` before you can import training data. Use the [DescribeDataset](https://docs.aws.amazon.com/forecast/latest/dg/API_DescribeDataset.html) operation to get the status.
│    │    + documentation: Creates an Amazon Forecast dataset.
│    │    > Amazon Forecast is no longer available to new customers. Existing customers of Amazon Forecast can continue to use the service as normal. [Learn more"](https://docs.aws.amazon.com/machine-learning/transition-your-amazon-forecast-usage-to-amazon-sagemaker-canvas/) 
│    │    The information about the dataset that you provide helps Forecast understand how to consume the data for model training. This includes the following:
│    │    - *`DataFrequency`* - How frequently your historical time-series data is collected.
│    │    - *`Domain`* and *`DatasetType`* - Each dataset has an associated dataset domain and a type within the domain. Amazon Forecast provides a list of predefined domains and types within each domain. For each unique dataset domain and type within the domain, Amazon Forecast requires your data to include a minimum set of predefined fields.
│    │    - *`Schema`* - A schema specifies the fields in the dataset, including the field name and data type.
│    │    After creating a dataset, you import your training data into it and add the dataset to a dataset group. You use the dataset group to create a predictor. For more information, see [Importing datasets](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-datasets-groups.html) .
│    │    To get a list of all your datasets, use the [ListDatasets](https://docs.aws.amazon.com/forecast/latest/dg/API_ListDatasets.html) operation.
│    │    For example Forecast datasets, see the [Amazon Forecast Sample GitHub repository](https://docs.aws.amazon.com/https://github.com/aws-samples/amazon-forecast-samples) .
│    │    > The `Status` of a dataset must be `ACTIVE` before you can import training data. Use the [DescribeDataset](https://docs.aws.amazon.com/forecast/latest/dg/API_DescribeDataset.html) operation to get the status.
│    └[~] resource AWS::Forecast::DatasetGroup
│      └  - documentation: Creates a dataset group, which holds a collection of related datasets. You can add datasets to the dataset group when you create the dataset group, or later by using the [UpdateDatasetGroup](https://docs.aws.amazon.com/forecast/latest/dg/API_UpdateDatasetGroup.html) operation.
│         After creating a dataset group and adding datasets, you use the dataset group when you create a predictor. For more information, see [Dataset groups](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-datasets-groups.html) .
│         To get a list of all your datasets groups, use the [ListDatasetGroups](https://docs.aws.amazon.com/forecast/latest/dg/API_ListDatasetGroups.html) operation.
│         > The `Status` of a dataset group must be `ACTIVE` before you can use the dataset group to create a predictor. To get the status, use the [DescribeDatasetGroup](https://docs.aws.amazon.com/forecast/latest/dg/API_DescribeDatasetGroup.html) operation.
│         + documentation: Creates a dataset group, which holds a collection of related datasets. You can add datasets to the dataset group when you create the dataset group, or later by using the [UpdateDatasetGroup](https://docs.aws.amazon.com/forecast/latest/dg/API_UpdateDatasetGroup.html) operation.
│         > Amazon Forecast is no longer available to new customers. Existing customers of Amazon Forecast can continue to use the service as normal. [Learn more"](https://docs.aws.amazon.com/machine-learning/transition-your-amazon-forecast-usage-to-amazon-sagemaker-canvas/) 
│         After creating a dataset group and adding datasets, you use the dataset group when you create a predictor. For more information, see [Dataset groups](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-datasets-groups.html) .
│         To get a list of all your datasets groups, use the [ListDatasetGroups](https://docs.aws.amazon.com/forecast/latest/dg/API_ListDatasetGroups.html) operation.
│         > The `Status` of a dataset group must be `ACTIVE` before you can use the dataset group to create a predictor. To get the status, use the [DescribeDatasetGroup](https://docs.aws.amazon.com/forecast/latest/dg/API_DescribeDatasetGroup.html) operation.
├[~] service aws-kinesisfirehose
│ └ resources
│    └[~] resource AWS::KinesisFirehose::DeliveryStream
│      └ types
│         └[~] type MSKSourceConfiguration
│           └ properties
│              └[+] ReadFromTimestamp: string
├[~] service aws-lambda
│ └ resources
│    ├[~] resource AWS::Lambda::Function
│    │ └ types
│    │    └[~] type Code
│    │      └ properties
│    │         └[+] SourceKMSKeyArn: string
│    └[~] resource AWS::Lambda::Permission
│      └ properties
│         ├ Principal: (documentation changed)
│         ├ SourceAccount: (documentation changed)
│         └ SourceArn: (documentation changed)
├[~] service aws-medialive
│ └ resources
│    └[~] resource AWS::MediaLive::Multiplexprogram
│      └ attributes
│         └ ChannelId: (documentation changed)
├[~] service aws-networkfirewall
│ └ resources
│    └[~] resource AWS::NetworkFirewall::LoggingConfiguration
│      └ types
│         └[~] type LogDestinationConfig
│           └ properties
│              └ LogType: (documentation changed)
├[~] service aws-networkmanager
│ └ resources
│    ├[~] resource AWS::NetworkManager::ConnectAttachment
│    │ ├ properties
│    │ │  ├[+] NetworkFunctionGroupName: string
│    │ │  └[+] ProposedNetworkFunctionGroupChange: ProposedNetworkFunctionGroupChange
│    │ └ types
│    │    └[+] type ProposedNetworkFunctionGroupChange
│    │      ├  documentation: Describes proposed changes to a network function group.
│    │      │  name: ProposedNetworkFunctionGroupChange
│    │      └ properties
│    │         ├Tags: Array<tag>
│    │         ├AttachmentPolicyRuleNumber: integer
│    │         └NetworkFunctionGroupName: string
│    ├[~] resource AWS::NetworkManager::CoreNetwork
│    │ ├ attributes
│    │ │  └[+] NetworkFunctionGroups: Array<CoreNetworkNetworkFunctionGroup>
│    │ └ types
│    │    ├[+] type CoreNetworkNetworkFunctionGroup
│    │    │ ├  documentation: Describes a network function group.
│    │    │ │  name: CoreNetworkNetworkFunctionGroup
│    │    │ └ properties
│    │    │    ├Name: string
│    │    │    ├EdgeLocations: Array<string>
│    │    │    └Segments: Segments
│    │    └[+] type Segments
│    │      ├  name: Segments
│    │      └ properties
│    │         ├SendTo: Array<string>
│    │         └SendVia: Array<string>
│    ├[~] resource AWS::NetworkManager::SiteToSiteVpnAttachment
│    │ ├ properties
│    │ │  ├[+] NetworkFunctionGroupName: string
│    │ │  └[+] ProposedNetworkFunctionGroupChange: ProposedNetworkFunctionGroupChange
│    │ └ types
│    │    └[+] type ProposedNetworkFunctionGroupChange
│    │      ├  documentation: Describes proposed changes to a network function group.
│    │      │  name: ProposedNetworkFunctionGroupChange
│    │      └ properties
│    │         ├Tags: Array<tag>
│    │         ├AttachmentPolicyRuleNumber: integer
│    │         └NetworkFunctionGroupName: string
│    ├[~] resource AWS::NetworkManager::TransitGatewayRouteTableAttachment
│    │ ├ properties
│    │ │  ├[+] NetworkFunctionGroupName: string
│    │ │  └[+] ProposedNetworkFunctionGroupChange: ProposedNetworkFunctionGroupChange
│    │ └ types
│    │    └[+] type ProposedNetworkFunctionGroupChange
│    │      ├  documentation: Describes proposed changes to a network function group.
│    │      │  name: ProposedNetworkFunctionGroupChange
│    │      └ properties
│    │         ├Tags: Array<tag>
│    │         ├AttachmentPolicyRuleNumber: integer
│    │         └NetworkFunctionGroupName: string
│    └[~] resource AWS::NetworkManager::VpcAttachment
│      ├ properties
│      │  └[+] ProposedNetworkFunctionGroupChange: ProposedNetworkFunctionGroupChange
│      ├ attributes
│      │  └[+] NetworkFunctionGroupName: string
│      └ types
│         └[+] type ProposedNetworkFunctionGroupChange
│           ├  documentation: Describes proposed changes to a network function group.
│           │  name: ProposedNetworkFunctionGroupChange
│           └ properties
│              ├Tags: Array<tag>
│              ├AttachmentPolicyRuleNumber: integer
│              └NetworkFunctionGroupName: string
├[~] service aws-osis
│ └ resources
│    └[~] resource AWS::OSIS::Pipeline
│      └ types
│         ├[~] type VpcAttachmentOptions
│         │ ├  - documentation: Options for attaching a VPC to the pipeline.
│         │ │  + documentation: Options for attaching a VPC to pipeline.
│         │ └ properties
│         │    └ AttachToVpc: (documentation changed)
│         └[~] type VpcOptions
│           └ properties
│              └ VpcAttachmentOptions: (documentation changed)
├[~] service aws-pipes
│ └ resources
│    └[~] resource AWS::Pipes::Pipe
│      └ types
│         └[~] type S3LogDestination
│           └ properties
│              └ OutputFormat: (documentation changed)
├[~] service aws-rds
│ └ resources
│    └[~] resource AWS::RDS::DBInstance
│      └ properties
│         ├ RestoreTime: (documentation changed)
│         └ UseLatestRestorableTime: (documentation changed)
├[~] service aws-redshift
│ └ resources
│    └[~] resource AWS::Redshift::Cluster
│      └ types
│         └[~] type LoggingProperties
│           └ properties
│              ├[+] LogDestinationType: string
│              └[+] LogExports: Array<string>
├[~] service aws-rolesanywhere
│ └ resources
│    └[~] resource AWS::RolesAnywhere::Profile
│      └ properties
│         └[+] AcceptRoleSessionName: boolean
├[~] service aws-route53resolver
│ └ resources
│    └[~] resource AWS::Route53Resolver::ResolverRule
│      └ properties
│         ├[+] DelegationRecord: string
│         └ DomainName: - string (required, immutable?)
│                       + string (immutable?)
├[~] service aws-s3
│ └ resources
│    ├[~] resource AWS::S3::AccessPoint
│    │ └ types
│    │    └[~] type PublicAccessBlockConfiguration
│    │      └ properties
│    │         └ RestrictPublicBuckets: (documentation changed)
│    ├[~] resource AWS::S3::Bucket
│    │ └ types
│    │    └[~] type PublicAccessBlockConfiguration
│    │      └ properties
│    │         └ RestrictPublicBuckets: (documentation changed)
│    └[~] resource AWS::S3::MultiRegionAccessPoint
│      └ types
│         └[~] type PublicAccessBlockConfiguration
│           └ properties
│              └ RestrictPublicBuckets: (documentation changed)
├[~] service aws-s3objectlambda
│ └ resources
│    └[~] resource AWS::S3ObjectLambda::AccessPoint
│      └ types
│         └[~] type PublicAccessBlockConfiguration
│           └ properties
│              └ RestrictPublicBuckets: (documentation changed)
├[~] service aws-sagemaker
│ └ resources
│    └[~] resource AWS::SageMaker::ModelPackage
│      ├ properties
│      │  └ ModelCard: (documentation changed)
│      └ types
│         ├[~] type ModelAccessConfig
│         │ ├  - documentation: Specifies the access configuration file for the ML model.
│         │ │  + documentation: The access configuration file to control access to the ML model. You can explicitly accept the model end-user license agreement (EULA) within the `ModelAccessConfig` .
│         │ │  - If you are a Jumpstart user, see the [End-user license agreements](https://docs.aws.amazon.com/sagemaker/latest/dg/jumpstart-foundation-models-choose.html#jumpstart-foundation-models-choose-eula) section for more details on accepting the EULA.
│         │ │  - If you are an AutoML user, see the *Optional Parameters* section of *Create an AutoML job to fine-tune text generation models using the API* for details on [How to set the EULA acceptance when fine-tuning a model using the AutoML API](https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-create-experiment-finetune-llms.html#autopilot-llms-finetuning-api-optional-params) .
│         │ └ properties
│         │    └ AcceptEula: (documentation changed)
│         ├[~] type ModelCard
│         │ ├  - documentation: The model card associated with the model package.
│         │ │  + documentation: An Amazon SageMaker Model Card.
│         │ └ properties
│         │    └ ModelCardStatus: (documentation changed)
│         ├[~] type ModelDataSource
│         │ └  - documentation: Specifies the location of ML model data to deploy during endpoint creation.
│         │    + documentation: Specifies the location of ML model data to deploy. If specified, you must specify one and only one of the available data sources.
│         └[~] type S3ModelDataSource
│           └ properties
│              ├ CompressionType: (documentation changed)
│              ├ ModelAccessConfig: (documentation changed)
│              └ S3DataType: (documentation changed)
├[~] service aws-securityhub
│ └ resources
│    ├[~] resource AWS::SecurityHub::AutomationRule
│    │ └ types
│    │    └[~] type AutomationRulesFindingFilters
│    │      └ properties
│    │         └ ResourceId: (documentation changed)
│    ├[~] resource AWS::SecurityHub::ConfigurationPolicy
│    │ └ types
│    │    └[~] type Policy
│    │      └ properties
│    │         └ SecurityHub: (documentation changed)
│    ├[~] resource AWS::SecurityHub::Insight
│    │ └ types
│    │    └[~] type AwsSecurityFindingFilters
│    │      └ properties
│    │         └ ComplianceSecurityControlId: (documentation changed)
│    └[~] resource AWS::SecurityHub::SecurityControl
│      └ properties
│         └ SecurityControlId: (documentation changed)
└[~] service aws-ssm
  └ resources
     └[~] resource AWS::SSM::PatchBaseline
       └ types
          └[~] type Rule
            └ properties
               ├ ApproveAfterDays: (documentation changed)
               └ ApproveUntilDate: (documentation changed)
```
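
The larger schema additions above surface as new properties on the generated L1 constructs. The snippets below are illustrative sketches only (they are not part of this commit); property names assume the usual CloudFormation-to-CDK camelCase mapping, and all names, ARNs, and values are placeholders.

For `AWS::ACMPCA::CertificateAuthority`, the new CRL partitioning properties might be used like this:

```ts
// Sketch: new CRL partitioning properties on CfnCertificateAuthority.
// Property names assume the usual camelCase mapping; values are placeholders.
import { Construct } from 'constructs';
import * as acmpca from 'aws-cdk-lib/aws-acmpca';

declare const scope: Construct;

new acmpca.CfnCertificateAuthority(scope, 'Ca', {
  type: 'ROOT',
  keyAlgorithm: 'RSA_2048',
  signingAlgorithm: 'SHA256WITHRSA',
  subject: { commonName: 'example.com' },
  revocationConfiguration: {
    crlConfiguration: {
      enabled: true,
      s3BucketName: 'my-crl-bucket',
      customPath: 'crl/partitioned',    // new: custom path under the CRL bucket
      partitioningEnabled: true,        // new: split large CRLs into partitions
      maxPartitionSizeMb: 50,           // new: partition size cap (name mapping assumed)
      retainExpiredCertificates: false, // new: whether expired certs stay on the CRL
    },
  },
});
```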
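For `AWS::CodePipeline::Pipeline`, stages gain rule-based entry and success conditions (`BeforeEntryConditions`, `SuccessConditions`, `RuleDeclaration`). A sketch of a stage declaration using them; the rule category, provider, and result values are assumptions, not taken from this diff:

```ts
// Sketch: new stage-level conditions on CfnPipeline.StageDeclarationProperty.
// Rule category/provider and result values are illustrative assumptions.
import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';

const deployStage: codepipeline.CfnPipeline.StageDeclarationProperty = {
  name: 'Deploy',
  actions: [/* ...action declarations... */],
  // New: gate entry into the stage on one or more rule checks.
  beforeEntry: {
    conditions: [{
      result: 'FAIL', // assumed: outcome when the rule check does not pass
      rules: [{
        name: 'CheckDeploymentWindow',
        ruleTypeId: {
          category: 'Rule',          // assumed category for pipeline rules
          owner: 'AWS',
          provider: 'LambdaInvoke',  // illustrative provider
          version: '1',
        },
        configuration: { FunctionName: 'deployment-window-check' },
      }],
    }],
  },
  // New: conditions evaluated when the stage succeeds.
  onSuccess: {
    conditions: [{
      result: 'ROLLBACK', // assumed value
      rules: [/* ...more RuleDeclaration objects... */],
    }],
  },
};
```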
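For `AWS::Cognito::LogDeliveryConfiguration` and `AWS::Cognito::UserPool`, the new Firehose/S3 log destinations and `PasswordHistorySize` might look like this; the event source, log level, and ARNs are placeholders:

```ts
// Sketch: new Firehose/S3 log destinations and password history size.
import { Construct } from 'constructs';
import * as cognito from 'aws-cdk-lib/aws-cognito';

declare const scope: Construct;

const userPool = new cognito.CfnUserPool(scope, 'Pool', {
  policies: {
    passwordPolicy: {
      minimumLength: 12,
      passwordHistorySize: 5, // new: block reuse of the last 5 passwords
    },
  },
});

new cognito.CfnLogDeliveryConfiguration(scope, 'LogDelivery', {
  userPoolId: userPool.ref,
  logConfigurations: [{
    eventSource: 'userAuthEvents', // assumed event source for activity log export
    logLevel: 'INFO',
    // New destination types added by this spec update:
    firehoseConfiguration: {
      streamArn: 'arn:aws:firehose:us-east-1:111111111111:deliverystream/pool-activity',
    },
    // s3Configuration: { bucketArn: 'arn:aws:s3:::my-pool-activity-logs' },
  }],
});
```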
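For `AWS::Lambda::Function`, the deployment package can now reference the KMS key it was encrypted with via `Code.SourceKMSKeyArn`; the camelCased property name and the ARNs are assumptions:

```ts
// Sketch: new Code.SourceKMSKeyArn property for encrypted deployment packages.
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';

declare const scope: Construct;

new lambda.CfnFunction(scope, 'Fn', {
  role: 'arn:aws:iam::111111111111:role/my-function-role',
  runtime: 'nodejs20.x',
  handler: 'index.handler',
  code: {
    s3Bucket: 'my-artifact-bucket',
    s3Key: 'fn.zip',
    // new: the KMS key used to encrypt the deployment package (name mapping assumed)
    sourceKmsKeyArn: 'arn:aws:kms:us-east-1:111111111111:key/11111111-2222-3333-4444-555555555555',
  },
});
```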
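For the `AWS::NetworkManager::*` attachments, a network function group change can now be proposed on the attachment itself; IDs, ARNs, and the group name below are placeholders:

```ts
// Sketch: new ProposedNetworkFunctionGroupChange property on a VPC attachment.
import { Construct } from 'constructs';
import * as networkmanager from 'aws-cdk-lib/aws-networkmanager';

declare const scope: Construct;

new networkmanager.CfnVpcAttachment(scope, 'Attachment', {
  coreNetworkId: 'core-network-0123456789abcdef0',
  vpcArn: 'arn:aws:ec2:us-east-1:111111111111:vpc/vpc-0123456789abcdef0',
  subnetArns: ['arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0123456789abcdef0'],
  // New: propose moving this attachment into a network function group.
  proposedNetworkFunctionGroupChange: {
    networkFunctionGroupName: 'inspection',
    attachmentPolicyRuleNumber: 100,
    tags: [{ key: 'stage', value: 'proposed' }],
  },
});
```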
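And for `AWS::RolesAnywhere::Profile`, the new `AcceptRoleSessionName` flag; the profile name and role ARN are placeholders:

```ts
// Sketch: new AcceptRoleSessionName flag on a Roles Anywhere profile.
import { Construct } from 'constructs';
import * as rolesanywhere from 'aws-cdk-lib/aws-rolesanywhere';

declare const scope: Construct;

new rolesanywhere.CfnProfile(scope, 'Profile', {
  name: 'workload-profile',
  roleArns: ['arn:aws:iam::111111111111:role/workload-role'],
  acceptRoleSessionName: true, // new: accept a caller-supplied role session name
});
```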
aws-cdk-automation authored Aug 12, 2024
1 parent 7c4f423 commit 62a641c
Showing 5 changed files with 22 additions and 29 deletions.
4 changes: 2 additions & 2 deletions packages/@aws-cdk/cloudformation-diff/package.json
```diff
@@ -23,8 +23,8 @@
   },
   "license": "Apache-2.0",
   "dependencies": {
-    "@aws-cdk/aws-service-spec": "^0.1.15",
-    "@aws-cdk/service-spec-types": "^0.0.83",
+    "@aws-cdk/aws-service-spec": "^0.1.16",
+    "@aws-cdk/service-spec-types": "^0.0.84",
     "chalk": "^4",
     "diff": "^5.2.0",
     "fast-deep-equal": "^3.1.3",
```

2 changes: 1 addition & 1 deletion packages/@aws-cdk/integ-runner/package.json
```diff
@@ -74,7 +74,7 @@
     "@aws-cdk/cloud-assembly-schema": "0.0.0",
     "@aws-cdk/cloudformation-diff": "0.0.0",
     "@aws-cdk/cx-api": "0.0.0",
-    "@aws-cdk/aws-service-spec": "^0.1.15",
+    "@aws-cdk/aws-service-spec": "^0.1.16",
     "cdk-assets": "0.0.0",
     "@aws-cdk/cdk-cli-wrapper": "0.0.0",
     "aws-cdk": "0.0.0",
```

2 changes: 1 addition & 1 deletion packages/aws-cdk-lib/package.json
```diff
@@ -135,7 +135,7 @@
     "mime-types": "^2.1.35"
   },
   "devDependencies": {
-    "@aws-cdk/aws-service-spec": "^0.1.15",
+    "@aws-cdk/aws-service-spec": "^0.1.16",
     "@aws-cdk/cdk-build-tools": "0.0.0",
     "@aws-cdk/custom-resource-handlers": "0.0.0",
     "@aws-cdk/pkglint": "0.0.0",
```

6 changes: 3 additions & 3 deletions tools/@aws-cdk/spec2cdk/package.json
```diff
@@ -32,9 +32,9 @@
   },
   "license": "Apache-2.0",
   "dependencies": {
-    "@aws-cdk/aws-service-spec": "^0.1.15",
-    "@aws-cdk/service-spec-importers": "^0.0.43",
-    "@aws-cdk/service-spec-types": "^0.0.83",
+    "@aws-cdk/aws-service-spec": "^0.1.16",
+    "@aws-cdk/service-spec-importers": "^0.0.44",
+    "@aws-cdk/service-spec-types": "^0.0.84",
     "@cdklabs/tskb": "^0.0.3",
     "@cdklabs/typewriter": "^0.0.3",
     "camelcase": "^6",
```

37 changes: 15 additions & 22 deletions yarn.lock
```diff
@@ -51,12 +51,12 @@
   resolved "https://registry.npmjs.org/@aws-cdk/asset-node-proxy-agent-v6/-/asset-node-proxy-agent-v6-2.0.3.tgz#9b5d213b5ce5ad4461f6a4720195ff8de72e6523"
   integrity sha512-twhuEG+JPOYCYPx/xy5uH2+VUsIEhPTzDY0F1KuB+ocjWWB/KEDiOVL19nHvbPCB6fhWnkykXEMJ4HHcKvjtvg==
 
-"@aws-cdk/aws-service-spec@^0.1.15":
-  version "0.1.15"
-  resolved "https://registry.npmjs.org/@aws-cdk/aws-service-spec/-/aws-service-spec-0.1.15.tgz#2d4ab7b847ddc255e5d3a300bb91905c513ffac4"
-  integrity sha512-r5hNmHKqsuY+Y3bh0TLOTla0yORh3e6o79pOUkDRwyL1tdcds2ziY1Kc967KJDcET5Tn1zvoxTuksD40abmKhw==
+"@aws-cdk/aws-service-spec@^0.1.16":
+  version "0.1.16"
+  resolved "https://registry.npmjs.org/@aws-cdk/aws-service-spec/-/aws-service-spec-0.1.16.tgz#2cb1f7b1783c4dc362492296ebf61c7fd5cc88c7"
+  integrity sha512-9NX+04puH6zkTQY2shOzSWa8Ge1sdz0M4sqZw/UI9mgHbflfhxgSkjTwz6Fe/B3FH3ZA1RXl/wW6ThEqeAb3fw==
   dependencies:
-    "@aws-cdk/service-spec-types" "^0.0.83"
+    "@aws-cdk/service-spec-types" "^0.0.84"
     "@cdklabs/tskb" "^0.0.3"
 
 "@aws-cdk/lambda-layer-kubectl-v24@^2.0.242":
@@ -74,12 +74,12 @@
   resolved "https://registry.npmjs.org/@aws-cdk/lambda-layer-kubectl-v30/-/lambda-layer-kubectl-v30-2.0.0.tgz#97c40d31e5350ce7170be5d188361118b1e39231"
   integrity sha512-yES6NfrJ3QV1372lAZ2FLXp/no4bqDWBXeSREJdrpWjQzD0wvL/hCpHEyjZrzHhOi27YbMxFTQ3g9isKAul8+A==
 
-"@aws-cdk/service-spec-importers@^0.0.43":
-  version "0.0.43"
-  resolved "https://registry.npmjs.org/@aws-cdk/service-spec-importers/-/service-spec-importers-0.0.43.tgz#94de14d9d21243c213de448edf14f3b83db76086"
-  integrity sha512-iu1uOGyzI/MF5y3WL/7txu81Bw9KoxgD+dO+M1yLhwKY7zJR6HulQ2FCZCAAU4CDHpXXbpdEz3vY5G692a8uBA==
+"@aws-cdk/service-spec-importers@^0.0.44":
+  version "0.0.44"
+  resolved "https://registry.npmjs.org/@aws-cdk/service-spec-importers/-/service-spec-importers-0.0.44.tgz#8a2c55e69f1fd33ff19877e7eb82d87cf35cd229"
+  integrity sha512-Oo5qbamIPx/YOeZlmxNJsenPvNkyaofgieWhZavqhAgk0H5VCis4/stxnUwZzsu3Bc7SCg/vQRILDt4oGt981Q==
   dependencies:
-    "@aws-cdk/service-spec-types" "^0.0.82"
+    "@aws-cdk/service-spec-types" "^0.0.84"
     "@cdklabs/tskb" "^0.0.3"
     ajv "^6"
     canonicalize "^2.0.0"
@@ -90,17 +90,10 @@
   glob "^8"
   sort-json "^2.0.1"
 
-"@aws-cdk/service-spec-types@^0.0.82":
-  version "0.0.82"
-  resolved "https://registry.npmjs.org/@aws-cdk/service-spec-types/-/service-spec-types-0.0.82.tgz#f677f017fd54b311092af7721946b6464ae100f6"
-  integrity sha512-8vdhrkYq3p1kg7WY4thblhin8djcKCf1MfcESFoYa5dG8zu9DmdBNXUFx8GiXjkHXADGrPK2/jaL1XhK4qkLpw==
-  dependencies:
-    "@cdklabs/tskb" "^0.0.3"
-
-"@aws-cdk/service-spec-types@^0.0.83":
-  version "0.0.83"
-  resolved "https://registry.npmjs.org/@aws-cdk/service-spec-types/-/service-spec-types-0.0.83.tgz#20337cb6adde4627ffbcc624fc43e3ae042e746c"
-  integrity sha512-M3G0UiTKm81SCK9tTSfzmnojg5Mx/NQ3nsIQUIYNmlYHaw/EM9A933sjSv02lJt42fIqnzNjWOH1wiwQFnX28Q==
+"@aws-cdk/service-spec-types@^0.0.84":
+  version "0.0.84"
+  resolved "https://registry.npmjs.org/@aws-cdk/service-spec-types/-/service-spec-types-0.0.84.tgz#b6fa7429bb556d26eb39c18a2ee9802079bdb234"
+  integrity sha512-AM3ghRsd9cZlpW+nuVRRdQiPuGV9iWDyHnR/Vjd9xKQEf+Qmh9vnRmB205rFncAIlbFjHXxgapII+lujHCGDmQ==
   dependencies:
     "@cdklabs/tskb" "^0.0.3"
 
@@ -16824,4 +16817,4 @@ zip-stream@^4.1.0:
   dependencies:
     archiver-utils "^3.0.4"
     compress-commons "^4.1.2"
-    readable-stream "^3.6.0"
+    readable-stream "^3.6.0"
```
