Amazon Bedrock Guardrails now offers cross-account safeguards, enabling organizations to centrally enforce AI safety controls across multiple AWS accounts while maintaining flexibility for account-specific requirements.
Amazon Web Services has announced the general availability of cross-account safeguards in Amazon Bedrock Guardrails, a capability that enables centralized enforcement and management of safety controls across multiple AWS accounts within an organization. Security teams can now specify guardrails in a new Amazon Bedrock policy within the organization's management account, automatically enforcing the configured safeguards across all member accounts for every Amazon Bedrock model invocation.
Centralized Control with Organizational Flexibility
The cross-account safeguards capability provides uniform protection across all accounts and generative AI applications from a single point of control. Organizations can apply one guardrail from the management account to the entire organization through policy settings; the guardrail's filters are then enforced automatically across all member entities, including organizational units (OUs) and individual accounts, for every Amazon Bedrock model invocation.
Beyond organization-level enforcement, the feature offers the flexibility to apply account-level and application-specific controls as use cases require. Account-level enforcement automatically applies the configured safeguards to every Amazon Bedrock inference API call made in an AWS account.
This dual approach supports consistent adherence to corporate responsible AI requirements while significantly reducing the administrative burden of monitoring individual accounts and applications. Security teams no longer need to oversee and verify configurations or compliance for each account independently, streamlining governance across complex multi-account AWS environments.
Getting Started with Cross-Account Enforcement
To enable account-level enforcement, administrators can navigate to the Amazon Bedrock Guardrails console and choose "Create" in the Account-level enforcement configurations section. The process requires selecting a specific guardrail version to ensure immutability—once configured, the guardrail cannot be modified by member accounts. This design ensures consistent enforcement across the organization.
The general availability introduces new controls for model selection and content guarding. Administrators can now define which models enforcement covers using either Include or Exclude behavior. For content guarding, two modes are available. Comprehensive mode enforces the guardrail on all content regardless of how the caller tags it, the safer default when callers can't be relied upon to correctly identify sensitive content. Selective mode trusts callers to tag the right content and reduces unnecessary guardrail processing, which is useful when callers handle a mix of pre-validated and user-generated content.
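The scoping logic described above can be sketched in a few lines of plain Python. This is an illustrative model, not an AWS API: the function names and signatures here are hypothetical, and only the Include/Exclude and Comprehensive/Selective semantics come from the announcement.

```python
# Illustrative sketch (hypothetical names, not an AWS API) of how
# Include/Exclude model selection and Comprehensive vs. Selective
# content guarding could behave.

def model_in_scope(model_id: str, behavior: str, model_list: set) -> bool:
    """Return True if an enforced guardrail applies to this model."""
    if behavior == "Include":
        return model_id in model_list      # only listed models are guarded
    if behavior == "Exclude":
        return model_id not in model_list  # all models except listed ones
    raise ValueError(f"Unknown behavior: {behavior}")

def content_to_guard(mode: str, tagged_segments: list, all_segments: list) -> list:
    """Comprehensive guards everything; Selective trusts caller tags."""
    return all_segments if mode == "Comprehensive" else tagged_segments

# Example: with Exclude behavior, an unlisted model is still in scope.
print(model_in_scope("model-a", "Exclude", {"model-b"}))  # True
```

The design choice mirrors the announcement's trade-off: Comprehensive mode ignores caller tags entirely, while Selective mode shifts responsibility (and saves processing) by trusting them.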
Enforcement can be tested by assuming a role in the account and invoking a model: the account-enforced guardrail automatically applies to both prompts and outputs, and responses include the guardrail assessment information. Testing works across the Amazon Bedrock inference APIs, including InvokeModel, InvokeModelWithResponseStream, Converse, and ConverseStream.
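A quick way to verify enforcement from code is to call one of the inference APIs and inspect the response. The sketch below uses a hand-written stand-in for a Converse API reply rather than a live call; `guardrail_intervened` is a hypothetical helper, though `stopReason` and the `guardrail_intervened` value are real fields in Converse responses.

```python
# Minimal sketch: checking a Converse-style response for guardrail activity.
# With account-level enforcement no guardrailConfig is needed on the call;
# the real call would look like:
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(modelId="...", messages=[...])
# sample_response below is a hand-written stand-in for such a reply.

sample_response = {
    "stopReason": "guardrail_intervened",
    "output": {
        "message": {
            "role": "assistant",
            "content": [{"text": "Sorry, I can't help with that request."}],
        }
    },
}

def guardrail_intervened(response: dict) -> bool:
    """Hypothetical helper: did a guardrail block or modify this turn?"""
    return response.get("stopReason") == "guardrail_intervened"

print(guardrail_intervened(sample_response))  # True
```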
For organization-level enforcement, administrators must use the AWS Organizations console to enable Bedrock policies. The process involves creating a Bedrock policy that specifies the guardrail and attaching it to target accounts or OUs. The policy configuration includes specifying the guardrail ARN and version, along with input tags settings for AWS Organizations.
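To make the policy shape concrete, the fragment below builds an illustrative policy document. This is a loosely hedged sketch: the field names and nesting are assumptions (the authoritative schema is in the AWS Organizations documentation), and the ARN, version, and account ID are placeholders.

```python
import json

# Hypothetical sketch of a Bedrock policy document for AWS Organizations.
# Field names below are illustrative assumptions, not the official schema;
# the guardrail ARN and version are placeholders.

bedrock_policy = {
    "bedrock": {
        "guardrails": {
            "guardrail_arn": "arn:aws:bedrock:us-east-1:111122223333:guardrail/abc123",
            "guardrail_version": "1",
        }
    }
}

policy_content = json.dumps(bedrock_policy, indent=2)
# From the management account, this content would then be supplied when
# creating the Bedrock policy and attaching it to target accounts or OUs
# via the AWS Organizations console.
print(policy_content)
```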
Key Considerations and Limitations
Several important considerations apply to this capability. Administrators can include or exclude specific Amazon Bedrock models from enforcement on inference calls, and can choose to safeguard partial or complete system prompts and input prompts, giving granular control over which content receives protection.
Accuracy in specifying guardrail Amazon Resource Names (ARNs) is critical. Specifying an incorrect or invalid ARN will result in policy violations, non-enforcement of safeguards, and the inability to use models in Amazon Bedrock for inference. Organizations should consult the Best practices for using Amazon Bedrock policies documentation to ensure proper configuration.
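Since an invalid ARN blocks inference outright, a simple format check before writing a policy can catch typos early. The pattern below is an assumption based on the common shape of Bedrock guardrail ARNs (`arn:aws:bedrock:<region>:<account-id>:guardrail/<id>`), and `looks_like_guardrail_arn` is a hypothetical helper, not an AWS validator.

```python
import re

# Assumed guardrail ARN shape (verify against AWS docs for your partition):
#   arn:aws:bedrock:<region>:<12-digit-account-id>:guardrail/<id>
GUARDRAIL_ARN_RE = re.compile(
    r"^arn:aws:bedrock:[a-z0-9-]+:\d{12}:guardrail/[a-zA-Z0-9]+$"
)

def looks_like_guardrail_arn(arn: str) -> bool:
    """Hypothetical sanity check; a syntactic match does not prove the
    guardrail exists or that the version is valid."""
    return bool(GUARDRAIL_ARN_RE.match(arn))

print(looks_like_guardrail_arn(
    "arn:aws:bedrock:us-east-1:111122223333:guardrail/abc123xyz"))  # True
print(looks_like_guardrail_arn(
    "arn:aws:bedrock:us-east-1:guardrail/abc123xyz"))  # False (no account id)
```

A check like this only guards against malformed strings; confirming that the ARN points to a real, current guardrail version still requires the Best practices guidance the article references.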
It's important to note that Automated Reasoning checks are not supported with this capability, which may influence how organizations approach certain types of content validation and safety requirements.
Availability and Pricing
Cross-account safeguards in Amazon Bedrock Guardrails are generally available today in all AWS commercial and GovCloud (US) Regions where Amazon Bedrock Guardrails is available. Organizations can consult the AWS Capabilities by Region documentation for detailed availability and roadmap information.
Charges apply to each enforced guardrail according to its configured safeguards. Organizations should review the Amazon Bedrock Pricing page for detailed pricing information on individual safeguards to understand the cost implications of implementing comprehensive cross-account protection.
The capability is accessible through the Amazon Bedrock console, and AWS encourages users to provide feedback through AWS re:Post for Amazon Bedrock Guardrails or through their usual AWS Support contacts.
This announcement represents a significant advancement in enterprise AI governance, addressing the growing need for consistent safety controls across distributed AWS environments while maintaining the flexibility required for diverse organizational needs and use cases.