
Using Amazon S3 as the Backup Shared Storage for Adobe Connect On-premise Deployments
Starting with version 12.10, Adobe Connect (LCC) provides the ability to natively use Amazon S3 as the backup shared storage where content is stored and from which it is retrieved.
Follow the steps below to configure your Adobe Connect cluster to use S3 (or a mixture of S3 and non-S3 volumes) for content storage.
Note: This change requires the entire Connect cluster to be stopped, since volume information is initialized at startup, and all servers need to use the same volumes for consistency.
In all cases, these instructions assume that your volumes are accessible from the Connect instance for the purposes of reads/writes/deletes. For more information on this, see the last section below in this article.
Use case 1: New storage volumes
Follow these steps if you have never previously used a shared storage path to persist content on a Connect cluster. Without a shared storage path, all necessary content is stored locally on every node in the cluster and cleaned out according to the cache size configuration on that node.
- Add the following configuration entries to your custom.ini file:
- TRACK_SHARED_STORAGE_VOLUMES=true
- FALLBACK_SEARCH_ALL_VOLUMES=true
- Use the following script (modified appropriately) to set up your storage volumes in the database.
- To insert a new S3 path (VOL_TYPE='S') as an active (VOL_STATUS='A') volume, use this query:
INSERT INTO PPS_STORAGE_VOLUMES
(STORAGE_VOLUME_ID, NAME, URL, REGION, PREFIX, VOL_ORDER, VOL_TYPE, VOL_STATUS, TOTAL_BYTES, DATE_CREATED, DISABLED)
VALUES
(VOL_ID, VOLUME_NAME, S3_URL, REGION, PREFIX_PATH, INDEX, 'S', 'A', 0, GETUTCDATE(), NULL);
- For instance, to insert a primary write volume pointing to s3://mybucket/myaccount/01 in the us-west-2 region, use this query:
INSERT INTO PPS_STORAGE_VOLUMES
(STORAGE_VOLUME_ID, NAME, URL, REGION, PREFIX, VOL_ORDER, VOL_TYPE, VOL_STATUS, TOTAL_BYTES, DATE_CREATED, DISABLED)
VALUES
(1, 'mybucket', 's3://mybucket', 'us-west-2', 'myaccount/01', 5, 'S', 'A', 0, GETUTCDATE(), NULL);
Note: VOL_ORDER is set to 5 above, which leaves a buffer so that additional entries can be inserted ahead of it in the future if needed.
- To insert new SMB storage paths (VOL_TYPE='F' for Amazon FSx, 'G' for Amazon File Gateway, 'N' for other SMB shares) as active (VOL_STATUS='A') volumes, use these queries:
INSERT INTO PPS_STORAGE_VOLUMES
(STORAGE_VOLUME_ID, NAME, URL, REGION, PREFIX, VOL_ORDER, VOL_TYPE, VOL_STATUS, TOTAL_BYTES, DATE_CREATED, DISABLED)
VALUES
(2, 'Amazon FSx', 'FSx1', 'us-west-2', 'myaccount/01', 10, 'F', 'A', 0, GETUTCDATE(), NULL);
INSERT INTO PPS_STORAGE_VOLUMES
(STORAGE_VOLUME_ID, NAME, URL, REGION, PREFIX, VOL_ORDER, VOL_TYPE, VOL_STATUS, TOTAL_BYTES, DATE_CREATED, DISABLED)
VALUES
(3, 'Amazon File Gateway', 'GW1', 'us-west-2', 'myaccount/01', 15, 'G', 'A', 0, GETUTCDATE(), NULL);
INSERT INTO PPS_STORAGE_VOLUMES
(STORAGE_VOLUME_ID, NAME, URL, REGION, PREFIX, VOL_ORDER, VOL_TYPE, VOL_STATUS, TOTAL_BYTES, DATE_CREATED, DISABLED)
VALUES
(4, 'On-prem NAS', 'myNAS', 'us-west-2', 'myaccount/01', 20, 'N', 'A', 0, GETUTCDATE(), NULL);
- The active volume with the lowest VOL_ORDER will be used as the read/write volume. The remaining volumes will generally be used as read-only volumes.
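The ordering rule above can be sketched as follows. This is a simplified illustration, not Adobe Connect's actual implementation; the volume dictionaries mirror the example rows inserted earlier.

```python
# Illustrative sketch (NOT Connect internals): sort volumes by VOL_ORDER
# ascending; the first active volume becomes the read/write volume and
# the remaining active volumes are treated as read-only.

def classify_volumes(volumes):
    """volumes: list of dicts with NAME, VOL_ORDER, and VOL_STATUS keys."""
    active = sorted(
        (v for v in volumes if v["VOL_STATUS"] == "A"),
        key=lambda v: v["VOL_ORDER"],
    )
    if not active:
        return None, []
    return active[0], active[1:]  # (read/write volume, read-only volumes)

volumes = [
    {"NAME": "mybucket", "VOL_ORDER": 5, "VOL_STATUS": "A"},
    {"NAME": "Amazon FSx", "VOL_ORDER": 10, "VOL_STATUS": "A"},
    {"NAME": "Amazon File Gateway", "VOL_ORDER": 15, "VOL_STATUS": "A"},
    {"NAME": "On-prem NAS", "VOL_ORDER": 20, "VOL_STATUS": "A"},
]
primary, read_only = classify_volumes(volumes)
print(primary["NAME"])  # mybucket
print([v["NAME"] for v in read_only])
```

With the example rows, the S3 volume (VOL_ORDER=5) is selected as the read/write volume and the FSx, File Gateway, and NAS volumes become read-only.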
- Start Adobe Connect – you will see the storage volumes being initialized as follows, in the debug logs:
cps-startup (INFO) ….. StorageVolumes.getAllEnumRows
cps-startup (INFO) Storage Volumes Rows Count: NO_OF_ROWS (e.g. 4, 5 etc.)
cps-startup (INFO) Storage Volumes Initialization Starting in TBM
cps-startup (INFO) primary ID key found for table PPS_STORAGE_VOLUMES
- Upload and access content. You will see entries being populated in the PPS_ASSET_STORAGE_STATUS table as content is published.
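The effect of FALLBACK_SEARCH_ALL_VOLUMES=true can be pictured roughly as follows. The function and asset names here are hypothetical; this is a sketch of the lookup behavior, not Connect's code.

```python
# Illustrative sketch (NOT Connect internals) of FALLBACK_SEARCH_ALL_VOLUMES:
# look for an asset on the primary volume first; if enabled, fall back to
# searching the remaining volumes in VOL_ORDER.

def find_asset(asset, volumes, fallback_search_all=True):
    """volumes: list of (name, set_of_assets) tuples, sorted by VOL_ORDER."""
    primary_name, primary_assets = volumes[0]
    if asset in primary_assets:
        return primary_name
    if fallback_search_all:
        for name, assets in volumes[1:]:
            if asset in assets:
                return name
    return None

volumes = [
    ("mybucket", {"deck.pdf"}),            # primary (lowest VOL_ORDER)
    ("Amazon FSx", {"old-recording.zip"}),
    ("On-prem NAS", {"legacy.ppt"}),
]
print(find_asset("old-recording.zip", volumes))  # Amazon FSx
print(find_asset("legacy.ppt", volumes, fallback_search_all=False))  # None
```

Without the fallback flag, content that lives only on a secondary volume would not be found, which is why both configuration entries are added together.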
Use case 2: Migrated storage volumes
Follow these steps if your current configuration files already have a SHARED_STORAGE or BACKUP_PATH pointing to Amazon FSx, Amazon File Gateway, and/or on-prem NAS storage. You may either migrate this content, as needed, to S3, or continue to access it from its current volumes.
- Add the following configuration entries to your custom.ini file:
- TRACK_SHARED_STORAGE_VOLUMES=true
- FALLBACK_SEARCH_ALL_VOLUMES=true
You can also optionally add the following configuration if (a) you wish to use a mixture of S3 and non-S3 (FSx/GW/NAS) volumes, (b) the primary volume is an S3 volume, and (c) you wish content to be migrated automatically to the S3 volume when it is accessed from any of the non-S3 volumes:
MIGRATE_ASSET_TO_S3=true
- Migrate content to S3 as required for your purpose
- Use the same scripts as in case #1 (modified appropriately) to set up your storage volumes in the database.
- Restart Adobe Connect and validate as in case #1.
Note that any shared storage backup (SHARED_STORAGE or BACKUP_PATH) currently configured in the config files will be silently ignored if any storage volume is successfully initialized from PPS_STORAGE_VOLUMES. If no volumes are found in PPS_STORAGE_VOLUMES, any existing SHARED_STORAGE value in the configuration files will be used as the fallback.
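The migrate-on-access behavior enabled by MIGRATE_ASSET_TO_S3=true can be sketched as follows. This is a hypothetical illustration of the flow described above, not Connect's actual implementation.

```python
# Illustrative sketch (NOT Connect internals) of MIGRATE_ASSET_TO_S3=true:
# when an asset is served from a non-S3 volume, it is also copied to the
# primary S3 volume so that future reads are served from S3.

def read_asset(asset, primary_s3, other_volumes, migrate_to_s3=True):
    """primary_s3: set of assets on the S3 primary;
    other_volumes: list of (name, set_of_assets) tuples."""
    if asset in primary_s3:
        return "primary"
    for name, assets in other_volumes:
        if asset in assets:
            if migrate_to_s3:
                primary_s3.add(asset)  # migrate on access
            return name
    return None

primary_s3 = {"new-content.pdf"}
others = [("Amazon FSx", {"migrated-me.mp4"})]
print(read_asset("migrated-me.mp4", primary_s3, others))  # Amazon FSx
print(read_asset("migrated-me.mp4", primary_s3, others))  # primary
```

The first access is served from the FSx volume and triggers the copy; the second access is served from the S3 primary.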
Grant access to your S3 buckets from the Adobe Connect cluster
- Enable access from Adobe Connect instances
- Create a new least-privilege IAM role specifically for the cluster
- Attach bucket policy to your S3 bucket(s) to allow the new IAM role
- Associate the new IAM role with your Adobe Connect instances
- Grant appropriate (least) privilege:
- Bucket level –
{
"Effect": "Allow",
"Sid": "AllowCustomerInstanceAccessToCustomerBucket",
"Action": [
"s3:ListBucket",
"s3:ListBucketVersions",
"s3:ListBucketMultipartUploads",
"s3:GetBucketLocation"
],
"Resource": "..."
}
- Object level –
{
"Effect": "Allow",
"Sid": "AllowCustomerInstanceAccessToCustomerBucketObjects",
"Action": [
"s3:GetObject", "s3:GetObjectACL", "s3:GetObjectTagging",
"s3:GetObjectVersion", "s3:GetObjectVersionTagging",
"s3:PutObject", "s3:PutObjectACL",
"s3:DeleteObject", "s3:DeleteObjectVersion",
"s3:ListMultipartUploadParts", "s3:AbortMultipartUpload"
],
"Resource": "..."
}
- Other – grant a minimal set of necessary permissions for any other operations you need. For instance, if you use SSE-KMS:
{
"Effect": "Allow",
"Sid": "AllowKMSGenerateDataKey",
"Action": [
"kms:GenerateDataKey"
],
"Resource": "..."
}