DynamoDB - Data Backup

Use the Data Pipeline import/export feature to perform backups. How you back up depends on whether you use the GUI console or use Data Pipeline directly (the API). When using the console, create a separate pipeline for each table; when using the API directly, a single pipeline can export or import multiple tables. A minimal API sketch follows.
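For the direct option, a short boto3 sketch might look like the following. The pipeline name, unique ID, and region are assumptions, not fixed values:

```python
import boto3

# Assumed region; use whichever region holds the source table.
dp = boto3.client("datapipeline", region_name="us-east-1")

# create_pipeline registers an empty pipeline; uniqueId is a
# caller-supplied idempotency token.
resp = dp.create_pipeline(
    name="dynamodb-backup",          # placeholder pipeline name
    uniqueId="dynamodb-backup-001",  # placeholder idempotency token
)
pipeline_id = resp["pipelineId"]
print("created pipeline:", pipeline_id)

# Next you would upload a definition built from the "Export DynamoDB
# table to S3" template with put_pipeline_definition(...), then start
# the pipeline:
#   dp.activate_pipeline(pipelineId=pipeline_id)
```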

Export data

You must create an Amazon S3 bucket before exporting. You can export from one or more tables.
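A minimal bucket-creation sketch with boto3 follows; the bucket name and region are placeholders:

```python
import boto3

s3 = boto3.client("s3", region_name="us-west-2")  # assumed region

# Bucket names are globally unique; replace with your own.
s3.create_bucket(
    Bucket="my-dynamodb-backups",
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)
# Note: omit CreateBucketConfiguration when creating in us-east-1.
```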

Use the following four steps to perform the export −

Step 1 − Sign in to the AWS Management Console and open the Data Pipeline console at https://console.aws.amazon.com/datapipeline/.

Step 2 − If there are no pipelines in your AWS Region, select Start Now. If you have one or more, select Create a new pipeline.

Step 3 − On the creation page, enter a name for your pipeline. Select Build using a template for the Source parameter. Select Export DynamoDB Table to S3 from the list. Enter the source table in the DynamoDB source table name field.

Enter the destination S3 bucket in the S3 Output Folder text box using the following format: s3://nameOfBucket/region/nameOfFolder. Enter the S3 destination for the log file in the S3 location for logs field.

Step 4 − Select Activate after entering all the settings.

The pipeline may take several minutes to complete the creation process. Use the console to monitor its status. Confirm successful processing using the S3 console by viewing the exported file.
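You can also check the pipeline and the exported files programmatically. A sketch follows, assuming the placeholder pipeline ID, bucket, and output folder from the steps above:

```python
import boto3

dp = boto3.client("datapipeline", region_name="us-east-1")  # assumed region
s3 = boto3.client("s3")

# Placeholder pipeline ID; read the real one from the console.
desc = dp.describe_pipelines(pipelineIds=["df-EXAMPLE"])
for field in desc["pipelineDescriptionList"][0]["fields"]:
    if field["key"] == "@pipelineState":
        print("pipeline state:", field["stringValue"])

# List whatever the export wrote under the output folder from Step 3.
listing = s3.list_objects_v2(
    Bucket="my-dynamodb-backups", Prefix="us-west-2/exports/"
)
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])
```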

Data import

A successful import can only occur if the following conditions are true: you created the destination table, the destination and source use the same name, and the destination and source use the same key schema.

You can import into a destination table that already holds data, but the import replaces data items that share a key with the source items and also adds any additional items to the table. The destination may also use a different region.
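If the destination table does not exist yet, create it with the source's key schema before importing. A sketch follows, using a hypothetical table with a partition and sort key; the table name, key names, and region are assumptions:

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-west-2")  # assumed region

# Hypothetical table; the name and key schema must match the source.
ddb.create_table(
    TableName="Music",
    KeySchema=[
        {"AttributeName": "Artist", "KeyType": "HASH"},      # partition key
        {"AttributeName": "SongTitle", "KeyType": "RANGE"},  # sort key
    ],
    AttributeDefinitions=[
        {"AttributeName": "Artist", "AttributeType": "S"},
        {"AttributeName": "SongTitle", "AttributeType": "S"},
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
# Block until the table is ready to receive the imported items.
ddb.get_waiter("table_exists").wait(TableName="Music")
```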

While you can export multiple sources, you can only import one per operation. You can perform the import by adhering to the following steps −

Step 1 − Sign in to the AWS Management Console and open the Data Pipeline console.

Step 2 − If you are performing a cross-region import, select the destination region, as in the sketch below.
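Programmatically, the same choice amounts to pinning your clients to the destination region; the region name here is an assumption:

```python
import boto3

# Point every client at the destination region for a cross-region import.
dp = boto3.client("datapipeline", region_name="eu-west-1")
ddb = boto3.client("dynamodb", region_name="eu-west-1")
```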

Step 3 − Select Create new pipeline.

Step 4 − Enter a name for the pipeline in the Name field. Select Build using a template for the Source option, and select Import DynamoDB backup data from S3 from the list of templates.

Enter the location of the source file in the S3 Input Folder text box. Enter a name for the target table in the Target DynamoDB Table Name field. Then enter the location of the log file in the S3 location for logs text box.

Step 5 − Select Activate after entering all the settings.

The import starts right after the pipeline is created, though the creation process itself may take several minutes to complete. You can check the pipeline's status as shown below.
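A small status-check sketch follows; the pipeline ID and region are placeholders, and @pipelineState and @healthStatus are the fields the console surfaces as the pipeline's status and health:

```python
import boto3

dp = boto3.client("datapipeline", region_name="us-west-2")  # destination region

# Placeholder pipeline ID; copy the real one from the console.
desc = dp.describe_pipelines(pipelineIds=["df-EXAMPLE"])
for f in desc["pipelineDescriptionList"][0]["fields"]:
    if f["key"] in ("@pipelineState", "@healthStatus"):
        print(f["key"], "=", f.get("stringValue"))
```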

Errors

When errors occur, the Data Pipeline console displays ERROR as the pipeline's status. Clicking the failed pipeline takes you to its detail page, which shows each step of the process and the point at which the failure occurred. The log files also provide some insight.
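To pull those logs without the console, list the objects under the S3 location for logs you configured earlier; the bucket and prefix here are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder log location; use the bucket/prefix from the pipeline's
# "S3 location for logs" setting.
logs = s3.list_objects_v2(Bucket="my-dynamodb-backups", Prefix="logs/")
for obj in logs.get("Contents", []):
    print(obj["Key"])
```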

Common causes of errors include the following −

The destination table for an import does not exist, or does not use the same key schema as the source.

The S3 bucket does not exist, or you do not have read/write permissions for it.

The pipeline timed out.

You do not have the necessary export/import permissions.

Your AWS account has reached its resource limit.