
s3 cp "Cannot allocate memory" error

Mar 14, 2026

Problem

Confirm by changing [ ] to [x] below:

- [x] I've gone through the User Guide and the API reference
- [x] I've searched for previous similar issues and didn't find any solution

Issue is about usage on:

- [ ] Service API: I want to do X using Y service, what should I do?
- [ ] CLI: passing arguments or CLI configurations.
- [x] Other/Not sure.

Platform/OS/Hardware/Device

What are you running the cli on? ECS via Batch. awscliv2 is installed via a launch template.

Describe the question

Intermittently, I get the following error when trying to download a large file (~45-50GB) as part of a workflow of batch jobs: `download failed...[Errno 12] Cannot allocate memory`. This occurs in batch jobs that each have at least 3GB of memory specified; the last time this happened, the batch job had 7GB of memory allocated. The command being executed looks something like:

`/usr/local/aws-cli/v2/current/bin/aws s3 cp --no-progress s3://my-s3-bucket/etc/etc/1000.unmapped.unmerged.bam /tmp/scratch/my-s3-bucket/etc/etc/1000.unmapped.unmerged.bam`

Is the python subprocess causing this? What do you recommend to avoid this while running on AWS Batch/ECS?

Logs/output

There are no more informative logs at the moment -- I will add debugging so that the debug flag is passed the next time this happens.

Error Output

`download failed...[Errno 12] Cannot allocate memory`


1 Fix


Increase Memory and Optimize S3 Download for AWS Batch Jobs

Medium Risk

`[Errno 12]` (ENOMEM) means the kernel refused a memory allocation requested by the process. When downloading large files with the AWS CLI on ECS via Batch, the usual cause is the container's hard memory limit: AWS CLI v2 downloads large objects as many concurrent multipart chunks, each buffered in memory before being written to disk, so peak memory usage can exceed what was reserved for the job even though the file itself never needs to fit in memory. Once the cgroup limit is reached, further allocations fail with ENOMEM.
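On the "Is the python subprocess causing this?" question: one known way ENOMEM appears even when memory seems free is at fork() time, since the child must be able to commit as much memory as the parent, subject to the kernel's overcommit policy. A quick, hedged way to inspect that policy on the host (the `/proc` paths are Linux-specific):

```shell
# Show the kernel's overcommit policy (0 = heuristic overcommit, the usual
# default; 2 = strict accounting, which makes fork() in a large process far
# more likely to fail with ENOMEM) and the current commit accounting.
cat /proc/sys/vm/overcommit_memory 2>/dev/null || echo "overcommit info unavailable"
grep -i commit /proc/meminfo 2>/dev/null || echo "meminfo unavailable"
```

If the policy is strict, even a short-lived subprocess spawned by a multi-gigabyte Python process can trip this, independent of the download buffers.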


  1. Increase Memory Allocation

    Ensure that the memory allocated to your AWS Batch job is sufficient for downloading large files. For a 45-50GB file, consider raising the allocation to at least 16GB, depending on other resource requirements. Note that there is no `aws batch update-job` command and a job's memory cannot be changed while it is running; set it in the job definition, or override it when submitting:

    bash
    aws batch submit-job --job-name <job-name> --job-queue <job-queue> --job-definition <job-definition> --container-overrides '{"resourceRequirements":[{"type":"MEMORY","value":"16384"}]}'
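If the whole workflow needs more headroom rather than a single run, a new revision of the job definition can carry the higher limit instead of per-submission overrides. A sketch of the definition JSON; the definition name, image, and command here are hypothetical placeholders:

```json
{
  "jobDefinitionName": "my-download-job",
  "type": "container",
  "containerProperties": {
    "image": "<account>.dkr.ecr.<region>.amazonaws.com/my-image:latest",
    "command": ["run-download-step"],
    "resourceRequirements": [
      {"type": "VCPU", "value": "2"},
      {"type": "MEMORY", "value": "16384"}
    ]
  }
}
```

Save it as `jobdef.json` and register it with `aws batch register-job-definition --cli-input-json file://jobdef.json`; subsequent `submit-job` calls against the definition name pick up the newest revision.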
  2. Tune the S3 Multipart Transfer Settings

    `aws s3 cp` already performs multipart downloads for large objects, and it has no `--part-size` option. Memory pressure is governed by how many parts are in flight at once and how large each part is, which are controlled through the CLI's S3 configuration (`multipart_chunksize` and `max_concurrent_requests`):

    bash
    aws configure set default.s3.multipart_chunksize 16MB
    aws configure set default.s3.max_concurrent_requests 4
    /usr/local/aws-cli/v2/current/bin/aws s3 cp --no-progress s3://my-s3-bucket/etc/etc/1000.unmapped.unmerged.bam /tmp/scratch/my-s3-bucket/etc/etc/1000.unmapped.unmerged.bam
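Because awscliv2 is installed through a launch template, the CLI's S3 transfer settings (`multipart_chunksize`, `max_concurrent_requests`, plus the related `max_queue_size`) can also be baked into the instance's AWS config file rather than set per job. A sketch of `~/.aws/config` for the default profile, with illustrative starting values rather than tuned numbers:

```ini
[default]
s3 =
  multipart_chunksize = 16MB
  max_concurrent_requests = 4
  max_queue_size = 100
```

Lowering concurrency trades some throughput for a smaller, more predictable memory footprint, which is usually the right trade inside a hard-limited container.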
  3. Enable Debugging for Further Insights

    Add the `--debug` flag to your AWS CLI command to capture detailed logs when the error recurs. The debug output is verbose and written to stderr, so redirect it to a file for later inspection:

    bash
    /usr/local/aws-cli/v2/current/bin/aws s3 cp --no-progress --debug s3://my-s3-bucket/etc/etc/1000.unmapped.unmerged.bam /tmp/scratch/my-s3-bucket/etc/etc/1000.unmapped.unmerged.bam 2> /tmp/s3cp-debug.log
  4. Monitor Resource Usage

    Use AWS CloudWatch or ECS monitoring tools to track memory and CPU usage of your Batch jobs. This will help you confirm whether the changes are effective and whether further adjustments are needed. Note that the `AWS/ECS` `MemoryUtilization` metric at `ClusterName`/`ServiceName` granularity applies to ECS services; Batch jobs run as standalone tasks, so per-task memory metrics require enabling CloudWatch Container Insights on the cluster. For an ECS service, the query looks like:

    bash
    aws cloudwatch get-metric-statistics --metric-name MemoryUtilization --start-time <start-time> --end-time <end-time> --period 300 --namespace AWS/ECS --statistics Average --dimensions Name=ClusterName,Value=<cluster-name> Name=ServiceName,Value=<service-name>
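Complementary to CloudWatch, the limit the kernel actually enforces can be read from inside the running container; this is the number that, once exceeded, turns allocations into ENOMEM or gets the task OOM-killed. A sketch covering both cgroup layouts:

```shell
# Print the container's hard memory limit as the kernel sees it.
# cgroup v2 exposes memory.max; cgroup v1 uses memory/memory.limit_in_bytes.
# "max" (v2) or a very large number (v1) means effectively unlimited.
if [ -r /sys/fs/cgroup/memory.max ]; then
    cat /sys/fs/cgroup/memory.max
elif [ -r /sys/fs/cgroup/memory/memory.limit_in_bytes ]; then
    cat /sys/fs/cgroup/memory/memory.limit_in_bytes
else
    echo "no cgroup memory limit file found"
fi
```

Comparing this value against the job definition's memory setting confirms the limit Batch actually applied to the container.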

Validation

Confirm that the job runs successfully without the 'Cannot allocate memory' error after increasing the memory allocation and tuning the transfer settings. Monitor the job's resource usage to ensure it stays within its limits.
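Whether a rerun actually stayed under its limit can also be confirmed from Batch itself: when a container is killed for exceeding its memory, the job's container status records it. A hedged check (the job ID is a placeholder):

```shell
# Inspect a finished job; an OOM-killed container typically reports exit
# code 137 and a reason like "OutOfMemoryError: Container killed due to
# memory usage".
aws batch describe-jobs --jobs <job-id> \
  --query 'jobs[0].{status:status,exitCode:container.exitCode,reason:container.reason}'
```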


Environment

Submitted by


Alex Chen


Tags

awscli, cloud, bug, s3