Amazon
127 verified issues
Unable to install aws-cli via pip - package dependency broken
Describe the bug $ pip3 install awscli Collecting awscli Obtaining dependency information for awscli from https://files.pythonhosted.org/packages/43/9c/bf16d97f5de8aa4f9171c6c82bad0b4179921ddca5066ba980358da0e9a5/awscli-1.29.3-py3-none-any.whl.metadata Downloading awscli-1.29.3-py3-none-any.whl.metadata (11 kB) Collecting botocore==1.31.3 (from awscli) Obtaining dependency information for botocore==1.31.3 from https://files.pythonhosted.org/packages/b0/f0/5755508b3305534cd4cf2a8a82bbbe42ee9d66fd2688be5ff3dfb85e9a99/botocore-1.31.3-py3-none-any.whl.metadata Downloading botocore-1.31.3-py3-none-any.whl.metadata (5.9 kB) Collecting docutils<0.17,>=0.10 (from awscli) Downloading docutils-0.16-py2.py3-none-any.whl (548 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 548.2/548.2 kB 34.8 MB/s eta 0:00:00 Collecting s3transfer<0.7.0,>=0.6.0 (from awscli) Downloading s3transfer-0.6.1-py3-none-any.whl (79 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 79.8/79.8 kB 12.2 MB/s eta 0:00:00 Collecting PyYAML<5.5,>=3.10 (from awscli) Downloading PyYAML-5.4.1.tar.gz (175 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 175.1/175.1 kB 24.9 MB/s eta 0:00:00 Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'error' error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully.
Unable to deploy single lambda function for version 1.52.1
sls deploy -f "function_name" is not working; it throws an exception. My serverless version is: Framework Core: 1.52.1 Plugin: 3.0.0 SDK: 2.1.1 `sls deploy -f function_name` is not working; the complete traceback is `TypeError: Cannot read property 'artifact' of undefined at ServerlessPythonRequirements.BbPromise.bind.then.then.then (/home/mahesh/shuttle/H2OLambda/pythonh20lambda/node_modules/serverless-python-requirements/index.js:176:48)` Any update on this would be appreciated. Thank you
Add --no-overwrite option to aws s3 cp/mv
It would be nice to have a convenience option `--no-overwrite` for the `aws s3 cp/mv` commands, which would check that the target destination doesn't already exist before putting a file into an S3 bucket. Of course this logic couldn't be guaranteed by the AWS API (afaik...) and is vulnerable to race conditions, etc. But it would be helpful to prevent unintentional mistakes!
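A minimal sketch of the proposed semantics, with a plain dict standing in for the bucket (names are illustrative, not awscli internals). As the request itself notes, a check-then-put is not atomic, so concurrent writers can still race past the existence check:

```python
def copy_no_overwrite(bucket, key, body):
    """Write `body` to `key` only if the key does not already exist.

    Returns True if the object was written, False if it was skipped.
    The membership test stands in for a HEAD request; in real S3 the
    check and the PUT are separate calls, hence the race window.
    """
    if key in bucket:      # stand-in for a HEAD / head_object call
        return False
    bucket[key] = body     # stand-in for the actual PUT
    return True

bucket = {"existing.txt": b"old"}
assert copy_no_overwrite(bucket, "new.txt", b"data") is True
assert copy_no_overwrite(bucket, "existing.txt", b"clobber") is False
assert bucket["existing.txt"] == b"old"   # original object untouched
```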
Narrowing the Serverless IAM Deployment Policy
I’ve been spending time recently trying to remove Admin rights as a requirement for sls deployments. It's still a work in progress, but so far I have this policy that I can attach to any “serverless-agent” AWS user, so that the serverless-agent user is empowered enough to deploy: [code block] Right now, I'm focused on a single policy that can deploy to all stages. But some enterprises may need this IAM policy to allow dev and staging deployments while limiting who can deploy to production. So I've also been experimenting with adding "${stage}" to some of the resource ARNs, but I don't have it fully worked out yet. For example: [code block] There are still a few places where the permissions could be narrowed further. Specifically, the REST API section currently allows deleting ALL APIs, and the Lambda permissions are too broad. But I’ve had some annoying technical issues trying to narrow those two sections. The API Gateway policy is still broad because you must have the 'api-id' in the ARN, but you don't know it until a deployment generates it. On the surface this seems like a chicken-and-egg problem, but maybe there is a way to supply that api-id instead of having AWS generate it. And the Lambda permissions are still broad because I can't see the particular ARN serverless is trying to manipulate when it adds an event mapping to a Lambda, and the obvious ARNs don't work. Maybe there is a way to show the ARN being accessed when the deployment fails, so that I can add it to the policy.
Error uploading empty file: "seek() takes 2 positional arguments but 3 were given"
awscli version "1.11.13-1ubuntu1~16.04.0" was installed with the Ubuntu package manager and run like: [code block] It successfully uploads all the files in the source directory except for one zero-byte file. It fails there with: [code block]
Feature request: Assume role with EC2 instance profile as the source profile
Right now you can execute commands using credentials from one of these sources: root credentials, IAM credentials, temporary credentials from an EC2 instance profile, and temporary credentials from assuming a role via IAM credentials. I would like to execute commands by using temporary credentials from assuming a role via the EC2 instance profile. I need this ability because I'm using two AWS accounts and I'm using an EC2 instance to run AWS CLI commands against both accounts. The EC2 instance profile allows me to perform tasks for one account, but I need to assume a cross-account role to perform tasks for the other account. Unfortunately there is no way to get AWS CLI to assume the cross-account role even though the EC2 instance profile has permissions to assume that role. I tried removing the source_profile property from my role-based profile in hopes that the source_profile would use the instance profile, but that failed. After looking at AssumeRoleProvider in awscli/customizations/assumerole.py, I see that AWS CLI can only assume a role if the source profile has actual credentials in the config file. So currently that excludes any use of an instance profile to assume a different role.
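The requested lookup order can be sketched as follows. This is not awscli's actual resolver (which the report traces to AssumeRoleProvider in awscli/customizations/assumerole.py); it just models the desired fallback, with `instance_creds` standing in for the EC2 instance metadata credentials and all names hypothetical:

```python
def resolve_credentials(profile, instance_creds=None):
    """For a role-based profile, use source_profile credentials when
    present; otherwise fall back to the instance profile's temporary
    credentials (the behavior this issue requests)."""
    if "role_arn" not in profile:
        return profile.get("credentials")
    source = profile.get("source_profile")
    base = source["credentials"] if source else instance_creds
    if base is None:
        raise RuntimeError("no source credentials available to assume the role")
    return {"assumed_with": base, "role_arn": profile["role_arn"]}

# Role profile with no source_profile: fall back to instance creds.
ec2 = {"access_key": "from-instance-profile"}
out = resolve_credentials({"role_arn": "arn:aws:iam::<account-b>:role/cross-account"},
                          instance_creds=ec2)
assert out["assumed_with"] is ec2
```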
Support New AWS APIGW Binary Responses
This is a Feature Proposal Description Previously, AWS API Gateway did not support binary responses, making it impossible to return images from your serverless API. Now they do (see https://aws.amazon.com/blogs/compute/binary-support-for-api-integrations-with-amazon-api-gateway/). We need to be able to configure HTTP endpoints/events in serverless to use this new functionality.
Add support for AWS Single Sign-On
AWS recently released an SSO service that integrates with Organizations and the AWS Directory Service: https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html Currently, the only way to consume this service is via a browser. Shortcuts are provided to copy and paste shell commands to export the appropriate environment variables, but this is unacceptable. Users should not need to use a web browser to authenticate with CLI tools. Other tools such as aws-adfs exist to do this for ADFS, Okta, etc., but there are currently none for AWS SSO. Since this is a first-party AWS service, aws-cli should support it.
Add support for AWS API Gateway Basic Request Validation
This is a Feature Proposal Description AWS API GW officially supports this now - http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-method-request-validation.html For example, it could be set as a new property somewhere in `service.function.events.http.validation` config object. Blog post: https://aws.amazon.com/blogs/compute/how-to-remove-boilerplate-validation-logic-in-your-rest-apis-with-amazon-api-gateway-request-validation/
autocomplete for fish shell
I see there is already a virtualenv activation script for fish (/usr/local/aws/bin/activate.fish); can you please also add an autocompleter for the fish shell?
Can we have jq installed by default in aws v2 cli docker image?
Is your feature request related to a problem? Please describe. We commonly use jq to manipulate JSON input or pick properties from aws command output in shell scripts. It would be really good to include jq in the official v2 Docker image, like the Azure CLI does. Describe the solution you'd like Include the jq program in the aws v2 Docker image. Describe alternatives you've considered We currently build our own custom v1 image with jq installed.
Provide Official AWS CLI Docker Image
I was surprised to find that there is no official Docker image for development with the AWS CLI. The "amazon" user on Docker Hub contains only images for working specifically with ECS and Elastic Beanstalk, and there do not appear to be any official Amazon Docker images on the new Docker Store yet. When I searched "aws cli" on Docker Hub, the most popular image (with 1M+ downloads) was one created by Mesosphere. It's good enough, with a very simple Dockerfile based on the super-tiny Alpine Linux image. Upon further investigation, I found the `aws-codebuild-docker-images` repo in this organization, with an `ubuntu-base` Dockerfile. This image looks great, so why hasn't it been pushed to Docker Hub/Store? For that matter, why haven't any of the images in that repo been pushed? The Mesosphere `aws-cli` image will work fine, but its simplicity compared to that `ubuntu-base` made me concerned that it hasn't been properly optimized; similarly, the `ubuntu-base` image is based on Ubuntu 14.04.5, which is both old and bulky compared to Alpine Linux. TL;DR I believe there should be an official `aws-cli` Docker image maintained by Amazon and pushed to Docker Hub/Store. Ideally, it should support all the major modern Linux distros, including Alpine Linux. One should be able to run the following command and have everything just work: `docker run -it amazon/aws-cli` EDIT This issue is now being tracked at #3553. Please +1 that one to show your support.
aws eks update-kubeconfig invalid apiVersion
Describe the bug Update kubectl from v1.23.6 to 1.24.0 and run commands [code block] I get the following error and exit status 1 [code block] Kubectl must need an updated apiVersion in kubeconfig file. Not sure if this is on aws side or kubectl side. https://github.com/kubernetes/kubectl/issues/1210 Expected Behavior No error message when using `kubectl` and `aws eks update-kubeconfig` Current Behavior error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1" Reproduction Steps Update kubectl from v1.23.6 to 1.24.0 and run commands [code block] Possible Solution _No response_ Additional Information/Context _No response_ CLI version used whatever is running in aws/codebuild/standard:5.0 Environment details (OS name and version, etc.) aws/codebuild/standard:5.0
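kubectl 1.24 removed the v1alpha1 exec-credential API, so a kubeconfig written by an older AWS CLI stops working; upgrading the CLI and re-running `aws eks update-kubeconfig` makes it emit v1beta1. As a stopgap, some users rewrote the existing file. A sketch of that one-line patch (illustrative helper, not part of any tool):

```python
def patch_exec_api_version(kubeconfig_text):
    """Rewrite the exec plugin's apiVersion from v1alpha1 (removed in
    kubectl 1.24) to v1beta1 in an existing kubeconfig. Upgrading the
    AWS CLI and regenerating the kubeconfig is the proper fix."""
    return kubeconfig_text.replace(
        "client.authentication.k8s.io/v1alpha1",
        "client.authentication.k8s.io/v1beta1",
    )

cfg = 'user:\n  exec:\n    apiVersion: client.authentication.k8s.io/v1alpha1\n'
assert "v1beta1" in patch_exec_api_version(cfg)
```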
aws-cli-v2 issue with alpine using Docker
I tried to install AWS CLI v2 on Alpine using Docker, but after installation it doesn't find the aws command even though the directories exist. I tried using the following commands [code block]
No official way to install the AWS CLI (v2) on Mac M1 (without Rosetta)
Describe the bug There is no package available for M1 Macs for the AWS CLI v2. The only current way to install it is through Homebrew; the problems are detailed here - https://roadtoaws.com/2022/02/10/installing-aws-cli-on-apple-silicon/ Expected Behavior Based on the documentation, I would assume that I can do this [code block] Current Behavior The only way to install the AWS CLI is through Homebrew. This is an unofficial workaround https://roadtoaws.com/2022/02/10/installing-aws-cli-on-apple-silicon/ Reproduction Steps https://roadtoaws.com/2022/02/10/installing-aws-cli-on-apple-silicon/ Possible Solution https://roadtoaws.com/2022/02/10/installing-aws-cli-on-apple-silicon/ or to install Rosetta on an M1 Mac https://support.apple.com/en-il/HT211861 Additional Information/Context _No response_ CLI version used v2 Environment details (OS name and version, etc.) M1 Mac
add --all-dependencies option to ec2 delete-vpc
Feature request support `aws ec2 delete-vpc --all-dependencies --vpc-id vpc-deadbeef` Details The AWS web console will delete a VPC along with all its dependencies. The `aws` CLI tool says (when trying to delete a VPC with any dependencies): `A client error (DependencyViolation) occurred when calling the DeleteVpc operation: The vpc 'vpc-deadbeef' has dependencies and cannot be deleted.` The dependencies that would need to be removed include: 1. Subnets 2. Security Groups 3. Network ACLs 4. VPN Attachments 5. Internet Gateways 6. Route Tables 7. Network Interfaces 8. VPC Peering Connections Maybe also add `--vpn-connection`.
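What such a flag would have to do, sketched with API calls replaced by a call log (the deletion order below is one plausible ordering of the resource types listed in the request, not AWS's documented sequence):

```python
# Illustrative deletion order: dependents first, VPC last.
DELETE_ORDER = [
    "vpc-peering-connections", "network-interfaces", "vpn-attachments",
    "internet-gateways", "route-tables", "network-acls",
    "security-groups", "subnets",
]

def delete_vpc_with_dependencies(vpc_id, dependencies):
    """Return the sequence of delete calls --all-dependencies might
    issue. `dependencies` maps resource type -> list of resource ids."""
    calls = []
    for kind in DELETE_ORDER:
        for resource_id in dependencies.get(kind, []):
            calls.append(("delete", kind, resource_id))
    calls.append(("delete-vpc", vpc_id))   # only once dependents are gone
    return calls

calls = delete_vpc_with_dependencies(
    "vpc-deadbeef",
    {"subnets": ["subnet-1"], "internet-gateways": ["igw-1"]},
)
assert calls[-1] == ("delete-vpc", "vpc-deadbeef")
```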
aws s3 ls - find files by modified date?
Hi, We'd like to be able to search a bucket with many thousands (likely growing to hundreds of thousands) of objects and folders/prefixes to find objects that were recently added or updated. Executing aws s3 ls on the entire bucket several times a day and then sorting through the list seems inefficient. Is there a way to simply request a list of objects with a modified time <, >, = a certain timestamp? Also, are we charged once for the aws s3 ls request, or once for each of the objects returned by the request? New to github, wish I knew enough to contribute actual code...appreciate the help.
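As far as I know, S3's LIST API cannot filter by LastModified server-side, so the filtering has to happen client-side over the listing; LIST requests are billed per request (each returning up to 1,000 keys), separately from any GETs you issue afterwards. A sketch of the client-side filter over ListObjects-style results:

```python
from datetime import datetime, timezone

def objects_modified_since(listing, cutoff):
    """Return keys of objects whose LastModified is at or after
    `cutoff`. `listing` mimics the Contents of a ListObjects response."""
    return [obj["Key"] for obj in listing if obj["LastModified"] >= cutoff]

listing = [
    {"Key": "old.txt", "LastModified": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"Key": "new.txt", "LastModified": datetime(2024, 6, 1, tzinfo=timezone.utc)},
]
recent = objects_modified_since(listing, datetime(2024, 3, 1, tzinfo=timezone.utc))
assert recent == ["new.txt"]
```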
Dependency Broken (2-thenable) - 404
❗️ NOTE FROM MAINTAINERS ❗️ It's an issue on the npm registry side, where requests that include the `accept: application/vnd.npm.install-v1+json` header get a `404` response. The following returns 200 [code block] The following returns 404 [code block] I was informed that the __issue is already fixed on npm's side__. Still, it may take a couple of hours to propagate across all nodes and the CDN layer. --- Hello there! I'm getting a 404 error when installing serverless with `npm install serverless`, check this out: Is someone facing the same problem?
kubernetes-e2e-gci-gke-multizone: broken test run
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-multizone/1128/ Run so broken it didn't make JUnit output!
ci-kubernetes-e2e-gci-gke-serial: broken test run
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial/93/ Run so broken it didn't make JUnit output!
kubernetes-e2e-gci-gke-flaky: broken test run
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-flaky/31/ Multiple broken tests: Failed: [k8s.io] NodeOutOfDisk [Serial] [Flaky] [Disruptive] runs out of disk space {Kubernetes e2e suite} [code block] Failed: [k8s.io] PersistentVolumes create a PV and a pre-bound PVC: test write access [Flaky] {Kubernetes e2e suite} [code block] Failed: [k8s.io] PersistentVolumes create a PVC and a pre-bound PV: test write access [Flaky] {Kubernetes e2e suite} [code block] Failed: Test {e2e.go} [code block] Issues about this test specifically: #33361 Failed: [k8s.io] PersistentVolumes create a PVC and non-pre-bound PV: test write access [Flaky] {Kubernetes e2e suite} [code block] Failed: [k8s.io] PersistentVolumes should create a non-pre-bound PV and PVC: test write access [Flaky] {Kubernetes e2e suite} [code block]
ci-kubernetes-e2e-gci-gke-reboot: broken test run
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-reboot/346/ Multiple broken tests: Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards {Kubernetes e2e suite} [code block] Issues about this test specifically: #33405 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on {Kubernetes e2e suite} [code block] Issues about this test specifically: #33407 #33623 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart {Kubernetes e2e suite} [code block] Issues about this test specifically: #33874 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart {Kubernetes e2e suite} [code block] Issues about this test specifically: #33882 #35316 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards {Kubernetes e2e suite} [code block] Issues about this test specifically: #33703 #36230 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart {Kubernetes e2e suite} [code block] Issues about this test specifically: #34123 #35398 Previous issues for this suite: #37
Add ability to limit bandwidth for S3 uploads/downloads
Original from #1078, this is a feature request to add the ability for the `aws s3` commands to limit the amount of bandwidth used for uploads and downloads. In the referenced issue, it was specifically mentioned that some ISPs charge fees if you go above a specific mbps, so users need the ability to limit bandwidth. I imagine this is something we'd only need to add to the `aws s3` commands.
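A token bucket is the usual mechanism for this kind of throttle. A minimal sketch of what a hypothetical bandwidth-limit option could build on (the clock is injected purely so the example is deterministic; none of these names are awscli's):

```python
class TokenBucket:
    """Token-bucket limiter: each chunk sent consumes tokens that
    refill at the configured byte rate; the sender sleeps for the
    returned delay before sending more."""
    def __init__(self, rate_bytes_per_sec, clock):
        self.rate = rate_bytes_per_sec
        self.clock = clock                 # e.g. time.monotonic in real use
        self.tokens = 0.0
        self.last = clock()

    def consume(self, nbytes):
        now = self.clock()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes > self.tokens:
            return (nbytes - self.tokens) / self.rate   # seconds to wait
        self.tokens -= nbytes
        return 0.0

# Simulated clock: 1 MB/s limit, two back-to-back 1 MB sends.
t = [0.0]
bucket = TokenBucket(1_000_000, clock=lambda: t[0])
assert bucket.consume(1_000_000) == 1.0   # bucket empty: wait a full second
t[0] += 1.0
assert bucket.consume(1_000_000) == 0.0   # refilled: send immediately
```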
ECR image push fails: image tag already exists in immutable repository
Pushing a Docker image to an AWS ECR repository with immutable tags fails because the tag (e.g. the version from package.json) was already pushed previously. ECR with immutable tags rejects any push that would overwrite an existing tag. The fix is to bump the version in package.json before every push to a production ECR repository. CI/CD pipelines that don't auto-bump versions will fail repeatedly on the same tag.
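The bump step is trivial to automate in CI. A sketch of a semver patch bump (purely illustrative; real pipelines often run `npm version patch` or tag images with the git SHA instead, which sidesteps the collision entirely):

```python
def bump_patch(version):
    """Increment the patch component of a MAJOR.MINOR.PATCH version
    string so an immutable ECR tag is never reused."""
    major, minor, patch = version.split(".")
    return f"{major}.{minor}.{int(patch) + 1}"

assert bump_patch("1.4.9") == "1.4.10"
```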
kubernetes-soak-continuous-e2e-gke: broken test run
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-soak-continuous-e2e-gke/8059/ Multiple broken tests: Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite} [code block] Issues about this test specifically: #26982 Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite} [code block] Issues about this test specifically: #29816 #30018 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite} [code block] Issues about this test specifically: #26784 #28384 Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite} [code block] Issues about this test specifically: #28220 Previous issues for this suite: #28514 #30157
After integrating API Gateway and Lambda on the Amazon console I see an error like "Could not list api-gateway event sources."
Serverless Framework Version: beta 1.0 Operating System: Windows Additional Details: After integrating API Gateway and Lambda on the Amazon console (Lambda tab) I see an error like "Could not list api-gateway event sources." My yaml file: Functions: lambdaFunction: handler: com.myHandler.DataExtractorLambda events: - schedule: cron(0/1 \ \ \ ? ) - http: path: publicGetaway/getResources method: get Note: The schedule trigger is available for editing and works fine.
Missing required key 'Bucket' in params
This is a Bug Report Description I am getting the following error when I run `serverless deploy -v` [code block] Current `serverless.yml` [code block] I am not using anything bucket-related, so why do I need to set up a bucket key?
S3 sync: s3 -> local redownloads unchanged files
We store a pile of files in S3 and it's handy to have a local copy of our S3 buckets for development and backup. At first glance `aws s3 sync` looks like it'll work. I ran sync on our entire bucket and it completed successfully; it downloaded the whole bucket to local disk. The second time I ran the command, it redownloaded some files that haven't changed (on S3 or locally) alongside the new ones. [code block] These files were just downloaded with the first `sync`. The local modified time & size match S3's values. While I never rule out the possibility of user error, I don't see an obvious cause. The first S3->Local sync completed normally; when I run it again it redownloads _some_ files every time, files that haven't changed. Not all, just some. And it's the same files redownloaded every time. My cli version is `aws-cli/1.2.13 Python/2.7.6 Darwin/10.8.0` This may or may not be related to issue #599, but I won't personally make that call.
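For context, `aws s3 sync` decides what to transfer from size and modification time. A sketch of a comparator along those lines (my simplification, not the CLI's actual code): mismatched timestamp precision between S3 and the local filesystem is one way unchanged files can keep being re-fetched, which the slop parameter papers over here.

```python
def needs_download(local, remote, mtime_slop=1.0):
    """Download when the file is missing locally, sizes differ, or the
    remote object is newer than the local copy by more than
    `mtime_slop` seconds (absorbing timestamp-precision mismatches)."""
    if local is None or local["size"] != remote["size"]:
        return True
    return remote["mtime"] > local["mtime"] + mtime_slop

assert needs_download(None, {"size": 10, "mtime": 100.0}) is True
# Sub-second timestamp skew alone should not trigger a redownload:
assert needs_download({"size": 10, "mtime": 100.4},
                      {"size": 10, "mtime": 100.9}) is False
```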
aws-cli should set default region to EC2 instance region
Recently we provisioned an EC2 instance with the aws-cli installed that is using IAM roles. We forgot to set the AWS_DEFAULT_REGION environment variable and got an error stating that the default region was not specified. I am proposing that aws-cli should be able to assume the given region of the EC2 instance. This would eliminate one step of adding the environment variable to the system. If the aws-cli needs to talk to a different region, it can always use a different profile or override the region. I am sure there are implications to this that need to be thought about.
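The proposed lookup order can be sketched as a simple fallback chain. `metadata_region` stands in for a call to the instance metadata service (http://169.254.169.254/latest/meta-data/placement/availability-zone in the CLI's era, trimmed to a region); the function names are illustrative:

```python
def resolve_region(env, metadata_region=None):
    """Explicit AWS_DEFAULT_REGION wins; otherwise fall back to the
    region reported by EC2 instance metadata (the requested feature);
    otherwise fail the way the CLI does today."""
    region = env.get("AWS_DEFAULT_REGION")
    if region:
        return region
    if metadata_region:
        return metadata_region
    raise RuntimeError("You must specify a region.")

assert resolve_region({"AWS_DEFAULT_REGION": "us-west-2"}) == "us-west-2"
assert resolve_region({}, metadata_region="eu-west-1") == "eu-west-1"
```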
ci-kubernetes-e2e-gci-gce-statefulset: broken test run
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-statefulset/3134/ Multiple broken tests: Failed: [k8s.io] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster {Kubernetes e2e suite} [code block] Failed: [k8s.io] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster {Kubernetes e2e suite} [code block] Failed: [k8s.io] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster {Kubernetes e2e suite} [code block] Failed: Test {e2e.go} [code block] Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048 Failed: [k8s.io] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster {Kubernetes e2e suite} [code block]
[k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-prod-parallel/2086/ Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite} [code block]
Rate exceeded
Description I have 8 occurrences of ${cf:..} in my serverless.yaml, and 3 out of 5 times I get "Rate exceeded" while running serverless. I suppose I've reached a limit on CloudFormation API calls (DescribeStacks, for instance). Is there any chance to avoid this error other than increasing my limits? Why doesn't serverless call the API only once for all stacks? Or at least only once per stack? Last but not least: which limit am I reaching? I can't tell which one mentioned in the AWS limits documentation it is. For bug reports: What went wrong? I run "serverless deploy -v" and I get an error "Rate exceeded" What did you expect should have happened? Deploying without that error What was the config you used? [code block] What stacktrace or error message from your provider did you see? [code block] For feature proposals: What is the use case that should be solved. The more detail you describe this in the easier it is to understand for us. If there is additional config how would it look Similar or dependent issues: #3339 Additional Data Serverless Framework Version you're using: 1.14.0 Operating System: Fedora / Linux Stack Trace: Rate exceeded Provider Error messages:
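Throttling like this is usually mitigated two ways: cache each stack's DescribeStacks result so eight ${cf:..} references cost one call per stack, and retry throttled calls with exponential backoff. A rough sketch of the retry half (a RuntimeError string stands in for botocore's throttling error; names are illustrative, not the framework's):

```python
def call_with_backoff(call, retries=5, base=0.5, sleep=lambda s: None):
    """Retry `call` on 'Rate exceeded', doubling the delay each time
    (0.5s, 1s, 2s, ...); re-raise anything else, or after the last try."""
    for attempt in range(retries):
        try:
            return call()
        except RuntimeError as err:
            if "Rate exceeded" not in str(err) or attempt == retries - 1:
                raise
            sleep(base * 2 ** attempt)

# A call that is throttled twice before succeeding:
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("Rate exceeded")
    return "stack-output"

assert call_with_backoff(flaky) == "stack-output"
assert attempts["n"] == 3
```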
kubernetes-e2e-kops-aws-updown: broken test run
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-kops-aws-updown/2102/ Run so broken it didn't make JUnit output!
aws sync hangs
I am consistently seeing aws cli fail during sync to s3 with the following command: aws s3 sync --size-only --page-size 100 /mnt/ebs-volume/image/ s3://bucket-name ubuntu@ip-10-0-0-246:~/www$ aws --v aws-cli/1.11.56 Python/2.7.12 Linux/4.4.0-64-generic botocore/1.5.19 It runs well for the first gig and then hangs. This is a 50 gig filesystem: Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~4 file(s) remaining (calculating...upload: ../..//img_2630_thumb.png to s3://bucket/image.png Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~3 file(s) remaining (calculating...Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~3 file(s) remaining (calculating...Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~3 file(s) remaining (calculating...upload: ../../img_2630.png to s3://bucket/img_2630.png Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~2 file(s) remaining (calculating...Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~2 file(s) remaining (calculating...upload: ../../img_2628.png to s3://bucket/img_2628.png Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~1 file(s) remaining (calculating...Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~1 file(s) remaining (calculating...upload: ../../image/img_2628_thumb.png to s3://bucket/img_2628_thumb.png Completed 1.0 GiB/~1.0 GiB (1.8 MiB/s) with ~0 file(s) remaining (calculating... And then it just sits there. I'm really not sure what to check at this point as the cli is not very verbose.
aws ssm put-parameter performs an HTTP GET request when the value param is an url
When you try to put a parameter into the SSM parameter store with a URL as the value, `aws-cli` performs an HTTP GET request to that URL. [code block] [code block] [code block] [code block] [code block]
ci-kubernetes-e2e-gci-gke-autoscaling: broken test run
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-autoscaling/214/ Multiple broken tests: Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite} [code block] Issues about this test specifically: #33793 #35108 #35744 Failed: [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown] {Kubernetes e2e suite} [code block] Issues about this test specifically: #34102 Failed: [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown] {Kubernetes e2e suite} [code block] Issues about this test specifically: #33891 Failed: [k8s.io] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite} [code block] Issues about this test specifically: #33754 Failed: [k8s.io] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp] {Kubernetes e2e suite} [code block] Issues about this test specifically: #34581 Failed: [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSize
S3 - RequestTimeout during large files
I'm trying to upload a large file (9 GB) and getting a RequestTimeout error using `aws s3 mv ...`. I haven't fully tested it yet, but it seems like if I run the command over and over it will eventually work. Here's the debug log from a failed attempt: https://s3.amazonaws.com/nimbus-public/s3_backup.log I'll post back if I determine whether retrying the command several times works or not. aws version: aws-cli/1.1.2 Python/2.7.3 Windows/2008ServerR2
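For transfers this size, multipart upload is the standard mitigation: split the object into parts so a RequestTimeout only forces a retry of one part, not the whole 9 GB. A sketch of the part-splitting arithmetic (my illustration; the CLI does something along these lines internally for large files):

```python
def part_ranges(total_size, part_size=8 * 1024 * 1024):
    """(offset, length) pairs covering `total_size` bytes in chunks of
    at most `part_size`; each pair maps to one retryable UploadPart."""
    return [(offset, min(part_size, total_size - offset))
            for offset in range(0, total_size, part_size)]

ranges = part_ranges(20 * 1024 * 1024)          # 20 MB file, 8 MB parts
assert len(ranges) == 3
assert ranges[-1] == (16 * 1024 * 1024, 4 * 1024 * 1024)   # short final part
assert sum(length for _, length in ranges) == 20 * 1024 * 1024
```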
Windows AWSCLI64PY3 MSI not shipping aws.exe anymore breaks scripts
The new CLI version on Windows does not install the 'aws.exe' file anymore; instead it's aws.cmd. This breaks bash scripts running on Windows because bash does not resolve .cmd files as executable. Would you consider shipping an .exe wrapper? It's nice being able to execute .sh scripts on Linux and Windows unchanged, but now this forces us to change our calls to 'aws.cmd' rather than just 'aws'.
ci-kubernetes-e2e-gci-gke-reboot-release-1.5: broken test run
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-reboot-release-1.5/348/ Multiple broken tests: Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart {Kubernetes e2e suite} [code block] Issues about this test specifically: #34123 #35398 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on {Kubernetes e2e suite} [code block] Issues about this test specifically: #33407 #33623 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart {Kubernetes e2e suite} [code block] Issues about this test specifically: #33874 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards {Kubernetes e2e suite} [code block] Issues about this test specifically: #33405 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart {Kubernetes e2e suite} [code block] Issues about this test specifically: #33882 #35316 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards {Kubernetes e2e suite} [code block] Issues about this test specifically: #33703 #36230 Previous issues for thi
[k8s.io] ConfigMap should be consumable in multiple volumes in the same pod {Kubernetes e2e suite}
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-staging-parallel/6307/ Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod {Kubernetes e2e suite} [code block]
TearDown {e2e.go}
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-kops-aws-updown/485/ Failed: TearDown {e2e.go} [code block]
ci-kubernetes-e2e-gci-gce-reboot: broken test run
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-reboot/751/ Multiple broken tests: Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards {Kubernetes e2e suite} [code block] Issues about this test specifically: #33703 #36230 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on {Kubernetes e2e suite} [code block] Issues about this test specifically: #33407 #33623 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards {Kubernetes e2e suite} [code block] Issues about this test specifically: #33405 Failed: [k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart {Kubernetes e2e suite} [code block] Issues about this test specifically: #33874 Previous issues for this suite: #36947 #37179
PyYAML requires python-dev dependency
Hi, It seems the latest release of `awscli` requires PyYAML as its dependency. I noticed that this dependency breaks my CI test in Ubuntu 14.04. It returned the following error: [code block] I'm wondering, is it possible to use `awscli` via `pip install awscli` without adding extra `python-dev` dependency? Or if it's required to use extra dependency, I think it's a good idea to put it in the documentation. Thank you!
Up {e2e.go}
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-federation/4526/ Failed: Up {e2e.go} [code block] Previous issues for this test: #33357 #33377
kubernetes-e2e-gce-federation: broken test run
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-federation/1554/ Run so broken it didn't make JUnit output!
Unicode characters in s3 command: 'ascii' codec can't encode characters in position 32-33: ordinal not in range(128)
I've seen similar questions asked for different aws-cli products, and the answer always has to do with the locale, though the asker, like myself, generally has their locale in order. [code block] How can I force using a different codec instead of ascii?
Fail to install aws-cli via sudo pip install awscli
I'm on El Capitan OSX. Got the following error: Installing collected packages: six, python-dateutil, docutils, botocore, pyasn1, rsa, awscli Found existing installation: six 1.4.1 DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project. Uninstalling six-1.4.1: Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 211, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip/commands/install.py", line 311, in run root=options.root_path, File "/Library/Python/2.7/site-packages/pip/req/req_set.py", line 640, in install requirement.uninstall(auto_confirm=True) File "/Library/Python/2.7/site-packages/pip/req/req_install.py", line 716, in uninstall paths_to_remove.remove(auto_confirm) File "/Library/Python/2.7/site-packages/pip/req/req_uninstall.py", line 125, in remove renames(path, new_path) File "/Library/Python/2.7/site-packages/pip/utils/__init__.py", line 315, in renames shutil.move(old, new) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move copy2(src, real_dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2 copystat(src, dst) File "/System/Library/Framewo
Deploy many micro/nano services to one API Gateway
This is a Feature Proposal Description I'd like to deploy many microservices to one API Gateway, but currently I'm not able to as the template is creating a new Gateway for every service. Would it be possible to change the behaviour? I may create a Pull Request for this if you agree Topic on the forum: http://forum.serverless.com/t/multiple-services-behind-a-single-api-gateway/191
HTTP API Support
Support HTTP API: https://aws.amazon.com/blogs/compute/announcing-http-apis-for-amazon-api-gateway/ --- Proposal of a configuration schema within a Serverless Framework _Note: This proposal is "live" (updated in place whenever there's an agreement on changes to it)_ [code block] In addition support for importing Open API objects can be added as: [code block] Implementation roadmap Implement and publish HTTP API support in following stages: 1. [x] Basic routes support, no cors, no authorizers, no open API objects import (#7274, #7331 & #7383) 1. [x] CORS support (#7336) 1. [x] JWT authorizers support (#7346) 1. [x] Access logs (#7385) 1. [x] Support existing HTTP API's (#7396) 1. [x] Support `timeout` setting (#7401) 1. [ ] (eventually) Open API object support - possibly leave out for community to implement (when demand shows)
Tracking issue for AWS CLI v2
Version 1.0.0 of the AWS CLI was tagged just over 5 years ago. In that time, a lot has changed. We've received tons of feedback from customers. We've learned what features customers enjoy, and what parts of the CLI we'd like to change. There are changes we've wanted to make, but needed to wait until the next major version of the AWS CLI due to either the backwards incompatible nature or the large scope of the change. Well, we're here now. This issue is to track all the work for CLI v2. We wanted to do this work on GitHub right from the beginning. Even though we're in the early stages of development, community feedback will play an important role in its development. Any AWS CLI v2 related issue/PR will be labeled with v2. We'll also reference this issue for any AWS CLI v2 related PRs so everyone can see all the changes we're making. All the development work will be in the v2 branch. If you have an idea for CLI v2, whether it's a new feature or a change we couldn't make in v1 of the AWS CLI, please feel free to open a GitHub issue and let us know. You can also go through any issue labeled as CLI v2 and +1 features that you'd like to see us implement. Blog: https://aws.amazon.com/blogs/developer/aws-cli-v2-development/