When migrating old applications to a Kubernetes (k8s) cluster provisioned by kops, a lot of things can break, and one of them is a missing IAM policy on the nodes.
By default, nodes of a k8s cluster have the following permissions:
ec2:Describe*
ecr:GetAuthorizationToken
ecr:BatchCheckLayerAvailability
ecr:GetDownloadUrlForLayer
ecr:GetRepositoryPolicy
ecr:DescribeRepositories
ecr:ListImages
ecr:BatchGetImage
route53:ListHostedZones
route53:GetChange
// The following permissions are scoped to the AWS Route53 hosted zone used to bootstrap the cluster
// arn:aws:route53:::hostedzone/$hosted_zone_id
route53:ChangeResourceRecordSets
route53:ListResourceRecordSets
route53:GetHostedZone
Additional policies can be added to the nodes’ role by running:
kops edit cluster ${CLUSTER_NAME}
and then adding something like this to the cluster spec:
spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": ["dynamodb:*"],
          "Resource": ["*"]
        },
        {
          "Effect": "Allow",
          "Action": ["es:*"],
          "Resource": ["*"]
        }
      ]
The change takes effect after running:
kops update cluster ${CLUSTER_NAME} --yes
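As a quick sanity check, a throwaway pod running the AWS CLI image should now be able to call DynamoDB with the nodes’ credentials. This is only a sketch: it assumes pods read the node’s instance-profile credentials directly (no kube2iam or similar in between), and the region is just an example:
kubectl run awscli --rm -it --restart=Never --image=amazon/aws-cli -- dynamodb list-tables --region us-east-1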
The new policy can be reviewed in the AWS IAM console.
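It can also be checked from the AWS CLI. kops typically names the node role nodes.${CLUSTER_NAME}, and the extra permissions show up as an inline policy attached to it; list the inline policies first, since the exact policy name may differ:
aws iam list-role-policies --role-name nodes.${CLUSTER_NAME}
aws iam get-role-policy --role-name nodes.${CLUSTER_NAME} --policy-name <policy name from the previous command>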
Most lines were copied from here: https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md
🙂
