How to do authorization using RBAC

Kubernetes is the most widely used and trusted orchestration tool, and practically a household name among developers, because it simplifies many aspects of running a service-oriented application infrastructure. The API server is considered the core of the Kubernetes control plane, as it validates every resource operation. All requests sent by clients go to the API server first, and any CRUD (create, read, update, delete) operation is allowed only after authentication and authorization. Because the API server is the gateway of the cluster, filtering these incoming requests lets us protect almost the entire cluster.
In this article we will talk about how we can protect our EKS cluster using Role Based Access Control (RBAC), the authorization step that determines what a user can access in Kubernetes and where. First, let’s start with what Amazon Elastic Kubernetes Service (EKS) is all about.
Amazon EKS and its features
Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service that lets you run Kubernetes both on premises and in AWS. With Amazon EKS, AWS handles scaling, reliability, EKS add-ons, and the high availability of the underlying infrastructure, and cluster management integrates with AWS security services such as IAM, Role Based Access Control (RBAC), and Amazon Virtual Private Cloud (VPC).
EKS keeps the Kubernetes control plane resilient by replicating it across multiple Availability Zones. Because EKS runs upstream Kubernetes, existing applications and plugins from the Kubernetes community remain fully compatible with it. As mentioned, among the many benefits of Amazon EKS, one of the major security advantages is the integration of AWS IAM with RBAC, which limits the lateral movement of users inside the cluster and supports a zero trust security approach.
How to do authorization using RBAC
Authorization in Kubernetes follows the principle of least privilege: no account should be given full access, only the limited access it needs within a specific namespace. Rules and policies are defined for each user to govern their operations on the cluster.
By default the cluster creator gets admin privileges. Users can be given group-based roles or individual roles that match their responsibilities. These roles are then bound to the respective users; this is called role binding.
RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions. For the authentication layer, AWS lets you bring normal users into an Amazon EKS environment through aws-iam-authenticator, which uses IAM credentials to authenticate to the cluster.
aws-iam-authenticator is maintained by a Kubernetes Special Interest Group (SIG). The authentication webhook also consults the aws-auth ConfigMap to verify that the IAM identity maps to a cluster user. Service accounts, by contrast, are managed by Kubernetes itself, are bound to specific namespaces, and default ones are created automatically with the cluster. Amazon EKS uses a token authentication webhook to authenticate each request, but the final say on authorization rests with RBAC.
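To see how this authentication layer plugs in on the client side, here is a minimal sketch of the kind of kubeconfig user entry that aws eks update-kubeconfig generates; the names eks-user and my-cluster are placeholders for illustration:
users:
- name: eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      # "aws eks get-token" returns a token that the EKS authentication webhook
      # validates against IAM before RBAC authorizes the request
      args:
        - eks
        - get-token
        - --cluster-name
        - my-cluster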
Some of the roles we can create using RBAC are:
1. ClusterRole – these rules apply at the cluster level, i.e. no namespace is set for them by default.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: Power-User
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
2. Role – a Role is defined within a particular namespace, and the policies it contains are granted to users in that namespace. These are namespace-specific roles.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: namespace1
  name: User-space1
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "watch", "list"]
Roles are then bound in different ways: a RoleBinding grants a Role to users at the namespace level, whereas a ClusterRoleBinding grants access cluster-wide. A RoleBinding can even reference a ClusterRole and bind it only within the RoleBinding’s own namespace (see the sketch after the ClusterRoleBinding example below).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pods-using
subjects:
- kind: Group
  name: Lead
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: Power-User
  apiGroup: rbac.authorization.k8s.io
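And here is a minimal sketch of the case mentioned above: a RoleBinding that references the Power-User ClusterRole but grants its permissions only inside namespace1 (the binding name is illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  # Grants Power-User's rules, but only within namespace1
  name: power-user-in-namespace1
  namespace: namespace1
subjects:
- kind: Group
  name: Lead
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: Power-User
  apiGroup: rbac.authorization.k8s.io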
How to do this in Amazon EKS?
The glue between IAM and RBAC is the aws-auth ConfigMap. It lets us combine IAM principals (roles and users) with Kubernetes users and groups, and together these define the full authentication and authorization path that grants access to a client.
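Before changing anything, you can inspect the current mapping from the admin identity that created the cluster:
kubectl -n kube-system get configmap aws-auth -o yaml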
Creating Namespace
There’s a popular saying that what happens in Vegas stays in Vegas. A namespace is our Kubernetes Vegas: a logically isolated space, separated from every other namespace. We can have multiple namespaces in a single cluster, which helps with team management, security, and performance. Note that namespace names must be lowercase DNS labels, so we will call ours demo-namespace.
kubectl create namespace demo-namespace
kubectl run nginx --image=nginx -n demo-namespace
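A quick check, run as the cluster admin, that the pod landed in the right namespace:
kubectl get pods -n demo-namespace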
Creating Users in IAM
AWS controls user access to its resources through IAM, providing strong data security at low cost. Since we are integrating IAM, we first need to create an IAM user; this is a prerequisite for the rest of the configuration. Our user is tony, and we create him, along with a credentials file tony.sh, using the following script:
IAM_USER=tony
# Create the IAM user
aws iam create-user --user-name $IAM_USER
# Create an access key and write the credentials to tony.sh
# (the command substitutions are expanded now, so the file contains the actual keys)
cat << EoF > $IAM_USER.sh
export AWS_SECRET_ACCESS_KEY=$(aws iam create-access-key --user-name $IAM_USER \
  --query 'AccessKey.SecretAccessKey' --output text)
export AWS_ACCESS_KEY_ID=$(aws iam list-access-keys --user-name $IAM_USER \
  --query 'AccessKeyMetadata[0].AccessKeyId' --output text)
EoF
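To confirm the credentials work, you can load them into the current shell and ask STS who you are; this temporarily switches your CLI identity to tony:
source tony.sh
aws sts get-caller-identity   # should show arn:aws:iam::<AWS_ACCOUNT_NO>:user/tony
Remember to switch back to your admin credentials (for example by opening a new shell) before editing the cluster configuration in the next step.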
Mapping the user to the cluster
ConfigMaps enable you to separate your configuration from your pods and components, which helps keep your workloads portable. We now add the user to the aws-auth ConfigMap with the kubectl command below and map him in the mapUsers section.
kubectl edit configmap/aws-auth -n kube-system
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<YOUR_AWS_ACCOUNT_NO>:role/InstanceRole-us-west-1-workers_asg1
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  # ADD THIS mapUsers section
  mapUsers: |
    - userarn: arn:aws:iam::<AWS_ACCOUNT_NO>:user/tony
      username: tony
kind: ConfigMap
metadata:
  creationTimestamp: "2022-03-23T14:23:54Z"
  name: aws-auth
  namespace: kube-system
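If you prefer not to edit the ConfigMap by hand, eksctl can add the same mapping; this is a sketch that assumes eksctl is installed and that you fill in your cluster name and region:
eksctl create iamidentitymapping \
  --cluster <YOUR_CLUSTER_NAME> \
  --region <YOUR_REGION> \
  --arn arn:aws:iam::<AWS_ACCOUNT_NO>:user/tony \
  --username tony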
Creating custom role
As mentioned above, we will now create a Role in the same way, granting permission to get and list pods and deployments in the defined namespace:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: demo-namespace
  name: pod-User
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "get"]
- apiGroups: ["extensions", "apps"]
  resources: ["deployments"]
  verbs: ["get", "list"]
Apply the YAML file with:
kubectl apply -f <YAML_FILE_NAME.YML>
Doing role binding
A RoleBinding binds the role to the user, finally granting the permissions that restrict the user’s movement in the cluster:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-User
  namespace: demo-namespace
subjects:
- kind: User
  name: tony
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-User
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f <YAML_FILE_NAME.YML>
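Before switching to tony’s credentials, you can verify the binding from your admin session by impersonating the user (impersonation requires admin rights, which the cluster creator has):
kubectl auth can-i list pods -n demo-namespace --as tony      # expected: yes
kubectl auth can-i delete pods -n demo-namespace --as tony    # expected: no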
The environment is now fully configured. It’s time to test the setup:
source tony.sh                               # Switch to tony's credentials
kubectl get pods                             # Oops! Not working: this user only has access inside demo-namespace
kubectl get pods -n demo-namespace           # Woohoo! This will work
kubectl create configmap my-config --from-file=path/to/bar   # This will not work: the role does not allow the create verb, even for ConfigMaps
kubectl delete pod nginx -n demo-namespace   # This will not work either: the role does not include the delete verb
These test cases will help you verify the configuration and the bindings you have created.
Conclusion
In this article we discussed the importance of Role Based Access Control in Amazon EKS and how to implement it. Using RBAC to grant permissions is regarded as one of the best ways to implement a zero trust security model. But although RBAC is a core security mechanism, it is only as good as its configuration: faulty configurations compromise the cluster’s security and can lead to serious data breaches.