Multi-Cluster Architecture With Rancher

There are situations where a single Kubernetes cluster is unable to handle the application load or properly distribute the application to end-users. In such instances, multi-cluster Kubernetes solutions are ideal for distributing the work across multiple clusters.
We can even make it happen across multiple cloud providers, distributed over multiple regions.
First I will go through some of the advantages of using multiple clusters. But no need to worry, I'm not taking one side; I will also cover the advantages of using one large cluster. It's up to you to decide based on your requirements.
Imagine a situation where you really need to think about availability. Instead of putting all the load on one cluster, you can now distribute it across several, and here's the customer review you'll get: `less latency with faster performance` ;)
Multiple clusters also open the door to isolating applications from each other; yes, that sounds a bit cruel. But we have to comply with certain standards too, like policies and regulations in different regions. Organizations can target specific clusters to meet different requirements and achieve regulatory compliance relatively easily.
Let's save some money. In a true multi-cloud deployment, users are not locked into a single vendor. When new features or cost savings become available from another cloud provider, we can transfer applications between different providers or Kubernetes clusters with minimal modifications.
So what are the ways to actually implement this? First we need to consider the Kubernetes multi-cluster configuration, and after that the application architecture.
We can use either a Kubernetes-centric federated method or a network-centric method to implement multi-cluster.
People mostly tend to go with the network-centric method because the Kubernetes-centric federated method is still relatively immature. The network approach is based on mesh networking principles and adapts service mesh concepts to the infrastructure.
But what are the tools we can use for the Kubernetes-centric federated method? Shipper, Admiralty, and KubeFed are some of them.
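To make that concrete, here is a rough sketch of a KubeFed FederatedDeployment: the inner template is an ordinary Deployment, and the placement section lists the member clusters it should land on. The cluster names, namespace, and image below are placeholders, and the clusters are assumed to already be joined to the federation:

```yaml
# Hypothetical KubeFed resource - cluster names and image are placeholders
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: hello
  namespace: demo-apps          # assumed to be a federated namespace
spec:
  template:                     # a plain Deployment spec goes here
    metadata:
      labels:
        app: hello
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: hello
      template:
        metadata:
          labels:
            app: hello
        spec:
          containers:
          - name: hello
            image: nginx:1.25
  placement:
    clusters:                   # which member clusters receive it
    - name: cluster-a
    - name: cluster-b
```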
There are also tools that support configuring or managing Kubernetes multi-cluster environments, such as Rancher, Fleet, and Google Anthos.
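Rancher's Fleet, for instance, takes a GitOps angle: you register a Git repository and Fleet deploys its manifests to the downstream clusters that match a target selector. A minimal, hypothetical sketch (repository URL and paths are placeholders):

```yaml
# Hypothetical Fleet GitRepo - repo URL and paths are placeholders
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: sample-app
  namespace: fleet-default      # Fleet's namespace for downstream clusters
spec:
  repo: https://github.com/example/sample-app
  branch: main
  paths:
  - manifests                   # directory of Kubernetes manifests in the repo
  targets:
  - clusterSelector: {}         # empty selector = all downstream clusters
```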

We are going to look at how Rancher handles it.

Wait, before getting into the implementation, I would like to show what it looks like when working with multiple clusters.
Just have a look and let’s move on:
The configuration in a KUBECONFIG file typically contains the definitions of your clusters, such as:
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ...
    server: https://172.19.113.9:443
  name: gettingstarted
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ...
    server: https://172.19.218.42:443
  name: demo
contexts:
- context:
    cluster: gettingstarted
    user: username
  name: username@gettingstarted
- context:
    cluster: demo
    user: username
  name: username@demo
current-context: username@gettingstarted
users:
- name: username
  user:
    auth-provider:
      config:
        client-id: k8s
        id-token: eyJhbGciOiJSUzI1NiIsInR5cC...
        idp-certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS...
        idp-issuer-url: https://containerauth.int.mirantis.com/auth/realms/iam
        refresh-token: eyJhbGciOiJIUzI1NiIsInR5cCIgOiAiSld...
      name: oidc
As we can see, a single user account is accessing both clusters. To switch between these clusters, it's most convenient to use contexts, because they include both the cluster and the user information. We can set these contexts with the kubectl config command, as in:
kubectl config --kubeconfig=mykubeconfigfile set-context appcontext --cluster=gettingstarted --namespace=app1 --user=username
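After that, switching between the clusters is a single command; for example, using the context names from the kubeconfig above:

```bash
# Make the demo cluster the active context
kubectl config --kubeconfig=mykubeconfigfile use-context username@demo

# Or run a one-off command against the other cluster without switching
kubectl --kubeconfig=mykubeconfigfile --context=username@gettingstarted get nodes
```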
The real business starts here then :)
“In the cloud, servers are cattle, not pets.” So when you have a lot of cattle, you need a rancher to help manage your herds. Yep, I'm talking about Rancher :)
At the same time, I would like to give a quick introduction to Longhorn and why it is necessary. Think of this as a bonus tip.
Kubernetes nodes are designed to be ephemeral, so they can fail at any moment. This works well for stateless applications, like apps picking up data off a queue and doing the task at hand. But for some applications, we need our data to persist. In that case, we don't want to store our data on a single node that can fall over, but on persistent storage instead. This is where Longhorn comes into play.

Longhorn is a cloud-native distributed block storage solution for Kubernetes. It integrates very well with managed cloud services and allows us to back up data to object storage buckets and different availability zones. Our backup data can even be stored in a different cluster, so data is preserved if the entire cluster fails.
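As a quick illustration, installing Longhorn registers a `longhorn` StorageClass, so a workload can request replicated storage with an ordinary PersistentVolumeClaim (the claim name and size below are placeholders):

```yaml
# Placeholder PVC backed by the Longhorn StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn    # created by the Longhorn installation
  resources:
    requests:
      storage: 2Gi
```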
Okay, since we are going to use RKE (Rancher Kubernetes Engine), let's get familiar with what it really is. For now, think of it as a wrapper around Kubernetes that makes your Kubernetes installation a little easier (RKE runs the Kubernetes components in Docker containers, rather than you having to install and configure Kubernetes by hand).
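As a rough sketch of how that looks in practice, RKE reads a cluster.yml that lists your nodes and their roles. The addresses, SSH user, and key path below are placeholders:

```yaml
# cluster.yml - minimal RKE sketch with placeholder values
nodes:
  - address: 203.0.113.10                 # placeholder public IP
    user: ubuntu                          # SSH user that can run Docker
    role: [controlplane, etcd, worker]
  - address: 203.0.113.11                 # placeholder public IP
    user: ubuntu
    role: [worker]
ssh_key_path: ~/.ssh/id_rsa               # placeholder SSH key
```

Running `rke up` in the same directory brings the cluster online and writes out a kubeconfig for it.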
Right after you log in, you can see all the clusters currently available. Now click on Add Cluster:

Choose AWS EKS:

Provide the necessary details, like your AWS access keys (they're not hard to find: in the AWS console, click the top-right corner, which usually shows your name, then Security credentials > Access keys):

After clicking Create, this one takes a little while :). By the way, we can create our own custom clusters too (by clicking Existing Nodes).

Once it's created, give it a name and import it (by clicking Other Cluster):
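When you import a cluster this way, Rancher displays a registration command to run against that cluster with its own kubeconfig; it follows this general shape (the server address and token below are placeholders, so copy the exact command from the Rancher UI):

```bash
# Placeholder example - use the exact command shown by Rancher
kubectl apply -f https://<rancher-server>/v3/import/<registration-token>.yaml
```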

Go back to the home page and you can see what you have done :).
But as I mentioned before, it's up to you to decide what's best for you. Let's go through the conditions where a single cluster takes the lead (think of it as a bit of a stretch).
Running a single Kubernetes cluster across clouds... okay, that means the inter-node traffic has to travel over a public network. Really?
Why is that a problem? Because we need to provide direct connectivity between all nodes over a single private subnet.
We have a solution: what about a mesh VPN?
A VPN will encrypt all your traffic and provide a flexible subnet where all your nodes can communicate directly and securely. With a mesh VPN, the subnet can live anywhere.