Deployment
We deploy with Kubernetes. In order to deploy your own network you have to get access to a Kubernetes cluster.
We have tested two different Kubernetes providers: Minikube and DigitalOcean.
There are many Kubernetes providers, but if you're just getting started, Minikube is a tool that you can use to get your feet wet.
Open minikube dashboard:
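A minimal sketch, assuming a standard Minikube installation:

```bash
# Opens the Kubernetes dashboard of your local Minikube cluster in a browser
minikube dashboard
```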
This gives you an overview. Some of the steps below need time before their resources become available to other dependent deployments; keeping an eye on the dashboard is a great way to check on that.
Follow the steps below. If all the pods and services have settled and everything looks green in your minikube dashboard, expose the nitro-web service on your host system with:
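One way to do this with Minikube, assuming the service is called nitro-web and lives in the human-connection namespace:

```bash
# Tunnels the nitro-web service to your host and prints/opens its URL
minikube service nitro-web --namespace=human-connection
```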
First, create a cluster on DigitalOcean.
Download the config.yaml once the process has finished.
Put the config file where you can find it later (preferably in your home directory under ~/.kube/).
In your open terminal you can set the config for the active session: export KUBECONFIG=~/.kube/THE-NAME-OF-YOUR-CLUSTER-kubeconfig.yaml. You can make this change permanent by adding the line to your .bashrc or ~/.config/fish/config.fish, depending on your shell. Otherwise you would have to add --kubeconfig ~/.kube/THE-NAME-OF-YOUR-CLUSTER-kubeconfig.yaml to every kubectl command you run.
Now check that you can connect to the cluster and that it is your newly created one by running: kubectl get nodes
If you followed the steps above and can see your nodes, you can continue.
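For example, using the placeholder file name from above:

```bash
# Point kubectl at your new cluster for the current shell session
export KUBECONFIG=~/.kube/THE-NAME-OF-YOUR-CLUSTER-kubeconfig.yaml

# Should list the nodes of your newly created cluster
kubectl get nodes
```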
First, install kubernetes dashboard:
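The exact manifest depends on the dashboard version; as a sketch (check the kubernetes/dashboard releases for the manifest you want):

```bash
# Install the Kubernetes dashboard from the upstream recommended manifest
# (replace v2.7.0 with the release you actually want to deploy)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
```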
Get your token on the command line:
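As a sketch, assuming you read the token of a service account in kube-system (which account to use depends on your setup):

```bash
# List the token secrets and print the one of your chosen service account
kubectl --namespace=kube-system get secrets
kubectl --namespace=kube-system describe secret <NAME-OF-A-SERVICE-ACCOUNT-TOKEN-SECRET>
```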
It should print the access token that you will need for the dashboard login.
Proxy localhost to the remote kubernetes dashboard:
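This is typically done with kubectl proxy:

```bash
# Makes the cluster API, and with it the dashboard, reachable on localhost:8001
kubectl proxy
```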
You have to complete some prerequisites, e.g. change some secrets according to your own setup.
Change all secrets as needed. Those secrets get base64-decoded in a kubernetes pod.
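For illustration, this is how a value is base64-encoded on the command line before you put it into a secrets file (the -n matters, otherwise a trailing newline gets encoded too):

```bash
# Encode a secret value for a kubernetes secret manifest
echo -n "my-secret-value" | base64

# Decode it again to double-check
echo "bXktc2VjcmV0LXZhbHVl" | base64 --decode
```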
Switch to the namespace human-connection in your kubernetes dashboard.
This can take a while because kubernetes has to download the docker images. Sit back, relax and have a look at your kubernetes dashboard. Wait until all pods turn green and no longer show the warning Waiting: ContainerCreating.
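If you prefer the command line over the dashboard, you can also watch the pods directly:

```bash
# Watch the pods in the human-connection namespace until they are all Running
kubectl get pods --namespace=human-connection --watch
```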
Create letsencrypt issuers. Change the email address in these files before running this command.
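The file names below are placeholders for wherever your issuer manifests live:

```bash
# Create the staging and production letsencrypt issuers
# (paths are placeholders; use the issuer files from your checkout)
kubectl apply -f letsencrypt-staging-issuer.yaml
kubectl apply -f letsencrypt-production-issuer.yaml
```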
Create an ingress service in namespace human-connection. Change the domain name according to your needs:
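Assuming the manifest path mentioned further below:

```bash
# Create the ingress in the human-connection namespace
kubectl apply --namespace=human-connection -f human-connection/ingress/ingress.yaml
```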
Check that the ingress server is working correctly:
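For example, by inspecting the ingress resource and the external IP address it gets assigned:

```bash
# Show the ingress and the external IP address assigned to it
kubectl get ingress --namespace=human-connection
kubectl describe ingress --namespace=human-connection
```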
If the response looks good, point your domain to the new IP address at your domain registrar.
Now let's get a valid HTTPS certificate. According to the tutorial above, check your tls certificate for staging:
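One way to inspect it, assuming cert-manager has created a certificate resource (look its name up first):

```bash
# List the certificates created by cert-manager and check their status
kubectl get certificates --namespace=human-connection
kubectl describe certificate --namespace=human-connection <NAME-OF-YOUR-CERTIFICATE>
```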
If everything looks good, update the issuer of your ingress: change the annotation certmanager.k8s.io/issuer from letsencrypt-staging to letsencrypt-prod in your ingress configuration in human-connection/ingress/ingress.yaml.
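After editing the annotation, re-apply the manifest so the change reaches the cluster:

```bash
# Re-apply the ingress so the new issuer annotation takes effect
kubectl apply --namespace=human-connection -f human-connection/ingress/ingress.yaml
```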
Delete the former secret to force a refresh:
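The secret name is a placeholder; it is whatever secretName the tls section of your ingress references:

```bash
# Delete the staging certificate secret so cert-manager requests a fresh one
kubectl delete secret --namespace=human-connection <NAME-OF-YOUR-TLS-SECRET>
```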
Now, HTTPS should be configured on your domain. Congrats.
This setup is completely optional and only required if you have data on a server which is running our legacy code and you want to import that data. It will import the uploads folder and migrate a dump of mongodb into neo4j.
Prepare migration of Human Connection legacy server
Create a configmap with the specific connection data of your legacy server:
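A sketch of how such a configmap could be created from literals; the name, keys and values are placeholders for your actual legacy server data:

```bash
# Store the connection details of the legacy server in a configmap
kubectl create configmap legacy-server \
  --namespace=human-connection \
  --from-literal=SSH_USERNAME=someuser \
  --from-literal=SSH_HOST=legacy.example.org \
  --from-literal=MONGODB_DATABASE=hc_api
```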
Migrate legacy database
Patch the existing deployments to use a multi-container setup:
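The actual patch file ships with the deployment configuration; as a generic sketch of patching a deployment from a file (deployment and file names are placeholders):

```bash
# Apply a patch that adds the migration containers to the backend deployment
kubectl patch deployment nitro-backend \
  --namespace=human-connection \
  --patch "$(cat legacy-migration.patch.yaml)"
```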
Run the migration:
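How the migration is started depends on the scripts in the migration containers; as a sketch, you would look up the pod and exec into it:

```bash
# Find the backend pod and open a shell in it (pod name is a placeholder)
kubectl get pods --namespace=human-connection
kubectl exec -it <NAME-OF-THE-BACKEND-POD> --namespace=human-connection -- /bin/sh
```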
Grab the token from above and paste it into the login screen of the dashboard.
If you want to edit secrets, you have to base64-encode them.
Follow the cert-manager documentation and install cert-manager via helm and tiller:
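A sketch following the helm v2 (tiller) route; the chart repository and commands may have changed, so check the official cert-manager documentation:

```bash
# Install cert-manager from the jetstack chart repository (helm v2 syntax)
kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install --name cert-manager --namespace cert-manager jetstack/cert-manager
```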
Create a secret with your public and private ssh keys. As the kubernetes documentation points out, you should be careful with your ssh keys: anyone with access to your cluster will have access to them. Better create a new pair with ssh-keygen and copy the public key to your legacy server with ssh-copy-id:
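A sketch with placeholder file names, user and host:

```bash
# Generate a dedicated key pair for the migration (do not reuse your personal key)
ssh-keygen -f ./id_rsa_legacy -N ""

# Copy the public key to the legacy server
ssh-copy-id -i ./id_rsa_legacy.pub someuser@legacy.example.org

# Store both keys as a kubernetes secret (secret name and key names are assumptions)
kubectl create secret generic ssh-keys \
  --namespace=human-connection \
  --from-file=id_rsa=./id_rsa_legacy \
  --from-file=id_rsa.pub=./id_rsa_legacy.pub
```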