Below you can see the process and skills I used to create this website.
Its main purpose is to showcase my skills, my experience, and what I can do in a creative and engaging way. The sections that follow cover how I made the site, why I made it, and what I used to build it.
I think a website is a great tool for showcasing yourself; it can be so much more engaging than a CV, no matter what template you use. Building it as a project also helps me gain more knowledge and skills, and supports a continuous-improvement style of working.
My first-ever website ran on a Windows machine using IIS, and it wasn't even a virtual machine. My second ran in a virtual Ubuntu environment using an Apache2 server; I chose that method because I was learning Proxmox and VMware at the time.
Fast forward to now: this site's files live in a private GitHub organisation repository and are incorporated into a private organisation Docker Hub image.
By deciding to self-host, I had to work through many steps: how to build the website; where I can reliably store its files; what to do about backups, plus a place for trying out different styles and formats that won't affect my main (production) files and website; what to use to serve the content; and how to establish a reliable DNS link.
So How Did I Do It?
Admittedly, I used a program called Mobirise, a drag-and-drop website builder. I chose this one in particular because it exports the files rather than keeping them on its platform under a randomly generated URL that you will never remember.
When I first make changes, the project files, HTML pages, CSS, and resources are all stored locally on a NAS, in a RAID pool that can survive two simultaneous drive failures before any data is lost. Then, depending on what has been changed or created, I push the code and resources to either the PPE GitHub repo for the site (a pre-production environment for testing and sanity checks before going live) or the PRD repo (once the PRD repo is updated, the live site updates).
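As a rough sketch of that flow (the remote names `ppe` and `prd` and the file names are placeholders, not my actual repo layout):

```sh
# Commit changes from the local working copy (which lives on the NAS share)
git add index.html assets/css/styles.css
git commit -m "Update homepage layout"

# Push to the PPE repo first for a sanity check before going live
git push ppe main

# Once it looks right, push the same commit to the PRD repo,
# which updates the live site
git push prd main
```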
Additionally, the NAS pushes a backup to an S3 bucket daily at midnight, so I can follow the 3-2-1 backup rule: three copies of the data, on two different types of storage, with one copy off-site.
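However the NAS schedules it under the hood, the equivalent cron job would look something like this (bucket name and path are placeholders):

```sh
# crontab entry: every day at midnight, mirror the website share to S3,
# removing objects that no longer exist locally
0 0 * * * aws s3 sync /volume1/website s3://example-site-backup --delete
```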
The Docker image I created is very basic, as it only needs to serve a static website, and it uses Alpine as the base image because of how small it is.
It is built in such a way that every time a container starts from the image, it first removes the files it previously stored for the website (or the web server's example pages if it hasn't run before), then clones the files from GitHub and copies them into place so the web server can access them and serve them on container port 80.
Doing it this way means I never have to rebuild the image to update the website; simply recreating the container or the pods pulls in the latest files.
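A minimal sketch of that pattern, assuming an nginx-on-Alpine image and a placeholder repo URL (my actual image differs in the details):

```dockerfile
# nginx:alpine keeps the image small; git is only needed to fetch the site
FROM nginx:alpine
RUN apk add --no-cache git

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

EXPOSE 80
ENTRYPOINT ["/entrypoint.sh"]
```

And the entrypoint script it runs on every start:

```sh
#!/bin/sh
set -e

# Clear out the previous site files (or nginx's example page on first run)
rm -rf /usr/share/nginx/html/*

# Clone the latest files from GitHub and copy them into the web root
git clone --depth 1 https://github.com/example-org/example-site.git /tmp/site
cp -r /tmp/site/. /usr/share/nginx/html/
rm -rf /tmp/site

# Serve on container port 80 in the foreground
exec nginx -g 'daemon off;'
```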
So now you know how I made the site, where its files live, and that a Docker image is serving the page you're currently reading. But what's running the Docker image?
Well, it's technically two different platforms. There is a 99% probability that the version you're reading is running on my virtual K8s cluster; however, as a precaution and a backup of sorts, it is also running on a Docker host through Portainer.
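On the Docker side, the backup copy amounts to a small Compose stack that Portainer can deploy; something along these lines (image name and host port are placeholders):

```yaml
# docker-compose.yml - deployable as a Portainer stack
services:
  website:
    image: example-org/static-site:latest
    restart: unless-stopped
    ports:
      - "8080:80"   # host port 8080 -> container port 80
```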
But why? My role at Metric Gaming gave me a lot of K8s exposure, as their entire platform ran on clusters, so I wanted to create my own to experiment with, learn from, and use for my projects. And it goes beyond K8s: I also have my own O365 and Google organisation accounts, with JumpCloud as an IdP for both and for managing all the users. In effect, I copied the business infrastructure on a small scale so I could test solutions without risking any disruption to production.
Now, the Docker host is the most likely to always be on and reachable (since the K8s cluster is also used for testing and learning, it may not always be up), so if the load balancer doesn't detect the K8s host, it sends the request to the simple Docker host. Realistically, the real-world traffic to this website shouldn't require more than one container anyway.
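I won't reproduce my exact load balancer configuration here, but the failover idea can be expressed as, for example, an nginx upstream with a backup server (the hostnames and ports are made up for illustration):

```nginx
upstream website {
    # Primary: the K8s cluster's ingress
    server k8s-ingress.lan:80 max_fails=3 fail_timeout=10s;
    # Fallback: the plain Docker host, only used if the primary is down
    server docker-host.lan:8080 backup;
}

server {
    listen 80;
    location / {
        proxy_pass http://website;
    }
}
```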
My DNS provider of choice is Cloudflare; apart from the usability of their platform, the main reason is tunnels.
Cloudflare Tunnels can be hosted in Docker containers, which means I don't have to open any ports on my firewall, as the connection tunnels out from my network.
Since my ISP connection is residential, I don't have a static IP, which is another great reason to use tunnels instead of dynamic DNS solutions.
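Running a tunnel connector in Docker is a one-liner; per Cloudflare's documentation it looks like this (the token is a placeholder issued from the Cloudflare dashboard):

```sh
docker run -d --name cloudflared --restart unless-stopped \
  cloudflare/cloudflared:latest tunnel --no-autoupdate run --token <TUNNEL_TOKEN>
```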
And this gave me another opportunity to experiment with K8s. I have four domains and split their traffic over different tunnels, and K8s let me learn how to scale pods automatically: when nothing is happening on a domain, its tunnel drops back to a single pod, but if a pod's resource usage hits 75%, a new pod is created, and so on up to 10 pods, which for my use case is more than enough.
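In Kubernetes terms, that behaviour maps onto a HorizontalPodAutoscaler. A sketch of one such HPA, assuming a Deployment named cloudflared-tunnel (the name is a placeholder; there would be one per domain's tunnel):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: cloudflared-tunnel-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cloudflared-tunnel   # one Deployment per domain's tunnel
  minReplicas: 1               # idle domains drop back to a single pod
  maxReplicas: 10              # more than enough for my use case
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75   # scale out when average CPU hits 75%
```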
The tunnels are, of course, also hosted on the Docker host as a backup in case the K8s cluster is down for any reason.