Get public TCP LoadBalancers for local Kubernetes clusters
When it comes to managing Kubernetes clusters, especially when they're tucked away in a home lab or a private cloud, exposing services to the outside world can feel like a real head-scratcher. Say hello to "inlets," a nifty operator that takes the headache out of getting public TCP LoadBalancers for local Kubernetes clusters.

On a managed Kubernetes engine, you expose a service simply by setting its type to `LoadBalancer`. The cloud works its magic, routes traffic, and boom—you've got ingress to a previously internal cluster. But what about when you're running things locally, be it on a laptop, a Raspberry Pi, or any on-premises setup? This is where inlets steps in, bridging the gap by provisioning a VM on a public cloud and running an inlets server on it. Once inlets-operator is up and running in your cluster, it starts an inlets client and connects it to that server, then updates the original service with the public IP address, just as a managed Kubernetes environment would. Delete or tweak the service, and the cloud VM is updated accordingly. Talk about working smarter, not harder!

Installing inlets is straightforward: it supports a plethora of cloud providers and ships with a Helm chart. A single command can turn a `<pending>` LoadBalancer into a service with a real IP, and the creation of Tunnel Custom Resources takes the automation one step further.

If you need inlets to play nice with other LoadBalancer implementations, you're covered. You can tunnel only specific services by annotating them, or tunnel all LoadBalancer services except selected ones. As the cherry on top, IPVS users can declare a Tunnel Custom Resource directly instead of relying on the `LoadBalancer` service type. The folks behind inlets have thought about who could benefit the most from its capabilities.
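To make that workflow concrete, here is a rough sketch of an install and an exposed service. The Helm repository URL, chart value names (`provider`, `region`, `tokenSecretName`), the opt-out annotation key, and the Tunnel resource's fields are written from memory of the inlets-operator documentation—treat them as assumptions and verify each one against the project's README before running anything.

```shell
# Add the inlets-operator Helm chart (repo URL assumed; verify against the docs)
helm repo add inlets https://inlets.github.io/inlets-operator/
helm repo update

# Store your cloud provider's API token as a secret the operator can read
kubectl create secret generic inlets-access-key \
  --from-literal inlets-access-key="$DO_API_TOKEN"

# Install the operator; the value names here are assumptions,
# so check the chart's values.yaml for the real ones
helm upgrade --install inlets-operator inlets/inlets-operator \
  --set provider=digitalocean \
  --set region=lon1 \
  --set tokenSecretName=inlets-access-key

# A single command turns a <pending> LoadBalancer into one with a real IP
kubectl run nginx-1 --image=nginx --port=80
kubectl expose pod nginx-1 --port=80 --type=LoadBalancer
kubectl get svc -w   # watch EXTERNAL-IP go from <pending> to a public address

# Opt a service out when another LoadBalancer implementation should handle it
# (annotation key assumed from the project docs)
kubectl annotate service nginx-1 operator.inlets.dev/manage=0

# IPVS users can declare a Tunnel Custom Resource directly instead of using
# the LoadBalancer type (apiVersion and field names assumed; check the CRD)
kubectl apply -f - <<EOF
apiVersion: operator.inlets.dev/v1alpha1
kind: Tunnel
metadata:
  name: nginx-1-tunnel
  namespace: default
spec:
  serviceRef:
    name: nginx-1
    namespace: default
EOF
```

These commands need a live cluster and a cloud API token on hand, so they're best read as a map of the moving parts: one secret, one Helm release, and either a `LoadBalancer` service or a Tunnel resource per exposed workload.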
Whether you're running a private cloud, self-hosting applications, testing, sharing work with colleagues, or integrating webhooks and APIs, inlets removes the typical hassles of opening firewall ports or configuring dynamic DNS. Compared to other solutions, it offers unlimited connections, the freedom to use your own DNS, seamless integration with Kubernetes, and compatibility with various IngressControllers and Istio Ingress Gateways.

It also brings cost-effectiveness to the table: exit-servers are spun up in your preferred cloud, often at the lowest possible cost. With DigitalOcean, for example, that works out to around $5 a month, including a generous bandwidth allowance, and comparable provider plans are available across Google Compute Engine, DigitalOcean, Scaleway, AWS, Linode, Azure, and Hetzner. The beauty is that you pay only for the VMs you actually need, typically chosen for their low cost and high bandwidth. Contributing back to this thriving project is easy, with a community open to fresh input and collaboration. Alternatives in the ecosystem, like MetalLB, kube-vip, and Cloudflare Tunnels, show how inlets can complement different setups while standing out in its ability to expose services publicly.

For those constantly fine-tuning their homelabs or privately managed clusters, inlets offers a practical, flexible, and budget-friendly way to expose Kubernetes services to the world. It plays a unique role in making local and private clusters just as accessible as their managed counterparts—like having your cake and eating it too: easy local development, with the option of making it publicly accessible, all while keeping costs and headaches to a minimum. So whether you're a seasoned Kubernetes pro or just tinkering in your homelab, inlets is your go-to solution for making the private public in the simplest way possible.