Building Calliope: A Technical Journey Through MacStories’ Big Software Project

Last week the MacStories team launched Project Calliope, an enormous new software project that we’ve been working on tirelessly for the last year. If you’ve been following along, you’ve heard us describe Calliope as a CMS; but from a software-engineering perspective, it’s actually a whole lot more. While we introduced Calliope as the foundation of our all-new Club MacStories and AppStories websites, we have much bigger plans for the new platform going forward. This is the foundation for the next generation of MacStories, from the website itself to many special projects in the future.

We’re extremely proud of what we’ve created here, and as the sole developer of Calliope, I want to use this post for a deep dive into the more technical side of the project. Fair warning: this will be easier to follow if you’re a software developer (particularly a web or back-end developer), but I’ll be doing my best to give understandable explanations of the technologies involved. I also just want to talk about the journey we took to get here, the challenges we faced along the way, and the factors that drove us to this particular set of solutions.

Around WWDC 2020, Federico and John briefed me on some ideas they’d been thinking through for a new Club MacStories web app. They wanted to expand the Club with some new subscription tiers, make our email-only content available on the web, and enable full-text search of the back catalog. I’d already been working on a new internal MacStories image uploader web app (slowly, on the side from my main job at the time), and we decided that the new Club site would be a good next project. The timeline would be pretty long since this was side work, but the new project idea seemed fairly straightforward.

Then in late 2020, the pandemic spelled the end of the startup I’d been working at. I contacted Federico and John the same day, and in a fit of perfect timing, we were able to work something out. I came over full time to MacStories and started work on the newly code-named Project Calliope. While a new subscription-only podcast was added to the plate, we still figured we could get it launched some time in early 2021. I started picking technologies.

Prior to this project, my background was primarily as a cloud engineer. I’d had some ideas for an ambitious, scalable yet cost-effective back-end platform for years, and had already been dabbling in building it for our internal MacStories Images web app. It started with my insatiable interest in a back-end technology called Kubernetes, an interest that was conveniently enabled by a new Linode service offering.

Kubernetes

It’s a little much to go into Kubernetes here (okay it’s a lot much), but at a very high level, it’s a modern way to deploy back-end services onto a pool of servers. It’s best used with a “microservices” architecture (where you create many tiny services which each take care of a particular concern) rather than a more traditional “monolithic” architecture (where you write one giant service which pretty much does everything all at once). Calliope was built from the ground up to work in this paradigm, and our full platform consists of eight separate (micro)services and three separate database deployments, all managed seamlessly by Kubernetes.
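To make that paradigm a bit more concrete, here’s a minimal sketch of the kind of Kubernetes Deployment manifest that describes one microservice. All of the names, the image, and the port here are illustrative placeholders, not Calliope’s actual configuration; the point is just that you declare how many identical instances you want and Kubernetes keeps that many running.

```yaml
# Hypothetical example; names and image are illustrative, not Calliope's.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
spec:
  replicas: 5            # run five identical instances of this service
  selector:
    matchLabels:
      app: api-service
  template:
    metadata:
      labels:
        app: api-service
    spec:
      containers:
        - name: api-service
          image: registry.example.com/api-service:1.0.0
          ports:
            - containerPort: 3000
```

If an instance crashes or a server disappears, Kubernetes notices the count has dropped below `replicas` and schedules a replacement automatically.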

Kubernetes has been around for a while, but it has generally been prohibitively expensive to run at a tiny company. This is especially the case when you want to run a “managed” Kubernetes deployment, which means a third-party company handles the Kubernetes controller so that you don’t have to (this can significantly decrease the complexity of using Kubernetes). Enter, Linode.

Linode launched Linode Kubernetes Engine, or LKE, in mid-2020. This is a managed Kubernetes deployment, and in an extremely Linode-like move, they don’t charge their customers for the servers that run the Kubernetes controllers. You only pay for the servers your services actually run on.

I’m a huge Linode fan, and have been running my personal server projects with the company for many years, but here’s a disclaimer: Linode has been and continues to be a sponsor of AppStories. That said, we’re getting no special deals for using their technologies, and I personally made the decision to use LKE because I believe it to currently be the best managed Kubernetes offering for the price. We wouldn’t bet the future of MacStories on a service that we didn’t believe in.

With LKE, we’re able to afford a professional-grade managed Kubernetes installation. For many months of testing and one successful launch week, LKE has been rock solid for us.

Services

So Linode runs the servers and Kubernetes manages deployments, but what about the services themselves? We use three open-source services and five of our own, plus two different Redis installations (one Redis Cluster and one Redis Sentinel, for those interested) and one MariaDB database.

Open-Source Services

To manage incoming traffic and load-balancing between services, we run the incredible Ambassador Edge Stack. This Kubernetes-native, open-source software project manages mapping URL paths to particular back-end services, and load-balancing incoming traffic across multiple service instances (among other things). For example, on launch day, we were running five different instances of our API service. Ambassador spread incoming traffic out across all five instances of this service, even though all of those requests were for the same URL. Ambassador offers an enormous number of excellent features, and if you’re looking for a load balancer, I highly recommend checking it out.
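For a taste of what that path-to-service mapping looks like, here’s a sketch of an Ambassador `Mapping` resource. The names and port are hypothetical placeholders rather than our real configuration; a resource like this tells Ambassador to route any request whose path starts with the given prefix to the named Kubernetes service, spreading the requests across however many instances sit behind it.

```yaml
# Hypothetical example; the prefix, service name, and port are illustrative.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: api-mapping
spec:
  prefix: /api/            # requests to /api/* ...
  service: api-service:3000  # ...are load-balanced across this service's instances
```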

Collecting metrics is a key part of successfully managing any software platform, but perhaps especially for one built with a microservices architecture since problems can occur from so many different sources. We use the open-source project Prometheus to collect metrics from throughout our system, and the open-source project Grafana to graph these metrics in an understandable way. Using these services, we can monitor CPU and memory usage (as well as many more metrics) for all of our services at a glance. When a problem occurs, a spike on a line graph will generally point us immediately in the right direction.

Internal Services

Internally, we’ve built our platform on five services: an authentication service, a proxy service, an API service, a web server, and a Discord bot. On launch day, we deployed 18 separate service instances from these five sources to give ourselves full confidence that the platform would withstand any unexpected traffic bottlenecks. A week later, we’ve scaled down to just 8 total service instances across far fewer servers, since things have been very stable and no particular concerns have surfaced.

We built our back-end services on Node.js, and our front-end website and web server on React via Next.js. Since everyone immediately follows that by asking me if I use JavaScript…
