Dogfooding: How Tggl uses Tggl to release features

Naming things and managing caches are hard, but releasing features properly is harder, a different kind of "hard". While technical challenges are solved with sheer expertise and dedication, releasing features is a mix of technical work, planning, communication, and execution. In this post, we will share our release process at Tggl and give you a behind-the-curtain look at how we use our own product to tackle those big challenges.

Challenge 1: having a safety net

Releasing new features can be stressful for many reasons, but the most common one is the fear of breaking things. Monitoring is a great way to get notified of issues, but knowing about a problem is not enough: what do you do when things go wrong? Having a clear process to follow is key to handling those delicate situations properly without having your judgment clouded by stress. You need a safety net, and you should have it in place before you need it.

"Just rollback!" says the engineer at the back of the bench. It's easy to say, but it's not always that simple. Databases migrations, cache invalidation, and other side effects can make it hard to roll back a feature by simply deployment a previous version of the code. This is why we need to plan the rollback before we release a feature.

Big red button

In those high-stress situations, having a big red button that you can press to instantly disable a feature is a lifesaver. We call this a "kill switch". Ideally, that kill switch takes seconds to act, not minutes, which is why deploying code is impractical. At Tggl we use a simple feature flag for each feature we release, and before we go live, we make sure that the flag can actually kill the feature properly. For us, making sure that we can kill the feature instantly is more important than the feature itself.
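In code, a kill switch can be as small as a guarded branch around the new code path. Here is a minimal sketch; the flag name, the in-memory store, and the billing functions are all invented for illustration and are not the Tggl SDK:

```typescript
// Hypothetical in-memory flag store; in production this would be fed by
// your feature-flag provider.
const flags: Record<string, boolean> = { "new-billing-engine": true };

function isActive(flag: string): boolean {
  // Default to "off": an unknown or unreachable flag kills the feature.
  return flags[flag] === true;
}

// Hypothetical new code path, guarded by the flag.
function newBillingEngine(amount: number): number {
  return Math.round(amount * 1.2);
}

// The legacy path stays in the code as the fallback until the flag is removed.
function legacyBillingEngine(amount: number): number {
  return Math.round(amount * 1.2);
}

function computeInvoice(amount: number): number {
  return isActive("new-billing-engine")
    ? newBillingEngine(amount)
    : legacyBillingEngine(amount);
}
```

Note the defensive default: if the flag is missing or the flag service is unreachable, the feature is off, which is exactly the behavior you want from a safety net.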

It's a simple concept, but it can save you a lot of headaches. While most of our kill switches are temporary (meaning that we remove them from the code after a week), some are kept in place longer. We simply mark them as permanent in our feature flag system and remove them once we are confident that the feature is stable.

Challenge 2: testing without impacting real users

"It works on my machine", probably the most common phrase in software development. But does it work in production? We all know that local environment widely differ from production, which is why it is not a great indicator that the feature will work for real users. That's why we have staging environments, built to be as close as possible to the real thing without impacting our precious users. But staging is not production: different architecture, different environment variables, different data, different load, different providers, different everything. The only way to know is to test in production. But how do you do that without impacting your users?

You probably know the answer by now: we simply use the feature flag that we already have in place (see challenge 1 above) to deploy the code to production with the flag turned off. It is much easier to guarantee that a feature is disabled than to guarantee that it works properly.

Internal usage

This is where we eat our own dog food. We are our own beta testers: we simply enable the feature for all users with a @tggl.io email address and start using it ourselves every day. This way, we can test the feature without impacting our clients. We can also test it with real data, which is a big plus.

When it comes to actually releasing the feature to our clients, we know that it works because we have been using it for weeks. We are confident that the feature is stable, and we can release it with peace of mind. We know that we will not have any unpleasant surprises because we were testing the real thing all along.

Challenge 3: getting early feedback

Another source of headaches is to know whether you are releasing the right feature to your users or not. Are you delivering a feature that matters? Is it what people had in mind when you did your user research? This problem is as old as software development itself, and it's not going away anytime soon. The best way to tackle this problem is to get feedback early and often.

Call for beta

After dogfooding new features for a few weeks, we publish a release note in the "what's new" section of the app and call for a private beta. We enable the feature for the users who want early access via the Tggl dashboard, and we ask them to give us feedback. We also reach out to a few clients that we know requested the feature and enable it for them as well. We are not looking for bugs, we are looking for feedback on the feature itself.

That early feedback is crucial to understand whether we are on the right track. We can quickly iterate on the feature and release a new version to the beta testers. We can also decide to kill the feature if the feedback is overwhelmingly negative, although luckily that has never happened so far.

Challenge 4: load testing

We process a substantial number of requests on our servers every day, so any new feature we release can have a big impact on our infrastructure. We need to make sure that a feature is not going to push our servers to their limit, and we need to keep our response times as low as possible since our clients rely on us. The problem is that load testing is hard: it is close to impossible to predict precisely whether a solution can handle a given load, because new challenges arise at high volumes that you cannot reproduce at low volumes.

A common approach is to replicate the infrastructure in a staging environment and run load tests there. The issue is that you have to simulate production-like traffic, which is hard to do. You can also copy and route real traffic from production to your staging environment, but at that point your costs start to skyrocket simply because managing such a setup is hard and requires engineering resources.

Unsurprisingly, we use Tggl to tackle this problem. We simply enable the feature for a small subset of our clients and monitor the impact on our servers, then progressively release it to more clients while keeping an eye on our metrics. This way, we get a good idea of the feature's impact on our infrastructure without having to run expensive load tests, all with the safety of being able to turn it off at any time at the press of a button.
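One common way to implement this kind of progressive rollout is a sticky percentage based on a hash of the client id. This is a sketch of that general technique, not Tggl's internals: a client is in the rollout if and only if its hash bucket falls below the threshold, so raising the percentage only ever adds clients and never flips existing ones back off.

```typescript
import { createHash } from "node:crypto";

// Deterministic, sticky percentage rollout.
function inRollout(clientId: string, percent: number): boolean {
  // Hash the client id so assignment is stable across requests and servers.
  const digest = createHash("sha256").update(clientId).digest();
  // Map the first 4 bytes of the hash to a stable bucket in [0, 100).
  const bucket = digest.readUInt32BE(0) % 100;
  return bucket < percent;
}
```

Because the bucket is derived from the id alone, a client that was included at 10% stays included at 50%, which keeps the experience consistent while you ramp up.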

Another approach that we often use is traffic sampling. For features that do not have to be fully on or off for a given client, we can instead sample a small percentage of the incoming requests and ignore the rest. We used this for our analytics feature, only tracking a small percentage of requests while monitoring our database load. Using Tggl, we could slowly increase the percentage of tracked requests right from our dashboard until we successfully reached 100% of traffic. We wrote an in-depth case study on traffic sampling that you can read here.
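Unlike the per-client rollout, sampling is decided per request: each request independently has a `rate` chance of being tracked. A minimal sketch, with names and the injectable random source invented for illustration:

```typescript
// Decide whether to track this particular request.
// `rand` is injectable so the sampling decision can be tested deterministically.
function shouldSample(rate: number, rand: () => number = Math.random): boolean {
  return rand() < rate;
}

// Example: only track analytics for 5% of incoming requests.
function handleRequest(req: string, track: (r: string) => void): void {
  if (shouldSample(0.05)) {
    track(req); // the remaining 95% of requests are served but not tracked
  }
}
```

Driving `rate` from a remotely configured flag value is what lets you ramp from a few percent to 100% of traffic without redeploying.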

Challenge 5: technical debt

The big downside of feature flags is that you add conditional code to your codebase knowing that it will eventually become obsolete: once the feature is fully released, one branch of the condition is dead code. The hard part is usually tracking that dead code and planning its removal promptly.

Rollout lifecycle

Once again, we rely on Tggl to manage our technical debt automatically. Each flag's stage is automatically updated based on the monitoring data Tggl receives. When a flag is rolled out to 100% of users, it is moved to the "Completed" stage, letting us know we can remove it from the code. When a flag is removed from the code and no longer evaluated, it is moved to the "Legacy" stage, letting us know that we can safely archive it from the dashboard.
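The lifecycle above can be summarized as a small state transition. The stage names come from the post; the transition logic and data shape here are assumptions sketched for illustration:

```typescript
type Stage = "Active" | "Completed" | "Legacy";

// Monitoring signals for a single flag; shape is illustrative.
interface FlagMonitoring {
  rolloutPercent: number;     // share of users currently receiving the feature
  evaluatedRecently: boolean; // whether SDKs are still evaluating the flag
}

function nextStage(current: Stage, data: FlagMonitoring): Stage {
  if (!data.evaluatedRecently) {
    return "Legacy"; // gone from the code: safe to archive from the dashboard
  }
  if (data.rolloutPercent === 100) {
    return "Completed"; // fully rolled out: safe to remove from the code
  }
  return current;
}
```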

Every week during sprint planning, we review the flags that are in "Completed" and create a ticket for removing them from the codebase. We also review the flags that are in "Legacy" and archive them directly. This way, we keep our codebase clean and our dashboard up to date.

Dogfooding, love and use your own product

For us, dogfooding is really important and has been part of our DNA since day 1. We want to build a product that we love and use, and we believe that the best way to build a great product is to use it ourselves every day. By using Tggl internally, we're able to approach product development with a deep sense of empathy and ownership.

This mindset has allowed us to stay very close to our clients and understand their pain points directly. We can prioritize feedback based on importance (which we can gauge better ourselves), not on who has the loudest voice. We don't have to rely solely on external feedback to understand what works and what needs improvement: we live it. This gives us a much clearer picture of what's essential, what needs refinement, and what truly delivers value.

Being a direct beneficiary of our product also means that we're more invested in its success. We're not just building features for the sake of it; we're building features that we know will make our lives easier and our work more efficient, which makes us all the more motivated to build a great product.

The dangers of dogfooding

While dogfooding is a powerful way to refine our product, it has its downsides. One big risk is getting too focused on our own needs and missing what really matters to our customers. Just because something works well for us doesn't mean it will for everyone. We are not our customers; in fact, most of our clients are larger companies with different needs and constraints.

There's also the risk of over-fitting our product to our own needs and missing issues that pop up in different customer environments. What works smoothly for us might be a headache for someone else. That's why it's important to balance dogfooding with regular, honest feedback from our users. It keeps us grounded and ensures we're building something that works for everyone, not just for us.

Conclusion

Releasing features is never easy. At Tggl, we rely on our own product to tackle these challenges. By using feature flags, we've built a safety net that allows us to deploy new features confidently, knowing that we can turn them off instantly if anything goes wrong.

We also beta test our features in production with a select group of clients, allowing us to gather early feedback and ensure we're building the right solutions. In addition, by sampling traffic, we load test in real-world conditions to make sure our infrastructure scales smoothly without overwhelming our systems.

Most importantly, we stay close to our clients. While we use Tggl internally every day, we make sure we’re not just building for ourselves. We listen carefully to customer feedback, ensuring that our product evolves in a way that benefits everyone, not just our own team. This approach keeps us grounded and helps us build a product that serves a wide range of needs, without falling into the trap of overfitting it to our own use cases.

In the end, using Tggl to release Tggl features has made us more agile, more responsive to feedback, and better equipped to handle the challenges of software development. It’s a strategy that keeps us close to our customers while ensuring we continue to innovate and improve.
