Telemetry sounds like a tough technical term, and if you look it up on the internet you will mostly find a dry definition. It is often associated with industries that feel a bit detached from everyday life, like marine navigation, telecom systems, or space travel. Take a closer look, though, and you will notice it is not so distant; it has simply been rebranded for the 21st century as being data-driven. We have all heard of data-driven companies like Tesla, Uber, Facebook, and Google, but also of other segments like data-driven healthcare, programming, factories, learning, and journalism.
The main goal of all of these is collecting small bits of information. Collecting from a few sources is not an issue, but imagine collecting hundreds of those bits from thousands or even millions of sources.
“The single raindrop never feels responsible for the flood.”
— Douglas Adams
However, those small bits quickly add up, and that is what makes them valuable. I'm certain many people reading this are aware that websites use Google Analytics or similar services to track user navigation and a host of other data. Well, that is telemetry. And using that data to optimize your content and user interactions, and to understand your traffic, is what being data-driven is all about. You no longer guess or rely on gut instinct to figure out what works; you rely on data to power your decisions.
How this helps
- Allows you to set tangible goals
- Helps you act faster
- Introduces structure into your process
- Takes out the guesswork
Whether you are tracking electricity consumption at home or monitoring a large-scale application deployment, having clear-cut targets and goals is key to success. Key performance indicators (KPIs for short) is the most common term for these targets. For example, here at DataGekko we track all our system performance metrics, and two of our KPIs are query latency and resource utilization. For a long time these two were a huge driving force behind our work, serving as both the output of our work and the input on what to do next. Drilling into individual cases, we found pieces of our system that introduced some nasty latency issues early on and weren't up to our standards, so we focused on improving them piece by piece. The value of these metrics was that whenever we deployed an update we could track how the changes affected the system in real time, gather instant feedback, and know whether we were doing a good job. It's a constant feedback loop.
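As a rough illustration, that kind of deploy-and-compare feedback loop can be sketched as a percentile check over latency samples. The `p95_latency` helper and the sample numbers below are hypothetical, not DataGekko's actual tooling:

```python
def p95_latency(samples_ms):
    """Return the 95th-percentile latency (ms) from a list of samples."""
    ordered = sorted(samples_ms)
    # Index of the sample below which ~95% of observations fall.
    index = max(0, int(len(ordered) * 0.95) - 1)
    return ordered[index]

# Made-up latency samples recorded before and after a deploy.
before = [12, 14, 13, 15, 110, 12, 13, 14, 16, 12]
after = [11, 12, 11, 13, 14, 11, 12, 12, 13, 11]

print(f"p95 before deploy: {p95_latency(before)} ms")  # 16 ms
print(f"p95 after deploy:  {p95_latency(after)} ms")   # 13 ms
```

Tracking a percentile rather than an average is a common choice here, because a handful of slow outliers (like the 110 ms sample above) can hide behind a healthy-looking mean.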
Data also helps you act faster, much faster than you would otherwise. A common scenario is that an issue is detected only long after the fact, once a customer reports it. But what if the system runs mostly on autopilot, so you don't interact with it regularly, yet it is still critical? Imagine having deployed a large cluster of temperature sensors around the city. With thousands constantly pushing updates, it is easy to miss a few that have quietly dropped off the list. A year later you return to the data you collected, only to find that certain areas are no longer covered; it seems crows have been eating your sensors. Sadly, time travel is still a tough nut to crack, so you have to wait another year for a full dataset. Had you set up uptime monitors or similar checks and watched them regularly, you would have noticed the active sensor count starting to drop within days. Replace the sensors today and you lose a day's worth of data instead of a year's, not to mention the sensors themselves.
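A minimal sketch of such a staleness check, assuming each sensor's last report time is kept in a dictionary keyed by sensor ID (the IDs, timestamps, and threshold below are made up for illustration):

```python
STALE_AFTER_S = 3600  # flag sensors silent for more than an hour

def stale_sensors(last_seen, now, threshold=STALE_AFTER_S):
    """Return IDs of sensors that have not reported within the threshold.

    last_seen maps sensor ID -> epoch seconds of its last report.
    """
    return [sid for sid, ts in last_seen.items() if now - ts > threshold]

# Hypothetical snapshot: sensor-b has been silent for ~3 hours.
now = 100_000
last_seen = {"sensor-a": 99_500, "sensor-b": 90_000, "sensor-c": 99_900}
print(stale_sensors(last_seen, now))  # ['sensor-b']
```

Wire a check like this into a scheduled job that alerts when the stale list grows, and the crows get noticed in hours rather than a year later.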
You no longer work based on internal hunches but on hard data as evidence. You can set objectives and key results (OKRs) for the quarter or sprint you are planning and then measure impact against them. Based on your targets, you can map out a plan to tackle the individual points that contribute directly towards your goals. The best part is that afterwards you can also measure the impact of the work by how well you met those targets.
No more guessing
Knowing what is happening at the time you make decisions is critical to focusing your work and resources on what matters. Deciding which features to build in a new startup is hard. While user feedback and feature requests can give an impression of what is in demand, we often build out a feature only to discover later that it is not used, or used poorly. Running an A/B test with a simplified proof of concept of the feature lets you test the waters in terms of usage and usage patterns. You can then make decisions based on utilization of the feature, or improve it if you see users struggling to navigate it. This allows you to carefully prioritize and map out work before investing too much into it.
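As a toy illustration of comparing two variants, assuming you log how many exposed users actually used the feature (the counts and the `usage_rate` helper are hypothetical):

```python
def usage_rate(used, exposed):
    """Fraction of exposed users who actually used the feature."""
    return used / exposed if exposed else 0.0

# Made-up counts from a week of A/B exposure.
variant_a = usage_rate(used=120, exposed=1000)  # full-flow version
variant_b = usage_rate(used=210, exposed=1000)  # simplified proof of concept

print(f"A: {variant_a:.1%}, B: {variant_b:.1%}")  # A: 12.0%, B: 21.0%
```

In practice you would also want a statistical significance test before acting on a difference like this; with small sample sizes the gap can easily be noise.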