News & Insights for the AI Journey|Sunday, April 21, 2019

Analytics at Scale Requires Out-with-the-Old Spring Cleaning 

(Image credit: Shutterstock / McIek)

It is human nature to hold on to things after they’ve outlived their usefulness, and the history of IT is replete with examples of technologies that just won’t die. Analytics is no different. Organizations cling to old tools that reflect an old way of doing business, and it is holding them back. Here are three reasons to do some spring cleaning with your analytics tools.

You Need the Unlimited Plan

Remember the old days, when you had months to plan for a marketing campaign or a new release, order the hardware, and set it up with time to spare? How things have changed. Now you should expect traffic to swing wildly within days, or even hours, of a launch.

Take, for example, how the South by Southwest conference in Austin completely overwhelmed the new ride-sharing apps that had replaced Uber and Lyft there. In today’s world you have to anticipate that kind of craziness and design for it, and you have to apply the same logic to your tools. If your tools can’t scale with your business, they are useless exactly when you need them most. And if your tools aren’t working, you can’t solve problems for your users, which means you are squandering the investment you have made in your application.

This Isn’t the Architecture You’re Looking For

Scale isn’t the only problem confronting your old tools. To achieve scale, you are most likely changing the way you design your application architecture. Instead of unwieldy, multi-tier applications (think the old three-tier web/app/database model), you can now take advantage of highly distributed microservices, containerization technologies like Docker, and even serverless models like AWS Lambda.
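To make the shift concrete, here is a minimal sketch of what a serverless endpoint looks like: no server tier, no app tier, just a function invoked per request. The event shape below mimics an API Gateway-style payload but is an illustrative assumption, not any particular platform's contract.

```python
import json

def handler(event, context=None):
    """A Lambda-style handler: the whole 'tier' is one function.

    The event format here (a JSON string under 'body') is a
    hypothetical stand-in for an API Gateway-style payload.
    """
    body = json.loads(event.get("body", "{}"))
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

if __name__ == "__main__":
    # Invoke locally; no server or container required.
    print(handler({"body": json.dumps({"name": "analytics"})}))
```

Note what a per-server pricing model has to grab onto here: nothing. The unit of deployment is an invocation, which is exactly why tools designed around servers struggle with this architecture.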

How are traditional analytics tools dealing with this? Not very well. Inevitably, management tools are optimized for the technologies that were prevalent when they were first designed. First, this is clearly evident in the fact that most analytics tools charge per server (servers are so 2010) and jump through all sorts of hoops to accommodate things like containers. For us “old” folks, this echoes the transition from big-iron servers to smaller commodity servers in the early 2000s, when vendors shifted to charging for software per core rather than per server.

Second, as organizations move toward microservices, they are also relying more on open-source platforms for scale (Kafka, Cassandra, etc.), with their own custom code written on top. The old model of relying on your analytics vendor to instrument your application out of the box just doesn’t cover everything. To start, most open-source platforms self-instrument, meaning they already know how to export their important metrics and logs using well-known methods. More importantly, nobody can instrument your code properly except the people who wrote it. They know how it works, so they know how it will break. And with the DevOps adoption that goes along with this shift, developers “carry a pager,” or are at least directly accountable for the performance of their code. That is a great motivator to do a better job of instrumenting your application.
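Developer-driven instrumentation of the kind described above can be as simple as emitting metrics from the code you own. Here is a minimal sketch using the StatsD-style wire format (`metric.name:value|type`); the metric names and the `emit` target are illustrative assumptions, and in practice the datagram would go to a collection agent rather than stdout.

```python
import functools
import time

def statsd_line(name, value, mtype):
    """Format one StatsD-style datagram, e.g. 'checkout.latency_ms:12|ms'."""
    return f"{name}:{value}|{mtype}"

def timed(metric_name, emit=print):
    """Decorator: time a function and emit a latency metric you chose yourself."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = int((time.perf_counter() - start) * 1000)
                emit(statsd_line(metric_name, elapsed_ms, "ms"))
        return inner
    return wrap

@timed("checkout.latency_ms")  # hypothetical metric name
def checkout(cart):
    # Stand-in for real business logic.
    return sum(cart)
```

The point is that the developer picks the metric name and the measurement point, because only the author knows which code paths matter and how they fail.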

Your analytics tools need to be on board with this trend: they must not only make it easy to get your custom data in, but also avoid penalizing you by arbitrarily limiting that data based on an outdated view of application architecture (e.g., only X metrics per server). I talked with an organization that was sending 150,000 custom metrics per server, compared with the average of about 1,000 per server that I typically see. That works for them, and it should.

Cloud-posers Are Out

Lastly, let’s just get it out of the way: like per-server pricing, cloud resistance is so 2010. Just admit it, you love the cloud. In all seriousness, by one estimate 80 percent of IT budgets will be committed to cloud platforms and services within two years. The problem is that most analytics tools today were designed to be deployed in a data center you own, and they definitely weren’t designed to work across multiple cloud providers. Your analytics tools need to work the way your application does: everywhere. They need to be able to collect data from anywhere, on any platform. Most vendors try to solve this by running their software for you on a cloud platform, but it still works the same way, with the same limitations. Don’t be fooled.

At the end of the day, it might seem like you have plenty of time to carefully select the right approach to analytics. But just like the proverbial frog in slowly heating water, it’s easy to miss the trend. Everything works well until it doesn’t: your fantastic marketing campaign crashes your application, and your analytics tool with it. You lug your analytics tools to the cloud only to find they are completely out of place. Sea changes in technology, like the one we are seeing now, leave the previous generation of tools behind.

So as you evaluate analytics tools for your business, make sure they will scale with your business rather than hand you ready-made solutions that don’t actually meet your core needs. They also need to do more than make the right noises about the cloud. That means, in both a technical and a business sense, gathering all of the data you actually need to solve problems. Bottom line: make sure you’re not stuck with a pile of very expensive junk, because you’ll be the one paying the bills, not your analytics vendor.

Ben Newton is principal product manager at Sumo Logic.
