Why Synthetic Monitoring and Testing is Not Enough

Application development is increasingly dependent upon APIs, often integrating with many different services. To address uncertainty in these dependencies, many developers use synthetic tests to monitor their API activity. This common tooling helps you discover when an API is up or down, and whether it’s returning the expected results. While useful, synthetic testing cannot mimic the reality of today’s applications.

When your application uses multiple services, your tests will miss:

  • Internal microservices that don’t have public endpoints
  • Errors specific to real usage that you haven’t foreseen
  • Changes to your application after the tests are written

In this post, we’ll share more about these three scenarios, and explain why real-time monitoring of live API calls is the best solution.

Your Microservices Might Be Untestable

Traditional synthetic testing isn’t even an option for many enterprises today. If you’re building internal APIs that are not Internet-accessible, as is common in microservices architectures, you’ll be restricted to testing only your public interfaces. Sometimes that is enough, but for sufficiently complex systems, it typically means you’re leaving much of your software untested.

Most synthetic monitoring tools run as SaaS. You log in to a dashboard and define a series of API calls and pass criteria. Then, at defined intervals, the monitor calls the endpoints (typically from cloud providers in various locations) to verify the tests pass. Doing this with microservices requires that the calls be made from within your own network, since those services may not be reachable over public DNS.
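
As a rough illustration, a single synthetic check boils down to fetching an endpoint on a schedule and comparing the response against pre-defined expectations. The status code and expected fields below are hypothetical, not any particular vendor’s configuration:

```python
import json

# Pre-defined expectations for one hypothetical synthetic check.
EXPECTED_STATUS = 200
EXPECTED_KEYS = {"id", "status"}

def check_response(status_code, body_text):
    """Return True if the response matches the synthetic test's criteria."""
    if status_code != EXPECTED_STATUS:
        return False
    try:
        body = json.loads(body_text)
    except ValueError:
        # Non-JSON body fails the check outright.
        return False
    # The check passes only if every expected field is present.
    return EXPECTED_KEYS.issubset(body)

# A scheduler (cron, or the SaaS provider's probes) would fetch the URL
# at each interval and raise an alert whenever check_response(...) is False.
```

The key limitation is visible right in the code: the check can only compare against what was written down ahead of time.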

You may be able to use synthetic monitoring with simpler scenarios:

  • An internal API backing a mobile app or other frontend
  • Calling a small number of third-party APIs

In both of these cases, the endpoints must be accessible from the Internet. With some manual effort, you could create tests that periodically mimic these client-side calls. Even then, you can only monitor pre-determined request and response data to ensure it matches expectations. This approach not only misses issues with internal services, it also gives you only a partial view of your external calls.

Synthetic Monitoring Only Tells Part of the Story

The problem with synthetic monitoring is in the name: it’s synthetic. You are only testing what you’ve explicitly included. Many errors depend on real usage conditions, and if you haven’t accounted for them, your monitoring will not catch them.

Synthetic tests can miss:

  • Problems related to your application’s network
  • Edge cases based on the credentials you use
  • Bugs that only arise with dynamic request content
  • Latency not captured in a one-off test
  • Error messages sent with 200-level status codes
  • Issues you have not considered with your tests
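
The 200-level case is worth spelling out. Many APIs report application-level failures, such as rate limits or partial outages, inside a successful HTTP response, so a check that asserts only on the status code stays green while real requests fail. A small sketch with a hypothetical error envelope:

```python
import json

# Hypothetical response from a third-party API: HTTP 200, but the body
# carries an application-level error.
status_code = 200
body = '{"error": {"code": "rate_limited", "message": "Try again later"}}'

def naive_check(status):
    # Many synthetic tests assert only on the HTTP status code.
    return status == 200

def body_aware_check(status, body_text):
    # A stricter check also inspects the payload for an error envelope.
    payload = json.loads(body_text)
    return status == 200 and "error" not in payload

# naive_check(status_code) stays True, so the monitor reports healthy,
# while body_aware_check(status_code, body) returns False: a real failure.
```

Even the stricter check only knows about the error shapes you anticipated; a new failure format from the provider slips through both.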

These issues only get worse if your application depends on multiple third-party APIs. We found that over half of developers consume more than five APIs. With that many potential points of failure, you can see why it’s tempting to set up monitors. Yet we also discovered that most developers don’t monitor the external APIs they call, and may not have access to a sandbox environment they can easily test against.

Understanding the full picture of API performance requires total visibility into what your application is actually experiencing. There are situations where synthetic tests make sense, but you cannot depend on them to capture every issue with your application.

If it’s important for your customers to have a low-error experience, it’s critical that you go beyond synthetic tests.

It’s Hard to Maintain Synthetic Tests

Where synthetic monitoring makes sense is for APIs that you provide publicly or to approved partners. The surface area of the tests is limited to a single API with well-defined use cases. Monitoring third-party APIs is more difficult, especially if you try to cover the scenarios synthetic tests tend to miss. External APIs often change without notice, so tests can at least alert you when things aren’t as you expect. Then comes the hardest part: maintaining those tests while your own app is also changing.

Let’s say you’ve gathered the details of every API call you make during a typical user’s experience with your application. Next, you translate those use cases into synthetic tests so you can be alerted when they catch errors. You’ll still be missing atypical user paths, but that may be a trade-off you’re willing to make. You can publish your tests and monitor them regularly, feeling somewhat confident. As long as you don’t change anything.

Once you’ve built an application, it’s not done. Most software requires bug fixes, user experience enhancements, and new features. Inevitably, you’ll need to change how you call external APIs. Building comprehensive synthetic tests for an application that calls multiple APIs is difficult in the first place. Keeping them in sync with your exact calls as your code evolves might be an impossible task.

Ideally, you could add a process to keep monitors in sync with your real calls, for example by including synthetic test maintenance in code review. An unfortunate side effect is slower development cycles. All in the name of tests that can’t even capture every scenario.

Use Real Monitoring for Real Visibility

We’ve bumped into all of these issues, both in our own development work and with Hoss customers. Whether you’re consuming your own microservices or third-party APIs, our goal is to give you deep visibility and better customer experiences. Keep track of performance based on real traffic, not synthetic tests. You can enable error alerts and reduce the time spent debugging your integrations.

Try Hoss for free and save yourself the headache of synthetic monitoring.