Types of Monitoring Solutions
When it comes to monitoring APIs, you essentially have two options:
- Use an external service (outside of your tech stack) to synthetically generate requests against your API at regular intervals, logging the result
- Use a snippet of code in your API server runtime to log each request made, along with its response time and result, to the external service
Logging requests external to the API server
The upside to monitoring API performance from an external server is that it's 'set and forget'. You get notified if things catch fire, otherwise you get a graph tracking your API's performance. On top of this, it tends to be much cheaper than logging every single request, and there's less chance of leaking sensitive information to an external service.
The biggest win here in my opinion is that an external request to your API closely matches what your frontend clients would experience - the external testing service would have to go through the same DNS, SSL, CDN/Caching as your real users.
However, the downside is that this type of monitoring (also known as Synthetic Monitoring) isn't a "real view" of what your users see - for example, a real user would have most of your assets cached, and expect certain pages to load faster than others, while a synthetic monitor just requests the same resource without locally caching results. This difference is particularly noticeable with GraphQL, where most GraphQL clients will heavily cache requests, speeding up the user experience on the front-end.
You also need to ensure that the synthetic monitor has access to your API (for example, setting the same custom headers that your front-end uses to authenticate itself against your API).
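The external approach described above can be sketched in a few lines. Here's a minimal example using only Python's standard library; the endpoint URL, header name, and API key are placeholders for illustration, not real values:

```python
# Minimal synthetic monitor sketch using only the Python standard library.
# The URL, header name, and key below are placeholders, not real values.
import time
import urllib.request

API_URL = "https://api.example.com/health"   # hypothetical endpoint
CHECK_INTERVAL_SECONDS = 60

def summarize(status, started, finished):
    """Turn one check's raw numbers into a log record."""
    return {
        "status": status,
        "response_time_ms": (finished - started) * 1000,
        "up": status == 200,
    }

def check_once():
    """Make one request, setting the same custom auth header the front-end uses."""
    request = urllib.request.Request(
        API_URL,
        headers={"X-Api-Key": "synthetic-monitor-key"},  # placeholder header
    )
    started = time.monotonic()
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            status = response.status
    except Exception:
        status = None  # DNS failures, timeouts, and 5xx responses all count as "down"
    return summarize(status, started, time.monotonic())

def run_monitor():
    while True:
        result = check_once()
        print(result)  # in practice: store the result and alert when not result["up"]
        time.sleep(CHECK_INTERVAL_SECONDS)
```

Because the request goes over the public internet, the recorded response time includes DNS, SSL, and CDN overhead, just as it would for a real user.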
Logging all requests from within the API server
On the other hand, there's logging requests from within the API server itself via a code snippet. My initial reaction to this kind of logging is "Great. Yet another snippet of code to include...".
If my API server goes down (what I'm most interested in detecting here), the logger goes down with it. There's also the risk of other developers deleting the code snippet unintentionally.
The biggest win for Internal Monitoring is of course that you get a "real" view of your API's usage - the data here comes from real users suffering through 10-second response times.
However, you'd have to manage API keys to send your logs to the external service, ensure the logs don't contain sensitive information, and make sure your API server is able to make outbound requests to the internet.
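The "snippet of code" in this approach typically looks like request-timing middleware. A rough sketch of what that could look like for a Python WSGI app - the `ship` callback standing in for whatever client sends records to the external service:

```python
# Sketch of internal request logging as WSGI middleware. `ship` is a
# placeholder for an (ideally asynchronous) sender to the monitoring service.
import time
from typing import Callable, Iterable

class RequestLoggerMiddleware:
    """Wraps a WSGI app, timing every request and handing a log record to `ship`."""

    def __init__(self, app: Callable, ship: Callable[[dict], None]):
        self.app = app
        self.ship = ship

    def __call__(self, environ: dict, start_response: Callable) -> Iterable[bytes]:
        started = time.monotonic()
        captured = {}

        def capturing_start_response(status, headers, exc_info=None):
            # Intercept the status line so we can include it in the log record.
            captured["status"] = status
            return start_response(status, headers, exc_info)

        body = self.app(environ, capturing_start_response)
        self.ship({
            "method": environ.get("REQUEST_METHOD"),
            "path": environ.get("PATH_INFO"),
            "status": captured.get("status"),
            # Log metadata only - never request bodies or auth headers - to
            # avoid leaking sensitive information to the external service.
            "response_time_ms": (time.monotonic() - started) * 1000,
        })
        return body
```

Note the trade-offs from above baked in: the middleware only runs while the server is up, it must be careful about what it forwards, and `ship` needs network access and credentials for the external service.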
Despite the positives of internal monitoring (in terms of capturing real-user data), OnlineOrNot will be building an external monitoring tool that doesn't require logging all API requests to your service.