Bad Microservice Implementations

Published on Jan 24, 2026

I don’t particularly mind a well-built microservice architecture with a use case that justifies it. However, I have seen many small companies that barely rack up a few requests per minute jump at the chance to implement a microservices architecture, often to their detriment. For one, they are throwing money at the void, and often the void bites back. The mental overhead of shepherding so many different services can be immense, especially with a small team. Here are some reasons why I think that’s the case.

Relationships in disparate databases

In smaller companies, you tend to focus on a very specific and particular business area. You haven’t had time to branch out yet. What bites many developers in the ass is that their data tends to be very relational. Of course, for a big company with many business units, one piece of software can be genuinely, totally unrelated to the other software they own. But for most small companies, all you have is your singular app and your hopes and dreams. Their user management system is tangentially related to their SaaS note-taking app, and their note-taking app is semi-related to their calendar scheduling system. See what I mean?

One of the stated goals of microservices is to maintain relative independence between services, such that one service cannot take down another. Hence, it is best practice for each service to own its own database. But if the user database is so disparate from the note-taking database, and the relationship between them is maintained via crummy vibe-coded move-fast-break-things logic in terrible JavaScript, what prevents programmers from introducing logic errors that corrupt the data? In the middle of orchestrating a complex procedure involving multiple databases, what if you delete the user account but fail to delete the notes? In the best case it merely takes up extra space, but it is still a form of data corruption. There is no effective rollback mechanism; you would need to build that logic yourself, whereas a relational database is designed to keep relationships tight-knit and watertight. It will scream at you if you delete your user before your notes, and it presents fluent APIs that keep your cascading deletions atomic, in a single transaction. You can’t wrap API requests in a transaction, can you? You’ll end up with relationships pointing to nowhere, and code with correct goals but incorrect assumptions will crash at runtime due to corrupted data.
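To make the contrast concrete, here is a minimal sketch using Python’s built-in sqlite3 with a hypothetical users/notes schema (the table names are made up for illustration). The cascading delete happens atomically inside one transaction, which is exactly what you lose once the two tables live behind different services:

```python
import sqlite3

# Hypothetical schema for illustration: users and their notes,
# as in the note-taking example above.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs per-connection
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE notes (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
        body TEXT
    )
""")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
conn.execute("INSERT INTO notes VALUES (1, 1, 'first note')")
conn.execute("INSERT INTO notes VALUES (2, 1, 'second note')")
conn.commit()

# Deleting the user removes their notes in the same transaction;
# there is never a window where notes point at a missing user.
with conn:
    conn.execute("DELETE FROM users WHERE id = 1")

remaining = conn.execute("SELECT COUNT(*) FROM notes").fetchone()[0]
print(remaining)  # → 0
```

With two services calling each other over HTTP, the equivalent guarantee has to be rebuilt by hand with sagas or compensation logic, and every bug in that logic is a data-corruption bug.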

Depends-on relationships between microservices are hard to manage

Imagine a natural disaster ravages the data center your services are colocated in. Thankfully, with your backups, you are able to restore your services one by one. However, as the services intrinsically depend on each other, there needs to be a directed acyclic graph of which microservice should come online before the others, and managing this complex network of 100 or so dependencies would be hard for any person. Thankfully, there is software designed for this use case, but it inevitably adds an extra layer of complexity in a period where you are already very stressed. Bugs in the configuration that specifies what depends on what might also be discovered at this critical juncture, if you ever forget to update it. In practice, once your infrastructure gets big enough, avoiding dependency cycles is also not trivial. You need adequate planning ahead of time to avoid this issue, and many might just call it quits at this point and copy and paste the code from their microservice dependency, breaking the cycle but obviating the entire point of microservices in the first place. This is honestly not that big of a deal if you have set everything up correctly, but with a single service it is much easier to reason about your infrastructure.
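That boot-ordering problem is, at its core, a topological sort. A minimal sketch using Python’s standard-library graphlib, with a hypothetical dependency map (the service names are made up for illustration):

```python
from graphlib import TopologicalSorter, CycleError

# Hypothetical service dependency map: each service lists the services
# that must be up before it can start.
deps = {
    "auth":     set(),
    "users":    {"auth"},
    "notes":    {"users", "auth"},
    "calendar": {"users"},
    "gateway":  {"notes", "calendar"},
}

# static_order() yields a valid boot order, or raises CycleError if
# someone has quietly introduced a dependency cycle.
try:
    boot_order = list(TopologicalSorter(deps).static_order())
    print(boot_order)  # e.g. ['auth', 'users', 'notes', 'calendar', 'gateway']
except CycleError as e:
    print("cycle detected:", e.args[1])
```

With five services this is trivial; with a hundred, the map itself becomes configuration that can rot, which is the failure mode described above.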

Extraneous data

And if you are using good old REST APIs for microservices to communicate with each other, you might end up sending more data than each service needs, increasing latency. Make the endpoints too granular, and when you need more data you have to make a lot of requests, again increasing latency. This is where people reach for technologies like GraphQL, which allow API consumers to request exactly the data they need. However, that increases complexity and adds an extra layer of indirection that can increase the maintenance burden of the app while adding more points of failure in production, not to mention more potential attack vectors for your project. This is an even bigger concern if you are also using GraphQL for your client-to-server communication to keep your API consistent, as certain microservices do use user-facing APIs to query for data. Denial-of-service attacks are very much possible with malicious queries, especially against GraphQL servers that are not sufficiently hardened against them.
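One common mitigation is rejecting queries past a maximum depth before executing them. A toy sketch of the idea, modelling the query as nested dicts rather than a real GraphQL AST (production servers do this with validation rules or server plugins):

```python
# Toy depth limiter. A real GraphQL server would walk the parsed query
# document; here a query is just nested dicts for illustration.
def query_depth(selection: dict) -> int:
    if not selection:
        return 0
    return 1 + max(query_depth(sub) for sub in selection.values())

MAX_DEPTH = 5

# A maliciously nested query: user -> notes -> author -> notes -> ...
evil = {"user": {"notes": {"author": {"notes": {"author": {"notes": {}}}}}}}

if query_depth(evil) > MAX_DEPTH:
    print("rejected")  # → rejected
```

Depth is only one axis; real deployments also cap query cost and alias counts, since a shallow query can still fan out into enormous result sets.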

Everything is a microservice

People can also take things too far. Simple logic which doesn’t involve communicating with a database becomes a separate microservice, all in the name of sharing logic and not repeating yourself, adding unnecessary latency. It should just be its own package or library shared between microservices, even though that might increase code size and hence startup latency, especially in a serverless architecture. You can preach against this however you want, but some practices are so entrenched in certain workplaces that it is hard to fight against them.

Client wrappers

To reduce the amount of boilerplate required when sending API calls and to keep the interface consistent, people end up writing code generators or a simple client wrapper for easy communication. While not inevitable, this can lead to the proliferation of bad patterns like HTTP 200 for errors, since it’s just easier to adhere to your own pattern and treat HTTP as an implementation detail.
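As a hypothetical illustration of that antipattern, imagine every service returning an {"ok": ..., "error": ..., "data": ...} envelope on HTTP 200, and a client wrapper whose job is to unwrap it back into a proper failure (all names here are made up):

```python
# Sketch of a client wrapper for a hypothetical error envelope that
# always rides on HTTP 200, turning it back into an exception.
class ServiceError(Exception):
    pass

def unwrap(response: dict) -> dict:
    """Unwrap a hypothetical {"ok": ..., "error": ..., "data": ...} envelope."""
    if not response.get("ok"):
        raise ServiceError(response.get("error", "unknown error"))
    return response["data"]

payload = {"ok": False, "error": "user not found", "data": None}
try:
    unwrap(payload)
except ServiceError as e:
    print(e)  # → user not found
```

The wrapper papers over the problem, but anything that never saw the wrapper (load balancers, caches, monitoring, retry logic keyed on status codes) still believes every call succeeded.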

Microservices are basically NoSQL (did I ever tell you how much I hate NoSQL for relational data)

Badly designed API interfaces for microservices can be slow the same way a NoSQL database can be slow. NoSQL databases generally have a reputation for having a relatively simple interface. SQL databases, by virtue of a query language, can answer some of the most complex queries within a single response. The truth is that network latency is the real killer of backends that are otherwise written in a clean and performant manner. Without good APIs, you could write your backend in the fastest language possible, on the best hardware you could ever have, and it would be no match for a well-written Django Python backend backed by an SQL database if the queries made to its dependencies are large in number.

Take aggregation queries. MongoDB has a decent aggregation API, but let’s focus instead on key-value databases like DynamoDB, or god forbid Firestore: a key-value store masquerading as a document-oriented database due to how poor its API surface area is. If you ever want to do a complex aggregation, you can memoise the result by adding an extra column to each row, adding extra logic which can become brittle over time as you try to keep that data correct. Or you can send thousands of requests between the server and the database, increasing the time required to query the data due to latency, while also unnecessarily taxing both the backend server and the database, racking up network transfer and compute costs.

What if, instead of trying to keep your API surface area minimal like a simple key-value store, you made it feature-rich like an SQL database, providing API abstractions for operations you could technically already do, but letting you do them within one or a few requests? That would be the ideal. But now you are contributing to API sprawl. In a complex web app, you might end up with hundreds or even thousands of separate API endpoints. What would be a good happy medium?
These are questions you need to tackle as you move to microservices, and that is bandwidth that could otherwise be spent adding features, fixing bugs, and making your app better and more ergonomic to draw in more users, especially if you are a startup stretched thin.
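To make the aggregation point above concrete, here is a sketch using Python’s built-in sqlite3 with made-up data: the database answers a GROUP BY in one round trip, where a bare key-value store would need a request per row (or a hand-maintained counter column):

```python
import sqlite3

# Hypothetical data for illustration: counting notes per user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, user_id INTEGER)")
conn.executemany(
    "INSERT INTO notes (user_id) VALUES (?)",
    [(1,), (1,), (1,), (2,), (2,), (3,)],
)
conn.commit()

# One query, one round trip, regardless of how many rows are involved.
counts = dict(
    conn.execute("SELECT user_id, COUNT(*) FROM notes GROUP BY user_id")
)
print(counts)  # → {1: 3, 2: 2, 3: 1}
```

Swap the single query for one fetch per key over a network with even 1 ms of latency, and the cost scales linearly with row count before the backend does any work at all.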

Compiled languages can solve (most) of your problems

With the advent of compiled languages like Go and Rust, the constant worry of monoliths that are slow to scale up and take a lot of memory might be a problem of the past, especially if you are working on a greenfield project with the flexibility to choose your tech. With these languages, scalable monoliths genuinely become possible, and they aren’t even that much more difficult to write than the traditional interpreted languages which power most web apps. Most companies also do not require the level of granularity where you need to control the scaling of each individual service. You could theoretically micro-optimise the shit out of your infrastructure, but in many cases it’s simply not worth the time.

Microservices eventually become a monolith

And once you get tired of all these microservices, you start to structure and consolidate them into multiple packages/libraries, and what do you know: you have a monolith!

I don’t hate them, they are just often not used in the appropriate cases

With all I just said, I don’t hate microservices in general. I just hate the misuse of microservices. A bad implementation, especially by people not well versed in microservice design and at least some distributed systems knowledge, is as bad as, if not worse than, not having microservices at all. It can make your app slower, more brittle, and buggier than a monolith would be. You are much better off compartmentalizing your code into modules.