A strange, and maybe funny, phenomenon I have noticed after 10 years working in this industry is that developers can get really offended when someone calls out their solution as a hack. I’ve observed this in different companies; even people I consider calm and composed can change their attitude once I tell them that their solution is hacky. People tend to be very civil as long as I politely criticize their work in a friendly manner, but once the word “hack” is said, it’s as if I were personally attacking them. English is not my first language, so I have no clue whether the word “hack” carries such a negative connotation. To me, the word “hack” in a software development context simply means using a tool in an unconventional way to achieve something quickly, and I certainly do not mean to insult anyone. A hack is usually employed as a shortcut to solve a problem, with very discernible drawbacks. I am sure people have their reasons to choose a hack, be it time constraints or lack of proper tooling.
As an example of a “hack”, consider this scenario: we have application A fetching an S3 object upon starting and saving this object in its local memory. Another application B can modify this S3 object, and we want this modification to be reflected in application A as soon as possible. Both applications are deployed on Kubernetes. To sync the changes made to the S3 object into application A, we can hook the S3 object events into the Kubernetes API using one of these two approaches:
- We can call Kubernetes’ `DELETE` pod API, or execute the `kubectl delete pod` command. Assuming that application A is managed by a Kubernetes controller, the deleted pod will be replaced by a new one, which will fetch the updated S3 object.
- We can update the Kubernetes config of application A with a random key-value pair, which will trigger a new rollout replacing the current pods with new pods that fetch the updated S3 object (a sketch of this approach follows below).
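Concretely, the second approach might look like the following sketch in Python (assuming the official `kubernetes` client library, and that the caller has RBAC permission to patch application A’s Deployment; the names `app-a` and `default` are placeholders). Bumping an annotation on the pod template changes the template hash, so the Deployment controller rolls out new pods, which is essentially what `kubectl rollout restart` does.

```python
# A hedged sketch, not production code: force a rollout of application A by
# bumping an annotation on its Deployment's pod template. "app-a" and
# "default" are placeholder names.
from datetime import datetime, timezone

from kubernetes import client, config


def force_rollout_of_app_a() -> None:
    # Use config.load_kube_config() instead when running outside the cluster.
    config.load_incluster_config()
    apps = client.AppsV1Api()

    # Changing the pod template makes the Deployment controller replace the
    # current pods; the new pods fetch the updated S3 object at startup.
    patch = {
        "spec": {
            "template": {
                "metadata": {
                    "annotations": {
                        "force-restart-at": datetime.now(timezone.utc).isoformat()
                    }
                }
            }
        }
    }
    apps.patch_namespaced_deployment(name="app-a", namespace="default", body=patch)
```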
However, these approaches are hacky for a few reasons. Firstly, deleting pods and forcing a deployment rollout are Kubernetes operational mechanisms; they are not meant to handle application-level data synchronization, and these concerns are ideally kept separate. Furthermore, this introduces a dependency on the Kubernetes control plane for application logic, which is not transparent or intuitive from the code’s perspective, making the applications harder to reason about.
Instead of relying on Kubernetes to do the job, we can actually consider these more appropriate alternatives:
- Polling S3 objects: application A can periodically fetch the S3 object contents. This approach is applicable if it’s OK for application A to get the latest change with a delay, and if data inconsistency is not an issue (application A’s instances will poll at different timestamps). See the polling sketch after this list.
- S3 event notification: application A could expose a webhook endpoint that receives S3 event notifications (for example, relayed through an SNS topic) and re-fetches the modified object. This incurs some overhead on application A, but the sync is direct and straightforward. See the webhook sketch after this list.
- Use a message queue (like SQS) to publish the change: Amazon S3 can also send an event notification to an SQS queue to notify application A about changes to the S3 object. This is similar to the previous approach, but with a message queue application A can consume the data at its own rate. See the queue consumer sketch after this list.
- Use a remote cache layer: instead of storing the data locally, application A could read the S3 object data from a shared cache that application B writes to whenever it modifies the S3 object. See the shared-cache sketch after this list.
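A minimal sketch of the polling alternative, assuming boto3 with configured credentials and that comparing the object’s ETag is enough to detect a change; the bucket name, key, and `load_into_memory` are placeholders:

```python
# A hedged polling sketch: periodically check the object's ETag and only
# download the object when it changed.
import time

import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "my-bucket", "config.json"  # placeholder names


def load_into_memory(body: bytes) -> None:
    ...  # replace with application A's own in-memory update


def poll_forever(interval_seconds: int = 60) -> None:
    last_etag = None
    while True:
        # HEAD is cheap; fetch the full object only when the ETag changed.
        head = s3.head_object(Bucket=BUCKET, Key=KEY)
        if head["ETag"] != last_etag:
            obj = s3.get_object(Bucket=BUCKET, Key=KEY)
            load_into_memory(obj["Body"].read())
            last_etag = head["ETag"]
        time.sleep(interval_seconds)
```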
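A minimal sketch of the webhook alternative, assuming Flask as the HTTP framework; the plumbing that delivers the notification to this endpoint (for example an SNS HTTP subscription and its confirmation handshake) is left out, and the handler simply re-fetches the object rather than trusting the request payload:

```python
# A hedged sketch: an HTTP endpoint that re-fetches the S3 object whenever a
# change notification arrives. Bucket, key, and route are placeholder names.
import boto3
from flask import Flask

app = Flask(__name__)
s3 = boto3.client("s3")
BUCKET, KEY = "my-bucket", "config.json"

current_object: bytes = b""  # application A's in-memory copy


@app.route("/s3-events", methods=["POST"])
def on_s3_event():
    global current_object
    # Re-fetch on every notification instead of parsing the payload.
    current_object = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
    return "", 204
```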
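A minimal sketch of the queue-based alternative, assuming the bucket is configured to publish its event notifications to an SQS queue; the queue URL and `refresh_object` are placeholders. One design note: if every replica of application A must see every change, each instance would need its own queue (or an SNS fan-out), since a single SQS message is delivered to only one consumer.

```python
# A hedged sketch: consume S3 event notifications from SQS and refresh the
# in-memory copy. The queue URL is a placeholder.
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/app-a-s3-events"


def refresh_object(bucket: str, key: str) -> None:
    ...  # re-fetch the S3 object and update application A's in-memory copy


def consume_forever() -> None:
    while True:
        # Long polling: wait up to 20 seconds for a notification.
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            body = json.loads(msg["Body"])
            for record in body.get("Records", []):
                refresh_object(
                    record["s3"]["bucket"]["name"],
                    record["s3"]["object"]["key"],
                )
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```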
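And a minimal sketch of the shared-cache alternative, assuming a Redis instance reachable by both applications (any shared cache would do); the host and key names are placeholders:

```python
# A hedged sketch: application B writes the new contents to a shared cache,
# and application A reads from that cache instead of a local in-memory copy.
# "shared-redis" and "app-a:s3-object" are placeholder names.
import redis

cache = redis.Redis(host="shared-redis", port=6379)


def publish_update(new_contents: bytes) -> None:
    # Called by application B after it modifies the S3 object.
    cache.set("app-a:s3-object", new_contents)


def read_current() -> bytes | None:
    # Called by application A whenever it needs the object.
    return cache.get("app-a:s3-object")
```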