Algorithm interviews are the norm in software engineering. They provide a good-enough approximation of whether an
engineer makes good architecture and design decisions.
As an example, let’s do a quick study of whether to use DynamoDB or an RDBMS to store data for a project.
DynamoDB:
- Implemented as a distributed hashtable.
- Distributed by design.
- Very little operational overhead. There’s no need to create multiple DB users; no need to study query plans; no need
to think about how much disk space, RAM, or CPU to assign to your DB; only one option for table design (flat tables).
- Scales well, since partitioning and distributed hashing are baked into the design.
- Restricted in how you can use the tables.
- Inefficient at range queries; best used in a key/value-like manner.
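To make the last two points concrete, here is a minimal sketch of hash partitioning, the idea behind a distributed hashtable. This is an illustration, not DynamoDB’s actual implementation: the partition count, key format, and use of MD5 are all assumptions for the example.

```python
import hashlib

NUM_PARTITIONS = 8  # assumed partition count for illustration

def partition_for(key: str) -> int:
    """Map a key to a partition by hashing it, as a distributed hashtable does."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

# A point lookup (K/V access) touches exactly one partition:
print(partition_for("user#42"))

# A range query ("users 40 through 49") scatters across partitions,
# so the system must consult many of them:
partitions = {partition_for(f"user#{i}") for i in range(40, 50)}
print(len(partitions))  # almost certainly more than one
```

This is why point lookups scale so well under this design, while range scans fight the very mechanism that provides the scaling.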
RDBMS:
- Implemented (typically) as a B-tree with indexes and joins.
- Typically non-distributed.
- Efficient at range queries.
- Very flexible. Anything you can do with DynamoDB, you can effectively do in an RDBMS with enough elbow grease.
- Flexibility in query patterns. Your normalized tables can most likely support many query patterns efficiently.
- Can enforce data consistency. You have the option to pay synchronization costs to gain data consistency.
- High operational overhead. Need to manage roles and permissions; may require heavy query optimization; need to
provision the DB, perform vacuums, replication, and data migrations; requires adherence to standards and best practices.
- Doing DB schema updates the “best practice” way is often tedious.
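The B-tree point above is what makes range queries efficient. A small sketch using SQLite as a stand-in for an RDBMS (the table, column, and index names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, ts INTEGER, payload TEXT)")
conn.execute("CREATE INDEX idx_events_ts ON events(ts)")  # a B-tree index on ts
conn.executemany(
    "INSERT INTO events (ts, payload) VALUES (?, ?)",
    [(t, f"event-{t}") for t in range(1000)],
)

# A range query walks a contiguous slice of the B-tree rather than scanning the table:
rows = conn.execute(
    "SELECT payload FROM events WHERE ts BETWEEN 100 AND 110 ORDER BY ts"
).fetchall()
print(len(rows))  # 11

# EXPLAIN QUERY PLAN shows the index being used:
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT payload FROM events WHERE ts BETWEEN 100 AND 110"
).fetchall()
print(plan)
```

The same query against a hash-partitioned key/value store would have to fan out to every partition; here it is a single ordered walk.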
For certain user experiences, it’s best to pay in operational overhead so you can have efficient
queries. Speed, after all, is really important to users. However, for other user experiences, lookups are cheap, and using
DynamoDB can save a lot of engineering time and salary. It takes a kind of analytical mind to make the right
evaluation between these two choices.
Sadly, from my experience, many programmers aren’t good at this type of analysis. They tend to pick from gut instinct,
which puts their software product on shaky foundations. In an algorithm interview, what I am looking for is your ability
to take basic facts about software and extrapolate long term consequences of different decisions. If you don’t fare well
with fundamental data structures and algorithms, you are going to have a very hard time with evaluating libraries, data
stores, APIs, and third party services. You become a liability on an engineering team.
Keep this in mind when you are asked an algorithm problem. You’re not being tested for cleverness, knowledge, or coding
prowess. You’re being evaluated for your ability to think critically. That’s why we do algorithm interviews.