I want to love DynamoDB. I love that it just scales (disk usage and processing power). I love that it is tightly
integrated with AWS’s IAM model, so I don’t have to deal with user/role/permissions management.
But DynamoDB does some weird things by design. For example, only the primary key can be unique. If you want a table with multiple unique attributes (say, a Users table where both the user ID and the email must be unique), you'll have to do weird things like this:
To do this, insert extra items into the same table, with the [primary key] attribute set to the attribute name and value from the item, delimited by a hash sign. The new table looks like the following example…
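The documented pattern can be sketched as follows: one "real" item plus one marker item per unique attribute, written in a single transaction so the existence check and the insert are atomic. This is a minimal sketch; the table name, key name, and helper function are hypothetical, and a real call would pass the payload to boto3's `transact_write_items`.

```python
def put_user_request(user_id, email):
    """Build a TransactWriteItems payload (low-level boto3 shape) that
    inserts a user and reserves their email in one atomic transaction.
    Table and attribute names ("Users", "PK") are illustrative."""
    return [
        {
            # The real user record.
            "Put": {
                "TableName": "Users",
                "Item": {"PK": {"S": f"user#{user_id}"}, "email": {"S": email}},
                "ConditionExpression": "attribute_not_exists(PK)",
            }
        },
        {
            # The marker record: primary key is "<attribute name>#<value>",
            # exactly as the documentation describes.
            "Put": {
                "TableName": "Users",
                "Item": {"PK": {"S": f"email#{email}"}},
                "ConditionExpression": "attribute_not_exists(PK)",
            }
        },
    ]

# A real call would be:
#   boto3.client("dynamodb").transact_write_items(
#       TransactItems=put_user_request("42", "alice@example.com"))
request = put_user_request("42", "alice@example.com")
print(len(request))  # two physical writes for one logical user
```

If either condition fails (the user already exists, or the email is taken), the whole transaction is rejected, which is what makes the marker item act like a unique constraint.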
The documentation actually tells you to create multiple records per user! Now your Users table just became more complicated to think about. Quick: we have 10,000 users. How many records are in the table, and how much space do they use? How many IOPS are required to delete a user? Did you account for the query needed to find all of a user's related records? How does this interact with eventual consistency? Can you tell me how response times are affected by using transactions?
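A back-of-the-envelope answer to the counting questions, assuming one marker item per unique attribute (here, just the email):

```python
# Item and write counts for the marker-item pattern. The "1 +" accounts
# for the real user record alongside each uniqueness marker.
users = 10_000
unique_attributes = 1  # email only in this example

items = users * (1 + unique_attributes)        # total records in the table
writes_to_delete_user = 1 + unique_attributes  # main item plus each marker

print(items)                  # 20000 records for 10,000 users
print(writes_to_delete_user)  # 2 deletes, ideally in one transaction
```

And that is the simple case; add a second unique attribute and every count above grows again.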
Another weird thing is optimistic locking.
Note that for batchWrite, and by extension batchSave and batchDelete, no version checks are performed, as required by the AmazonDynamoDB.batchWriteItem(BatchWriteItemRequest) API.
For optimistic locking (via conditional expressions), your data model works as long as you don’t use batch APIs. It’s
like being told your SQL joins will work as long as you don’t use prepared statements. OK, it’s not as bad as that, but
it seems equally arbitrary.
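The contrast can be sketched concretely: a version-checked update carries a condition expression (the mechanism behind optimistic locking), while a batch write request simply has no slot for one. This is a minimal sketch; the table name, key, and version attribute are hypothetical.

```python
def versioned_update(user_id, new_email, expected_version):
    """Build an UpdateItem payload (low-level boto3 shape) that only
    succeeds if the stored version matches what we last read."""
    return {
        "TableName": "Users",
        "Key": {"PK": {"S": f"user#{user_id}"}},
        "UpdateExpression": "SET email = :e, version = :v",
        # The optimistic-lock check: fail if someone else updated first.
        "ConditionExpression": "version = :expected",
        "ExpressionAttributeValues": {
            ":e": {"S": new_email},
            ":v": {"N": str(expected_version + 1)},
            ":expected": {"N": str(expected_version)},
        },
    }

# A batch_write_item request entry, by contrast, carries only the item;
# there is nowhere to put a ConditionExpression, so version checks are
# silently skipped for batched writes.
batch_put = {"PutRequest": {"Item": {"PK": {"S": "user#42"}}}}

print("ConditionExpression" in versioned_update("42", "a@b.com", 3))  # True
print("ConditionExpression" in batch_put["PutRequest"])               # False
```

So the safety of your writes quietly depends on which API you happened to call, which is exactly the arbitrariness complained about above.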
I’ll still be using DynamoDB for simple data models because it takes care of operational things for me. But it is a weird one.
PS - Feel free to replace DynamoDB with any NoSQL database. They all have similar “weirdness” issues.