We've had that discussion a few times with the team, and we always come back to the same conclusion:
- for the API inputs, ISO 8601/RFC 3339: it's just an obvious format. When you use an epoch, you have to document the unit (seconds or milliseconds), and users quite often make mistakes with timezones. ISO 8601 makes all of this simple and obvious.
- for the API responses, short answer: it depends. When it's only events, usually ISO 8601. When it's a large collection of records, e.g. a time-based series with hundreds or thousands of records, epoch is just more efficient (and we make sure this is obvious in our OpenAPI specs).
- for data storage, whatever is most efficient (for storage/indexing/queries), usually epoch (e.g. in DynamoDB), but when the datastore supports dates as a first-class citizen, it makes sense to leverage that (e.g. PostgreSQL's "timestamp with time zone": it's only 8 bytes and very convenient!).
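To illustrate why ISO 8601 removes the ambiguity on the input side, here's a minimal Python sketch (the timestamp values are made up for the example): the ISO string carries its unit and timezone, while the same epoch number silently means two very different instants depending on whether you read it as seconds or milliseconds.

```python
from datetime import datetime, timezone

# An ISO 8601 / RFC 3339 string is self-documenting: the precision
# and the timezone are part of the value itself.
iso = "2024-03-01T12:00:00Z"
dt = datetime.fromisoformat(iso.replace("Z", "+00:00"))

# A bare epoch number is ambiguous without extra documentation:
# is it seconds or milliseconds?
epoch = 1709294400
as_seconds = datetime.fromtimestamp(epoch, tz=timezone.utc)        # 2024-03-01
as_millis = datetime.fromtimestamp(epoch / 1000, tz=timezone.utc)  # 1970-01-20!

assert dt == as_seconds
assert as_millis.year == 1970
```

The `.replace("Z", "+00:00")` is only there to stay compatible with Python versions before 3.11, where `fromisoformat` doesn't accept the trailing "Z".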
And regarding epoch format (seconds versus milliseconds), it usually depends on the data itself. When it's to keep track of human interactions (user A did Foo on Bar at this timestamp), seconds are perfectly fine. When the goal is to record device activity (we're an IoT company with a lot of data streams), milliseconds are usually required: when a device sends 20 values each second, it's just common sense ;-)
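A quick sketch of why second resolution breaks down for that kind of stream (the device readings here are hypothetical, assuming one sample every 50 ms): at second granularity all 20 samples in a burst collapse onto one timestamp, so ordering and deduplication are lost, while millisecond timestamps keep them distinct.

```python
# Hypothetical IoT device emitting 20 readings per second,
# timestamped in epoch milliseconds, ~50 ms apart.
readings = [(1709294400_000 + i * 50, 21.5 + i * 0.1) for i in range(20)]

# Truncating to epoch seconds collapses the whole burst.
distinct_seconds = {ts // 1000 for ts, _ in readings}
distinct_millis = {ts for ts, _ in readings}

assert len(distinct_seconds) == 1   # all 20 samples share one second
assert len(distinct_millis) == 20   # all distinct at millisecond resolution
```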
As a rule of thumb, I'd say the goal is always to make it as efficient and error-free as possible. (Hence the choice of ISO for the API: since humans write code to call our API, ISO is self-documenting and removes all the questions, assumptions, and therefore risks of misuse.)