I was reminded that I should explain things in a more structured manner:
<body id="collection-5a987f3071c10b34a3d7460a"
To me this is a classic collision problem. How can we get a deterministic (reproducible) result for dynamic content, without ID collisions in the same namespace?
element-id = md5(json_serialize(attribute-list))
is a classic solution to get a reproducible, unambiguous id, most likely combined with a version_id increment so the method stays monotonic.
Why would we want to do that? Because every saved document can be transported over multiple layers of the internet, and even correctly set cache-invalidation headers sometimes get stripped by company proxies.
So it's a manual form of cache busting.
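A minimal sketch of that idea (the function names and the attribute payload are my own invention, not taken from any particular generator): serialize the attributes canonically, hash them, and bump a version number when you need to bust caches.

```python
import hashlib
import json

def element_id(attributes: dict, version: int = 0) -> str:
    """Derive a deterministic element id from an attribute list.

    json.dumps with sort_keys=True makes the serialization canonical,
    so the same attributes always hash to the same id regardless of
    key order. Bumping `version` mints a fresh id (manual cache
    busting) while keeping the scheme monotonic.
    """
    payload = json.dumps({"v": version, "attrs": attributes}, sort_keys=True)
    return "collection-" + hashlib.md5(payload.encode("utf-8")).hexdigest()[:24]

# Same attributes in a different order -> same id (reproducible).
a = element_id({"type": "gallery", "columns": 3})
b = element_id({"columns": 3, "type": "gallery"})
assert a == b

# A version bump -> a new id in the same namespace, no collision.
assert a != element_id({"type": "gallery", "columns": 3}, version=1)
```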
<body class="a b c d e f"
This is a CSS composition model. The composition model has its advantages over an OOP-like model such as BEM by letting you compose attributes. What is the problem with a composition model? It's the question of 1:n or n:1: do I have n small attributes that each change 1 thing, or do I have 1 attribute that changes n things?
To an automated code generator, which cannot know your intent, the composition approach seems more reasonable, since it doesn't have to know where your current element sits.
Also, if you're using a code generator, you don't care about elegance anyway; otherwise you would learn to write it yourself.
The data attributes I don't want to go into, because they were common practice, and we could argue here about their semantic model, the declarative parsing, where state should be persisted, or how things should be identified ....
To the point .... using hashes over document sums + functional CSS composition allows you to reason in a general manner. This is, as Mark mentioned, a good economic trade-off between result and effort; it also lets you remove a lot of complexity from the generation process and avoid other problems.
I guess you have some timestamps/hashes attached to all your CSS and JS files anyway to avoid proxy cache-header problems .... etc.
No one in their right mind would write code like this by hand on purpose .... but then, no one writes compiler output by hand either ....
I would not use such a system .... but I have written such systems .... more specialized, so they were 'less ugly' - whatever that means .... but I know why they look the way they do.
One thing that could reduce the so-called bloat would be compiler optimization passes: classes that are constantly used together get inlined into specific collection classes, by something akin to a classic NFA-to-DFA transformation. Then you would probably need to hash those as well to avoid collisions, but it would reduce the markup overhead.
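A toy stand-in for that inlining pass (my own simplification, not a real NFA-to-DFA transform): every distinct combination of atomic classes that occurs gets merged into one hashed collection class, so repeated combinations collapse to a single short name and the hash keeps the minted names collision-free.

```python
import hashlib

def inline_pass(class_lists):
    """Collapse each distinct set of atomic classes into one hashed
    'collection class'. Identical combinations (regardless of order)
    share a name, which shrinks the markup; hashing the canonical key
    avoids collisions between different combinations."""
    table = {}   # canonical class set -> minted collection class
    out = []
    for classes in class_lists:
        key = " ".join(sorted(classes))
        if key not in table:
            table[key] = "x-" + hashlib.md5(key.encode("utf-8")).hexdigest()[:8]
        out.append(table[key])
    return out, table

rewritten, table = inline_pass([["a", "b", "c"], ["c", "b", "a"], ["a", "d"]])
assert rewritten[0] == rewritten[1]   # same set -> same inlined class
assert len(table) == 2                # only two distinct combinations minted
```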
stuff ;)