I have to ask, Jason! What are the drawbacks of your suggestion? Seriously, can you explain the negative side too?
THAT is an excellent question, and one I actually have difficulty answering. It is in fact WHY I am asking: what are the downsides that make people not use it? And to be honest, for me most of them hold water like a sieve.
It COULD be a bit more typing/code, but that is easily mitigated by creating helper functions. My own library (which I emphatically don't call a framework) does this. In the end it comes out as the same amount of code or less, so this argument really doesn't hold water for me -- admittedly I say the same thing about vanilla JavaScript vs. the majority of frameworks; the examples are all card-stacked, but by the time you implement anything useful you've written as much of your own code as you would have without the framework. I think one of the biggest hang-ups is assigning attributes after a createElement, but even that is easily averted with a bit of care.
There is the issue of the xxxElementSibling / xxxElementChild properties not existing in older browsers, but a workalike polyfill fixes that problem, and in some browsers even seems to be as fast as or FASTER than the native implementation. (I'm still trying to make sense of how a brute-force polyfill can be faster than the native implementation; I noticed the same thing with Element.classList.) Even jQuery provides wrappers for them, though NOBODY seems to bother using them (in my experience, YMMV). I was walking the DOM in JavaScript using good old Element.firstChild, Element.lastChild, Element.nextSibling, and Element.previousSibling before jQuery was even a twinkle in Resig's eye... but again, even though I rag on jQuery a lot, it HAS wrappers for those; the problem is the lack of people using them... well, that and the fact that for some jacked-up reason so many methods are both read and write wrapped up in one.
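Just to illustrate how little code such a workalike takes, here's a generic sketch of one of those properties done by hand -- this is NOT any particular library's polyfill, just the general idea of skipping the non-element nodes:

```javascript
// Generic sketch of a nextElementSibling workalike -- not any particular
// library's implementation. Walks nextSibling until it hits an element
// node (nodeType 1), skipping text and comment nodes along the way.
function nextElementSibling(node) {
	while ((node = node.nextSibling) && node.nodeType !== 1) {}
	return node || null;
}
```

Same idea applies to previousElementSibling, firstElementChild, and lastElementChild; it's just a direction change.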
I do know some people get confused by the fact that non-element nodes even exist -- the difference between Element.nextSibling and Element.nextElementSibling alone confuses people starting out. Much less that some early browsers made nodes out of comments and attributes. (Thankfully the latter was axed almost as fast as it was introduced.)
The biggest hurdle I can think of is just thinking in terms of elements as nodes on a tree and NOT as the markup that created those nodes, and the idea of making nodes that have NO corresponding markup.
But let's take a simple real-world example. You have a list of <item> elements sent as an XML response via AJAX that you want to plug into a new UL inside an existing element we've already grabbed into a myElement variable.
var itemList = x.responseXML.getElementsByTagName('item'),
	markup = '<ul class="newItems">';
for (var i = 0, item; item = itemList[i]; i++) {
	markup += '<li>' + item.firstChild.nodeValue + '</li>';
}
myElement.innerHTML += markup + '</ul>';
That's the approach most people would use as vanilla JavaScript. (Well, except most people don't understand 'for' well enough to leverage the assignment in the loop condition -- which ends the loop when it evaluates falsey -- to squeeze more speed out of iterating a nodeList!) Naturally each 'item' is NOT sanitized here, so if a user snuck in a <script> tag you're screwed and would need extra sanitization, but it's the approach most people seem to understand...
Though honestly, this is part of why I don't think you should ever send markup from the server as part of an AJAX response.
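If you do go the innerHTML route, the sanitization step can be as simple as a helper like this -- the name and exact character list are mine, adjust to taste:

```javascript
// Hypothetical sanitizer for the innerHTML approach above -- escapes the
// characters that let content break out of its text context, so a sneaky
// <script> tag arrives as inert text instead of live markup.
function escapeHTML(text) {
	return String(text)
		.replace(/&/g, '&amp;')
		.replace(/</g, '&lt;')
		.replace(/>/g, '&gt;')
		.replace(/"/g, '&quot;');
}
```

Run each item's text through that before concatenating it into your markup string.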
But here's how I'd do that if no helper functions are in place.
var
	itemList = x.responseXML.getElementsByTagName('item'),
	newUL = document.createElement('ul');
newUL.className = 'newItems';
for (var i = 0, item; item = itemList[i]; i++) {
	var newLI = newUL.appendChild(document.createElement('li'));
	newLI.appendChild(document.createTextNode(item.firstChild.nodeValue));
}
myElement.appendChild(newUL);
Sure it's a bit more code, but it also runs faster since we bypass the parser entirely, which also results in a much lower memory footprint. It's not enough extra code for me to sweat it. Of course the drawback here is that you cannot actually pass markup from the server, since we're forcing it to be a textNode. That's not a bad thing!
Though SOME folks would claim the latter is slower, that's because they benchmark ONLY the JavaScript and NOT the parser. It is effectively impossible to benchmark the parser from JavaScript: in some browsers it runs in parallel, in others it doesn't even start until AFTER the current JavaScript execution terminates -- hence why changes to the DOM or the markup do not appear on screen until AFTER script execution ends. To do a proper 1:1 performance comparison you have to benchmark how many operations can be done over X amount of time WITH pauses to let the parser and renderer 'do their thing', while monitoring actual CPU and memory use at the application (browser) level.
... and it's terrifying watching what excessive innerHTML in event driven situations does to the memory footprint of a tab.
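As for the "more typing" complaint from earlier, the createElement boilerplate folds away into a tiny helper. This is a hypothetical sketch, NOT elementals.js -- the name is mine, and the document object is passed in purely to keep the sketch self-contained:

```javascript
// Hypothetical helper, NOT elementals.js. Creates an element, optionally
// sets a class, appends a text node, and attaches it to a parent, all in
// one call. Note the parent attach happens LAST, after attributes are set.
function makeIn(doc, tag, className, text, parent) {
	var el = doc.createElement(tag);
	if (className) el.className = className;
	if (text) el.appendChild(doc.createTextNode(String(text)));
	if (parent) parent.appendChild(el);
	return el;
}
```

With something like that in place, the whole loop above collapses to one call per item.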
My own library, elementals.js, is designed to do this out of the box with its _.make function. Using that, I end up with this:
var
	itemList = x.responseXML.getElementsByTagName('item'),
	newUL = _.make('ul.newItems', { last : myElement });
for (var i = 0, item; item = itemList[i]; i++) {
	_.make('li', { content : item.firstChild.nodeValue, last : newUL });
}
That's having a library or framework help you use the DOM. Though laughably, if instead of XML you sent this structure as JSON:
[
	"ul.newItems", [
		[ "li", "item1" ],
		[ "li", "item2" ],
		[ "li", "item3" ]
	]
]
You can build directly on the DOM with _.Node.write:
_.Node.write(myElement, JSON.parse(x.responseText));
To me, THAT's what working with the DOM is, and why I don't grasp what's so hard about it or why so few people even try to use it.
ACTUALLY, thinking on it, there is ONE hang-up that stands out. Legacy IE will botch setting or changing type="" on form elements if they are already attached to the DOM. This is a common problem people have not just when creating DOM elements, but when working with existing ones placed there by the markup... hence the 'trick': if in legacy IE you want to change an input from 'text' to 'hidden' or vice versa, you store the existing one, make a new one that's NOT yet on the DOM, and then parentNode.replaceChild between the two of them, copying values as needed.
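Sketched out, that swap trick looks something like this -- the function name is mine, and the copying is minimal; in practice you'd copy whatever other attributes matter to you:

```javascript
// Sketch of the legacy-IE workaround described above. You can't reliably
// change type="" on an attached input, so build a replacement, configure
// it BEFORE it touches the live DOM, then swap it in with replaceChild.
function swapInputType(oldInput, newType) {
	var newInput = oldInput.ownerDocument.createElement('input');
	newInput.type = newType; // set attributes before the DOM attach
	newInput.name = oldInput.name;
	newInput.value = oldInput.value;
	oldInput.parentNode.replaceChild(newInput, oldInput);
	return newInput;
}
```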
There are a few other things that can also go bits-up in IE if you don't do your DOM attach LAST -- but the answer to that is just to put elements on the live DOM AFTER you set any desired attributes.
But that's an IE specific problem that once you know about it is easy enough to avoid.
I'm actually arguing with myself over whether my own library should detect "first, last, after, before, replace" on Node.write and make, and force them to be done last... trying to keep the codebase tiny, but preventing people from making obvious errors is important too. Tough choice when I've set a hard ceiling of 8k after minification and gzipping, and only have around a quarter of a K left under that max.
One of those situations of "It's only a problem in legacy browsers and if the people using it don't read the instructions"... do you really add code to fix that?