I came across some code that uses lodash/fromPairs to build an indexed object:
import fromPairs from "lodash/fromPairs";

function lodashReduce(arr) {
  return fromPairs(arr.map(item => [item, `my item is: ${item}`]));
}
But I know that ES6 has syntax that lets us do the same:
function es6Reduce(arr) {
  return arr.reduce(
    (obj, item) => ({ ...obj, [item]: `my item is: ${item}` }),
    {}
  );
}
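Worth noting: since ES2019 there is also a direct native counterpart to lodash's fromPairs, Object.fromEntries, which keeps the same pair-based shape without needing a reduce (a small sketch, not from the original post):

```javascript
// Native equivalent of lodash's fromPairs (Object.fromEntries, ES2019):
function nativeFromPairs(arr) {
  return Object.fromEntries(arr.map(item => [item, `my item is: ${item}`]));
}

nativeFromPairs(["item1", "item2"]);
// { item1: "my item is: item1", item2: "my item is: item2" }
```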
In this codesandbox I wrote an example of those two implementations, using this array as a parameter:
const arr = ["item1", "item2", "item3"];
And both functions return this object:
{"item1":"my item is: item1","item2":"my item is: item2","item3":"my item is: item3"}
Can you tell me the version you prefer to read and use? Can you justify it in a few words?
I made a huge mistake in the native implementation because I love the spread/destructuring syntax (and love is blind).
Of course, it is always better not to create a new object on every iteration of a loop:
function BETTER_es6Reduce(arr) {
  return arr.reduce((obj, item) => {
    obj[item] = `my item is: ${item}`;
    return obj;
  }, {});
}
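To see why the spread-in-reduce version is costly: each `{ ...obj }` copies every key accumulated so far, so the total work grows quadratically with the array length. A small sketch (the countCopies helper is mine, purely for illustration):

```javascript
// Counts how many keys the spread copies across a whole reduce:
// on iteration i the accumulator already holds i keys, all recopied.
function countCopies(n) {
  let copies = 0;
  const arr = Array.from({ length: n }, (_, i) => `item${i}`);
  arr.reduce((obj, item) => {
    copies += Object.keys(obj).length; // keys duplicated by { ...obj }
    return { ...obj, [item]: `my item is: ${item}` };
  }, {});
  return copies;
}

countCopies(4); // 0 + 1 + 2 + 3 = 6 key copies for only 4 items
```

Mutating the accumulator instead copies nothing, which is why the loop-style body is the better reduce.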
Enguerran
I craft softwares
I have to admit I prefer the ES6 version simply because it actually uses the word "reduce". I have no idea at a glance what fromPairs does.
That being said, if the codebase already uses lodash extensively, I believe it's better to be consistent so that the reader (which includes the author!) isn't constantly switching context.
The ES6 version, because I'm just used to the "look" of the spread operator (I love it). Since ES6/ES7 has so much stuff integrated, I tend not to use lodash anymore aside from very specific tasks.
Whenever there's a native implementation, to blazes with the framework unless you need to polyfill for legacy browsers.
... and even then just polyfill the ACTUAL implementation in legacy browsers instead of using a proprietary framework method.
But to be fair, 99.99% of the time my answer to EVERYTHING is "to blazes with the framework" given what utter trash they tend to be across the board -- though I'll admit lodash is one of the less offensive ones.
-- edit -- though thinking on it, I'd probably use neither approach but that's because I code so often for legacy browser support, AND often have to optimize for speed. BOTH of them are slow due to the function callback overhead and shoving of data around.
THIS would probably be the fastest and most compatible approach:
function arrayToObj(arr) {
  for (
    var i = 0, result = {}, item;
    undefined !== (item = arr[i]);
    i++
  ) result[item] = 'my item is: ' + item;
  return result;
}
... and yes, "undefined !== (item = arr[i]) " is FASTER than an actual length check, even if you store the length. If you know none of your iterable values are loose false, you can go even faster by simply checking the assignment -- I do that on collections all the time.
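One caveat with the sentinel check (my note, not the commenter's): the loop stops at the first undefined element, so sparse arrays or arrays that actually contain undefined get silently truncated:

```javascript
// Sentinel-style loop: terminates as soon as arr[i] is undefined.
function arrayToObjSentinel(arr) {
  for (
    var i = 0, result = {}, item;
    undefined !== (item = arr[i]);
    i++
  ) result[item] = 'my item is: ' + item;
  return result;
}

Object.keys(arrayToObjSentinel(["a", "b"]));            // ["a", "b"]
Object.keys(arrayToObjSentinel(["a", undefined, "b"])); // ["a"] -- stops early
```

So the trick is only safe when you know the array is dense and holds no undefined values, as the commenter says.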
Of course if I were coding for modern only, 'for of' becomes a far better choice:
function arrayToObj(arr) {
  var result = {};
  for (let item of arr) result[item] = 'my item is: ' + item;
  return result;
}
Since it's no larger in total code than the reduce/map/arrow version. (if you count the include line, it's actually smaller than either lodash or the reduce, AND faster!)
As 'convenient' as the various each/reduce/whatever methods on Array are, they tend to be slower thanks to the callbacks than just coding a flipping 'for' loop.
... and the increase in code size (a whole whopping 50 bytes for the long/legacy version, ohs teh noes) is often worth the increase in execution speed and compatibility. ESPECIALLY if you cut out the bloat of the framework nonsense. When you can do it even faster in less code for modern browsers, all the better.
Seriously, ease up on stuff that wastes overhead on callbacks for something a simple FOR loop can do.
-- edit edit -- oh, and those backtick strings? Cute toy, but slow and even more wasteful of time than string addition. Nice if you're making a HTML template system server side, not a great choice for situations where you're just plugging in one variable in a static fashion.
Ben Buchanan (200ok)
I make some bits of the web.
ES6. Always use a native solution, particularly when it's this close. Adding a library means more code to maintain (even just updating imports and monitoring their API for breaking changes can take time), more LOC sent down the pipe, more LOC parsed and executed, etc.