I have used the async library in the past to do this with async.parallel. How can I do this using native JS whilst following the ES6 standard as well?


4 answers

Something like this?

myArray.map(async entry => {
  // do something async
})

This is about as parallel as you can get right now.

You can collect the results with Promise.all().
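For example, a minimal sketch (doSomethingAsync here is just a placeholder for whatever async work you do per entry):

// `doSomethingAsync` stands in for your own async work per entry
Promise.all(
  myArray.map(async entry => doSomethingAsync(entry))
).then(results => {
  // results come back in the same order as myArray
  console.log(results);
});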


First, any implementation using forEach, map, etc. will at best be a 'wrapper', as these methods are themselves neither async nor parallel.

I can foresee a path where you might achieve parallelism by subdividing your iterable (array or whatever) into sub-parts, 'sequentially' iterating over each sub-part, and then combining the results within a promise or another object.
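A minimal sketch of that idea, assuming some async processItem function of your own (a placeholder, not defined here): each sub-part is walked sequentially with await, while the sub-parts themselves run side by side under Promise.all.

// `processItem` is a placeholder for whatever async work you do per entry
async function processInChunks(items, processItem, chunkSize = 2) {
  const chunks = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    chunks.push(items.slice(i, i + chunkSize));
  }
  // each chunk is walked sequentially; the chunks themselves run concurrently
  const results = await Promise.all(chunks.map(async chunk => {
    const out = [];
    for (const item of chunk) {
      out.push(await processItem(item));
    }
    return out;
  }));
  // flatten back into a single array, in the original order
  return [].concat(...results);
}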

That being said, I liked the question, so I did a strange experiment. This is sequential, but unusual in the sense that it uses both the microtask queue (via await) and the browser's frame callback queue (via requestAnimationFrame):

Array.prototype.asyncIterate = async function(f,i,l){
    i = i || 0;
    // only copy on the first call so the original array is never mutated
    var copy = i ? this : this.slice(),
        d = copy[i];
    l = l || copy.length;
    // wait for the next animation frame before invoking the callback,
    // so each invocation lands in its own frame
    await new Promise(function(resolve){
        window.requestAnimationFrame(function(){
            f.call(copy,d,i,copy);
            resolve();
        });
    });
    if(i === l-1) {
        return true;
    }
    return copy.asyncIterate(f,++i,l);
}

now you can:

var x = [1,2,3,4];
x.asyncIterate(function(d,i){console.log(i)});
//0
//1
...

The above will not mutate the array it is called on, and it returns a promise that eventually resolves to true. I did not do a performance test :) If you write another function that slices the main array into sub-parts and calls asyncIterate on each of them, you might achieve the effect you want, as sketched below.
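For instance, a rough sketch of that last suggestion (parallelIterate is a hypothetical helper name, reusing asyncIterate from above):

// hypothetical helper: run asyncIterate over each slice of the array
function parallelIterate(arr, f, chunkSize){
    var chunks = [];
    for(var i = 0; i < arr.length; i += chunkSize){
        chunks.push(arr.slice(i, i + chunkSize));
    }
    // each slice iterates on its own; the promise resolves once all slices finish
    // (note: the indices passed to f are local to each slice)
    return Promise.all(chunks.map(function(chunk){
        return chunk.asyncIterate(f);
    }));
}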

Also, don't modify native prototypes; I only did it here to illustrate this case.


In parallel? Currently, native JS does not have SIMD instructions or anything similar for iterating over data in parallel. What you can do, however, is this:

const arr = [
  { foo: 1 },
  { foo: 2 },
  { foo: 3 },
];

// blocking (sync)
arr.forEach(
  obj => console.log(obj.foo)
);

// non-blocking (async)
arr.forEach(
  obj => setTimeout(() => console.log(obj.foo), 0)
);

Well, if you want to iterate over an array in parallel (and possibly do an operation on each item), and you are doing web development without limiting yourself to JS only, you can do so using WASM, which lets you use either native SIMD (C, C++) or multithreading (C, C++, Rust).

The concept of a single "correct way" is a flawed one -- in JS there are always multiple ways of doing things and each is best suited to a specific type of data and a specific type of output.

That said, I'm wondering how you expect anything to be done in parallel in an inherently single-threaded language. I mean, you can fake the behavior, and there's the specification train wreck that is promises, but if this is data you already have, there's no point in trying to parallelize anything.

So I'd be throwing a question back at you -- what's the data and what are you doing to it? THAT determines the proper answer.


Siddarthan Sarumathi Pandian To me that is still too vague a description of what is being done. I would still be asking WHY not sequential? And even then, what that data is, how it is passed, and WHY it is being processed out of order / in parallel would be the determining factors in how I'd go about it.

What is being put in and what's being done with it is how you make your choices. Simply saying "API call" and "objects" is uselessly and pointlessly vague. WHAT objects? WHAT API? What mechanism of passing? What processing?
