I recently needed to install software and send commands to more than a thousand servers.
The machines were already split into dozens of groups, with one deployment node in each group acting as a relay. Even so, I was still sending commands to those deployment nodes one by one from my local machine and waiting for each result, which wasted a lot of time.
At that point it became obvious that once deployment reaches a certain scale, serial execution is just too slow. So I started experimenting with parallel task execution in PowerShell.
PowerShell provides Start-Job, which launches a background job from the current session. Each job runs in its own background process, so it does not block the shell, and even long-running work can continue quietly while you do other things.
You can inspect job state with Get-Job, wait for completion with Wait-Job, and retrieve output with Receive-Job.
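As a quick illustration of that workflow, here is a minimal example using those standard cmdlets (the script block itself is just a placeholder):

```powershell
# Launch a background job; the shell stays responsive while it runs.
$job = Start-Job -ScriptBlock {
    Start-Sleep -Seconds 5
    "done after 5 seconds"
}

# Inspect state at any time (NotStarted / Running / Completed / Failed / ...).
Get-Job -Id $job.Id

# Block until the job finishes, then collect its output.
Wait-Job -Job $job | Out-Null
Receive-Job -Job $job   # prints "done after 5 seconds"

# Clean up the finished job object.
Remove-Job -Job $job
```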
Based on that, I wrote the following simple script:
# define 6 tasks
At the top, the script defines six separate script blocks. For testing, each one just sleeps for a different amount of time.
These blocks are placed in $script_array as the task queue. $parallel_count controls how many jobs may run at the same time; it must be at least 1.
Since different tasks finish at different times, the script polls the running jobs. As soon as one finishes, it immediately pulls the next task from the queue and launches it, so task slots do not sit idle.
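The original script is not reproduced here in full, so the following is a sketch of the approach just described, reusing the $script_array and $parallel_count names from the text; the sleep durations are illustrative, not the original values:

```powershell
# Six test tasks that just sleep for different lengths of time.
$script_array = @(
    { Start-Sleep -Seconds 20; "task 1 done" },
    { Start-Sleep -Seconds 15; "task 2 done" },
    { Start-Sleep -Seconds 12; "task 3 done" },
    { Start-Sleep -Seconds 10; "task 4 done" },
    { Start-Sleep -Seconds 8;  "task 5 done" },
    { Start-Sleep -Seconds 5;  "task 6 done" }
)

$parallel_count = 3   # must be at least 1
$index   = 0          # next task to launch
$running = @()        # jobs currently in flight

while ($index -lt $script_array.Count -or $running.Count -gt 0) {
    # Fill any free slots with the next tasks from the queue.
    while ($running.Count -lt $parallel_count -and $index -lt $script_array.Count) {
        $running += Start-Job -ScriptBlock $script_array[$index]
        $index++
    }

    # Collect finished jobs; keep the rest running.
    $still = @()
    foreach ($job in $running) {
        if ($job.State -eq 'Running') {
            $still += $job
        }
        else {
            Receive-Job -Job $job
            Remove-Job  -Job $job
        }
    }
    $running = $still

    # Short poll interval so a freed slot is refilled promptly.
    Start-Sleep -Milliseconds 200
}
```

Polling every 200 ms is a compromise: short enough that a freed slot is refilled almost immediately, long enough not to burn CPU in the loop.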
I measured the total runtime with different parallelism settings:
parallel count = 1: 70.0920091 seconds
parallel count = 3: 29.3396782 seconds
parallel count = 6: 21.4452266 seconds
So yes, parallel execution really does cut the total time. With all six tasks running at once, the wall-clock time is essentially bounded by the slowest task plus job startup overhead, which is why the gain flattens out as the parallel count approaches the number of tasks.
This was still a rough test script, and I had not yet used it at large scale at the time. In real production use, more edge cases would surely show up.