
How to asynchronously run scripts?



Hi everyone.

 

Context: I am writing a workflow based on a Script Filter, but for external reasons the results come in slowly and one by one. As far as I'm aware it's impossible to stream results directly with the Script Filter object. So what I wanted to do is fetch the results asynchronously, write them to a file, and have the Script Filter just look at the file. By using the rerun option to 'refresh' the results regularly, I could make it work.

 

So my question is: how do I run a script asynchronously in a workflow? That is, launch the script but don't wait for it to finish before continuing the workflow.

It could be a Python or Bash script, and it could be called by a 'Run Script' object or directly in the Script Filter; it doesn't really matter.

I've tried the following two things that I thought would work, but Alfred still waits for the other script to finish before continuing.

 

import subprocess
subprocess.Popen(["./other-script.sh"])

 

#!/bin/bash

./other-script.sh & echo -n "continue"

 

Thanks a lot,

 

Leo

  • 3 months later...

I misread the second post at first and thought you hadn’t come up with something, so I coded a quick idea. I’m curious to see what you came up with, but since I had already made it, and it’s only an example rather than a full workflow, I’m sharing it anyway for reference.


The first thing I’d recommend is checking if you really need the full script to run in the background. It might be that you can get huge speed gains by making just a section of it run in parallel. In an older version of Short Films, before the Grid View existed, the user’s machine would do all the processing of downloading the thumbnail images and then cropping them. The first versions of this were slow, and as part of the workflow I added a launchd agent to do the operation periodically¹. But then I backgrounded just the download and crop tasks and the speed boost was massive, to the point that the agent wasn’t that important². I never investigated this thoroughly, but I bet the sequential web requests were the bottleneck. When you get a file from a server you’re not just waiting for the data to download; there’s also the time taken to do a handshake and negotiate the secure connection, and those add up. By sending each task to the background, multiple connections could start at the same time, reducing the overall delay³.
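
As a rough sketch of that last idea, swapping Python threads in for shell backgrounding, fetching the thumbnails concurrently might look something like this. The URL list, output directory, and worker count are made up for the example, and the crop step is left out:

#!/usr/bin/env python3
# Sketch: download thumbnails concurrently so the connection handshakes
# overlap instead of happening one after another. URLs are hypothetical.
import concurrent.futures
import pathlib
import urllib.request

urls = [f"https://example.com/thumbs/{n}.jpg" for n in range(30)]  # made up
out_dir = pathlib.Path("/tmp/thumbs")
out_dir.mkdir(exist_ok=True)

def fetch(url):
    dest = out_dir / url.rsplit("/", 1)[-1]
    urllib.request.urlretrieve(url, dest)
    return str(dest)

with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    for path in pool.map(fetch, urls):
        print("saved", path)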


But let’s say the code is already as fast as it can be. Another approach is to use two objects (⤓ Example Workflow). You already have to save the data and read it anyway, and Alfred objects can fork to any number of other objects, so there’s no reason you have to run the slow code and display the information in the same object. This has the added benefit that termination of your “view” object (the Script Filter) does not necessarily mean terminating the slow script, allowing you to quit and come back later to see more data as it becomes available.
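
To make the split more concrete, here’s a minimal sketch of the two objects, not the exact contents of the example workflow: the cache path, result format, and timings are assumptions. The first script is a made-up slow producer that appends results to a file as they arrive:

#!/usr/bin/env python3
# slow_fetch.py — hypothetical slow producer: appends one result per line
# to a cache file as each becomes available.
import os
import time

CACHE = os.path.expanduser("~/Library/Caches/my-workflow-results.txt")  # assumed path

with open(CACHE, "a") as f:
    for n in range(20):
        time.sleep(1)  # stand-in for the slow external call
        f.write(f"result {n}\n")
        f.flush()      # make each line visible to the reader immediately

And the Script Filter, connected as a separate object, just lists whatever is in the file so far and asks Alfred to run it again:

#!/usr/bin/env python3
# filter.py — Script Filter body: show whatever the producer has written so far.
import json
import os

CACHE = os.path.expanduser("~/Library/Caches/my-workflow-results.txt")

lines = []
if os.path.exists(CACHE):
    with open(CACHE) as f:
        lines = [line.strip() for line in f if line.strip()]

items = [{"title": line, "arg": line} for line in lines]
if not items:
    items = [{"title": "Fetching…", "valid": False}]

# rerun asks Alfred to re-invoke this Script Filter after the given interval
# (Alfred accepts values between 0.1 and 5 seconds).
print(json.dumps({"rerun": 1, "items": items}))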

 



¹ An approach I still use with some workflows where you want the data to be cached even before you run the workflow.


² The current workflow doesn’t use it at all, but the approach in general is quite different.


³ If doing this today, I’d try to optimise it even further by having curl download every file in the same connection instead of creating multiple.
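
For the curious, the same idea can be approximated in Python rather than curl by reusing a single HTTPS connection (one handshake) for every file; the host, paths, and output directory below are made up:

#!/usr/bin/env python3
# Sketch: reuse one HTTPS connection for all the downloads instead of
# opening a new one per file. Host and paths are hypothetical.
import http.client
import pathlib

host = "example.com"
paths = [f"/thumbs/{n}.jpg" for n in range(30)]
out_dir = pathlib.Path("/tmp/thumbs")
out_dir.mkdir(exist_ok=True)

conn = http.client.HTTPSConnection(host)  # single handshake for every request
for p in paths:
    conn.request("GET", p)
    resp = conn.getresponse()
    # read the full response before issuing the next request on the same connection
    (out_dir / p.rsplit("/", 1)[-1]).write_bytes(resp.read())
conn.close()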

