
Script Filter delay



Hi everyone.

 

I have a bash script that looks up my bookmarks on Pinboard (pinboard.in) and returns correctly formatted XML.

 

Although the script outputs to stdout immediately, Alfred doesn't pass the output to the Script Filter until the whole (longer-running) script has finished and returned. Is there a way to avoid this delay?

 

Cheers,

Teo


Please post the workflow/code somewhere.

 

There's little use in guessing what might be wrong with something we've never seen.

 

Thank you for your answer.

 

The script (links to screenshots below) is a simple bash call to a command line app (https://github.com/NeoTeo/PinboardDailies).

 

The command line app works as follows:

1. Checks for a local cache of the XML that Alfred understands and outputs it immediately if found.

2. Sends a query to Pinboard (a bookmark site with an API) and waits for the result.

3. Writes the result to the local cache, ready for next time.
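The steps above can be sketched in shell; the cache path and the `fetch_bookmarks` stand-in are illustrative, not the real app's code:

```shell
#!/bin/bash
# Sketch of the cache-then-refresh steps; names and paths are illustrative.
CACHE="${TMPDIR:-/tmp}/pinboard_cache.xml"

fetch_bookmarks() {
    # Stand-in for the real Pinboard query: emit some fixed item XML.
    printf '<items><item><title>example</title></item></items>'
}

if [ -f "$CACHE" ]; then
    # 1. Cache hit: output it immediately.
    cat "$CACHE"
else
    # 2 + 3. No cache yet: fetch, output, and store for next time.
    fetch_bookmarks | tee "$CACHE"
fi
```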

 

It appears that Alfred does not use the app's output until the app exits, thus negating the advantage of using the cache.

 

https://ipfs.pics/QmSFvUcptQDSRsXxZGsFJESYJz1A2FkjJBm3qKfWNJNjcr

 

https://ipfs.pics/QmWxqqFsnnfJRD8SMiciRUEYYeSQuGzxtiBZDfQok2xDW5

 

https://ipfs.pics/QmWsPg7e9ieh91BAcCXtxbty7kF68D6gkuDACQfBGRjet1


Alfred has no way of knowing if your program has finished sending output if it's still running. You could try closing STDOUT, but fundamentally, you don't want a Script Filter running after it has sent its results.
 
If you need to run a longer-running process, you need to do it in the background.
 
Also, I don't see you checking the age of the cache anywhere. The code seems to hit the Pinboard API every run. Alfred will try to run your script after every keypress, so that's a really bad idea (and why you want your Script Filter to finish as quickly as possible). Not least because you're not allowed to fetch all posts from pinboard.in more than once every 5 minutes.
 

Finally, every workflow has its own directories for storing its data (set in the alfred_workflow_data and alfred_workflow_cache environment variables). You should use the cache directory rather than putting stuff in /tmp.
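For example, a script could resolve the cache directory like this; alfred_workflow_cache is only set when Alfred runs the script, so the fallback path (which is made up) is just for testing outside Alfred:

```shell
#!/bin/bash
# Resolve the per-workflow cache dir, with an illustrative fallback for
# running outside Alfred, where alfred_workflow_cache is unset.
CACHE_DIR="${alfred_workflow_cache:-${TMPDIR:-/tmp}/my_workflow_cache}"
mkdir -p "$CACHE_DIR"
CACHE="$CACHE_DIR/pinboard.xml"
echo "cache file: $CACHE"
```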


The background process is the main thing. If you can do the updating in a separate process, your workflow will be stupidly fast. Like instant.
 
Perhaps you could use nohup to start the update process at the same time as your workflow? 

# Detach from parent process and IO streams
nohup ./myapp --update &> /dev/null &
 
# Actual workflow script
./myapp "{query}"

There's another Pinboard workflow that uses cron or launchd to update the cache frequently, while the workflow itself only uses the cached data.
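For reference, a launchd agent that refreshes the cache every five minutes might look roughly like this (the label and program path are made up):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.pinboard-update</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/myapp</string>
        <string>--update</string>
    </array>
    <!-- Refresh every 5 minutes, matching Pinboard's rate limit -->
    <key>StartInterval</key>
    <integer>300</integer>
</dict>
</plist>
```

Saved to ~/Library/LaunchAgents/ and loaded with `launchctl load`, it runs the updater on a timer, so the workflow itself never has to wait on the network.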



 

Two scripts/modalities is a great idea, especially if I can't get the app to spawn an independent process to do the updates.

Edited by Teo

A thread won't work. That's in the same process, so you'll have the same problem. It has to be a separate process, and it must detach from its parent process and from STDIN, STDOUT and STDERR, or Alfred won't consider the workflow complete.

The way my Alfred-Workflow library starts a background process is a "proper" double fork, which is how you start a real daemon. That's probably overkill, and I did it that way basically because I already had the code. nohup would probably work just as well.
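To illustrate, backgrounding the job inside a subshell approximates the double fork in plain shell: the subshell exits immediately, so the updater is reparented and, with all three streams redirected, no longer holds open the pipe Alfred reads from. Here `sleep 2` stands in for the long-running updater:

```shell
#!/bin/bash
# Detach a stand-in updater (sleep 2) from the Script Filter process:
# the subshell exits at once, orphaning the background job, and the
# redirections cut it loose from STDIN/STDOUT/STDERR.
( sleep 2 </dev/null &>/dev/null & )

# The Script Filter can emit its results and exit straight away.
echo "results emitted immediately"
```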

Edited by deanishe
OP edited the sentence I was referring to


 

I know, I corrected it to process :)

I'm going to try the modal flag & nohup approach. 

Thanks again.

