
Multiple outputs to script filter



How do I return multiple items as the output of a Script Filter? Upon execution of the script, I'd like to output something immediately. Then, after (say) 5 seconds or a successful network call, I'd like to append some more results. I'm trying to run something like this:

This is what's happening:

After 5 seconds, I see Desktop in Alfred's search results.

This is what I'm expecting to happen:

Desktop should appear immediately in Alfred's search results. After 5 seconds, Downloads should appear below Desktop.

import time

# First set of results: printed immediately.
# (Note: the XML declaration must be the very first thing in the output.)
xml1 = """<?xml version="1.0"?>
<items>
  <item uid="desktop" arg="~/Desktop" valid="YES" autocomplete="Desktop" type="file">
    <title>Desktop</title>
    <subtitle>~/Desktop</subtitle>
    <icon type="fileicon">~/Desktop</icon>
  </item>
</items>
"""
print(xml1)

time.sleep(5)

# Second set of results: printed 5 seconds later, to the same pipe.
xml2 = """<?xml version="1.0"?>
<items>
  <item uid="downloads" arg="~/Downloads" valid="YES" autocomplete="Downloads" type="file">
    <title>Downloads</title>
    <subtitle>~/Downloads</subtitle>
    <icon type="fileicon">~/Downloads</icon>
  </item>
</items>
"""
print(xml2)

Technically, it's not feasible to start showing the user results before the Script Filter script has finished outputting them: Alfred won't show anything until your script exits. To update the list of results in Alfred, you tell Alfred to run the script again.

 

So, you can return your list of defaults or cached results the first time, set "rerun": 0.1 in the JSON feedback to tell Alfred to run the script again in 0.1 seconds, then fetch the updated results on the second run.
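For illustration, a minimal first pass along those lines might look like this (the item titles are placeholders, not from any real workflow):

```shell
#!/bin/bash
# First pass: emit whatever we already have, and ask Alfred to run
# this script again in 0.1 s, by which time fresher results may be ready.
cat << 'EOB'
{
  "rerun": 0.1,
  "items": [
    {
      "title": "Cached result",
      "subtitle": "Shown immediately; refreshed on the rerun"
    }
  ]
}
EOB
```

On the second invocation, the script would check whether the fresh data has arrived and emit that instead of the cached items.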


Thanks @deanishe. I've looked into rerun and variables/state variables. This is what I'm trying to achieve:

When the user enters the keyword, show them the defaults and start a script in the background (https://unix.stackexchange.com/a/130902/291926). The script makes some network calls, and once its output is available, I'd like to display those results. Basically, I'm unable to replicate the wait behaviour.

 

This is my script in Alfred:

TIMER=$((timer))   # default value will be 0
if [[ $TIMER -le 1 ]]   # TIMER <= 1
then
    if [[ $TIMER -eq 0 ]]   # TIMER == 0
    then
        {./test.sh}&
    fi
cat << EOB
{
    "rerun": 1,
    "variables": {
        "timer": 1
    },
    "items": [
        {
            "title": "Default title",
            "subtitle": "Default subtitle"
        }
    ]
}
EOB
fi

 

This is my test.sh:

#!/bin/bash
sleep 5
# Any timer value other than 0 or 1 is fine
cat << EOB
{
    "variables": {
        "timer": 4
    },
    "items": [
        {
            "title": "Fetched title",
            "subtitle": "Fetched subtitle"
        }
    ]
}
EOB

 

Also, Alfred doesn't seem to support bash's background multi-processing. Alfred is waiting at this line {./test.sh}& instead of making it run in the background. Please let me know if you have any ideas.

1 hour ago, SumanthCulli said:

Alfred is waiting at this line  {./test.sh}& instead of making it run in background.

 

You misunderstand what's happening. Alfred doesn't (can't) change bash's behaviour in any way. That's still running in the background. The difference is that Alfred doesn't do anything with the output until your script exits, which isn't until after the subprocess has finished because you aren't detaching it properly.

 

If you want Alfred not to wait for subprocesses, you need to nohup them (or otherwise ensure they won't die when their parent does) and detach from STDOUT. STDOUT being open is a clear signal to Alfred that more output may come, so it will wait for any background processes that are still attached to it.
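As a sketch of that (the file paths and the fetcher script here are made up for the example), the Script Filter would launch the slow fetch like this:

```shell
#!/bin/bash
# Stand-in for the slow network fetch (a hypothetical helper script).
cat > /tmp/fetch.sh << 'EOS'
#!/bin/bash
sleep 1
echo '{"items": [{"title": "Fetched title"}]}'
EOS
chmod +x /tmp/fetch.sh

# Detach properly: nohup so the child survives its parent, and
# redirect STDOUT/STDERR away from Alfred's pipe. If the child
# stayed attached to STDOUT, Alfred would keep waiting for it.
nohup /tmp/fetch.sh > /tmp/fetched.json 2>&1 < /dev/null &

# This script can now exit immediately; a later run (triggered via
# "rerun") can read /tmp/fetched.json and display the fetched results.
```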

 

Many (most?) of my own workflows work the way we're talking about, showing whatever data they have as fast as possible, and fetching the new stuff in a background process. It works very well. You're just not implementing it quite right at the moment.

21 minutes ago, SumanthCulli said:

Is there any way to achieve this simply using Alfred?

 

What do you mean?

 

21 minutes ago, SumanthCulli said:

I don't think rerun will help me.

 

How else do you plan to get Alfred to fetch the second set of results? Regardless of what you do, you can't output two sets of results during one script execution (i.e. to the same pipe Alfred gave your top-level process) because then the XML/JSON output is no longer valid.


 

1 hour ago, deanishe said:

Many (most?) of my own workflows work the way we're talking about, showing whatever data they have as fast as possible, and fetching the new stuff in a background process. It works very well. You're just not implementing it quite right at the moment.

Can you please share such workflows?

3 minutes ago, duplex143 said:

I'll go through this and figure it out myself.

 

The Python library is a bit over the top, tbh. You don’t need to do a double fork. Just make sure the subprocess doesn’t exit when the parent does (nohup, setpgid), and that it isn't attached to STDOUT.

 

8 minutes ago, duplex143 said:

Didn't expect I was getting help from special forces!

 

:D


Excuse me for asking too many questions.

How do you kill the child processes you create when you choose Terminate previous script in Alfred's run behaviour?

Say the user enters the input hello and you start a background process. Before it completes, the user types world, and Alfred terminates the previous process. How do you kill the child process you created earlier? If you terminate it yourself, you'd do some graceful termination in your code. But if Alfred terminates, what happens?

2 hours ago, duplex143 said:

How do you kill the child processes you create when you choose Terminate previous script in Alfred's run behaviour?

 

I very rarely use the option, tbh. Turning that on can lead to hammering servers with several requests/second, and then you end up getting a 429 Too many requests response to the one request you needed to succeed. And if I’m pulling data from a local source slow enough to benefit from terminating the previous script, I’ll usually try to cache the data and display from cache instead.

 

3 hours ago, duplex143 said:

How do you kill the child process you created earlier?

 

My libraries require you to assign a name to any background process you start, and they record its PID, so you can kill a background job by name if you need to.
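That idea can be sketched in plain bash with PID files (the function names and paths are mine for illustration, not deanishe's actual API):

```shell
#!/bin/bash
# Start a named background job, recording its PID so a later run
# of the Script Filter can find and kill it by name.
start_job() {
  local name=$1; shift
  nohup "$@" > /dev/null 2>&1 < /dev/null &
  echo $! > "/tmp/job_${name}.pid"
}

# Kill a previously started job by its name, if it is still around.
kill_job() {
  local pidfile="/tmp/job_$1.pid"
  if [ -f "$pidfile" ]; then
    kill "$(cat "$pidfile")" 2>/dev/null
    rm -f "$pidfile"
  fi
}

start_job fetch sleep 60   # e.g. the slow network fetch
kill_job fetch             # a later run cancels it by name
```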

 

3 hours ago, duplex143 said:

But if Alfred terminates, what happens?

 

I don't even know what signal Alfred sends, tbh. Assuming it's not SIGKILL, I guess you'd need to record the PIDs of any background processes you start and signal them when your top-level process receives a signal from Alfred.
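Assuming the signal is catchable (which, as said, isn't verified), a worker script could forward it to its own children like this (paths are illustrative; the status file exists only to make the cleanup observable):

```shell
#!/bin/bash
# A worker that traps TERM, passes it on to its child, and records
# that cleanup ran before exiting.
cat > /tmp/worker.sh << 'EOS'
#!/bin/bash
trap 'kill "$child" 2>/dev/null; echo terminated > /tmp/worker.status; exit 0' TERM
sleep 60 &
child=$!
wait "$child"
EOS
chmod +x /tmp/worker.sh

# Simulate Alfred terminating the worker.
/tmp/worker.sh &
worker=$!
sleep 1
kill -TERM "$worker"   # stand-in for whatever signal Alfred sends
wait "$worker" 2>/dev/null
```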

 

