Using the new External Triggers in Alfred



Hey all,

 

These new External Triggers seem like a pretty cool thing. Here's a tiny, useless workflow that shows a simple implementation of them. Basically, it uses Alfred as a "notifier." You can invoke it from inside the workflow with the keyword "notify," but the real point is to think about how you might call it from another workflow via its external trigger.

 

Again, this is just for demonstration purposes. I'm really interested to see how people will use this new, cool feature.

 

You can find a repo here: https://github.com/shawnrice/alfred-notifier

 

Or you could download the workflow here and check out its innards.

Link to comment

Hmm. So what's the USP of these external triggers, then?

 

I think it is a way to let an external application (like Hazel) interact with a workflow directly, without showing Alfred and without cluttering the user interface with unnecessary keywords (an external trigger exists only for that). Anyway, I’m still learning about it and its advantages.

Link to comment

I think that it's one of those things that has the potential to change how we use Alfred in a radical way. Although, I don't know what that is yet (hence, I started this topic).

 

I know that it provides a way for Workflows to interact with each other, if we stay solely in the context of Alfred.

 

With Hazel, you could automate mundane tasks that work with Alfred. And wherever we would use Hazel, we could also use that new cron workflow. So, say Stephen wanted his ZotQuery workflow to keep its SQLite database clone up to date with Zotero's: he could add an external trigger for the "update cache" argument and then call it, say, once an hour or once a day via Alfred Cron. Carlos' Evernote workflow could start interacting with (again) Stephen's Evernote Wiki workflow, so that if something is added to one, it queues something else up in the other.

 

These aren't particularly outside-the-box ideas, but I'm confident that others are there.

 

I guess that it also paves the way for something more like a "workflow" suite in which you could create several workflows that are related to each other and can function on their own but then can extend each other as well. But, again, I can't think of anything particularly brilliant to create here.

 

I could imagine that if I put an external trigger on my "caffeinate" workflow, then other workflows could turn caffeinate on if they're in the middle of a longer process and then turn it off when they're done (although I'd recommend that they first check whether it was already on and, if so, for how long, and then make sure the timer is restored afterward). With that in mind, I should probably extend that workflow to accept a "keep awake until a certain process is done" parameter so that people can plug into it with their own processes.
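
To make that concrete, here's a minimal sketch (in Python) of how another workflow's script could fire those triggers; the trigger names and bundle ID are made up, but Alfred 2's "run trigger ... in workflow ... with argument ..." AppleScript command is the way you hit an external trigger. The same osascript one-liner is what Hazel or Alfred Cron could run on a schedule.

import subprocess

def run_trigger(trigger, workflow, argument=''):
    # Fire an Alfred 2 external trigger from another workflow's script.
    osa = ('tell application "Alfred 2" to run trigger "{0}" '
           'in workflow "{1}" with argument "{2}"').format(trigger, workflow, argument)
    subprocess.call(['osascript', '-e', osa])

# Hypothetical trigger names and bundle ID: whatever the caffeinate workflow exposes.
run_trigger('caffeinate_on', 'com.example.caffeinate')
# ... long-running work happens here ...
run_trigger('caffeinate_off', 'com.example.caffeinate')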

 

I might be able to extend WHW so that people could tap into it with their own help commands or something. (BTW, I'm still rewriting it and am currently stuck on UI issues, mostly in regard to navigation when someone has, say, 100 workflows installed — certainly not me! — but it supports help files for single workflows, and I want to make it so that workflows can provide their own help file that will override WHW's standard output.)

 

I do think that it would be good practice for us to put external triggers on most of our "run script" workflow objects for now, just in case other people can make use of them, even if we don't have any ideas how yet. So maybe that means we start to consider all of our scripts not just as "workflows" but also as (potentially) useful APIs for other workflows.

Link to comment

Right.

Perhaps I simply haven't been struck by inspiration or haven't understood it yet, but colour me a shade of meh.

Am I right in thinking it's a write-only API, extremely similar to the existing one, but with a RETURN thrown in, that takes measures to ensure the correct workflow is called, but is no help if a user has changed the keyword?

If this is a first step on the way to allowing the results of calls to Alfred to be returned to the caller (instead of being passed to the connected action in Alfred), then COME ON! If not, well, meh…

Edited by deanishe
Link to comment

Right.

Perhaps I simply haven't been struck by inspiration or haven't understood it yet, but colour me a shade of meh.

Am I right in thinking it's a write-only API, extremely similar to the existing one, but with a RETURN thrown in, that takes measures to ensure the correct workflow is called, but is no help if a user has changed the keyword?

If this is a first step on the way to allowing the results of calls to Alfred to be returned to the caller (instead of being passed to the connected action in Alfred), then COME ON! If not, well, meh…

 

I'm actually excited about the external triggers for one reason that I'm not sure a lot of people have thought of yet: with external triggers you have an easy way of creating multiple steps/inputs for a workflow without having to make a keyword that is always accessible. For instance, I'm messing with my Plex workflow again, which I never finished. For it, you need to specify an IP address and port for the server that you wish to use. In my setup, a 'plex server' keyword calls an external trigger that is tied to a keyword object with no actual keyword set; I set only the title and subtitle for it. This allows me to input data in this step without having to worry about a bunch of other garbage showing up in the results. After the input, it saves that data, then calls another external trigger to set the port.

 

This will also allow me to do things like browse my Plex library. By using the 'plex browse' keyword, I'll always start at the root, but when I select something, it calls an external trigger (that isn't always accessible) that lets me call a script filter from the previously selected context.

 

You could also allow easier secondary input for result actions by calling an external trigger. That way you don't have to worry about someone accidentally actioning the keyword that is used to grab input for the next step. Here's a good example: a Rename File workflow that uses an external trigger to pop Alfred up and let you enter the new file name. There isn't an actual keyword for it, though, so you can't accidentally action it during normal use.

Link to comment

Hmm... I actually think the non-user-facing abilities can be nice (à la David), as well as the "suite" abilities (à la Shawn). I personally have been tinkering with ways to create a suite of workflows that can work together, and I think that this feature will greatly help (tho I haven't been able to fiddle with it much yet, aside from the basic test case in the examples). For instance, I want to integrate the bibliography management capabilities of ZotQuery with the PDF abilities of both SEND and Skimmer without creating some massive, bloated workflow. Or, I'm currently working on integrating Zotero stuff with Evernote stuff, and linking my two workflows for those apps. 

 

On a note similar to Carlos's, I'm also thinking about the ways in which this can open up Alfred workflows as a software distribution platform. For example, in the past, if I put together a script I thought was helpful, I would make a Gist and let people grab it, save it, and use it as they please. But I got a fair number of questions on how to set the script up, how to make it executable (not in any Unix way, just in general), and how to integrate its functionality into other workflows. Now I'm basically packaging all of my "helpful" scripts as Alfred workflows, but a number of them have functionality that could be useful outside of the Alfred context and integrated into other, larger workflows on the Mac.

For example, within my Wikify workflow is a script for document-specific snippets: define snippets and their expanded text as you write, then post-process the text to expand them. This works great for notes, where you don't know key terms going in and you probably won't use those terms in great quantity afterward. I can now bundle my workflow such that a user who downloads it can access that functionality from another AppleScript that is doing something completely different from sending Markdown to Evernote for wiki-goodness. With a little tweaking, I can update the processing script to tell the difference between an internal workflow call and an external call, and output things differently in each case.

I'm just spit-balling here, but I think that we could start to open up the more general functionalities that our various workflows have as "features" to be used in other contexts. For example, I could tap into Alfred-Workflow's filtering abilities within some other project without having to copy over the code. At the very least, I for one am going to start thinking about my workflows somewhat differently, trying to articulate, isolate, and provide access to functionality in a workflow that I think could be more generally helpful. By opening up certain functions within our workflows in this way, I think we could find another level at which we could create truly interesting "workflows" where different things are glued together without simply copying code together and creating a whole new workflow.

Link to comment

Dean, yes! The return function is perfect. I think we can pull that off. Give me a bit to whip something up.

 

David, the Rename File workflow is a perfect example, as it is much more API-ish than anything else, meant to be called only by other workflows.

Stephen, the distribution idea is awesome, and I'm with you in that I'm still in "test-case" mode right now (hence the original workflow attached). These external triggers are a great way to reduce workflow bloat. I'd love to see you abstract the distribution approach to make it more of a pseudo-protocol-ish type of thing.

Link to comment

You could also allow easier secondary input for result actions by calling an external trigger. That way you don't have to worry about someone accidentally actioning the keyword that is used to grab input for the next step. Here's a good example: a Rename File workflow that uses an external trigger to pop Alfred up and let you enter the new file name. There isn't an actual keyword for it, though, so you can't accidentally action it during normal use.

 

Apparently, you can do this with script filters now. Don't enter a keyword, just set the placeholder text or something. The script filter can then never be invoked directly, which results in much less clutter.

Link to comment

So, I was hoping that an external trigger calling a script filter would just capture the XML produced, but it just invokes the script filter instead. For the more explicit "return" function, a feature request was recently put in.

 

Right now, however, we could make a very "hacky" version of the "return" function. Basically, what it would require is for us to "tack on" a suffix — or something — to the argument that goes to the script (filter) that would identify that the call came from the external trigger (unless someone has a better idea of how to do this sort of thing). Then, if that modifier was there, we'd redirect all output from the script to a tmp file that the "caller" could read. Initially, you might just have to put some sort of 'sleep/wait' within the calling function to make sure the tmp file has been populated before you read it. Otherwise, the "called" workflow could invoke some sort of callback that would call the original one again somehow. I could imagine this as either just using AppleScript to call the original workflow again, or directing itself to another external trigger — watch out for infinite loops; Alfred catches some of them with external triggers, but I'm sure we can find a way to make them anyway.
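
A rough sketch of the caller's side of that hack, in Python; the marker suffix, the trigger name, the bundle ID, and the tmp file path are all made up for illustration:

import os
import subprocess
import time

MARKER = '|external'                                  # hypothetical suffix tacked onto the argument
RESULT = '/tmp/com.example.called-workflow.result'    # hypothetical tmp file the called script writes

def call_and_wait(query, timeout=5.0):
    # Fire the other workflow's external trigger, flagging the call as external.
    osa = ('tell application "Alfred 2" to run trigger "search" '
           'in workflow "com.example.called-workflow" with argument "{0}"').format(query + MARKER)
    subprocess.call(['osascript', '-e', osa])
    # Crude sleep/wait loop until the called script has populated the tmp file.
    waited = 0.0
    while not os.path.exists(RESULT) and waited < timeout:
        time.sleep(0.1)
        waited += 0.1
    if os.path.exists(RESULT):
        with open(RESULT) as f:
            return f.read()
    return None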

 

The other option would be to emulate how lock files work. So, if you want to call an external trigger and receive the output, then you'd place a file in some location (Caches/Alfred.../Workflow Data/ETExchange/called.workflow.bundle ?). The called workflow would then first check for the "lockfile", and, if it exists, redirect everything to a tmp file. After that, it would call an external trigger on the original workflow to continue. So this would actually create a sort of handshake protocol.
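
On the called side, the check might look something like this in Python; the exchange directory and the caller's bundle ID are placeholders, not a settled convention:

import os
import sys

# Placeholder paths: use whatever exchange directory the convention settles on.
EXCHANGE = os.path.expanduser('~/Library/Caches/Alfred-2/Workflow Data/ETExchange')
LOCKFILE = os.path.join(EXCHANGE, 'com.example.this-workflow')   # dropped there by the caller

def emit(output):
    if os.path.exists(LOCKFILE):
        # A caller is waiting: redirect everything to a tmp file and complete the handshake.
        with open(LOCKFILE + '.result', 'w') as f:
            f.write(output)
        os.remove(LOCKFILE)
        # ...then fire the caller's external trigger here so it knows the result is ready.
    else:
        sys.stdout.write(output)    # normal behaviour when invoked from Alfred directly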

 

While this sounds like a pain in the ass, I think we could make it bearable by writing a standard way to do it with just a simple library that could be reused and never really written again.

 

I know that the redirection for PHP wouldn't be too bad because of output buffering. Then, standard practice would just be that we use the output buffer on everything that can potentially be called by an external trigger. The last step would be just to check where the output should go and either echo it or send it to the file.
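
For scripts that aren't PHP, the same buffer-everything-then-decide-at-the-end pattern could be sketched in Python along these lines; the marker detection and the tmp file path are placeholders:

import sys
try:
    from cStringIO import StringIO   # Python 2
except ImportError:
    from io import StringIO          # Python 3

buf = StringIO()
real_stdout, sys.stdout = sys.stdout, buf   # start "output buffering"

print('<items>...</items>')                 # everything the script would normally echo

sys.stdout = real_stdout                    # stop buffering
called_externally = False                   # e.g. detected from a marker on the argument
if called_externally:
    with open('/tmp/com.example.result.xml', 'w') as f:
        f.write(buf.getvalue())
else:
    sys.stdout.write(buf.getvalue())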

 

Thoughts? 

Link to comment

I'm actually excited about the external triggers for one reason that I'm not sure a lot of people have thought of yet: with external triggers you have an easy way of creating multiple steps/inputs for a workflow without having to make a keyword that is always accessible. For instance, I'm messing with my Plex workflow again, which I never finished. For it, you need to specify an IP address and port for the server that you wish to use. In my setup, a 'plex server' keyword calls an external trigger that is tied to a keyword object with no actual keyword set; I set only the title and subtitle for it. This allows me to input data in this step without having to worry about a bunch of other garbage showing up in the results. After the input, it saves that data, then calls another external trigger to set the port.

 

...

 

Interesting approach. Right now, to avoid useless keywords in multi-step workflows, I use an icon or a unique character plus the text. Here is an example from the Evernote workflow:

 

[attached screenshot: alfredkey.png]

I’ll have to play more with the external trigger.

Link to comment

So, I was hoping that an external trigger calling a script filter would just capture the XML produced, but it just invokes the script filter instead. For the more explicit "return" function, a feature request was recently put in.

 

...

 

So if a return is implemented then any workflow could build its own dictionary/library or you could make a huge library workflow just to make things easier for other workflows. :)

Link to comment

So if a return is implemented then any workflow could build its own dictionary/library or you could make a huge library workflow just to make things easier for other workflows. :)

 

I'm totally a fan of a communal effort to build a "library workflow" that we just call API.

Link to comment

The hack for getting the results back would work, especially if you use a predefined directory for such files and have watchman watching that directory. Then the calling app would simply subscribe to that directory via watchman and be notified when the result is ready. I would then use a Hazel rule to clean up old results.

 

I just recently found watchman and it is nice. You can install it with "brew install watchman".

 

Really, if your workflow is designed with this in mind, you would not need the extra parameter. Simply pipe the script's output to Alfred and tee it into the results file. That way, the workflow is always doing both, and the tee can easily be removed later without affecting the code. No rewrite needed when the full functionality is available.
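
A minimal sketch of that tee idea in Python: wrap stdout once, and everything the script already prints lands both in Alfred and in a results file (the file path is just an example):

import sys

class Tee(object):
    """Send script output to Alfred (stdout) and to a results file at the same time."""
    def __init__(self, path):
        self.f = open(path, 'w')
    def write(self, data):
        sys.__stdout__.write(data)
        self.f.write(data)
    def flush(self):
        sys.__stdout__.flush()
        self.f.flush()

sys.stdout = Tee('/tmp/com.example.workflow.result')   # remove this one line to drop the tee
print('<items>...</items>')                            # unchanged script filter output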

Link to comment

So if a return is implemented then any workflow could build its own dictionary/library or you could make a huge library workflow just to make things easier for other workflows. :)

 

I quite literally dreamed about this last night. :o 

 

I myself have something like six workflows that I've written that all use Alfred-Workflow, and each one has its own copy. I hate redundancy like this. Imagine a world where there was an Alfred-Workflow workflow, and you only needed to have that one workflow, but all of your other workflows could tap into its functionality....

 

I'll go back to dreaming now.

Link to comment

I myself have something like six workflows that I've written that all use Alfred-Workflow, and each one has its own copy. I hate redundancy like this. Imagine a world where there was an Alfred-Workflow workflow, and you only needed to have that one workflow, but all of your other workflows could tap into its functionality....

 

So that redundancy is one of the reasons that I wrote the Alfred Bundler, which, unfortunately, doesn't support Python libraries yet — but not for lack of trying.

 

The idea behind the bundler is that a workflow author would need to include only one small file that has two functions: load and install. The first loads whichever assets the author wants, and the latter installs the bundler framework if it isn't already there. Currently, it works well for Bash and PHP libraries because you just need a path and a simple "include" statement. It also works very well with standard "utilities" like Terminal Notifier, Pashua, Cocoa Dialog, etc.... An upshot is that it allows _easy_ access to these utilities for us, and it keeps them separate from the system (they're all stored in the Alfred Data directory). It also allows different versions to live side by side with no conflicts.

 

Python (and Ruby) are entirely different beasts. The ability to keep Python libraries out of the system and to use different versions is a wonderful potential benefit, and if I actually knew how to write Python, those might already be included. To make Python available, what we'd need to do is, basically, rewrite Pip. So, that's easy. Actually, we need a way to download individual packages from PyPI, grep out their dependencies, and download those as well. The problem is that the organic growth Python packaging has gone through makes anticipating all the different ways in which dependencies are declared harder than driving through London streets while blindfolded. Pip does it well, so if its dependency parsers could be duplicated, then it could work. Basically, all the packages would be stored in the same way that the bundler stores things now (bundler_data_dir/assets/python/PACKAGE/VERSION), and then, when a workflow requested certain packages, the bundler would create a new directory in the workflow's data directory containing just the symlinks to each package, laid out the way Python demands; lastly, we'd just add that to sys.path (or whatever the exact call is).

 

The downside of the bundler is that a first run of the workflow might take a bit longer because the bundler might have to be installed and it might have to download assets requested. However, if workflow A asks for Terminal-Notifier, when workflow B asks for it, TN will already be there, creating no lag time whatsoever.

 

If you want to see the bundler in context, my Cron workflow implements it in order to use Terminal Notifier and Pashua (as well as the BashWorkflowHandler library).

 

So, Stephen, do you want to extend the bundler to sort out this Python package problem?

Link to comment

The idea behind the bundler is that a workflow author would need to include only one small file that has two functions: load and install. The first loads whichever assets the author wants, and the latter installs the bundler framework if it isn't already there. Currently, it works well for Bash and PHP libraries because you just need a path and a simple "include" statement. It also works very well with standard "utilities" like Terminal Notifier, Pashua, Cocoa Dialog, etc.... An upshot is that it allows _easy_ access to these utilities for us, and it keeps them separate from the system (they're all stored in the Alfred Data directory). It also allows different versions to live side by side with no conflicts.

 

Ok, can you help me understand things a bit more clearly? Since I barely know Bash and don't know PHP, it's hard for me to follow the code. I understand the idea of having all workflow dependencies in one place, so there isn't redundancy across multiple workflows. I think it's needed. I also understand putting all of these dependencies in the data directory for Alfred. So, for Python, the bundler needs to do two things:
  • download/install Python packages
  • load that package within some workflow script
 
This seems to be what the PHP version does, right? But why "rebuild Pip"? Couldn't you just pip install the package and use the --target param to send it to the proper location within the bundle directory tree?
 
I'm imagining a somewhat different set-up for Python. Instead of __load(), what if the Python version used __exists(), which you place at the top of the script? This function would check the bundler directory tree for the passed package names (you could pass a list of dicts, each of which could have the bundler's 4 possible params). If the package is in the directory, the function simply returns True; otherwise it leverages pip to install the package to the directory and then returns True. Before the function returns True, however, it adds the Python bundler path to sys.path (sys.path.insert(0, bundler_dir)). In the workflow's script, therefore, you could place the __exists() call at the top, before the imports (even wrap the script in an if... block), then use the standard import syntax to import any required packages.
 
In this way, the Python section of the bundler's directory tree would look and function like (for example) /Library/Frameworks/Python.framework/Versions/2.7/lib/; that is, it holds all of the packages or modules downloaded. Then you only need to insert the path to that directory into sys.path. This makes sense to me, but what say you?
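
A minimal sketch of that __exists() idea, simplified to plain package names and assuming pip's --target option plus a single shared directory (the directory path is a placeholder):

import os
import subprocess
import sys

# Placeholder for the bundler's Python asset directory.
BUNDLER_PY = os.path.expanduser(
    '~/Library/Application Support/Alfred 2/Workflow Data/alfred.bundler/assets/python')

def __exists(packages):
    """Ensure each named package is importable, pip-installing it on first use."""
    for name in packages:
        # Crude existence check; single-module distributions would need more care.
        if not os.path.isdir(os.path.join(BUNDLER_PY, name)):
            subprocess.call(['pip', 'install', '--target', BUNDLER_PY, name])
    if BUNDLER_PY not in sys.path:
        sys.path.insert(0, BUNDLER_PY)
    return True

if __exists(['requests']):
    import requests    # the standard import syntax, now resolvable from the shared directory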
Edited by smarg19
Link to comment
To break down what the bundler does in plain(?) English and not in any syntax...

 

The file included with the workflow contains those two functions. The __install just installs the bundler if it's not there (which I think is pretty rad in the way that a tool installs itself). The __load function follows whatever language specific necessities are needed to load the asset. So, utilities should always be loaded by their path because that's all you need to know to invoke a utility.

 

For PHP and Bash libraries, it's similar because all you really need is the path. The PHP __load() function, however, takes it one step further and does a foreach loop to require_once('file.php') for each necessary file to include because that's the simplicity of loading PHP libraries.

 

Now, actually, all the __load() function does is talk to a file in the bundler folder that does more of the heavy lifting. So the PHP file calls a function from alfred.bundler.php that has the more specific PHP bindings. This makes it so that the bundler can change on the backend, but every workflow using it will still work because the output will be the same between major (named) versions.

 

So __load just tells the bundler: hey, load these files. Within the context of Python, it could call alfred.bundler.py and not need any returned information other than "true/false", because you can just do "import module.py" within the function; this would take place in the alfred.bundler.py file, similarly to alfred.bundler.php. (Disclaimer: anything I say about how Python works should be taken with a grain of salt because I don't really know the language.)

 

So, what we need to do with Python is to be able to download the packages to a specific directory structure (DATA/alfred.bundler-aries/assets/python/PACKAGE/VERSION/FILES), and then find a way to import those, knowing that conflicting versions might be living side by side. Hence the idea of creating a specific symlinked directory per workflow.

 

Right now, each asset that is downloaded has a file put in it called "invoke", which simply tells the bundler what to do with it. So, for Terminal Notifier, it just returns the path to the actual binary file inside Terminal-Notifier.app; with Pashua, it just returns the path to Pashua.app. With Python, it might just be a list of requirements, like a regular requirements.txt file, and then the bundler would check the directory to make sure the packages have been downloaded, construct the symlinked directory, and add that to sys.path so that the user can then just do a regular "import alp, requests, whatever else here."
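
A rough sketch of that per-workflow symlink idea, with conflicting versions living under assets/python/PACKAGE/VERSION; the paths and the layout inside the VERSION folder are assumptions:

import os
import sys

# Assumed bundler asset location: .../assets/python/PACKAGE/VERSION/FILES
ASSETS = os.path.expanduser(
    '~/Library/Application Support/Alfred 2/Workflow Data/alfred.bundler-aries/assets/python')

def link_packages(requirements, workflow_data_dir):
    """Symlink each requested PACKAGE/VERSION into a per-workflow lib dir and add it to sys.path."""
    libs = os.path.join(workflow_data_dir, 'bundler-libs')
    if not os.path.isdir(libs):
        os.makedirs(libs)
    for name, version in requirements:
        source = os.path.join(ASSETS, name, version, name)   # assumes the package folder sits in FILES
        link = os.path.join(libs, name)
        if not os.path.exists(link):
            os.symlink(source, link)
    sys.path.insert(0, libs)

# e.g. link_packages([('requests', '2.2.1')], data_dir), after which "import requests" just works.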

 

My understanding of Pip is that when you do `pip install --install-option="--prefix=$HOME/Desktop" gluttony` it will install Gluttony and networkx to the Desktop. However, it will install a convoluted directory structure that includes:

 

bin/gluttony

share/doc/networkx-1.8.1/examples.....

lib/python2.7/site-packages/Gluttony-0.8-py2.7.egg-info

lib/python2.7/site-packages/networkx-1.8.1-py2.7.egg-info

lib/python2.7/site-packages/gluttony/THE FILES WE ACTUALLY WANT

lib/python2.7/site-packages/networkx/THE FILES WE ACTUALLY WANT

 

And that's with one dependency.

 

So, it might be a bit hard to determine this programmatically, because we're not quite sure what's going to appear there. (Again, possible lack of Python knowledge.) The structure needs to be tested with packages that do not have .egg files and with the many other convoluted packaging schemes that PyPI supports, and we want to minimize downloading duplicate dependencies.

 

Hence, the idea that I was pursuing was to query the package via PyPI's API and download the appropriate version via cURL. After that, unpack it, figure out what dependencies it has, and then download those in the same fashion.
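
PyPI does expose a JSON API that makes the first half of that fairly straightforward; here's one way it might be sketched in Python (the sdist-only filter and the curl call are just one approach):

import json
import subprocess
import urllib2    # Python 2, matching the system Python most workflows use

def download_from_pypi(name, version, dest):
    """Look a release up via PyPI's JSON API and pull the source tarball down with cURL."""
    meta = json.load(urllib2.urlopen('https://pypi.python.org/pypi/%s/json' % name))
    for dist in meta['releases'].get(version, []):
        if dist['packagetype'] == 'sdist':
            subprocess.call(['curl', '-sL', '-o', dest, dist['url']])
            return True
    return False

# Unpacking the tarball and parsing its dependency declarations is the genuinely hard part.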

 

Namespacing potentially poses another problem for us, but I know only enough to understand that it is a potential problem.

 

Does this help explain it?

 

It's a fun problem to play with.

Link to comment
So, the basic problem (so far as I can tell) with a Python version of this logic is that you cannot import a module from another script. So, I can't have script A import a module that script B will use. You will always have to explicitly import the module within script B. This means you couldn't load the module in the bundler script. The best you could do is check if it exists where it should be, and if not, install it. 

 

Also, the funkiness of pip-installed modules isn't really a problem, because Python's import machinery handles all of the parsing for you. So any module or package installed via pip is, by definition, structured such that you can use a plain import statement to get it, as long as the directory containing it is on sys.path.

 

Given these restrictions, I think my initial thinking still makes sense. 

Edited by smarg19
Link to comment

Okay, I'm with you on the thinking, but how do you ensure that different versions of the same package can live next to each other?

 

Also, I went back to playing around with Pip. Here is an interesting way to do it, at least initially from the command line:

 
pip install --install-option="--prefix=$HOME/Desktop/PyPi" --install-option="--install-purelib=$HOME/Desktop/PyPi" Gluttony
rm -fR "$HOME/Desktop/PyPi/bin"
rm -fR "$HOME/Desktop/PyPi/*.egg-info"
rm -fR "$HOME/Desktop/PyPi/share"
 
We could always grab the version number from the .egg-info file, but those don't always exist, right? Also, what do we do with the "data" directories that some might have?
Link to comment

I'm actually excited about the external triggers for one reason that I'm not sure a lot of people have thought of yet: with external triggers you have an easy way of creating multiple steps/inputs for a workflow without having to make a keyword that is always accessible. For instance, I'm messing with my Plex workflow again, which I never finished. For it, you need to specify an IP address and port for the server that you wish to use. In my setup, a 'plex server' keyword calls an external trigger that is tied to a keyword object with no actual keyword set; I set only the title and subtitle for it. This allows me to input data in this step without having to worry about a bunch of other garbage showing up in the results. After the input, it saves that data, then calls another external trigger to set the port.

 

This will also allow me to do things like browse my Plex library. By using the 'plex browse' keyword, I'll always start at the root, but when I select something, it calls an external trigger (that isn't always accessible) that lets me call a script filter from the previously selected context.

 

You could also allow easier secondary input for result actions by calling an external trigger. That way you don't have to worry about someone accidentally actioning the keyword that is used to grab input for the next step. Here's a good example: a Rename File workflow that uses an external trigger to pop Alfred up and let you enter the new file name. There isn't an actual keyword for it, though, so you can't accidentally action it during normal use.

 

 

I’m using your approach in one of my workflows whose setup requires two steps for each custom item available, and I really think the setup is now easier to understand and even more flexible.

 

Thank you again for sharing it with us.

Link to comment
