malkomalko Posted September 13, 2014

I'm starting to get into writing my own custom workflows, and one thing I can't find after reading the forums is whether it's possible to do some type of heavy lifting for a script only once. An example would be pulling in some data at the start of a workflow, then using what the user types to filter that data down and update the display. Right now I have it set up so that every part of the script runs on every key change, but I think it would be faster to do the bulk of the work up front and then just have the query update the data in memory. Is this possible?
smarg19 Posted September 13, 2014 (edited)

Hmm... Is something like this possible? Sure, but it will need to be built and structured by you; you won't get any help from Alfred directly. Since I don't know what language you are working with, I can't be specific, but I can talk through how I did something similar in Python.

I have a workflow (Pandoctor) that works as an interface for pandoc. One of the things I needed to do was grab and store the user's pandoc installation info. I knew there was a lot of information the workflow would require, and instead of grabbing it every time the workflow runs, I wanted to grab it on the first run and then store it for quick retrieval on all subsequent runs.

To do this, I created a Python object for all of the pandoc data I would need. The initialization of this object includes a self-reference:

    def __init__(self, wf):
        """Initialize pandoc object."""
        self.wf = wf
        self.me = self.pandoc()

self.me contains all of the data I need about the user's pandoc installation. To get that data, I use a method called pandoc(). The logic is fairly simple:

1. Check whether the data is already stored.
2. If it is, return it immediately.
3. If it isn't, run all of the property methods and save the results as a JSON file.

I had a number of methods for grabbing each of the relevant pieces of information (version, path, options, etc.). I made each of these methods a property and wrote a primary method that runs them all on the first run. Here's that method (PS: I'm using the great Alfred-Workflow Python library):

    def pandoc(self):
        pandoc_data = self.wf.stored_data('pandoc')
        if pandoc_data is None:
            data = {
                'path': self.path,
                'version': self.version,
                'outputs': self.outputs,
                'inputs': self.inputs,
                'options': self.options,
                'arg_options': self.arg_options
            }
            self.wf.store_data('pandoc', data, serializer='json')
            pandoc_data = data
        return pandoc_data

I also made each piece of data accessible via its own property (so instead of calling Pandoc().pandoc['inputs'], I can call Pandoc().inputs). To make these fast as well, I used the same logic. Here's an example:

    @property
    def path(self):
        """Find path to pandoc executable."""
        try:
            return self.me['path']
        except AttributeError:
            if os.path.exists('/usr/local/bin/pandoc'):
                return '/usr/local/bin/pandoc'
            else:
                from distutils.spawn import find_executable
                pandoc_path = find_executable('pandoc')
                if pandoc_path:
                    return pandoc_path
                else:
                    raise RuntimeError("Pandoc is not installed!")

The property first checks whether it can simply read the data from the stored file; if it can't, it generates the data itself.

So, how does this work as a whole? On the very first run of the workflow, when I initialize the Pandoc() object, it sets self.me to the result of the pandoc() method, which means pandoc() has to run. It tries to read the stored JSON file and finds that it doesn't exist, so it starts building the dictionary of all the relevant data. First it executes the self.path property shown above. On this first run, the property tries to read the path from self.me, but that attribute hasn't been assigned yet (we are still inside the pandoc() call that sets it), so it raises an AttributeError. I catch that error, proceed to grab the path to the pandoc executable manually, and return it to the pandoc() method above. The same logic applies to all of the other property methods.
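To pull all of those pieces together, here's a condensed, self-contained sketch of the pattern, trimmed down to just the path property (treat it as an illustration of the structure, not a copy of Pandoctor's actual code):

    import os
    from distutils.spawn import find_executable


    class Pandoc(object):
        """Condensed sketch of the store-or-generate pattern."""

        def __init__(self, wf):
            self.wf = wf
            # First run: triggers the property methods below.
            # Later runs: just reads the stored JSON file.
            self.me = self.pandoc()

        def pandoc(self):
            data = self.wf.stored_data('pandoc')
            if data is None:
                # First run: gather everything and store it for next time.
                data = {'path': self.path}
                self.wf.store_data('pandoc', data, serializer='json')
            return data

        @property
        def path(self):
            try:
                # Fast path: the data was stored on a previous run.
                return self.me['path']
            except AttributeError:
                # First run: self.me isn't set yet, so do the real work.
                if os.path.exists('/usr/local/bin/pandoc'):
                    return '/usr/local/bin/pandoc'
                pandoc_path = find_executable('pandoc')
                if pandoc_path:
                    return pandoc_path
                raise RuntimeError('Pandoc is not installed!')

(wf here is an Alfred-Workflow Workflow instance, as in the snippets above.)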
Once pandoc() has run all of the property methods, it stores the data as JSON in a file and returns it. On every subsequent run of the workflow, when the Pandoc() object is created and sets the self.me attribute, it simply reads the data straight from the stored file. This means the first run is a bit slower (I hide that in a configuration script), but every run after that is blazingly fast.

So, that was a bit in-depth, and I don't know if it makes any sense to you (if you don't know Python, I'm sorry, it's all I've got). But the basic underlying point is that it is indeed possible to set a workflow up so that it has an automatically self-generating set of data for use on later runs. Whatever language you use, you will need this kind of check-then-generate logic to achieve this type of result, but it should get you where you want to go.
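To tie this back to your original question (pull in data once, then filter it as the user types): the same Alfred-Workflow library also has a cached_data() helper that wraps this whole store-or-generate dance for you, plus a filter() method for the per-keystroke part. Here's a minimal sketch; get_data() and the 'mydata' cache name are placeholders for whatever data you're actually pulling in:

    import sys

    from workflow import Workflow


    def get_data():
        """The expensive one-time work (web request, parsing a big file, etc.)."""
        return [{'name': 'alpha'}, {'name': 'beta'}]


    def main(wf):
        # Run get_data() at most once every 10 minutes;
        # otherwise read the cached copy from disk.
        data = wf.cached_data('mydata', get_data, max_age=600)

        query = wf.args[0] if wf.args else None
        if query:
            # Cheap in-memory fuzzy filter on every keystroke.
            data = wf.filter(query, data, key=lambda d: d['name'])

        for d in data:
            wf.add_item(title=d['name'], arg=d['name'], valid=True)
        wf.send_feedback()


    if __name__ == '__main__':
        wf = Workflow()
        sys.exit(wf.run(main))

Each keystroke still re-runs the script (that's just how Alfred's script filters work), but the expensive part only happens when the cache is cold, so the per-keystroke cost is reduced to reading the cache and filtering in memory.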