
Curious if you've tried using this for saving multiple keys? I'm getting requests to add multi-org support to my Slack workflow, but Slack requires a separate token for each organization. I was thinking I could just do a check, but then I realized that if you need to re-auth an organization, you'd have to replace the existing token.

 

I see from the library that I can use stored data to serialize things for later retrieval. I'm just not sure that's the best way to do this, so I'm curious to hear input and feedback.

Edited by frankspin
Link to comment

I'm afraid I don't quite understand what it is you're trying to do or what you think would be overwritten/replaced.
 
If you just want to store multiple accounts/datasets in parallel, you might create a dictionary saved via the settings/stored data API that maps account names to a prefix, then use the appropriate prefix for the account in your caching/data/Keychain keys:
 

from workflow import Workflow, web

wf = Workflow()

# `account name: prefix` mapping, e.g. {'Personal': 'account1', 'Work': 'account2'}
accounts = wf.settings['accounts']
# or accounts = wf.stored_data('accounts')

account = 'Personal'
prefix = accounts[account]

# Each account's token lives under its own Keychain key
api_key = wf.get_password('{}-apikey'.format(prefix))

def wrapper():
    url = 'https://api.example.com/v1/whatever.json'
    r = web.get(url, params={'api_key': api_key})
    return r.json()

some_cached_data = wf.cached_data('{}-thedata'.format(prefix), wrapper, max_age=0)
...
...

Thus, some_cached_data would be stored under the key (filename) account1-thedata for account Personal and account2-thedata for account Work.
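On the re-auth question: because the Keychain entries are keyed by prefix, overwriting one account's token doesn't touch the others. A minimal sketch (save_new_token and its arguments are illustrative, not part of the library):

def save_new_token(wf, accounts, account, new_token):
    """Store or replace the API token for a single account."""
    prefix = accounts[account]
    # Only this account's Keychain entry is overwritten;
    # other accounts' tokens are left alone
    wf.save_password('{}-apikey'.format(prefix), new_token)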

Edited by deanishe
Link to comment
  • 1 month later...

A user of mine reported this today; fairly certain I have the latest version of the library installed (at least, whatever version pip gives me):

Starting debug for 'LastPass Vault Manager'

[ERROR: alfred.workflow.input.scriptfilter] Code 1: 23:35:18 workflow.py:1598 DEBUG Loading cached data from : /Users/fmr/Library/Caches/com.runningwithcrayons.Alfred-2/Workflow Data/com.bachya.lpvm/vault_items.cpickle
23:35:18 workflow.py:1951 ERROR 'ascii' codec can't decode byte 0xc3 in position 10: ordinal not in range(128)
Traceback (most recent call last):
  File "/Users/fmr/Dropbox/settings/alfred_2/Alfred.alfredpreferences/workflows/user.workflow.127EFC78-D2F5-47D5-BD4A-B4C80FBCFB73/workflow/workflow.py", line 1946, in run
    func(self)
  File "lpvm.py", line 246, in main
    search_vault(wf, vault, args.query)
  File "lpvm.py", line 141, in search_vault
    results = _search_vault(wf, vault, query)
  File "lpvm.py", line 40, in _search_vault
    match_on=MATCH_ALL ^ MATCH_ALLCHARS
  File "/Users/fmr/Dropbox/settings/alfred_2/Alfred.alfredpreferences/workflows/user.workflow.127EFC78-D2F5-47D5-BD4A-B4C80FBCFB73/workflow/workflow.py", line 1780, in filter
    value = key(item).strip()
  File "lpvm.py", line 88, in search_item_fields
    return ' '.join(elements)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 10: ordinal not in range(128)
23:35:18 workflow.py:1969 DEBUG Workflow finished in 0.790 seconds.

Similar error to one that's been posted before, but I don't see any non-ASCII characters in the user's path.

Any thoughts, or any other information I can provide? I'm checking whether the cached data he's retrieving contains non-ASCII characters.

Thanks!

Edited by Aaron B.
Link to comment

It's always a good idea to provide a link to the workflow you're talking about. The non-ASCII path bug has been fixed.
 
The problem is that elements in search_item_fields contains encoded (non-Unicode) strings.
 
It's not non-ASCII characters that are the problem (sooner or later one of your users will be using them, so you have to handle them correctly), but that you're mixing Unicode and encoded strings in the workflow.
 
The root problem is that your LastPassVaultManager.py module uses encoded strings throughout, while lpvm.py uses Unicode. You will have problems when you mix the two. In particular, you're not decoding the output of subprocess.check_output to Unicode. Subprocesses return bytes (usually UTF-8-encoded strings).
 
You need to convert the encoded strings to Unicode with Workflow.decode() or unicodedata.normalize('NFC', unicode('<subprocess output here>', 'utf-8')) (they do the same thing). If you don't convert the output to Unicode, your workflow cannot handle non-ASCII strings.
 
You should put from __future__ import unicode_literals at the top of LastPassVaultManager.py and make sure to decode all the output of subprocesses to Unicode as described above.
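As a rough sketch of what that looks like (the lpass command here is a stand-in for whatever your workflow actually shells out to):

# -*- coding: utf-8 -*-
from __future__ import unicode_literals

import subprocess

from workflow import Workflow

wf = Workflow()

# check_output() returns bytes, typically UTF-8-encoded
output = subprocess.check_output(['lpass', 'ls'])  # stand-in command

# Decode to NFC-normalised Unicode before mixing with other strings
data = wf.decode(output)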
 
This might help explain it a bit better.

Also, I just downloaded your workflow to see, and the offending function is now on line 121, not 88, so it would appear your user is using an older version.

Edited by deanishe
Link to comment


Thank you for the detailed explanation, Dean. I will implement your suggestions and let you know if I run into any further trouble.

Link to comment

Got everything in place. For what it's worth, one additional thing I had to do was explicitly convert any "passed-along" text (meaning text that is passed as part of {query} to the next stage of the workflow) to UTF-8.

 

Thanks for your help, again, Dean – much appreciated.

Link to comment

one additional thing I had to do was explicitly convert any "passed-along" text (meaning text that is passed as part of {query} to the next stage of the workflow) to UTF-8.

 

Yes, indeed you do. I will update the Unicode page of Alfred-Workflow's docs to make clearer what you need to encode and decode.

 

Glad to hear it's working!

Link to comment

Hi again, Dean,

 

I'm running into a reported issue with my workflow that seems very sporadic; I can't reproduce it reliably, but more than one person has, so I want to give it a look.

 

Since it uses your library, would you be willing to take a look?

 

Here's the thread: http://www.alfredforum.com/topic/5356-script-filters-via-python-seem-to-sporadically-not-work/

 

Thanks so much!

Link to comment
  • 4 months later...

Woo, haven't posted here in a bit.

 

Just to follow up on the multiple-keys question I asked a while back: I did come up with a solution. It involves saving each key comma-separated, then splitting them out. Not the most ideal solution, but it's working for me.

 

A situation I'm running into now, though, is properly iterating through my for loops. In every other workflow this works fine, but I think because this one relies on a search endpoint it's hanging me up. What I'm trying to do is let someone enter a search query, get the results, then display all of them. For some reason it only displays one result, then after a few seconds updates to show the last result. I can't figure out why it's only showing the one. I've tried using sleeps to keep it from querying the search before someone finishes typing, but that isn't working.

import sys
import argparse
import requests
from workflow import Workflow, web, PasswordNotFound


def slack_keys():
    wf = Workflow()
    try:
        slack_keys = wf.get_password('slack_api_key')
    except PasswordNotFound:
        wf.add_item(title='No API key set. Please run slt',
                    valid=False)
        wf.send_feedback()
        return 0
    keys = slack_keys.split(",")

    return keys

def search_slack(keys, query):
    wf = Workflow()
    search_results = []
    for key in keys:
        api_key = str(key)
        slack_auth = web.get('https://slack.com/api/auth.test?token=' + api_key + '&pretty=1').json()
        if slack_auth['ok'] is False:
            wf.add_item('Authentication failed.'
                        'Try saving your API key again',
                        valid=False)
            wf.send_feedback()
        else:
            results = requests.get('https://slack.com/api/search.messages?token=' + api_key + '&query=' + query +
                              '&count=10&pretty=1').json()
            if results['messages']['total'] > 0:
                for result in results['messages']['matches']:
                    if result['type'] == 'message':
                        search_results.append({'text': result['text'], 'channel': result['channel']['name'],
                                               'user': result['username'], 'team': slack_auth['team'],
                                               'link': result['permalink']})
            else:
                search_results.append({'text': 'False', 'team': slack_auth['team']})

    return search_results

def main(wf):

    parser = argparse.ArgumentParser()
    parser.add_argument('query', nargs='?', default=None)
    args = parser.parse_args(wf.args)

    query = args.query

    search_results = search_slack(keys=slack_keys(), query=query)

    for results in search_results:
        if results['text'] == 'False':
            wf.add_item(title='No search results found',
                        subtitle=results['team'],
                        valid=False)
        else:
            wf.add_item(title=results['text'],
                        subtitle='%s - %s - %s' % (results['user'], results['channel'], results['team']),
                        arg=results['link'],
                        valid=True)

        wf.send_feedback()

if __name__ == u"__main__":
    wf = Workflow()
    sys.exit(wf.run(main))
Link to comment

The reason it's showing only one result is because you're calling wf.send_feedback() in the for loop.

It gets called after the first iteration and Alfred has its set of results. Once you call wf.send_feedback(), the workflow is done as far as Alfred is concerned.
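In other words, the fix is just to dedent that call so it runs once, after the loop has finished. The tail of your main() becomes:

    for results in search_results:
        if results['text'] == 'False':
            wf.add_item(title='No search results found',
                        subtitle=results['team'],
                        valid=False)
        else:
            wf.add_item(title=results['text'],
                        subtitle='%s - %s - %s' % (results['user'], results['channel'], results['team']),
                        arg=results['link'],
                        valid=True)

    # Now outside the loop: runs once, after every result is added
    wf.send_feedback()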

A few other observations:

Is there a reason you're using both workflow.web and requests? If you need requests, there's little point using workflow.web, too: it's just a crappy version of requests.

You don't need to do wf = Workflow() in each function. The wf object created in the if __name__ == u"__main__" clause is global. You can just use wf in the functions.

You make two HTTP requests for each API query (one to verify auth, one to get the actual data). Doesn't the API return an authentication error if you try to call it without being authorised? The workflow would probably run a fair bit more quickly if it only needs to make one request per API call.
 
You could also speed things up a fair bit by making the API call for each key in parallel in threads (if the API permits that).

Things are also going to go wrong when the password isn't set: slack_keys() returns either a list (when it works) or an integer (when it fails), but the calling function doesn't check the return value and just assumes it's a list. It'd make more sense to return an empty list, or to let the PasswordNotFound error propagate and handle it in the calling function.

Personally, I'd move all the feedback-generation code to main() instead of each function outputting its own errors to Alfred. It makes the code easier to reason about if all the feedback code is in one place, not spread throughout the code. It's kinda bad form to have your functions doing two unrelated things (i.e. retrieving data and sending output, but only sometimes).
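As a rough sketch of that structure (reusing the global wf, search_slack() and imports from your script; the item-building in the loop is elided), the data-fetching functions return data or raise, and only main() talks to Alfred:

def slack_keys():
    """Return the list of saved API keys.

    Raises PasswordNotFound instead of talking to Alfred itself.
    """
    return wf.get_password('slack_api_key').split(',')


def main(wf):
    try:
        keys = slack_keys()
    except PasswordNotFound:
        # All feedback, errors included, is generated in main()
        wf.add_item(title='No API key set. Please run slt', valid=False)
        wf.send_feedback()
        return 0

    for result in search_slack(keys=keys, query=wf.args[0]):
        # ... build result items as before ...
        pass

    # The single place results are handed to Alfred
    wf.send_feedback()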

Edited by deanishe
Link to comment

Thanks for the feedback. I'll go through and make some adjustments.

 

 

Is there a reason you're using both workflow.web and requests? If you need requests, there's little point using workflow.web, too: it's just a crappy version of requests.

 

For some reason that search query was not working under .web, so I was using requests.

 

 

You make two HTTP requests for each API query (one to verify auth, one to get the actual data). Doesn't the API return an authentication error if you try to call it without being authorised? The workflow would probably run a fair bit more quickly if it only needs to make one request per API call.

 

It will return 'ok': False, but I use some of the info from the auth check in the results I'm returning. The normal calls don't return data like the team name, so I have to make that call anyway.

 

 

It's kinda bad form to have your functions doing two unrelated things (i.e. retrieving data and sending output, but only sometimes).

 

I had a feeling this might look bad. I was splitting them with the intention of eventually trying to turn some of it into a wrapper to learn OOP a bit.

 

 

You could also speed things up a fair bit by making the API call for each key in parallel in threads (if the API permits that).

 

I realize iterating through a loop for each key probably isn't the best idea, and in some cases speed would be a huge help. I'll have to look into how to do threading properly.

Edited by frankspin
Link to comment

With regard to the GET parameters, workflow.web isn't very smart. The best thing to do is pass the GET parameters as a dictionary in the params arg, using a bare URL: web.get('https://slack.com/api/search.messages', params=dict(token=api_key, query=query, ...)).

requests is much smarter in this regard: you can pass GET parameters in the URL and/or via params. In either case, I'd recommend passing query via params as then both requests and workflow.web will ensure it's properly URL-encoded.
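For example (dummy token and query, just to show the escaping), both libraries URL-encode values passed via params:

import requests

from workflow import web

params = {'token': 'xoxp-1234', 'query': 'beer & pizza'}  # dummy values

# Both URL-encode the values, e.g. the '&' in the query becomes %26,
# so it isn't mistaken for a parameter separator
r1 = web.get('https://slack.com/api/search.messages', params=params)
r2 = requests.get('https://slack.com/api/search.messages', params=params)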

Bummer you need to make two requests per API call :(

Regarding threading: that can be a very tricky topic. In this case, you might be able to whip up something fairly simple using the Pool class from multiprocessing.dummy and its apply_async() or map_async() methods:
 

from multiprocessing.dummy import Pool

from workflow import web

pool = Pool(5)  # allow 5 simultaneous connections
results = {}

# `urls` is your list of (url, params) tuples
for url, params in urls:
    future = pool.apply_async(web.get, (url, params))
    results[url] = future

pool.close()  # accept no more jobs
pool.join()   # wait for queued jobs to finish and threads to exit

# Populate `results` from the return value of the futures
for url in results:
    future = results[url]
    results[url] = future.get()

# Do what you want with results here...

With regard to OOP, it doesn't make any difference. I didn't explain the issue very well. It's not so much that the function/class/whatever combines speaking with Alfred and speaking with an API, it's that the responsibilities aren't clearly separated and the code isn't properly layered.

If there's an error, your functions bypass main() and talk directly to Alfred, which is otherwise the job of main(). That's a recipe for confusion. To be clear, it's not a huge deal for your functions to add an error message to the output with wf.add_item(), but they shouldn't be calling wf.send_feedback(). Effectively, that terminates the workflow and the lifetime of the workflow is main()'s responsibility.

To give an analogy: you're the head of procurement (main()) and it's your job to collate all the internal orders and place the monthly order with your supplier (Alfred). You have several members of staff each responsible for different departments' orders. Normally, when they've collated their individual departments' orders, they bring them to you and you consolidate them into a single monthly order to the supplier.

One of your team members (search_slack()) has an annoying habit of drafting his departmental order, but instead of giving it to you, he sometimes contacts the supplier himself and places the monthly order, but only for his stuff. And he doesn't even tell you when he's done this, so everyone else carries on preparing their orders without knowing it's too late to place them and they're wasting their time.

Although that might be the right thing to do sometimes, he isn't in a position to know whether that's the case, and he's clearly overstepping his authority.

In the script, if one of the API keys is invalid, search_slack() reports this, but also silently closes the reporting channel, so all the work done with other, valid API keys is silently thrown in the bin. Equally, any other keys that are invalid will also have their error reports thrown in the bin.

Edited by deanishe
Link to comment
  • 1 month later...

Just released a new version of Alfred-Workflow with a very important change.

Thanks to Owen Min, Alfred-Workflow will now prevent Alfred from killing your Script Filter in the middle of writing to a file. Previously, the workflow could be left in an invalid state (e.g. an empty or invalid settings file).

web.py is now easier to use: it will properly combine any GET parameters specified in the function call with any already in the URL.

Also added a rudimentary Dash docset to the repo.

Edited by deanishe
Link to comment
  • 8 months later...

BTW, I made a bugfix release, v1.17.2. This turns off STDERR logging, as large amounts of output (~100 KB) cause a deadlock and the workflow will no longer run.

 

If you have workflows that are hanging, upgrade to this version of Alfred-Workflow. Hopefully, the issue will be resolved in the next Alfred release.

Link to comment
  • 2 months later...

Fairly large update today with v1.18, adding explicit support for Alfred 3.

The new features are:

Workflow variables (Alfred 3 only): set workflow variables via Script Filter feedback.

Advanced modifiers (Alfred 3 only): modifiers can now override arg and valid, in addition to subtitle, and also support workflow variables.

Alfred 3-specific updates: added support for .alfred3workflow files in GitHub releases. These files are ignored under Alfred 2 and given priority under Alfred 3. This allows you to support Alfred 2 and Alfred 3 versions of a workflow from the same GitHub repo.

Link to comment

Although I think it's usually best if options are accessed through workflow feedback and keyboard controls, would it be possible to add support for notifications with response buttons to Notify.app? For some options you might want to deliberately slow down a choice if the script/workflow is about to do something that will consume a lot of resources. In the Apple developer documentation examples (see (void)showNotificationAlert), it doesn't look like it's that hard to create notifications with buttons.

Link to comment
  • 2 weeks later...
  • 2 months later...
  • 1 month later...

Released v1.25 with an extremely important bugfix for Sierra. Please see the notice at the top of the OP for details.

 

I've also added a new feature: session IDs and session-scoped caching. A session ID is valid as long as the user is using your workflow. If they switch to a different workflow or close Alfred, the session ID is reset.

 

The Workflow3.cache_data() and Workflow3.cached_data() methods have a new session argument. If session is set to True, the cache will magically [1] expire when the user stops using your workflow or Alfred.

 

It's awesome for data like a list of tabs/windows for Application X. They're slow to fetch, but you don't know how long you can cache them for, as they're liable to change shortly after your workflow runs (or as a direct result thereof).
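A quick sketch of what that looks like (get_tabs and the cache name are made up for illustration):

from workflow import Workflow3

wf = Workflow3()


def get_tabs():
    """Fetch the slow-to-retrieve list of tabs/windows."""
    return []  # stand-in for the real, expensive call


# Cached for the current session only: regenerated once the user
# switches to another workflow or closes Alfred
tabs = wf.cached_data('app-tabs', get_tabs, session=True)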

 

  1. It's currently not very magical at all. It works by prefixing the cache filename with the session ID, so it fills your cache with lots of files. There is a Workflow3.clear_session_cache() method, but it's currently very dumb and deletes all session-scoped data. I'll add some smarter cleanup code once I figure out the best place to put it (hopefully so you won't have to run it manually).
Edited by deanishe
heil grammar
Link to comment
