
deanishe

Member
  • Posts

    8,759
  • Joined

  • Last visited

  • Days Won

    522

Everything posted by deanishe

  1. The get-pip.py file needs to be extracted/installed and a simple wrapper script (the pip executable) created (or maybe not—I think you can import pip and use it that way). Probably the cleanest solution is to point to a pip-installer wrapper instead of get-pip.py itself. I'll look into it at the weekend if Stephen doesn't beat me to it.
  2. Pfft. Just a different flavour of the same crap.
  3. I'm not sure it's a great idea to leave it up to the author. That makes the framework something else to worry about as much as a useful tool. Best to specify that errors may occur and what the bundler will do in that case (throw a PHP error they should try to catch, populate an optional $error variable). What I did in my Python framework (pinched the idea from a Ruby one) is to provide a wrapper function to call the workflow code. The wrapper catches any errors and displays an error in Alfred. The bundler could similarly "hijack" execution, display an error and exit.
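That wrapper pattern might be sketched in Python roughly like this (a minimal illustration, not the actual framework code: the function name and the single-item feedback XML here are assumptions):

```python
import sys
import traceback


def run_workflow(func):
    """Call ``func``; if it raises, show the error as an Alfred result.

    On error, emit a single (invalid) Alfred feedback item so the user
    sees what went wrong, log the traceback, and return a non-zero
    exit code. Otherwise return whatever ``func`` returns.
    """
    try:
        return func()
    except Exception as err:
        # "Hijack" execution: display the error in Alfred and bail out
        print('<?xml version="1.0"?><items>'
              '<item valid="no"><title>Error: {}</title></item>'
              '</items>'.format(err))
        traceback.print_exc(file=sys.stderr)
        return 1
```

The bundler could do the same: wrap the workflow's entry point, catch anything thrown, and render it as a result item instead of letting the script die silently.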
  4. I guess that answers my question. Could use exit codes to do that in the bash version. -1 = unspecified error, -2 = network error, -3 = user (i.e. input) error. Languages wrapping that could turn the exit codes into proper errors/exceptions. What's the standard PHP way to catch/handle errors?
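A rough sketch of how a Python wrapper might turn those exit codes into exceptions (the codes and exception names are assumptions; note that shell exit statuses are unsigned 0–255, so plain 1/2/3 are used here rather than negative values):

```python
class BundlerError(Exception):
    """Unspecified bundler failure."""


class NetworkError(BundlerError):
    """The bundler couldn't reach the network."""


class UserError(BundlerError):
    """Bad input from the user/author."""


# Hypothetical mapping from the bash bundler's exit codes
EXIT_ERRORS = {1: BundlerError, 2: NetworkError, 3: UserError}


def check_exit(returncode):
    """Raise the exception matching a non-zero bundler exit code."""
    if returncode == 0:
        return
    raise EXIT_ERRORS.get(returncode, BundlerError)(
        'bundler exited with code {}'.format(returncode))
```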
  5. What happens if there's no 'net connection or the server isn't responding? I know. Just joshing.
  6. If it isn't indexed, you'll have to write your own workflow that displays the results of those find calls. A very basic example (using my Alfred-Workflow library) would look like this:

```python
from __future__ import print_function, unicode_literals

import sys
import hashlib
import subprocess
import argparse

from workflow import Workflow, ICON_WARNING

CACHE_RESULTS_FOR = 10  # seconds

log = None
decode = None


def find(dirpath, query):
    cmd = ['find', dirpath, '-maxdepth', '1',
           '-name', '*{}*'.format(query)]
    output = decode(subprocess.check_output(cmd))
    return [l.strip() for l in output.split('\n') if l.strip()]


def main(wf):
    parser = argparse.ArgumentParser()
    parser.add_argument('-f', '--folder', help="Search in this folder")
    parser.add_argument('query', help="Search for files with this name")
    args = parser.parse_args(wf.args)

    dirpath = args.folder
    query = args.query

    def wrapper():
        return find(dirpath, query)

    key = '{}::{}'.format(dirpath, query).encode('utf-8')
    m = hashlib.md5()
    m.update(key)
    key = m.hexdigest()

    results = wf.cached_data(key, wrapper, max_age=CACHE_RESULTS_FOR)

    if not results:
        wf.add_item('No results found', valid=False, icon=ICON_WARNING)

    for path in results:
        wf.add_item(path, valid=True, arg=path, icon=path,
                    icontype='fileicon')

    wf.send_feedback()


if __name__ == '__main__':
    wf = Workflow()
    log = wf.logger
    decode = wf.decode
    sys.exit(wf.run(main))
```

Here's a full workflow to get you started. I've created one Script Filter that searches ~/Downloads. You can change that path or duplicate the Script Filter to add other directories.
  7. Yup. Of course, we'd want to write the Python wrapper around it first… I don't think any bash code should be necessary. If it is, I'm out! I hate bash
  8. You're right. It first installs itself, but in the directory you specify, not the system Python. IMO, the pip install should be taken care of by the bundler library, as with other utilities. Of course, then your cool script will no longer run without bundler.
  9. Also, I'd change a few of the variable names to make their purpose clearer, e.g. target_path -> start_dir, and provide a clearer error when you raise BundlerError. Something like "Couldn't find Bundler directory. Is it installed? Get it from http://..."
  10. Again, I haven't run the code yet, but a couple of things stand out. Why are you creating empty files here and there? Directories I understand, but what's the purpose of creating an empty get_pip.py file? If something goes wrong between touching that file and saving pip into it, the bundler will believe that pip is installed when it isn't. Is it necessary to extract pip's embedded zip? Can't you just run the downloaded get-pip.py with subprocess?

```python
subprocess.call(['/usr/bin/python', '/path/to/get-pip.py',
                 '--target', '/path/to/lib/dir',
                 '-r', '/path/to/requirements.txt'])
```

Also, you're saving the hash and modtime of requirements.txt before it's been installed. That's bad because if the install fails (e.g. the computer or PyPI is offline), on the next run the bundler will think everything has been successfully installed due to the presence of the info.txt file. (Shouldn't that be called info.json?) You need to check whether pip ran successfully and only then save the metadata. What to do if pip fails is another thorny question. Delete everything and start again or just try to run it again?
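A minimal sketch of "only save the metadata if pip succeeded" (the '/path/to/get-pip.py' placeholder is from the post above; the JSON metadata layout with hash + mtime is an assumption):

```python
import hashlib
import json
import os
import subprocess


def install_requirements(requirements, lib_dir, metadata_path):
    """Run pip via get-pip.py; persist metadata only on success.

    If pip fails, no metadata file is written, so the next run will
    retry the install instead of wrongly assuming everything is fine.
    """
    returncode = subprocess.call(
        ['/usr/bin/python', '/path/to/get-pip.py',
         '--target', lib_dir, '-r', requirements])
    if returncode != 0:
        raise RuntimeError(
            'pip failed with exit code {}'.format(returncode))
    # Install succeeded: now it's safe to record hash + modtime
    with open(requirements, 'rb') as fp:
        digest = hashlib.md5(fp.read()).hexdigest()
    with open(metadata_path, 'w') as fp:
        json.dump({'hash': digest,
                   'mtime': os.path.getmtime(requirements)}, fp)
```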
  11. The workflow's no help to me, as I use MailMate, but I can't express how happy I am to see a workflow by you. "Dogfooding" is by far the best way to create great products. I'd love to see lots more workflows from you and Andrew, and if I could only have one "feature request" ever, it would be to have some workflows written by you guys.
  12. Sure, that would work from bash scripts (no help if you use zsh, for example), but that's still assuming that folks have set http_proxy in their bash environment, which I'd say is probably rather unusual. Proxies are normally set via OS X's Network preference pane, and that does not propagate the setting to the shell (unlike Linux). It's also profile-specific (so you can use a proxy at work, but not at home, for example). I've looked into it (briefly), and trying to figure out the active network profile and its proxy server is a real PITA. This really is something Alfred needs to do. It applies to every workflow, and is much easier for Alfred to figure out via Cocoa than for us script-bound plebs. I've just added it as a feature request.
  13. As things stand, workflows that require web access are basically just going to break on a machine that can only access the web via a proxy server. Alfred should set the http_proxy environmental variable before calling any workflow. Currently, it is not only necessary for every workflow to be "proxy aware" if it wants to work on all machines, but it is also extremely difficult to determine the proxy server from a script, as it depends on the current network profile and Apple's command line utilities don't provide a straightforward way to get the proxy address. By setting the http_proxy environmental variable before running a workflow's script, Alfred could make using the proxy entirely transparent to Ruby/Python workflow authors (both languages' standard network libraries will automatically use the proxy specified in http_proxy, as will curl), and much easier to use for workflows written in other languages (they can grab the environmental variable rather than trying to coax the same information out of OS X themselves). Essentially, workflow authors shouldn't need to have to worry about network settings, as far as that can be avoided, not least of all because it represents something that currently needs to be implemented separately in every single workflow that needs web access.
  14. Haven't tried running it yet (it's late, I'm drunk), but the code looks well-written and on point. I had a hard time following exactly what you're trying to do, however (I am drunk). Good work on the comments, but it's a good idea to also document the intent of the code. For example, what is the purpose of specifying the local target to get_requirements vs the alternative of looking under BUNDLER_PATH for requirements.txt? Why is requirements.txt in the bundler's directory? I've figured this out now, having pored over the code, but it'd be great if your comments specified why, not just what.

You shouldn't be copying requirements.txt to the bundler directory and comparing its age to that of the one in the workflow directory to determine changes: I might have rewritten/reinstalled the workflow without changing the contents of requirements.txt, but the mod time will have changed. You need to generate the MD5 hash of requirements.txt and store that in the bundler directory to determine whether it's changed. Best of all, store the hash and the modtime, and use a changed modtime as a trigger to perform a hash comparison to determine whether the contents of requirements.txt have changed (see below for code).

WRT pip, rather than messing about looking for the pip module and installing it from a ZIP file, you should probably just install the get-pip.py script, which is a self-contained version that works exactly the same as an installed pip. We should include get-pip.py as a general bundler utility (like CocoaDialog etc.), as there's no point duplicating it in every workflow.

Massive kudos on considering the presence of a proxy server, but it is, alas, to no avail and should be deleted. pip will automatically use the proxy server specified in the http_proxy environmental variable in any case (it's a standard feature of the Python libraries it uses). The --proxy argument is essentially a way to override http_proxy or specify one if http_proxy isn't set. Unfortunately, it is almost never set in Alfred's calling environment: it doesn't use your shell environment, but that of launchd, which doesn't include all the cool stuff you've set in your ~/.bashrc, ~/.profile etc., so http_proxy will basically never be set except for the rare user that fiddles with launchd's environment. This is something I've been considering reporting as a feature request/bug with Alfred (it's extremely difficult to determine the proxy settings from a script, as it's dependent on the current network profile, but Cocoa apps can do it easily. Alfred should really set http_proxy before it calls scripts.)

I'll try to have a proper look at the code over the weekend: I'm very busy until then.

----------------------------------------

To hash a file, do:

```python
from hashlib import md5

h = md5()
with open('requirements.txt', 'rb') as file:
    h.update(file.read())
hash = h.hexdigest()  # store this value
```
  15. Sublime's a funny old beast, not being a native app. It does some odd things on Mac sometimes. I'd go with writing a Sublime plugin to call Alfred instead if that's your editor of choice. Real men use MacVim of course
  16. Python dependency management is great! Just use pip. Sorted. pip does all the checking for you: if the package is installed in the appropriate version, it does nothing. So calling pip only installs what's missing/needs updating. You only need to call it once, too, with the requirements.txt file, not for every package. The question, as I see it, is what to do when requirements.txt has changed. We can either delete all the packages before running pip, thus ensuring a clean library, or just run pip on the changed requirements.txt, which would be much faster (fewer downloads), but may end up leaving a lot of old cruft in the workflow's library directory.
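The "delete first, then one pip run" option might look something like this (a sketch: `reinstall_clean` is a made-up helper, and it assumes a `pip` executable on the PATH; the flags are standard pip options):

```python
import os
import shutil
import subprocess


def reinstall_clean(requirements, lib_dir):
    """Wipe the library directory, then rebuild it with one pip call.

    Slower than an incremental install (everything is re-downloaded),
    but cruft from removed dependencies never accumulates.
    """
    if os.path.exists(lib_dir):
        shutil.rmtree(lib_dir)
    os.makedirs(lib_dir)
    return subprocess.call(
        ['pip', 'install', '--target', lib_dir, '-r', requirements])
```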
  17. No pun intended… I think the PHP interpreter is just super slow to load. I suspect it may be because it's so monolithic. With Ruby/Python, most of the functionality is in external (standard) libraries that have to be explicitly imported. I don't think it makes much difference if it's already in memory, but from a cold start… FWIW, I've spent the last few days messing around with bash/zsh. Slowly getting the hang of it, but man is it weird. Took me a couple of hours to figure this out At any rate, shell is way slower than PHP/Python/Ruby for any heavy lifting, but it starts super fast.
  18. Your controller.php isn't working because you're including an AppleScript. include is only for PHP code. You don't need to include the AppleScript, just run it and grab the output. Which IDE/editor are you using? Unless it's a native Cocoa editor, you aren't going to have much joy getting the filepath with AppleScript. It might be a better idea to write a plugin for the editor that calls Alfred with the filepath rather than the other way around (Dash works this way, for example). You can determine the project root easily enough by climbing up the FS tree till you find .git or htdocs. If you take the editor plugin route, it may be possible to ask the editor for the project root, too, and pass that to Alfred as well. Here's a PHP function to get the relative path from one file to another.
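Climbing the filesystem tree could be sketched in Python like this (the helper name is made up; the `.git`/`htdocs` markers are the ones mentioned above):

```python
import os


def find_project_root(path, markers=('.git', 'htdocs')):
    """Walk up from ``path`` until a directory containing one of
    ``markers`` is found; return None if we hit the filesystem root."""
    path = os.path.abspath(path)
    while True:
        if any(os.path.exists(os.path.join(path, m)) for m in markers):
            return path
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root
            return None
        path = parent
```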
  19. The package would be called bundler, the PyPI name—if there is one—would be alfred-bundler, i.e. pip install alfred-bundler but import bundler. @Stephen: you're overthinking things. Our options are very limited, as workflows need to execute fast. When a script calls bundler.init() (or whatever we decide to call the entry point), it can parse info.plist for the bundle ID, check for a requirements.txt, hash it, and compare the hash to the one stored in the workflow's bundler directory (probably, we'd first compare the modtime of the files, and only hash it if that had changed). If there's no cached hash, or it has changed, pip gets (installed and) called with requirements.txt or the string passed to init(). We can't try to intercept import calls or implement an alternative interface because it's very complicated with Python, and as noted, there is no guaranteed correspondence between the name of the library you import and the name of the package that provides it. Installations would be to ~/Application Support/Alfred 2/Workflow Data/alfred-bundler/bundle-id-of-package. (It's so simple, there's little reason to create a new install dir if/when bundler is updated.) It definitely makes sense to look for/expect requirements.txt to be in the same directory as info.plist, i.e. the workflow root.
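The modtime-then-hash check described above could be sketched like so (the JSON cache layout is an assumption; `requirements_changed` is a hypothetical helper):

```python
import hashlib
import json
import os


def requirements_changed(req_path, cache_path):
    """Cheap modtime check first; only hash when the modtime differs.

    Returns True if the contents of requirements.txt have actually
    changed since the cached install (or there is no cache yet).
    """
    mtime = os.path.getmtime(req_path)
    if os.path.exists(cache_path):
        with open(cache_path) as fp:
            cached = json.load(fp)
        if cached.get('mtime') == mtime:
            return False  # unchanged; didn't even need to hash
        with open(req_path, 'rb') as fp:
            if hashlib.md5(fp.read()).hexdigest() == cached.get('hash'):
                # Reinstalled workflow: new mtime, same contents.
                # Refresh the cached mtime so the next check is cheap.
                cached['mtime'] = mtime
                with open(cache_path, 'w') as out:
                    json.dump(cached, out)
                return False
    return True
```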
  20. I'm not entirely sure what you're referring to there. That would work for utilities, but is essentially impossible for importable libraries on account of the way Python imports/package installations work. WRT utilities, would it be better to re-implement it in Python or should there just be a simple wrapper around the bash version?
  21. Nobody is, AFAIK, but just to be clear. What do you mean by "robust"? requirements.txt is a standard pip file. The syntax is fixed. If you develop your workflow using a virtual env, pip can generate requirements.txt for you. "Hashing" means generating a short but unique "hash" from some data. The same data always produces the same hash, but you can't recreate the data from the hash. It's a way of checking if data is correct and/or has been changed. It's trivially easy to do. By "deep six", I just mean delete. Packages that use C code include the source code and are compiled on installation (whether by pip or using setup.py). A package that's bundled with a workflow was installed (and thus compiled) by the workflow author. That's how I think the dependency installer could work, yes. I presume there'd also be an API for using utilities.
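To make the hashing point concrete, here's a toy demonstration (not bundler code): the same input always produces the same digest, and any change produces a different one.

```python
from hashlib import md5

# Identical data -> identical hash; changed data -> different hash.
a = md5(b'requests==2.0\n').hexdigest()
b = md5(b'requests==2.0\n').hexdigest()
c = md5(b'requests==2.1\n').hexdigest()

assert a == b   # same contents, same hash
assert a != c   # one character changed, completely different hash
```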
  22. Addendum: bundler would treat pip as a utility to be installed in the same way as CocoaDialog etc. Including it with the bundler library would largely defeat the purpose of the whole exercise (get-pip.py is ~1.5MB).
  23. I've had a wee think about this myself. I'm taking the usage of pip and a single, per-workflow directory in which all dependencies can be installed with pip install --target=/path/to/dir as a given. The bundler would add that directory to sys.path. I still see a few potential problems.

Obviously, you can't be calling pip every time the workflow is run: far too slow. So you need to do something else. My current preference would be to require the author to create a requirements.txt file. bundler can hash this and store the hash (see below). If the hash is different to the stored one (or there is no stored one), it calls pip on requirements.txt. The workflow code might look something like this:

```python
import bundler; bundler.init()

import this
import antigravity
…
```

(BTW, you should try both those imports if you haven't already.)

bundler.init() would automatically look for a requirements.txt file. (You could possibly skip the init() call and run the code at the module level, but that's frowned upon.) Alternately, a string argument could be passed that can be passed straight to pip, e.g. bundler.init('requests>=1 pytz') (which I don't think is always a wise option—see below). It would throw up a dialog via AppleScript to ask for the user's permission/inform them what's going on. If pip succeeds, we store the hash of requirements.txt (in the workflow's designated bundler library directory) so we can not only tell that the packages have been installed, but also if requirements.txt has been altered, i.e. the workflow has been updated.

I don't particularly like the alternative of passing a string to bundler.init(), as that would mean duplicating the call in every script that's an entry point to the workflow. Duplication is bad: it leads to errors. Still, we could offer it as an option (with the necessary caveat utilitor) for simpler workflows. If requirements.txt has changed, we deep-six the existing library dir and run pip again, notifying the user etc.
Running pip in update mode is possible, but the directory might end up accumulating a lot of cruft. Best to keep everything clean. I see one final potential problem compared to authors' bundling packages with workflows: Some packages include C/C++ code, which needs to be compiled, and thus can't be installed if Xcode/the Xcode command-line tools aren't installed. Asking users to install either of those is a bit much, imo, so we'd have to wash our hands of such packages and tell developers they need to include 'em directly in their workflows if they really, truly have to use them, 'cos we ain't gonna field support requests for that kind of crap. I've seen a couple of workflows trying to install/bundle lxml, and that's a huge can of worms veiled in tears. I think this model could allow us to create a fairly streamlined and sane Python bundler. Whaddya reckon?
  24. See this workflow. It works with most common email programs.
  25. Yeah. I figured it would be a fairly elegant way to add a lot of functionality without messing with Alfred's workflow model too much. It'd also open the way for workflows to grab the results of standard Alfred searches or of other workflows and filter/process them. ctwise can have his globbing patterns, I can have my blacklists, etc.