Everything posted by deanishe

  1. I'd say it's a bit too beta to use in production yet. The Python version is getting fairly large now, and a lot of the code might be better off pushed into the main bundler where it can be updated. The pip updater, for example, is epically hacky, as pip seems incapable of updating itself when using --target. Still, always good to have beta testers, and rather you than me.
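     A rough sketch of the kind of workaround involved: blow away the old copy and bootstrap a fresh pip rather than asking it to upgrade itself in place. All paths are illustrative, it assumes get-pip.py has already been downloaded, and the actual updater may differ.

        import os
        import shutil
        import subprocess
        import sys

        # Hypothetical layout; the real bundler keeps pip under its own assets directory.
        HELPERS_DIR = '/path/to/assets/python/python-helpers'
        GET_PIP = os.path.join(HELPERS_DIR, 'get-pip.py')  # assumed to be downloaded already


        def reinstall_pip():
            """Replace the bundled pip instead of asking it to upgrade itself.

            pip struggles to upgrade itself into a --target directory, so delete
            the old copy and install a fresh one with get-pip.py.
            """
            pip_dir = os.path.join(HELPERS_DIR, 'pip')
            if os.path.exists(pip_dir):
                shutil.rmtree(pip_dir)
            subprocess.check_call([sys.executable, GET_PIP, '--target', HELPERS_DIR])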
  2. I think I got update.sh fixed now, and the Python bundler seems to be working okay. My work here is done
  3. I've changed the Python bundler to take care of installing/updating Pip itself and to run update.sh every week. Still trying to fix the errors in update.sh.
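     A minimal sketch of the weekly check, assuming a timestamp file is used to record the last run (paths are illustrative, not the bundler's actual layout):

        import os
        import subprocess
        import time

        UPDATE_INTERVAL = 7 * 24 * 60 * 60  # one week, in seconds
        # Illustrative paths; the real bundler keeps these in its own data directory.
        STAMP_FILE = os.path.expanduser('~/.alfred-bundler-last-update')
        UPDATE_SCRIPT = '/path/to/alfred-bundler/meta/update.sh'


        def update_if_stale():
            """Run update.sh at most once a week, based on the timestamp file."""
            try:
                last_run = os.path.getmtime(STAMP_FILE)
            except OSError:
                last_run = 0
            if time.time() - last_run > UPDATE_INTERVAL:
                subprocess.call(['/bin/bash', UPDATE_SCRIPT])
                with open(STAMP_FILE, 'w') as fp:
                    fp.write(str(time.time()))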
  4. The "proper" way to do it would probably be to change the workflowdir() method to start looking in the current working directory instead of its own location. I wrote it the way it is so that the code would still work if called from outside the workflow directory (in Terminal, basically). There may be a way to figure out the directory of the calling script—that would be optimal. If you just pass in a path to the plist and have the library installed outside of your workflow's directory, workflowdir() and dependent methods will be broken.

     It isn't. Python works this way because it has implicit namespaces (so you don't have to do silly stuff like start your functions with __ or some other prefix to—hopefully—avoid name conflicts). It's really not a problem once you understand how it works (i.e. sys.path). Also, as discussed elsewhere, it allows Python to bundle a huge number of libraries by default without having to load them all every time. Admittedly, it would be great if Python had some concept of versioned libraries.

     Yeah. This is kinda frowned upon. Forking is the "proper" way to do it, and also rather excessive in this case (IMO).

     Well, what I wanted to store there was most definitely cache data, but it's no biggie.

     Oh, I won't be doing it any time soon. I'm just wondering if I should bundle the bundler or recommend the bundler as a way to install the workflow library. Probably the former, tbh.

     I'm using TN (seeing as it's already there). I think setting a module-level flag is the simplest solution (though I don't know why a developer would want to turn it off).
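     A minimal sketch of the workflowdir() change floated above, assuming the script is run with the workflow root (or somewhere inside it) as the working directory, which is what Alfred does:

        import os


        def workflowdir():
            """Return the workflow's root directory.

            Start from the current working directory and walk upwards until a
            directory containing info.plist is found.
            """
            path = os.getcwd()
            while True:
                if os.path.exists(os.path.join(path, 'info.plist')):
                    return path
                parent = os.path.dirname(path)
                if parent == path:  # hit the filesystem root without finding it
                    raise EnvironmentError('info.plist not found')
                path = parent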
  5. Also, I added code to the bundler to notify the user when Python dependencies are being installed. It takes a good few seconds in the best case, so I figured it makes more sense than dumb silence and an unresponsive workflow. Should I leave that up to the workflow developer instead, or set a flag so they can disable it?
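     A sketch of what such a flag might look like. The flag name is illustrative, and it assumes terminal-notifier is already available (e.g. installed via the bundler):

        import subprocess

        # Illustrative module-level switch a workflow author could flip off.
        notify_on_install = True

        # Assumed location; in practice the bundler would supply the path to terminal-notifier.
        TERMINAL_NOTIFIER = '/path/to/terminal-notifier.app/Contents/MacOS/terminal-notifier'


        def notify(title, message):
            """Tell the user something is being installed, unless the author opted out."""
            if not notify_on_install:
                return
            subprocess.call([TERMINAL_NOTIFIER, '-title', title, '-message', message])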
  6. Makes sense design-wise, but I don't see much point in creating the pip executable: it's more dangerous than anything. If it isn't called correctly, it will install stuff in the system Python, which is bad. There isn't really such a thing as a Python asset in the bundler sense: they don't work the same way as PHP/bash assets. At any rate, I've given the Python bundler its own "bundle ID" and assets/python subdirectory, and I will stick pip in there instead.

     Yeah, I would do the same. And similarly, I would only call update.sh if other stuff is being installed: running stuff in the background isn't quite so straightforward in Python (the main script won't exit till all subprocesses are done), and it seems prudent to only run it when a network connection is required in any case.

     No, the cache directory isn't deleted at reboot. It's just intended for data that can be deleted without messing things up.

     What is "Alfred Python" (there are several libraries of that name), which "initializer", and why would you want to pass a plist to it?

     WRT the Python bundler, you should be able to install any package on PyPI and any package on GitHub that has a setup.py file. I'm probably going to add my own workflow library to PyPI soon. I haven't decided yet whether I should include the bundler in the workflow library or recommend installing the library via the bundler… I suppose I could do both.
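     For reference, a rough sketch of one way to fully detach a child process so the workflow script can exit immediately (no error handling; this is only a sketch, not how the bundler actually does it):

        import os
        import subprocess


        def run_detached(cmd):
            """Launch cmd in its own session with its pipes pointed at /dev/null,
            so nothing waits on the child's output and the parent can exit."""
            subprocess.Popen(
                cmd,
                stdin=open(os.devnull, 'r'),
                stdout=open(os.devnull, 'w'),
                stderr=subprocess.STDOUT,
                preexec_fn=os.setsid,  # detach from the parent's process group
            )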
  7. Err … I could, but the bundler just made it so easy. But seriously, it shouldn't be a problem if you think that's better.

     Thinking about it, I could try to update pip every time a workflow wants to install something. That would hide the performance hit.

     I haven't really given any thought to a proper uninstaller: I just wanted a quick something to help me test better.

     One more thing that had me scratching my head for a bit: the bundler deletes its cache directory. Could it be modified to only delete the temporary files instead (I wanted to keep some stuff in there)?

     Have you looked at the Python bundler? It's mostly a rip of your code. The only "clever" part is the newly-added function-call caching, which I largely ripped from the wiki.
  8. I wasn't thinking of a utility for the user, more something to make testing/development easier. Something significantly smarter would be required for users, and with some kind of UI, to boot. So you're planning an accompanying workflow to manage the bundler? WRT Python, uninstallation is a piece of cake: run through the workflows directory and grab all the bundle IDs, then delete any subdirectories of assets/python that aren't in the list of bundle IDs.
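     A sketch of that clean-up, assuming the standard Alfred 2 workflows directory and a hypothetical location for the bundler's Python assets:

        import os
        import plistlib
        import shutil

        # Assumed locations; adjust for where Alfred and the bundler actually live.
        WORKFLOWS_DIR = os.path.expanduser(
            '~/Library/Application Support/Alfred 2/Alfred.alfredpreferences/workflows')
        PYTHON_ASSETS = os.path.expanduser(
            '~/Library/Application Support/alfred-bundler/assets/python')


        def remove_orphaned_libraries():
            """Delete per-workflow library directories whose workflow no longer exists.

            A real version would also skip any directories the bundler itself owns
            (e.g. the pip helpers).
            """
            bundle_ids = set()
            for name in os.listdir(WORKFLOWS_DIR):
                plist = os.path.join(WORKFLOWS_DIR, name, 'info.plist')
                if os.path.exists(plist):
                    bundle_ids.add(plistlib.readPlist(plist).get('bundleid', ''))
            for name in os.listdir(PYTHON_ASSETS):
                if name not in bundle_ids:
                    shutil.rmtree(os.path.join(PYTHON_ASSETS, name))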
  9. Obviously, the PHP calls are irrelevant compared to the time it takes to install stuff. I still have Gatekeeper on, but it seems to go quiet after you've said yes once. Do you know how to reset it? Why does registry.php need to run every time? Can't it create a cache file that can be grepped, and only run if the entry doesn't exist? I've noticed that the first run can take a very long time if neither the bundler nor dependencies are installed. Is there any way we can notify the user sooner? At the least, shouldn't we notify the user whenever something is being installed?
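     The caching idea in sketch form: keep a small cache file and only shell out to registry.php when an entry is missing. The file location and the script's arguments are guesses, purely for illustration:

        import os
        import subprocess

        # Illustrative paths; the real registry script and its arguments may differ.
        CACHE_FILE = os.path.expanduser('~/.alfred-bundler-registry.cache')
        REGISTRY_PHP = '/path/to/alfred-bundler/meta/registry.php'


        def register(bundle_id, asset_name):
            """Skip the call to registry.php if this entry is already in the cache file."""
            entry = '{0} {1}\n'.format(bundle_id, asset_name)
            if os.path.exists(CACHE_FILE):
                with open(CACHE_FILE) as fp:
                    if entry in fp:
                        return
            subprocess.call(['/usr/bin/php', REGISTRY_PHP, bundle_id, asset_name])
            with open(CACHE_FILE, 'a') as fp:
                fp.write(entry)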
  10. So, I added caching of the calls to bundler.sh to bundler.py:

        Calling `bundler.init()`: 10 calls in 0.0449 s (0.0045 s/call)
        Calling `bundler.utility("cocoaDialog")`: 10 calls in 0.0002 s (0.0000 s/call)

     I'd say the performance problem's solved for the Python version. I also added an uninstaller script. Hope that's okay.
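     Roughly the shape of that caching: store resolved utility paths in a small JSON file and only shell out to bundler.sh on a cache miss. The paths and the bundler.sh arguments shown here are assumptions, not the real interface:

        import json
        import os
        import subprocess

        # Illustrative cache location; bundler.py keeps its own.
        CACHE_FILE = os.path.expanduser('~/.alfred-bundler-paths.json')
        BUNDLER_SH = '/path/to/alfred-bundler/bundler.sh'


        def utility(name, version='default'):
            """Return the path to a utility, caching the expensive bundler.sh call."""
            cache = {}
            if os.path.exists(CACHE_FILE):
                with open(CACHE_FILE) as fp:
                    cache = json.load(fp)
            key = '{0}-{1}'.format(name, version)
            path = cache.get(key)
            if path and os.path.exists(path):
                return path
            # Argument order is a guess; the real script's interface may differ.
            path = subprocess.check_output(
                ['/bin/bash', BUNDLER_SH, 'utility', name, version]).strip().decode('utf-8')
            cache[key] = path
            with open(CACHE_FILE, 'w') as fp:
                json.dump(cache, fp)
            return path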
  11. You only need to select Dollars with bash (by default it expands $1, $2 etc., plus a few others, inside double quotes). For Python code, assuming you're using var = "{query}" and not var = '''{query}''' or some such, escape Backslashes and Double Quotes. Leave everything else unselected.
  12. In Python:

        from time import sleep
        sleep(2)  # seconds

     In bash:

        sleep 2  # seconds

     To determine if the host is already online, you can try pinging it. Bash:

        ping -o -t 1 192.168.0.1
        if [[ $? -gt 0 ]]; then
            echo "Offline"
        else
            echo "Online"
        fi

     -o means exit after receiving one response, -t 1 means 1 second timeout. Python:

        import subprocess
        retcode = subprocess.call(['ping', '-o', '-t', '1', '192.168.0.1'])
        if retcode > 0:
            print('Offline')
        else:
            print('Online')

     How's that for you?
  13. Nice improvement. Gatekeeper's still a problem, however. See below. I just pushed some changes to the bundler.py inline documentation, but my commit messages are all messed up. Sorry.

     Pip is normally installed as a runnable program, but it doesn't have to be. The way most Python utilities are installed is that you specify a script name and a function in your library, and then the installer creates a simple wrapper that calls that function when run. If you cat the pip executable, you'll see what I mean. Instead of creating the wrapper script, I just import pip instead and call its main() function with the command-line arguments. Creating the runnable wrapper would actually make things (slightly) more complicated.

     For the Python version, there is only the wrapper for workflow authors to include (bundler.py in the wrappers directory). It can't be called alfred.bundler.py, as dots aren't allowed in module names. It uses the bash wrapper to handle utilities and Pip for Python libraries. There's not a whole lot to it.

     I'd sooner avoid re-implementing the core functionality in Python, but the performance is currently definitely unacceptable. From my machine:

        Calling `bundler.init()`: 10 calls in 0.0388 s (0.0039 s/call)
        Calling `bundler.utility("cocoaDialog")`: 10 calls in 1.7871 s (0.1787 s/call)

     (bundler.init() is the pure Python code that handles the Python libraries.) The time it takes for one call to get a utility's path is the time I normally aim to have my entire workflow finish and return its results in…

     After a bit of digging, it turns out that it's gatekeeper.sh that's taking most of the time. With gatekeeper.sh disabled:

        Calling `bundler.init()`: 10 calls in 0.0409 s (0.0041 s/call)
        Calling `bundler.utility("cocoaDialog")`: 10 calls in 0.4689 s (0.0469 s/call)

     We definitely need to do something about that. Is there some way we could cache the results of the calls to gatekeeper.sh (and the various PHP scripts, but mostly gatekeeper.sh) so they don't have to be called every time?
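     Roughly what calling pip as a library looks like in practice. The helpers path is illustrative, and this relies on the pip.main() entry point that pip exposed at the time:

        import sys

        # Assumes the bundler has already installed pip into this (hypothetical) directory.
        HELPERS_DIR = '/path/to/assets/python/python-helpers'
        sys.path.insert(0, HELPERS_DIR)

        import pip  # the bundled copy, now importable


        def install(package, target_dir):
            """Install `package` into `target_dir`, just as the pip console script would.

            The pip executable is only a stub that calls pip.main() with sys.argv,
            so importing the library and calling main() directly is equivalent.
            """
            return pip.main(['install', '--upgrade', '--target', target_dir, package])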
  14. You can do it easily enough with any supported language and a Run Script action, passing the result to an Open URL action. With Python, for example, your Run Script would look something like:

        from datetime import datetime
        today = datetime.now()
        print(today.strftime('%y/%m/%d'))

     And your Open URL action would have:

        http://www.example.com/{query}

     as the URL.
  15. Also, one more thing. bundler.py is documented in such a way that it's possible to auto-generate its documentation using sphinx. I've done so locally, but I don't know what to do wrt publishing the docs. All the documentation for Alfred-Workflow was generated this way. Should I add them to the GitHub repository? (There's a pydoc subdirectory and a build-pydoc.sh file in the rootdir, although the latter can easily be dispensed with.) Adding them to the gh-pages branch presents a problem, as we (I) would have to create a template that matches the other docs. I suppose it might be worth considering creating a standalone (in name only: it would still use bundler.sh) library to publish on PyPi and use their automatic pythonhosted.org option. (I should probably do this with Alfred-Workflow, too.)
  16. Ta. Not sure I want it, though. I have a nasty habit of coding drunk (like now), and I'd hate to break loads of stuff. I can't even do pull requests properly yet… (I'm guessing you didn't see all my messing around with them in real time.)

     Essentially, it's used as a Python library (the Python bundler adds its parent directory to sys.path and does import pip). OTOH, the standard bundler utility install mechanism makes installing pip marvellously easy (and AFAIK installing it elsewhere would require significant modifications), and it's a one-off kind of thing that wouldn't be used by other workflows. Other workflow authors (let alone users) don't have to know it's in there, soiling the purity of the assets directory. I chose assets/python/<bundle-id> as the logical place to put the workflow-specific install directories, and I'm actually a bit worried about putting the Python bundler's files in there, too. Perhaps I should rename the directory to use the Bundler "bundle ID", i.e. alfred-bundler-aries.python-helpers instead of just python-helpers.

     The pip one should be empty: there is no invoke-able command. It can't be called, only imported. I mean, I could create a small callable wrapper in there and use that instead, but it's unnecessary for the functioning of pip. Most properly coded Python "executables" are thin wrappers that load and call library functions that can just as easily be called with the command-line arguments as the executable itself. What's more, if the Bundler is designed to support more generic "assets" than callable utilities (which is my understanding), a callable executable is not a given: I might use it to install an icon collection, for example, for which an invoke command is nonsensical. If you insist.

     Nah. It's probably down to pip not really being a callable utility, and thus not having an invoke-able command. Even though it could have one, it isn't necessary.

     Haha! As if I'm not going to use the crap out of this! Thank you for all the graft on the bundler. Once we're golden master on this, I'm probably going to include it in my Alfred-Workflow Python library, and then there'll surely be all manner of neckbeards demanding I have sex with their hot sisters and shit.

     What do I (we) do about the pip version? Currently, its JSON file is versioned on the basis of the Python bundler version (i.e. new bundler version, new pip version), but it'd be way better to ensure the latest version of pip is installed. Does the bundler have a way to ensure that, or is it up to you/me/us to keep the JSON file up-to-date? Currently, I have the Python bundler pull a JSON recipe from my fork. If it's now one of the default utilities, what do I have to do in bundler.py to ensure that it's updated? (Note: pip can update itself with pip install --upgrade pip. I could set the bundler to run that once a month or something.)
  17. Sent a pull request on GitHub to add JSON file handling to bundler.sh and improve the error handling a wee bit. I've added Pip as a default utility, though the Python Bundler currently points to a URL in my fork on account of it not being available in the main one. I wonder whether it (the JSON file) should be "bundled" with bundler.py, but it'd have to go in a docstring and be saved to a temporary file before each call in order to keep the file count to one. It seems incorrect to add it as a utility because (1) it's not a utility in the same vein as the other utilities and (2) it doesn't work like one (there's no executable: it needs to be added to sys.path and imported).

     There seems to be something slightly funky happening with the Gatekeeper stuff. It occasionally pops up its dialog when installing/calling Pip, which it shouldn't, as it's pure command-line stuff. (Also, isn't there supposed to be some informational dialog that pops up first?) I suspect the problem has to do with the existence of empty invoke files. My personal preference would be not to create these files if they're going to be empty (i.e. the utility can't be invoked). I may have misunderstood how they're used, however. (I haven't read, and don't really want to read, the PHP code.)

     Also, the odd syntax error seems to be coming from the bash code. I figure it's down to bad quoting of variables (i.e. running tests on unquoted variables that are empty), and I've added quotes around a few of them, but I'm leery of messing with the bash code too much, as I don't really get the language that well, nor is there a test suite to let me know that I haven't royally messed things up.

     A couple of things I've noticed (but may have completely misunderstood because bash): Is there a difference between [[ ! -z "$var" ]] (i.e. not empty) and [[ -n "$var" ]] (i.e. is set)? Isn't var=$(echo `cat somefile`) exactly the same as var=$(cat somefile)?
  18. That's a bug in the workflow. Open up Alfred Preferences, select the Workflow and double-click on the central Run Script action. In the Escaping Options, make sure Backquotes, Double Quotes, Backslashes and Dollars are selected. Then Save your changes. That last option is the one that's messing things up for you. What's happening is that bash is replacing $1 (and $2 etc.) with its command-line arguments (of which there are none).
  19. There is a workflow for Messages and a Calendar one, too, as well as several others. Try searching the "Share your Workflows" forum for "calendar" or "messages". Unfortunately, it can be pretty difficult to find workflows on here (a forum isn't an ideal platform for a workflow directory).
  20. It wouldn't be less taxing on Alfred, because you could end up with an indeterminate number of (Alfred) threads running in the background due to the asynchronous behaviour of the API and dumb apps/workflows making loads of calls to Alfred before the previous one has completed (async is very hard to get right). There's no reason the entire Alfred process has to block while processing your call—it already calls workflows in parallel—but that API call should, imo, be synchronous and block until the output is ready to be returned or an error occurs (which should be returned via the same route the input came in).

     If the caller needs to continue doing other stuff while Alfred's busy, it's simple enough to start a thread in a synchronous process, and any asynchronous framework worth its salt already has the capability to deal with calling out to external, synchronous processes (it's just another form of IO).

     Sending the output to an entirely separate external process (as in your example) strikes me as a bad idea (as an API), unless you don't care whether the call succeeds or fails. If an error occurs and there's no communication back to the caller, it has no way of knowing if its call to Alfred is just taking a long time or the call failed and there is something wrong/the call should be retried (not that Alfred has yet provided a way for a workflow to know if it's succeeded or failed). Similarly, if the call is taking too long, a caller that still has a connection to the process it's calling can terminate that process (Alfred could, for example, terminate the relevant thread(s) if the calling process closed stdout/stderr) and try again.

     At any rate, it should not be called "callback".
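     To illustrate the point about threads: wrapping a blocking, synchronous call in a worker thread is a handful of lines, and the caller still gets the result or the error back. The command here is a stand-in; Alfred exposes no such CLI:

        import subprocess
        import threading


        def call_in_background(cmd, on_done):
            """Run a blocking external command in a worker thread and report back.

            `on_done(output, error)` is called with the command's output on
            success, or with the error on failure.
            """
            def worker():
                try:
                    output = subprocess.check_output(cmd)
                except subprocess.CalledProcessError as err:
                    on_done(None, err)
                else:
                    on_done(output, None)

            thread = threading.Thread(target=worker)
            thread.start()
            return thread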
  21. What's the "Basic Setup" of your File Filters? A lot of apps mess up the UTIs for Markdown files. Is it possible that some files have different extensions to others (e.g. .md vs .mmd vs .mkd vs .markdown vs .txt etc.)?
  22. It's because of the heredoc. If you just run osascript -e 'return "This\\ is\\ a\\ string"', you only need to escape the backslashes once.
  23. No, the problem is most definitely with the bash script. Trust me: I've read the code.

     The bash bundler scripts accept a path to a JSON file as an argument, but completely ignore it. If you pass one, the bundler won't even try to install anything. It's definitely a problem with the bash code. The offending code is here. As you can see, if $json is empty, it looks for $name.json in its defaults; otherwise it just falls straight off the end and returns an error message.

     The problem with having an overlap.txt is that it then requires that the user keep all of an author's workflows updated or face having stuff break. There are a few workflows that I won't update because I've edited them to change keywords/behaviour, and re-applying my changes isn't worth the benefit of any new features/bugfixes. Still, it might be worth allowing authors to create their own set of shared stuff. An optional bundleid argument to init() would do it.