Everything posted by deanishe

  1. Do you have a good reason why keeping the workflow files in a separate subdirectory is a bad idea, or is this just about justifying a whizz-bang build system?
  2. It's not a question of source code vs non-source code. It's a question of files that belong in the distributed workflow vs files that don't belong in the distributed workflow. I have plenty of source code that belongs in the repo and belongs to the workflow project, but has no place in the distributed workflow (SVG files used to generate PNG icons, test scripts, scripts to generate data files for the workflow, README.md). If you don't keep the two kinds of files (source code or not) separate, you're just making work for yourself. So you don't put your workflows' icons in the repo? So your repos don't contain complete, working copies of your workflows?
  3. My guiding principle is to try to keep things as foolproof as possible, because when I don't, I always end up the confused fool who broke stuff. Simple > complex. Exactly. I appreciate a challenge as much as the next man, but I'd rather there were a point to it. I have a modified version of that on my Mac. Some clever stuff there. Unfortunately, I'm not good about keeping track of what I've installed on my Mac, so nixing it and reinstalling always takes far too long. It'd be awesome to have a single script to set up a Mac just the way I like it.
  4. What's the proposed model for stdin? Alfred keeps feeding in the query on a new line every time it changes? Would that mean workflows can output multiple XML documents?
  5. But I don't have to bother messing around with blacklists or whitelists or Travis because I keep all the workflow files in a separate directory… I certainly wouldn't want to ignore the files in .gitignore. That's for files that don't belong in the repo, which is a subset of the files that don't belong in the compiled workflow. It all sounds very much like a solution in search of a problem to me. The reason a simple list of ignored files/extensions would work for my workflows is precisely because I keep them organised into subdirectories. If I dumped everything in the root, it wouldn't work, because a bunch of scripts designed to help with development would need special-casing for each workflow. On top of all that, the src subdirectory makes my repos much more understandable: you don't need to mentally parse scripts and dotfiles to figure out which bits actually belong to the workflow (there's a sketch below of the kind of layout I mean). I just don't see any benefit in whipping up some all-singing, all-dancing build system to let me make my repos less organised.
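To illustrate, here's a hypothetical layout (the project and file names are invented, but this is the structure being described):

```
alfred-searchthing/              # git repo, lives with my other code
├── README.md                    # repo-only
├── SearchThing.alfredworkflow   # compiled distribution
├── icons/                       # source SVGs for the PNG icons
├── run-tests.sh                 # dev helper, never shipped
└── src/                         # corresponds exactly to the distributed workflow
    ├── info.plist
    ├── icon.png
    └── searchthing.py
```

Everything under src/ is the workflow; everything outside it never ships, so no blacklist is needed.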
  6. I found that one on Stack Overflow, tbh. The text field, toolbar and window are fairly obvious. I guess you would just use trial and error to figure out the group.
  7. Sure there is. You need to trigger the copy event yourself via AppleScript and then grab the contents of the clipboard, though, instead of using Alfred's built-in Get Selection action. Something along the lines of the sketch below.
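A minimal sketch of that idea in Python (not from the original post; the function name is made up). Note that it clobbers the clipboard; a politer version would save and restore the previous contents, and the app running it needs accessibility/assistive access enabled:

```python
import subprocess
import time

def get_selection():
    """Copy the current selection in the frontmost app and return it as text."""
    # Trigger the copy event ourselves via AppleScript
    subprocess.call([
        'osascript', '-e',
        'tell application "System Events" to keystroke "c" using command down',
    ])
    time.sleep(0.1)  # give the app a moment to populate the clipboard
    # Grab the contents of the clipboard
    return subprocess.check_output(['pbpaste']).decode('utf-8')

print(get_selection())
```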
  8. Heh. Didn't realise there were that many. SIGXFSZ: "File size limit exceeded". More like "Epileptic fit at keyboard".
  9. Good news. What would this mean precisely? Alfred waits until, say, 100ms after the user last pressed a key before (killing and) running the Script Filter? Also, how would you kill the process? With SIGTERM?
  10. That reminds me. I uploaded my build script as smargh requested. It's fairly basic and only designed to work with workflows structured the same way as mine, i.e. all the files that should go in the distribution are in a single directory. It will automatically name the generated .alfredworkflow file after the workflow's name grabbed from info.plist (there's a sketch of the core idea below). Regarding other types of files: you can keep all the non-workflow files in your project root directory. For testing purposes, you can add a test runner to the same directory that creates the requisite test environment (e.g. adjusting sys.path, simulating Alfred's execution environment). I've found that the problem with relying on your build system to filter out any files that shouldn't end up in the workflow distribution is that it's one more thing you have to remember to update when you change something. It's very easy to forget to exclude some big file you've added, or to include an essential one (at least, it's something I find very easy to forget). As a result, I try to organise my workflows so that there's a subdirectory that corresponds precisely to the distributed workflow. That way, I don't have to work actively to stop things breaking.
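A stripped-down sketch of the core idea (this is not the actual workflow-build.py, and it assumes Python 3's plistlib): read the workflow's name from info.plist, then zip the source directory into a correspondingly named .alfredworkflow file, skipping .pyc files:

```python
import os
import plistlib
import zipfile

def build(src_dir):
    # Grab the workflow's name from its info.plist
    with open(os.path.join(src_dir, 'info.plist'), 'rb') as fp:
        name = plistlib.load(fp)['name']

    archive = '{}.alfredworkflow'.format(name)
    with zipfile.ZipFile(archive, 'w', zipfile.ZIP_DEFLATED) as zf:
        for root, _, files in os.walk(src_dir):
            for filename in files:
                if filename.endswith('.pyc'):  # compiled Python files don't belong
                    continue
                path = os.path.join(root, filename)
                # Store paths relative to src_dir, i.e. at the archive root
                zf.write(path, os.path.relpath(path, src_dir))
    return archive

print(build('src'))
```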
  11. First/subsequent runs make no difference on my machine, so presumably the difference is due to the executable being in the HDD cache (I have an SSD). At any rate, I consistently get 0.03s, 0.04s and 0.13s for Python, Ruby and PHP respectively (measured roughly as in the sketch below). That's terrible: PHP takes almost as long just to start up as most of my workflows take to finish entirely (about 0.15s). Another reason not to use PHP, tbh.
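The original numbers were presumably taken with `time` in the shell; here's one rough way to reproduce the comparison (a sketch, assuming stock OS X interpreters on your PATH): time how long each interpreter takes to start up and exit without doing any work.

```python
import subprocess
import time

# Each command starts the interpreter, runs an empty program and exits
COMMANDS = [
    ['python', '-c', 'pass'],
    ['ruby', '-e', ''],
    ['php', '-r', ''],
]

for cmd in COMMANDS:
    start = time.time()
    subprocess.call(cmd)
    print('{}: {:.3f}s'.format(cmd[0], time.time() - start))
```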
  12. Yeah, PHP does seem to load more slowly than other languages. I guess that's because it's so monolithic. I can understand wanting a background server process for that reason.
  13. I'm not following what you're proposing at all. How can you always return timely results if you haven't fetched any yet? If you've got a bunch of queries backed up in a queue and only process the newest one, how is that different to what Alfred currently does? You didn't, but Shawn's post you were expanding on did talk about killing other processes. Also, your use of the term "requests" in "drop all other requests" led me to believe you were talking about HTTP requests.
  14. Like wolph says, there's no real benefit to using `virtualenv` with workflows. Its purpose is to isolate your app's dependencies, and seeing as your workflow's dependencies should probably be installed in the workflow itself, there's little point to `virtualenv`. There's generally no need to use classes to group together related functions. In Python, you can do that with modules. It still makes sense sometimes, though (Alfred-Workflow uses stateless classes for its default serialisers; there's a sketch of the pattern below). I generally disagree with doing it this way. You end up with files in the workflow that shouldn't be in the workflow, often including another copy of the workflow. I think all the files that belong in the finished workflow are better kept in a subdirectory. It makes it easier to compile and distribute the .alfredworkflow file without including unnecessary stuff, and it keeps the .git directory out of Dropbox, which is not a great place for repos. I've seen quite a few workflows that include an additional copy of themselves because they were organised this way.
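To illustrate the stateless-class point, a contrived sketch (it mirrors the pattern Alfred-Workflow uses for its serialisers, but it is not that library's actual code): a class earns its keep when several implementations must share an interface, even though none of them carries any state.

```python
import json
import pickle

class JSONSerializer(object):
    @classmethod
    def load(cls, file_obj):
        return json.load(file_obj)

    @classmethod
    def dump(cls, obj, file_obj):
        json.dump(obj, file_obj)

class PickleSerializer(object):
    @classmethod
    def load(cls, file_obj):
        return pickle.load(file_obj)

    @classmethod
    def dump(cls, obj, file_obj):
        pickle.dump(obj, file_obj)

# Either class can be passed around and used interchangeably:
with open('data.json', 'w') as fp:
    JSONSerializer.dump({'query': 'foo'}, fp)
```

For plain groups of related functions, by contrast, a module (e.g. textutils.py that you import) does the same job without the class boilerplate.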
  15. If the daemon always returns the set of results from the last successfully executed query, how is your workflow ever going to get the final set of results for the final query? The user has finished entering his/her query, but your daemon has already returned an old set of results, and there's no way for it to return the set of results the user actually wants because it won't get called again if the user has stopped typing. In addition, if your daemon kills all existing connections when a new query comes in, there's no guarantee that it will have any old results to return (from a previously successful API call), and if it does, they may well be 5 or 6 iterations old, so you end up with the same situation as now: you're looking at the results for the first two characters of your query while typing the eighth character. You also need to worry about rate limiting, so you're not hammering an API (which may get you banned). Alfred's execution model works as a kind of ghetto rate limiter: your script won't hit the API again till the current request is done. The slower the API, the longer the pauses between requests. If you have a daemon initiating and killing connections on every keypress, you'll end up making several API requests/second. As a rule, the slower an API is, the less amenable it is to being hammered with requests to "speed it up".
  16. Calling back into Alfred an indeterminate time later from a background script sounds like a risky thing to do. It'd be very annoying if the user is doing something else when the script returns. I don't really follow what you're trying to do with the server. As far as I can tell, instead of running the script directly, it just asks a daemon process to launch it instead. Have I understood that right? FWIW, I usually go with caching. All queries get cached for a few seconds and, where possible, I'll cache the entire dataset (e.g. all contacts or all bookmarks) using a background script (a sketch of that pattern follows below). The workflow is very often working with "stale" data from the cache, but this usually isn't a problem in practice. Typically, the cache will also be refreshed before you've finished using the workflow.
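A minimal sketch of that caching pattern using Alfred-Workflow (`cached_data`, `cached_data_fresh` and `run_in_background` are the library's real APIs; the script names and data format here are made up): the Script Filter always answers instantly from the cache, however old, and kicks off a background refresh when the cache goes stale.

```python
# searchbookmarks.py — Script Filter sketch
import sys
from workflow import Workflow
from workflow.background import run_in_background, is_running

def main(wf):
    query = wf.args[0] if wf.args else ''

    # Return cached data regardless of age (max_age=0 = any age)
    bookmarks = wf.cached_data('bookmarks', None, max_age=0) or []

    # If the cache is older than 5 minutes, refresh it in the background.
    # update.py (not shown) would fetch the full dataset and store it
    # with wf.cache_data('bookmarks', ...)
    if not wf.cached_data_fresh('bookmarks', max_age=300):
        if not is_running('update'):
            run_in_background('update',
                              ['/usr/bin/python', wf.workflowfile('update.py')])

    if query:
        bookmarks = wf.filter(query, bookmarks, key=lambda d: d['title'])

    for bm in bookmarks:
        wf.add_item(bm['title'], subtitle=bm['url'], arg=bm['url'], valid=True)

    wf.send_feedback()

if __name__ == '__main__':
    sys.exit(Workflow().run(main))
```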
  17. I start off with a project directory in ~/Code (where I keep my, well, code), usually named alfred-<workflowname>. This directory is a git repo, which will probably go on GitHub. The actual source code goes in a src subdirectory. That way I can keep things that don't belong in the distributed workflow in the project (README.md, the compiled .alfredworkflow file, helper scripts, source graphics). I create an empty workflow in Alfred and add the icon, copy its contents over to the src subdirectory I made, then delete the workflow in Alfred.

      I have two scripts to help with writing workflows. workflow-build.py creates the .alfredworkflow file from the specified directory; it's aimed at Python workflows and ignores .pyc files. workflow-install.py installs the workflow to Alfred's workflow directory (using the bundle ID as its name). Most importantly, workflow-install.py -s symlinks the source directory instead of copying it, so I can keep the source code with my other code, not in Alfred's settings dir.

      I also write workflows in Sublime Text 3 (usually). I run them in iTerm, though, not in ST's terminal, which I find isn't very reliable and chokes on a lot of output. It's also not great when the script needs arguments, which is often. The downside of using iTerm is that it uses my shell environment, not Alfred's, but I'm careful about keeping my system Python fairly pristine.

      I generally try to write workflow scripts as command-line apps, i.e. one main script that takes options and arguments, rather than a bunch of separate scripts. I find this helps me design the app in a more sensible way. If the workflow needs a whole bunch of action scripts, I also prefer to keep these all in the same script and Run Script action (I add the options to {query}). That way I don't have dozens of elements in Alfred's UI, which is a PITA to manage. There's a sketch of this pattern below.

      The thing to watch out for when doing this is your imports. If everything is in one script, you'll likely end up importing a bunch of libraries that won't be needed, which slows the script down. If I'm using libraries that are slow to import (OS X-native ones like Foundation or AddressBook are the worst), I import them in the functions that use them, not at the top of the script.

      With something as big as ZotQuery, I'd still go with one main script and one main application class, but add sub-applications behind keywords, much like git or pip works.
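A skeletal sketch of that one-script pattern (everything here is invented for illustration; it isn't ZotQuery or any real workflow): one entry point dispatches on a sub-command, and slow imports are deferred into the actions that actually need them.

```python
# myworkflow.py — the one script behind every Run Script action.
# Alfred calls it as, e.g., `python myworkflow.py open {query}`.
import argparse
import sys

def do_search(query):
    # ... emit Script Filter results for query ...
    print('searching for {}'.format(query))

def do_open(query):
    # Deferred import: subprocess is only loaded when this action runs
    import subprocess
    subprocess.call(['open', query])

def do_contacts(query):
    # AddressBook is very slow to import, so only pay that cost in the
    # one action that needs it (requires the system Python with PyObjC)
    import AddressBook
    # ... query the shared address book here ...

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('action', choices=['search', 'open', 'contacts'])
    parser.add_argument('query')
    args = parser.parse_args()
    # Dispatch to the function for the requested sub-command
    actions = {'search': do_search, 'open': do_open, 'contacts': do_contacts}
    actions[args.action](args.query)

if __name__ == '__main__':
    sys.exit(main())
```

In Alfred, each Run Script action then just prepends its sub-command to {query}, so the UI stays small while the script grows.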
  18. This will set theURL to the contents of the URL bar:

```applescript
tell application "System Events"
    tell process "Safari"
        set theURL to value of text field 1 of group 2 of toolbar 1 of window 1
    end tell
end tell
```
  19. I'm not following this. If like Aaron B. you have a query that takes 2-3 seconds to return, how does using a background process help? As soon as your Script Filter exits, it no longer has a way to return results to Alfred, so if it hands off to a background process and exits, how are the results returned to Alfred? Or is this only for non-Script Filter applications? How would a Script Filter process kill its "siblings"? Alfred will only run one at a time, so there can't be any siblings unless Alfred isn't working properly.
  20. That likely just means that all your workflows are up-to-date and were installed from Packal (if available from there). If you want to verify that, open one of your workflows in Finder and rename its packal subdirectory. Run packal update and then packal status in Alfred. You should then see the workflow you just molested listed with a red question mark next to it (available from Packal but not installed from there). You should probably then rename the packal subdirectory back again.
  21. This would be a pretty cool feature. In fact, I'd like to see a standard keyboard shortcut for "clear query, but leave keyword". Admittedly, ALT+BACKSPACE (delete last word) usually works, but it'd be cool to have, say, CTRL+BACKSPACE assigned in Alfred to "delete to keyword".