Everything posted by deanishe

  1. That would be ideal, but it would require that the subl command has been linked into /usr/local/bin. The workflow could use subl from within the application bundle, but what is it supposed to do if both ST2 and ST3 are installed?

     The workflow will open the .sublime-project file with whatever application you have set to open that type of file. So if you want that to be the subl command, create an application with Automator containing a Run Shell Script action that calls subl with the passed file. Set that as the default application for .sublime-project files, and it will have the added advantage of restoring the workspace when you double-click a .sublime-project file in Finder.
  2. Here you go: my Fuzzy Folders workflow dynamically updates its own keywords. It's written in Python, and the source code is on GitHub; the relevant stuff is at the bottom of the ff.py file. You're basically rewriting parts of info.plist (see the sketch below). A few things that came up while writing it:

     - You have to specify the vertical offset of each keyword entry in info.plist, or they'll all be piled on top of one another in the UI.
     - Keep your keyword-URL combinations in a settings file outside of the workflow directory: info.plist gets overwritten when a workflow is updated.
     - You'll want some kind of automatic and/or manual update function for that case (just checking whether your settings file has changed won't trigger an update if the workflow itself has been updated). Alfred takes a few seconds to notice that a workflow has been externally updated.
     - You'll want to hardcode the UUID of any non-auto-generated keywords in your script, too, so you don't accidentally delete them during an update.
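     Here's a minimal sketch of the info.plist-rewriting idea, using Python 3's plistlib (ff.py itself predates that API). The object type and the uidata/ypos keys are how I read Alfred's info.plist format; treat the details as assumptions and check against your own file:

         import plistlib

         # Load the workflow's info.plist (Alfred stores workflow objects here)
         with open('info.plist', 'rb') as fp:
             info = plistlib.load(fp)

         # Hypothetical new keywords to write into the workflow
         keywords = ['foo', 'bar']

         kw_objects = [o for o in info['objects']
                       if o.get('type') == 'alfred.workflow.input.keyword']
         for i, (obj, kw) in enumerate(zip(kw_objects, keywords)):
             obj['config']['keyword'] = kw
             # Each entry needs its own vertical offset, or they pile up in the UI
             info['uidata'][obj['uid']]['ypos'] = 10 + i * 120

         with open('info.plist', 'wb') as fp:
             plistlib.dump(info, fp)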
  3. As Andrew says, that isn't how keywords work. If you can't remember your keyword, try to think of a better one. Preferably a much shorter one. In this case, I'd go with "rails" or "ror" (though I'd be much more likely to remember the former). If you still have difficulty remembering the keyword, you can prefix all your own keywords with a character like ";", e.g. ";rails". That way, when you just enter ";", you should see a list of all your own keywords and nothing else.
  4. Wow! I was way off with the numbers of available packages.
  5. Thanks. I've looked into it, and frankly, I don't understand why the previous version did match old*: it shouldn't have. */old* would be the right pattern. I'll see about including pathspec, which does proper .gitignore style matching.
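     For what it's worth, a quick sketch of how pathspec handles this kind of matching (the paths are invented examples; I haven't wired it into the workflow yet):

         import pathspec  # third-party: pip install pathspec

         # Compile .gitignore-style ("gitwildmatch") patterns
         spec = pathspec.PathSpec.from_lines('gitwildmatch', ['*/old*'])

         print(spec.match_file('projects/old-repo'))  # True
         print(spec.match_file('old-repo'))           # False: `*/old*` needs a parent dir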
  6. I didn't touch the code that filters the results, and you're not giving me much information to work with. Could you at least post the full path and whether or not it's a symlink?
  7. Indeed, it didn't follow symlinks (not following them is find's default behaviour). I've added a new version (1.3) that does follow symlinks. Give that one a go.
  8. Good call with the pull request, btw. I don't know why PHP developers insist on using exec all the time for stuff that's built in (and an order of magnitude faster).
  9. Starting a subprocess is very slow (relatively speaking). It can't be avoided in bash, zsh, etc., but with a "real" language like PHP/Python/Ruby, you should try to use native code and avoid starting subprocesses wherever possible. It's almost always much faster (see the sketch below).

     However, the reason your workflow runs faster when it's been recently run than when it hasn't been run in a while is probably that the libraries loaded by PHP (or any other language/app) are still cached in memory. You may have noticed that if your Mac has been running for a while, the RAM is more or less always "full". That's because the system doesn't automatically remove everything from RAM that's no longer in use, but keeps it around in case another app needs the same library. It only gets removed from memory when that memory is needed by something that is currently running.

     So, (most of) the libraries loaded by your workflow are still in RAM if you run the workflow again soon after, but if it's been a while, they will have been evicted so other apps can use the RAM, and they have to be loaded back from disk (and disks are several orders of magnitude slower than RAM).
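     To put rough numbers on the subprocess overhead, here's a quick sketch (in Python, though the same point applies to PHP); the exact figures will vary by machine:

         import os
         import subprocess
         import timeit

         n = 50
         # Spawn /usr/bin/true 50 times vs. make 50 native stat() calls
         spawned = timeit.timeit(lambda: subprocess.call(['/usr/bin/true']), number=n)
         native = timeit.timeit(lambda: os.path.exists('/usr/bin/true'), number=n)

         print('subprocess: {:.4f}s for {} calls'.format(spawned, n))
         print('native:     {:.4f}s for {} calls'.format(native, n))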
  10. Another update today. Now you can specify an arbitrary level of the file tree after which a repo is named in Alfred. For example, if your directory tree looks like this:

         Websites
           Project 1
             htdocs
               .git
             other_stuff
           Project 2
             htdocs
               .git
             other_stuff
           ...

     you can set name_for_parent to 2, so the workflow shows "Project 1", "Project 2" etc. in Alfred, not "htdocs", "htdocs", "htdocs" etc.
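     In other words (a hypothetical sketch of the naming logic, not the workflow's actual code):

         import os

         def repo_name(git_dir, name_for_parent=1):
             """Name a repo after the Nth ancestor of its `.git` directory."""
             path = os.path.dirname(git_dir)  # directory containing .git
             for _ in range(name_for_parent - 1):
                 path = os.path.dirname(path)
             return os.path.basename(path)

         # With the layout above:
         print(repo_name('/Users/you/Websites/Project 1/htdocs/.git', 2))  # 'Project 1'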
  11. Judging by the log output, you are using an outdated version of the workflow. Try updating it.
  12. Sublime Text doesn't have AppleScript support, as it's not a native application. The same applies to most IDEs. The only thing you can do with them via AppleScript is tell System Events to click their menu items. It's easy enough to open files in Sublime Text using open -a "Sublime Text" /path/to/file or its subl command-line tool, but that's about the limit.
  13. If you have added a huge directory, it'd be better to replace it with multiple, more precise entries and/or fiddle with the depth parameter. It's not ideal to have the workflow crawl half the HD every few hours.
  14. It depends on how big/deep the directories you specify are. As noted in the OP, it uses the find command, so if you try to search a huge directory, like ~/, it will take a very, very long time.
  15. What shows up in the debugger when you run the workflow? (Set it to log "All information".)
  16. It would definitely be handy to have a "version" entry in info.plist (or wherever). The problem is, workflow authors are not exactly great at remembering to bump the version number when they update their workflows. This has caused numerous problems with Packal (it's something I've done many times myself, and even Shawn has ballsed up in this regard). Speaking from experience, developers won't update version numbers consistently, whether out of laziness, forgetfulness or the desire to quietly hide a silly bug, unless it's enforced.
  17. Cheeky update today. It's very handy being able to open your Git repos in your editor, terminal, Finder and SCM app from Alfred. Now you can open the repo in all of them at once! Just set an app_N option to a list of applications. Also fixed a bug with app_6 / fn+↩ not working.
  18. I don't have the app, but by the looks of it, GET is sufficient. You just need to create your URL and then you can open it with cURL (or the HTTP libraries built into any scripting language). Instead of using an Open URL action with {query} as the URL, use a Run Script action with "/bin/bash" as the language:

         curl -LsS "{query}"

     Be sure to set the escaping options correctly (Backquotes, Double Quotes, Backslashes, Dollars).
  19. Alfred Git Repos Workflow

     Browse, search and open Git repositories from within Alfred.

     Download

     Get the workflow from GitHub or Packal.

     Usage

     This workflow requires some configuration before use. See Configuration for details.

     - repos [<query>] — Show a list of your Git repos filtered by <query>
       - ↩ — Open selected repo in app_1 (see configuration)
       - ⌘+↩ — Open selected repo in app_2 (see configuration)
       - ⌥+↩ — Open selected repo in app_3 (requires configuration)
       - ^+↩ — Open selected repo in app_4 (requires configuration)
       - ⇧+↩ — Open selected repo in app_5 (requires configuration)
       - fn+↩ — Open selected repo in app_6 (requires configuration)
     - reposettings — Open settings.json in default JSON editor
     - reposupdate — Force workflow to update its cached list of repositories. (By default, the list will only be updated every 3 hours.)
     - reposhelp — Open this file in your browser

     Configuration

     Before you can use this workflow, you have to configure one or more folders in which the workflow should search for Git repos. The workflow uses find to search for .git directories, so you shouldn't add huge directory trees to it, and you should use the depth option to restrict the search depth. Typically, a depth of 2 will be what you want (i.e. search within subdirectories of the specified directory, but no lower). Add the directories to search to the search_dirs array in settings.json (see below).

     The default settings.json file looks like this:

         {
           "app_1": "Finder",              // ↩ to open in this/these app(s)
           "app_2": "Terminal",            // ⌘+↩ to open in this/these app(s)
           "app_3": null,                  // ⌥+↩ to open in this/these app(s)
           "app_4": null,                  // ^+↩ to open in this/these app(s)
           "app_5": null,                  // ⇧+↩ to open in this/these app(s)
           "app_6": null,                  // fn+↩ to open in this/these app(s)
           "global_exclude_patterns": [],  // Exclude from all searches
           "search_dirs": [
             {
               "path": "~/delete/this/example",  // Path to search. ~/ is expanded
               "depth": 2,                       // Search subdirs of `path`
               "name_for_parent": 1,             // Name Alfred entry after parent of `.git`. 2 = grandparent of `.git` etc.
               "excludes": [                     // Excludes specific to this path
                 "tmp",                          // Directories named `tmp`
                 "bad/smell/*"                   // Subdirs of `bad/smell` directory
               ]
             }
           ]
         }

     This is my settings.json:

         {
           "app_1": "Finder",
           "app_2": ["Finder", "Sublime Text", "SourceTree", "iTerm"],
           "app_3": "Sublime Text",
           "app_4": "SourceTree",
           "app_5": "iTerm",
           "app_6": "GitHub",
           "global_exclude_patterns": [],
           "search_dirs": [
             {"path": "~/Code"},
             {"path": "~/Sites"}
           ]
         }

     Search Directories

     Each entry in the search_dirs list must be a mapping. Only path is required. depth will default to 2 if not specified. excludes are globbing patterns, like in .gitignore.

     name_for_parent defaults to 1, which means the entry in Alfred's results is named after the directory containing the .git directory. If you want Alfred to show the name of the grandparent instead, set name_for_parent to 2, etc. This is useful if your projects are structured, for example, like this and src is the actual repo:

         Code
           Project_1
             src
             other_stuff
           Project_2
             src
             other_stuff
           …

     Open in Applications

     The applications specified by the app_N options are all called using open -a AppName path/to/directory, so you can configure any application that can open a directory in this manner (see the sketch below). Some recommendations are Sublime Text, SourceTree, GitHub or iTerm.

     Note: As you can see from my settings.json, you can also set an app_N value to a list of applications to open the selected repo in more than one app at once:

         …
         "app_2": ["Finder", "Sublime Text", "SourceTree", "iTerm"],
         …

     You can also use → on a result to access Alfred's default File Actions menu.

     License, Thanks

     This workflow is released under the MIT Licence. It uses the Alfred-Workflow and docopt libraries (both MIT Licence). The icon is by Jason Long, from git-scm.com, released under the Creative Commons Attribution 3.0 Unported Licence.
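     As a footnote to the "Open in Applications" section above, here's a hypothetical sketch (mine, not the workflow's actual code) of how an app_N value, whether a string or a list, maps onto open -a calls:

         import subprocess

         def open_in(apps, path):
             """Open `path` in one app (string) or several (list)."""
             if isinstance(apps, str):
                 apps = [apps]
             for app in apps:
                 # `open -a` is how the workflow launches each application
                 subprocess.call(['open', '-a', app, path])

         # e.g. the app_2 binding from my settings.json above:
         open_in(["Finder", "Sublime Text", "SourceTree", "iTerm"],
                 '/Users/you/Code/some-repo')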
  20. If the data is already in an SQLite database, you might consider just leaving it there. As you can see, the performance is ridiculous compared to messing around with JSON and Python. (SQLite is pure C and super-optimised for exactly this kind of stuff; it's what CoreData is based on.)

     I must admit, my MailTo workflow does pull data from SQLite databases and cache it in JSON, but that JSON is essentially search indices, and I would have used an SQLite cache if the performance hadn't been acceptable.

     If you're creating an FTS virtual table (FTS3 only: FTS4 isn't supported by the system Python), you just need to insert a unique id (as a reference to the original full dataset) and the fields you want to search on. In the demo workflow, I included the id for demonstration purposes (it isn't used), but set its "rank" to zero, so it is ignored when ranking the search results.

     If you really don't want to mess around with writing SQL queries, you can use an ORM like SQLAlchemy or Peewee. That's how most Python developers use SQL databases, tbh. They allow you to treat database tables/rows as classes/instances. Very pleasant to use.

     I suspect this might mean a serious restructuring of ZotQuery, but IMO the performance is compelling. It all depends on what the typical dataset size is. You can't search thousands of items with Alfred-Workflow's filter() function, but a JSON-based data cache (properly keyed) should be just fine for at least 10,000 items if combined with a more efficient search mechanism. Moving entirely to the original SQLite db might be more work than it's worth, but I reckon re-implementing the search in SQLite is well worth it.

     WRT Alfred-Workflow, I've been thinking all day about a useful abstraction that could use SQLite for search. The user would, in any case, have to specify a schema. But how do I go about indexing/updating the search database? Does the user call an index() function with all the data, or specify a callback that the indexer can call if it's out of date? Should the indexer return just the ids (and rank?) of the search results, or require a callback to retrieve the full dataset, so it can return the complete data like filter()?
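     To make the FTS bit concrete, here's a minimal sketch using an in-memory database; the table and column names are mine, not the demo workflow's actual schema:

         import sqlite3

         con = sqlite3.connect(':memory:')

         # FTS3 virtual table: a unique id plus the fields you want to search on
         con.execute('CREATE VIRTUAL TABLE books USING fts3(id, author, title)')

         rows = [
             (1, 'Immanuel Kant', 'Critique of Pure Reason'),
             (2, 'Aristotle', 'Nicomachean Ethics'),
         ]
         with con:
             con.executemany('INSERT INTO books VALUES (?, ?, ?)', rows)

         # MATCH does the full-text search; `author:` restricts the term to
         # one column, like the queries in the log output below
         query = 'author:aristotle'
         for row in con.execute('SELECT id, author, title FROM books '
                                'WHERE books MATCH ?', (query,)):
             print(row)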
  21. Here's some sample log output from the above workflow to give you a concrete idea of exactly how fast sqlite full-text search is:

         11:10:53 background.py:220 DEBUG Executing task `indexer` in background...
         11:10:53 index.py:43 INFO Creating index database
         11:10:53 index.py:56 INFO Updating index database
         11:10:53 books.py:110 INFO 0 results for `im` in 0.001 seconds
         11:10:53 books.py:110 INFO 0 results for `imm` in 0.001 seconds
         11:10:53 books.py:110 INFO 0 results for `imma` in 0.001 seconds
         11:10:55 index.py:73 INFO 44549 items added/updated in 2.19 seconds
         11:10:55 books.py:110 INFO 0 results for `imman` in 1.710 seconds
         11:10:55 index.py:80 INFO Index database update finished
         11:10:55 background.py:270 DEBUG Task `indexer` finished
         11:10:55 books.py:110 INFO 15 results for `immanuel` in 0.002 seconds
         11:10:58 books.py:110 INFO 100 results for `p` in 0.017 seconds
         11:10:59 books.py:110 INFO 4 results for `ph` in 0.002 seconds
         11:10:59 books.py:110 INFO 0 results for `phi` in 0.002 seconds
         11:11:00 books.py:110 INFO 9 results for `phil` in 0.002 seconds
         11:11:00 books.py:110 INFO 3 results for `philo` in 0.002 seconds
         11:11:00 books.py:110 INFO 0 results for `philos` in 0.001 seconds
         11:11:00 books.py:110 INFO 0 results for `philosp` in 0.001 seconds
         11:11:01 books.py:110 INFO 0 results for `philospo` in 0.001 seconds
         11:11:01 books.py:110 INFO 0 results for `philosp` in 0.001 seconds
         11:11:02 books.py:110 INFO 0 results for `philos` in 0.002 seconds
         11:11:02 books.py:110 INFO 0 results for `philoso` in 0.001 seconds
         11:11:02 books.py:110 INFO 0 results for `philosoh` in 0.003 seconds
         11:11:02 books.py:110 INFO 0 results for `philosohp` in 0.002 seconds
         11:11:02 books.py:110 INFO 0 results for `philosohpy` in 0.002 seconds
         11:11:03 books.py:110 INFO 0 results for `philosohp` in 0.002 seconds
         11:11:03 books.py:110 INFO 0 results for `philosoh` in 0.001 seconds
         11:11:03 books.py:110 INFO 0 results for `philoso` in 0.001 seconds
         11:11:03 books.py:110 INFO 0 results for `philosop` in 0.001 seconds
         11:11:03 books.py:110 INFO 0 results for `philosopj` in 0.001 seconds
         11:11:03 books.py:110 INFO 0 results for `philosopjy` in 0.002 seconds
         11:11:04 books.py:110 INFO 0 results for `philosopj` in 0.002 seconds
         11:11:04 books.py:110 INFO 0 results for `philosop` in 0.002 seconds
         11:11:04 books.py:110 INFO 0 results for `philosoph` in 0.002 seconds
         11:11:04 books.py:110 INFO 100 results for `philosophy` in 0.012 seconds
         11:11:08 books.py:110 INFO 100 results for `philosophy ` in 0.007 seconds
         11:11:09 books.py:110 INFO 2 results for `philosophy t` in 0.002 seconds
         11:11:09 books.py:110 INFO 0 results for `philosophy ti` in 0.002 seconds
         11:11:10 books.py:110 INFO 0 results for `philosophy tit` in 0.002 seconds
         11:11:11 books.py:110 INFO 0 results for `philosophy titl` in 0.002 seconds
         11:11:11 books.py:110 INFO 0 results for `philosophy title` in 0.002 seconds
         11:11:11 books.py:110 INFO 100 results for `philosophy title:` in 0.007 seconds
         11:11:11 books.py:110 INFO 0 results for `philosophy title:t` in 0.002 seconds
         11:11:11 books.py:110 INFO 0 results for `philosophy title:th` in 0.002 seconds
         11:11:11 books.py:110 INFO 72 results for `philosophy title:the` in 0.010 seconds
         11:11:12 books.py:110 INFO 40 results for `philosophy a` in 0.006 seconds
         11:11:13 books.py:110 INFO 0 results for `philosophy au` in 0.002 seconds
         11:11:13 books.py:110 INFO 0 results for `philosophy aut` in 0.002 seconds
         11:11:13 books.py:110 INFO 0 results for `philosophy auth` in 0.002 seconds
         11:11:13 books.py:110 INFO 0 results for `philosophy autho` in 0.002 seconds
         11:11:13 books.py:110 INFO 0 results for `philosophy author` in 0.002 seconds
         11:11:14 books.py:110 INFO 100 results for `philosophy author:` in 0.009 seconds
         11:11:14 books.py:110 INFO 0 results for `philosophy author:k` in 0.002 seconds
         11:11:14 books.py:110 INFO 0 results for `philosophy author:ka` in 0.002 seconds
         11:11:14 books.py:110 INFO 0 results for `philosophy author:kan` in 0.002 seconds
         11:11:15 books.py:110 INFO 0 results for `philosophy author:kant` in 0.002 seconds
         11:11:18 books.py:110 INFO 3 results for `philosophy author:a` in 0.003 seconds
         11:11:18 books.py:110 INFO 0 results for `philosophy author:ar` in 0.002 seconds
         11:11:19 books.py:110 INFO 0 results for `philosophy author:ari` in 0.002 seconds
         11:11:19 books.py:110 INFO 0 results for `philosophy author:aris` in 0.002 seconds
         11:11:20 books.py:110 INFO 0 results for `philosophy author:arist` in 0.002 seconds
         11:11:20 books.py:110 INFO 0 results for `philosophy author:aristo` in 0.002 seconds
         11:11:20 books.py:110 INFO 0 results for `philosophy author:aristot` in 0.002 seconds
         11:11:20 books.py:110 INFO 0 results for `philosophy author:aristotl` in 0.002 seconds
         11:11:20 books.py:110 INFO 0 results for `philosophy author:aristotle` in 0.002 seconds
         11:11:22 books.py:110 INFO 15 results for `author:aristotle` in 0.002 seconds
  22. No idea, tbh. Haven't used it in a long time. I'd imagine it isn't possible to nest data, because that's just not how most search engines work.

     To prove my point re search indices, I've spent my evening writing a demo workflow showing how to use sqlite3 for a search index. Here's the workflow. Here's the GitHub repo. Hopefully, it's sufficiently well commented to be useful as a starting point. (If not, ask away.) The dataset is the (almost) complete list of Project Gutenberg ebooks: 45,000 entries (author, title, URL). Now tell me this doesn't totally rock.

     Note: If you want to use "AND" or "OR", they have to be in capitals. (So sue me…)

     Man, I should've used this from the start in Alfred-Workflow.

     By all means use whoosh, but I reckon sqlite3 is eminently suitable for ZotQuery's needs (especially as whoosh is designed for searching separate documents, not a single monolithic data file), and it's included with the system Python. A bit of SQL is something every developer should know.
  23. I have used it once or twice. It's pretty damn good. You might want to try it out for ZotQuery. It's a much better fit than Alfred-Workflow. It's probably the simplest solution if you don't want to start messing around with SQL. TBH, creating an index is probably simpler than caching queries. You don't have all these thorny questions about which queries to cache, how to keep cache size under control, when to delete cached queries etc. It's also trivial to update the index in the background (using background.py). You can also do "abc AND xyz" with sqlite's full-text search.