dfay (Member): 1,054 posts, 62 days won

Everything posted by dfay

  1. {query} in the custom search refers to what you've typed into Alfred - you can type in a comma-separated list of terms, and it will search for each term, just as if you had typed it into the ngrams search box. Here are the directions for creating a custom search: https://www.alfredapp.com/help/features/web-search/ - basically you need to click the Add Custom Search button and paste the URL I provided above. It's really not hard to do, and well worth looking at so you can try the solutions other people have provided (preferably before saying they don't work...) even if you are not going to be creating your own custom searches.
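     For example, if the custom search keyword were ngram (a made-up keyword), typing ngram cat,dog into Alfred would open

       https://books.google.com/ngrams/graph?content=cat,dog&case_insensitive=on&year_start=1800&year_end=2016&corpus=15&smoothing=3

     (Alfred may percent-encode the comma as %2C, which the ngrams server decodes back to a comma.)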
  2. If I'm not mistaken, the raw snippets themselves are in synclocation/Alfred/Alfred.alfredpreferences/snippets/ (cf. https://github.com/derickfay/import-alfred-snippets ). Presumably the .alfdb file is generated from those for purposes of speed etc.
  3. The Dropbox API doesn't have a call to create a new note at this point. There are ways to automate a click but they will be messy and breakable.
  4. I use it in Alfred file actions, mostly (I can't think offhand of any conventional workflows where I use it...). e.g. I have a file action "Move to Dropbox tagging current" which runs the following bash script:

       /opt/local/bin/tag -a current {query}
       mv {query} ~/Dropbox/
  5. I assume Spin is thinking of a theme that might display Alfred's results with the corresponding colors if they have colored tags. Which isn't possible (unless I really missed it) but which would be a pretty cool feature.
  6. This is a post without a workflow. Much of what I have to say is also applicable to SAS, SPSS, Excel, or whatever else you might use to analyze quantitative data. Whenever you're collecting data you want to have a codebook listing all your variables and a longer description of what each refers to, what each means for categorical variables (e.g. 1=mac, 2=windows, 3=amiga os, etc.). Putting your codebook in a CSV has multiple advantages. You can keep your codebook in a single file & merge it (in whatever - I use Nisus) but also use it as a list filter in Alfred. Output the variable name to the clipboard & you're in business. I have three surveys of panel data from 1998, 2009, and 2015. I've created three list filters with the code books for each. So I can type s98 and search for a variable - when I find it, it's copied to the clipboard as survey98$variableName, ready for pasting into R. And rather than opening a codebook file and searching for the variable name to check its description, possible values, etc., it's all right there in front of me in Alfred.
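     To make that concrete, here's a hypothetical three-line codebook CSV (the variable names are made up), in the Title, Subtitle, Arg column order that, as far as I recall, Alfred's List Filter expects when you import a CSV:

       "age_r","Respondent age in years (18-99)","survey98$age_r"
       "os_main","Primary OS: 1=mac, 2=windows, 3=amiga","survey98$os_main"
       "inc_hh","Household income bracket (1-7)","survey98$inc_hh"

     The first column is what you search on, the second shows up as the subtitle, and the third is what gets copied to the clipboard.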
  7. See also https://github.com/jdberry/tag But the really great idea is setting the file action (not file filter, correct?) to toggle, not just add. FWIW I use Finder for almost all my tagging -- using a keyboard shortcut as Dean suggests -- auto-completion is super fast.
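     A rough sketch of what the toggle could look like as the file action's bash script, assuming jdberry's tag is on your PATH and the tag name is "current" (both assumptions):

       f="{query}"
       if [ -n "$(tag --match current "$f")" ]; then
           tag --remove current "$f"
       else
           tag --add current "$f"
       fi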
  8. URL tracks still don't respond to add or remove, so there's no way to do this from AppleScript.
  9. I'm sure I just found it on Doug Adams' AppleScripts for iTunes site, or by trial and error looking at the AppleScript dictionary.
  10. As often comes up, Alfred is good for launching at the request of the user, but is not built for running in the background, so you'll need some other tool(s) as well. Having said that, it's easy to launch Alfred via AppleScript:

       tell application "Alfred 3" to search "sbyn "

      To launch that script you'll want to use launchd ( http://www.launchd.info ) or Lingon ( https://www.peterborgapps.com/lingon/ ), which is just a nice front end for launchd. By the way, I bit my nails past university, then I started playing the violin. Once I had to choose between biting and playing, stopping biting was easy. Even though I really only needed to stop with my left hand, stopping my right (bowing) hand as well came automatically. YMMV
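     For the launchd side, here's a minimal agent plist sketch, assuming you save that one-liner as /Users/yourname/Scripts/alfred-sbyn.scpt (a made-up path) and want it to fire at 9:00 every morning; Lingon essentially writes this file for you. Save it in ~/Library/LaunchAgents/ and load it with launchctl load.

       <?xml version="1.0" encoding="UTF-8"?>
       <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
       <plist version="1.0">
       <dict>
           <key>Label</key>
           <string>com.example.alfred-sbyn</string>
           <key>ProgramArguments</key>
           <array>
               <string>/usr/bin/osascript</string>
               <string>/Users/yourname/Scripts/alfred-sbyn.scpt</string>
           </array>
           <key>StartCalendarInterval</key>
           <dict>
               <key>Hour</key>
               <integer>9</integer>
               <key>Minute</key>
               <integer>0</integer>
           </dict>
       </dict>
       </plist>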
  11. this should work:

       i = "{query}"
       t = i.split(".")
       print t[0] + ":" + str(int(t[1]) * 0.6)
  12. Feel free to add it to your much more comprehensive workflow!
  13. Just ran into another issue when I sat down at my iMac for the first time in a week....workflows sync via Dropbox but of course Ulysses authentication tokens are not the same from one Mac installation to another. I think I will need to do something like this and pass the authentication token when I call the script in the Script Filter. Unless you see a better way.
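     One sketch of a workaround, assuming you keep each Mac's token in a small local file such as ~/.ulysses-token (a made-up path) that never travels with the synced workflow, and have the Script Filter read it instead of hard-coding the token:

       #!/usr/bin/python
       # read this Mac's Ulysses token from a local, non-synced file
       import os
       import ulysses

       token_path = os.path.expanduser('~/.ulysses-token')  # hypothetical location
       with open(token_path) as f:
           token = f.read().strip()
       ulysses.set_access_token(token)
       # ...rest of the script filter as before...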
  14. This reminds me I meant to post about a weird issue I had last night. I kept getting a message "xcall.app is no longer running". In activity monitor there were like nine instances of xcall active. I had to kill them all manually before the workflow would work again.
  15. PDF Split (File Action) Split a two-page scanned PDF into two separate pages. Accepts multiple files. When used on a file original.pdf, it creates original-split.pdf in the same location. All the action is in splitPDF.py which is a very slightly tweaked version of a script by Hanspeter Schmid posted here. Built with PyPDF2. Download: https://www.dropbox.com/s/ablkq7p94dxnn5l/PDF Actions.alfredworkflow?dl=1 Not as versatile as Skimmer : PDF actions for Skim was, but working and hopefully more future-proof.
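     For anyone curious what the splitting amounts to, here's a rough sketch of the general technique (not the exact script in the workflow), assuming the older PyPDF2 camelCase API and landscape 2-up scans cut down the vertical middle:

       import copy
       from PyPDF2 import PdfFileReader, PdfFileWriter

       def split_two_up(src, dst):
           reader = PdfFileReader(open(src, 'rb'))
           writer = PdfFileWriter()
           for i in range(reader.getNumPages()):
               page = reader.getPage(i)
               left = copy.copy(page)
               right = copy.copy(page)
               # give each half its own MediaBox so cropping one doesn't move the other
               left.mediaBox = copy.copy(page.mediaBox)
               right.mediaBox = copy.copy(page.mediaBox)
               x0, y0 = [float(v) for v in page.mediaBox.lowerLeft]
               x1, y1 = [float(v) for v in page.mediaBox.upperRight]
               mid = (x0 + x1) / 2
               left.mediaBox.upperRight = (mid, y1)
               right.mediaBox.lowerLeft = (mid, y0)
               writer.addPage(left)
               writer.addPage(right)
           with open(dst, 'wb') as f:
               writer.write(f)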
  16. stately-plump-buck-mulligan was taken?
  17. Just occurred to me that both of these assume there's only one root-level library item, i.e. library[0]. Presumably for this to work with On My Mac or External Folders you'd need to iterate through the root-level items.
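     A minimal sketch of that change, using the all_sheets generator from the scripts below: instead of calling all_sheets(library[0]), walk every root-level item, e.g.

       results = []
       for root in library:  # iCloud, On My Mac, External Folders, ...
           results.extend(all_sheets(root))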
  18. Here's a cleaner version of ul that makes use of the ulysses-python-client library. I also changed it to output just the identifier as the argument.

        #!/usr/bin/python
        # search for a sheet's title and return its identifier
        import ulysses
        import json

        token = "your token here"
        ulysses.set_access_token(token)
        library = ulysses.get_root_items(recursive=True)

        def all_sheets(g):
            if hasattr(g, 'containers'):
                for c in g.containers:
                    yield json.dumps({u'title': c.title.replace('"', '').replace(',', '').replace(':', ''), u'arg': c.identifier, u'subtitle': c.type.capitalize()})
                    for child in all_sheets(c):
                        yield child
            if hasattr(g, 'sheets'):
                for s in g.sheets:
                    yield json.dumps({u'title': s.title.replace('"', '').replace(',', '').replace(':', ''), u'arg': s.identifier, u'subtitle': s.type.capitalize()})

        print('{"items":' + str(list(all_sheets(library[0]))).replace('\\\'', '').replace('\'', '') + '}')

      I now have it set up to output to this script to build the Markdown link:

        #!/usr/bin/python
        # create a Markdown link from an identifier
        import sys
        import ulysses

        token = "your token here"
        ulysses.set_access_token(token)

        query = sys.argv[1]
        title = ulysses.get_item(query).title
        print('[' + title + '](ulysses://x-callback-url/open?id=' + query + ')')

      Now thinking of what else one can do with the identifier... (Update) I've added a modifier (ctrl) to send the identifier only to the keyboard -- this can then be pasted into Ulysses group search (command-shift-F) and it will return all sheets that contain a link to the queried sheet.
  19. Actually there's a typo in Rob's code … in ulysses_calls.py -- open_recent() should call the URL that patgilmour pasted above.

        def open_all():  # @ReservedAssignment
            """Open special group 'All', bringing Ulysses forward."""
            call('ulysses://x-callback-url/open-all')

        def open_recent():  # @ReservedAssignment
            """Open special group 'Last 7 Days', bringing Ulysses forward."""
            call('ulysses://x-callback-url/open-all')  # <- the typo: this should be the recent-sheets URL

      Here's a script which can be put in a script filter to search the Ulysses library, returning newest results at the top. I call it usd, for Ulysses-sorted-(by)-date. This assumes you have Rob's library installed in your workflow.

        #!/usr/bin/python
        # get Ulysses library sorted by date & optionally search for query
        # outputs the identifier of the found sheet
        import ulysses
        import json
        import sys

        query = sys.argv[1]
        token = "your token here"
        ulysses.set_access_token(token)
        library = ulysses.get_root_items(recursive=True)

        def all_sheets(g):
            if hasattr(g, 'containers'):
                for c in g.containers:
                    for child in all_sheets(c):
                        yield child
            if hasattr(g, 'sheets'):
                for s in g.sheets:
                    yield [s.modificationDate, json.dumps({u'title': s.title.replace('"', '').replace(',', '').replace(':', ''), u'arg': s.identifier, u'subtitle': s.type.capitalize()})]

        print '{"items":['
        for i in sorted(list(all_sheets(library[0])), key=lambda item: item[0], reverse=True):
            if query.lower() in json.loads(i[1])['title'].lower():
                print i[1].replace('\\\'', '').replace('\'', '') + ","
        print ']}'

      Connect it to an Open URL item with the following:

        ulysses://x-callback-url/open?id={query}

      and if you want to create a link to the found sheet instead, add a keyboard modifier and connect to the Markdown link maker in the post after this one.
  20. One thing I noticed, though, was that the authentication token I'd acquired earlier today no longer worked and I had to re-authenticate. Have you been able to consistently use the same token without repeat re-authentication?
  21. Just had a few minutes to tool around with this library - it's great. I was able to quickly write the basis for the cross-referencing script. Here's a working draft.

        import re
        import ulysses

        # authenticate 1st & add your token here
        token = ""
        ulysses.set_access_token(token)

        # test - enter test sheet ID here (and make sure it has some links or nothing will happen!)
        sheetID = ""

        # get source sheet's title and body, and search for links
        iItem = ulysses.get_item(sheetID)
        iBody = ulysses.read_sheet(sheetID, text=True).text

        # get the Ulysses links to cross-reference
        uLinks = re.findall("ulysses://x-callback-url/open\?id\=[A-Za-z0-9_-]*", iBody)

        # get the BibDesk links in case we want to do something with them later
        bLinks = re.findall("x-bdsk://[A-Za-z0-9_-]*", iBody)

        # get the IDs for linked sheets
        idList = []
        for aLink in uLinks:
            id = aLink.replace("ulysses://x-callback-url/open?id=", "")
            idList = idList + [id]

        linkToI = "[" + iItem.title + "](ulysses://x-callback-url/open?id=" + sheetID + ")"

        # add a link to the source sheet to all its linked sheets
        referenceHeader = "#### Cross-references\n\n"
        for linkedSheet in idList:
            ulysses.insert(linkedSheet, referenceHeader + linkToI, format='markdown', position='end', newline=None)

      (After using this for a couple of hours I am finding it really useful for annotating legal cases and commentaries!)
  22. It looks like all the parameters are included in the URL, so if you are using the same settings every time, you could just create a custom web search with the URL set to https://books.google.com/ngrams/graph?content={query}&case_insensitive=on&year_start=1800&year_end=2016&corpus=15&smoothing=3
  23. Hmm. I think that would still be a lot faster than many script filters that pull results off the net. On reflection, I think the approach you describe is quite good b/c it would require next to no modification of existing workflows.