jmjeong

alfred-pinboard version 2.3 (Alfred 3 support)


 

- GitHub Page : https://github.com/jmjeong/alfred-extension/tree/master/alfred-pinboard

- Workflow Download : https://raw.githubusercontent.com/jmjeong/alfred-extension/master/alfred-pinboard/pinboard.alfredworkflow

 

 

v2.3 (2016-05-20)

  • Update Settings for Alfred v3
 

v2.27

  • Fix a bug in debug logging

v2.25

  • Change the reload threshold from 12 hours to 1 hour
  • Tidy up the Alfred layout

 

v2.24

  • pblog now records the copy command
  • Add guard code for invalid bookmark data

 

v2.22

  • Launch-history command (pblog)
  • Sort option: last accessed time (^l)
  • '!' can also be used as a sort key

 

v2.1 Changelog

 

  - multiple tag search: specify a tag group for searching (#)
  - display the last-modified time of locally cached bookmarks
  - display the host name only in the main list
  - display tag information in the main list too
  - update the number of entries in the history list after searching
  - display untagged bookmarks in the tag list
  - support sort options: title ascending (`^a`), title descending (`^z`), time ascending (`^d`), time descending (default)
 

 

 

Yet another alfred-pinboard workflow. It provides instant Pinboard search and the following functions:

  • search Pinboard (pba) - supports search conditions such as OR (|), AND (space), and NOT (-)
  • search tags (pbtag)
  • search Pinboard memos (pbmemo)
  • show starred bookmarks (pbs)
  • browse and search history (pbhis)
  • go to or delete the found bookmark
  • copy the URL of the found bookmark
  • send the URL to Pocket
  • mark or unmark a favorite bookmark

search.jpg

Installation
  1. Download and install the alfred-pinboard workflow
    • You need to set the hotkeys manually
  2. pbauth username:TOKEN - set the access token
  3. pbreload - loads the latest bookmarks and memos from pinboard.in
  4. Search with the pba, pbtag, and pbmemo commands
  • (optional) pbauthpocket
    • needed only if you want to send URLs to Pocket
  • (optional) Install a cron job for faster searching without pbreload:
    • download it from pinboard-download.py
    • chmod a+x pinboard-download.py
    • register the script in crontab using crontab -e

      */15 * * * * /path/to/pinboard-download.py > /dev/null 2>&1

Commands
  • pba query : search for query in descriptions, links, and tags
  • pbnote query : search for query in Pinboard notes
  • pbu query : search for query in descriptions (titles) in the unread list
  • pbl query : search for query in links
  • pbs query : search for query in starred bookmarks
  • pbtag query : search the tag list; you can autocomplete it by pressing 'Tab'

  • pbhis : show the search history

  • pbreload : load the latest bookmarks from pinboard.in

  • pbauth username:token : set the Pinboard authentication token (optional)

  • pbauthpocket : Pocket authentication (optional)
Search Conditions
  • - before a search word stands for NOT, e.g. -program
  • a space between words stands for AND, e.g. python alfred
  • | stands for OR, e.g. python|alfred
  • AND queries are evaluated first, then OR queries
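As a rough sketch, the evaluation order above (split on | into OR groups, AND the space-separated terms inside each group, negate terms prefixed with -) can be expressed in a few lines of Python. The function name and tokenization here are illustrative assumptions, not the workflow's actual code:

```python
def matches(query, text):
    """Return True if text satisfies the query.

    The query is split on '|' into OR groups. Within a group,
    space-separated terms are ANDed together, and a leading '-'
    negates a term. AND groups are evaluated first, then ORed.
    """
    text = text.lower()
    for group in query.split('|'):
        terms = group.split()
        if not terms:
            continue
        ok = True
        for term in terms:
            if term.startswith('-'):
                # NOT: the term must be absent
                if term[1:].lower() in text:
                    ok = False
                    break
            elif term.lower() not in text:
                # AND: every plain term must be present
                ok = False
                break
        if ok:
            return True  # one satisfied OR group is enough
    return False
```

For example, `matches('python -alfred', title)` finds bookmarks mentioning "python" but not "alfred", while `matches('python|alfred', title)` finds either.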
Keys

You need to set these manually because of an Alfred restriction.

  • ctrl-shift-cmd-p : launch pba
  • ctrl-shift-cmd-c : launch pbtag
  • ctrl-shift-cmd-n : launch pbnote
  • ctrl-shift-cmd-s : launch pbs
  • ctrl-shift-cmd-h : launch pbhis
Actions
  • Enter opens the selected URL in the browser
  • Tab expands in the pbtag command
  • Hold cmd while selecting a bookmark to copy its URL to the clipboard
  • Hold alt while selecting to delete a bookmark from your Pinboard
  • Hold ctrl while selecting a bookmark to mark or unmark it
  • Hold shift while selecting to send the URL to Pocket; you need to set the auth token using
    pbauthpocket

 

  • help

    pbhelp.jpg

    Search

    search.jpg

    Tag Browse

    pbtag.jpg

    Tag Search

    pbtag-search.jpg

    Starred Bookmark

    pbs.jpg

    Search History

    pbhis.jpg

Edited by jmjeong


Good work!

A possibly useful tip regarding updating without having to do it manually or every X minutes via cron, while still keeping the responsiveness first-rate:

What I do in such situations is check the age of the cached data, and fork a background process to do the update if it's stale. Here's a simple example that downloads Pinboard bookmarks (it's the tutorial for my Python workflow library), and the relevant code for forking background processes is here.

I think it's a much more elegant solution. No need to hit the Pinboard server every 15 minutes, the user doesn't need to faff around with cron or manual reloading, and the workflow remains instantly responsive as long as there's data in the cache.
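The check-age-then-fork approach described above can be sketched in plain Python as follows. The cache path, the 10-minute threshold, and the downloader script name are illustrative assumptions; deanishe's Alfred-Workflow library ships ready-made helpers for running updates in the background:

```python
import os
import subprocess
import time

CACHE = os.path.expanduser('~/.alfred-pinboard/bookmarks.json')  # hypothetical path
MAX_AGE = 600  # refresh when the cache is more than 10 minutes old


def cache_is_stale(path=CACHE, max_age=MAX_AGE, now=None):
    """True if the cache file is missing or older than max_age seconds."""
    if not os.path.exists(path):
        return True
    now = time.time() if now is None else now
    return (now - os.path.getmtime(path)) > max_age


def refresh_in_background(script='/path/to/pinboard-download.py'):
    """Spawn the downloader detached from Alfred, so the workflow can
    answer instantly from the (stale) cache while fresh data arrives."""
    subprocess.Popen([script], stdout=subprocess.DEVNULL,
                     stderr=subprocess.DEVNULL)


# Typical flow: serve whatever is cached, refresh behind the scenes.
# if cache_is_stale():
#     refresh_in_background()
```

The point of the design is that the user-facing query never blocks on the network: it always reads the cache, and at worst the results are a few minutes old.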

Edited by deanishe


You can also add an External Trigger attached to a reload script as an option to reload.

 

I use IFTTT + Dropbox + Hazel + External trigger to update my Pinboard cache in another workflow.

 

Anyway, I’ll try yours ASAP.

 

By the way, why not an Add to Pinboard function?


I use IFTTT + Dropbox + Hazel + External trigger to update my Pinboard cache in another workflow.

That's a very special setup, tbh. Not everyone has all of those (especially Hazel, which is relatively pricey—but totally amazing and worth every penny!). Also, IFTTT is pretty laggy. I tried it for a while for event notifications for my calendar, but they usually arrived after the event had started :(

I suppose the best thing to do with Pinboard is use a cron job, but call https://api.pinboard.in/v1/posts/update first to see if there are new bookmarks.

Asking users to set up a cronjob or indeed IFTTT + Dropbox + Hazel is a recipe for endless support requests, imo.
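The posts/update check suggested above could be sketched roughly like this. The stamp-file path and function names are invented for illustration; the real part is that posts/update cheaply returns the account's last change time, which you can compare against a timestamp saved after your previous full posts/all download:

```python
import json
import os
import urllib.request

STAMP = os.path.expanduser('~/.pinboard-last-update')  # hypothetical stamp file


def server_update_time(token):
    """Ask posts/update when the account last changed (a cheap call)."""
    url = ('https://api.pinboard.in/v1/posts/update'
           '?format=json&auth_token=' + token)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)['update_time']


def needs_download(server_time, stamp_file=STAMP):
    """Only hit posts/all when the server reports a change newer than
    the update_time we recorded after the previous full download."""
    if not os.path.exists(stamp_file):
        return True
    with open(stamp_file) as f:
        return f.read().strip() != server_time


def record_download(server_time, stamp_file=STAMP):
    """Remember the server's update_time after a successful posts/all."""
    with open(stamp_file, 'w') as f:
        f.write(server_time)
```

With this guard in place, the cron job usually makes one tiny request per run and skips the full download entirely when nothing has changed.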

By the way, why not an Add to Pinboard function?

A good question! Would be very useful.

Edited by deanishe



 

Yes, it is a very, very special setup, but an external trigger would just be another workflow feature that could fit other setups as well.

 

In fact, any external trigger in any workflow is just a blank paper: the usage is up to the user. ;)


Indeed, external triggers are really useful for integrating other stuff into Alfred.

 

That said, they require AppleScript and coding to set up, so I'd try to avoid asking users to set one up if at all possible: it invariably means lots of support requests for me from folks who aren't good at coding (of which there are a lot), and that goes against my fundamental laziness ;)



 

I agree. I try to keep things as simple as possible.

 

But in this workflow, an external trigger would just be an option, alongside the cron support.

 

And I don't want to manually re-add an external trigger to this workflow after each update. :)


Do external triggers also get wiped in updates? That's bad :(

I say, surprisingly enough, go with my background fork method. Better yet, use my Python workflow library. It will make everything better, including your sex life and male-pattern baldness*

* Not actually true. I'm still single and bald :(

Edited by deanishe



 

I considered this option, too, but I chose the current method because of the API request limits and the speed of the API.

In my test case, posts/all takes less than a second with 500 or more bookmarks, and the data on the Pinboard site is not updated frequently.

 

API requests are limited to one call per user every three seconds, except for the following:

 

 

Some more comments:

 

1. I think making 4 requests in 1 hour does not burden the server. Each API request finishes in less than a second.

 

2. Bookmarks on pinboard.in are updated only by me, so I already know whether they are up to date. Most of the time, I get up-to-date results from the cron job. If there is stale data, I can use the 'pbreload' command.

 

3. I think checking for new data in the background could be an alternative solution, but I want the freshest data without any delay. Without the cron job, I would need to wait until the query finishes. That wait is actually short because the server is fast enough, but since API requests are rate-limited, the next query must wait 3 seconds.

Edited by jmjeong

 

When I type "pbtag", select a tag, and hit return, I get the same message that Jono got in your thread for version 1:

 

3GWlKHQ.png


 

 


Hit 'Tab' instead of 'Return'; the tag will be auto-completed.

Edited by jmjeong

 

Nothing happens when I hit tab.

 

Also, if I type 'pbtag {tag}' and hit Return, I get the same message. If I type 'pbtag {tag} {keyword}', I get the internet search fallback immediately after typing the first letter of the keyword.

 

 

EDIT: This is all because I have tab set as my default "show actions" key in Alfred preferences, instead of right-arrow. If I switch to right-arrow, it works. Do you think there's a way to get around this?

Edited by paulw


The pbtag syntax is [pbtag {tag} {search}].

The pbtag command shows the tag list from pinboard.in. After [pbtag {tag}] is completed, the next word searches bookmarks within the selected tag.

The [Tab] key is the auto-complete key.


By the way, why not an Add to Pinboard function?

 

I add URLs to Pinboard only via the Safari bookmarklet, not from Alfred. Adding to Pinboard from Alfred is time-consuming.



I'm aware of the API limits.

4 times an hour is not a huge deal, but it's 4 times an hour, every hour of every day, which is at least 10 times more than necessary. Also, a lot of people have several thousand bookmarks, so it takes a bit longer than a second for them. If a few hundred people install the workflow, that's several gigabytes of unnecessary traffic per day for Pinboard…

The demo workflow I referenced won't call more often than every 5 or 10 minutes (it's configurable). With regard to the 3 second interval, you have to cache the time of your last request in any case, as it's possible for someone to call the workflow repeatedly within the 3/60/300 second limits.
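Caching the time of the last request, as suggested above, might look like this minimal sketch (the class and method names are invented for illustration):

```python
import time


class RateLimiter:
    """Remember when the last request was made and report how long to
    wait so consecutive calls respect Pinboard's minimum interval
    (3 s for most endpoints, longer for posts/all and posts/recent)."""

    def __init__(self, min_interval=3.0):
        self.min_interval = min_interval
        self.last_call = None

    def wait_time(self, now=None):
        """Seconds to sleep before the next request is allowed."""
        now = time.time() if now is None else now
        if self.last_call is None:
            return 0.0
        return max(0.0, self.min_interval - (now - self.last_call))

    def record(self, now=None):
        """Call this immediately after each successful request."""
        self.last_call = time.time() if now is None else now
```

In a real workflow the `last_call` timestamp would be persisted to disk, since each Alfred invocation is a fresh process.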


 

 

Installing the pinboard-download.py script is optional, and the download interval is also configurable.

 

I use alfred-pinboard 10-20 times per hour. I think the current approach puts less load on the server than calling it on every Alfred invocation, and I need fast search speed.

 

Regarding the traffic issue, the current pinboard-download.py could be updated to download only the changed part.

In my case, 500 bookmarks are only 68 KB, so traffic is not a big issue for the time being.

Edited by jmjeong


I think we're talking at cross purposes. The script I linked to does not call the API every time the workflow is run, nor would that be a good idea. It loads the data from the cache and updates the cache in the background if it's older than 5/10/whatever minutes.

It'd make sense to only grab bookmarks updated after you last hit the API, but the documentation isn't clear: the "all posts" endpoint accepts a fromdt parameter, but it says posts created after this time. What about updated posts?

I think we're talking at cross purposes. The script I linked to does not call the API every time the workflow is run, nor would that be a good idea. It loads the data from the cache and updates the cache in the background if it's older than 5/10/whatever minutes.

 

 

Do you mean that the alfred-pinboard workflow should invoke the pinboard-download.py script in the background if the cache is older than a specified time?

 

It'd make sense to only grab bookmarks updated after you last hit the API, but the documentation isn't clear: the "all posts" endpoint accepts a fromdt parameter, but it says posts created after this time. What about updated posts?

 

It is unclear; it needs testing.

Edited by jmjeong


Exactly. When the workflow is run, it loads the data from its cache. If there is no cached data, or it's older than 10 minutes (for example), it downloads the latest data in the background.


 

This approach has one significant drawback.

 

When I launch alfred-pinboard 20 minutes after the last run, I get the old cache. Only after the background job finishes do I get fresh data. Updating bookmarks is instant, but updating notes can take more time because of the API request limit.

Edited by jmjeong


Why do you need to get notes? They're also contained in the posts data.

 

If you add a description to a note, the note data is replaced by the description in the posts data, and I need full-text search over the notes data.

 

 

Aside from the notes data, in my example an updated bookmark can only be picked up on the second launch anyway.

Edited by jmjeong


v2.1 has been released. 


 



