
Quicklook Ondemand



I have a Script Filter in which I want to use Quicklook to provide a detailed view of a bunch of results from an API call. I render the HTML for each element (via Jinja) and pass a local path as the quicklookurl. As the number of results from the Script Filter gets large, the Quicklook generation becomes quite expensive.


I am requesting a feature in which the Quicklook preview can be generated on demand. Perhaps as simple as invoking a script with a set of parameters. Or, better yet, being able to supply the HTML directly in the JSON response without touching disk.
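For context, this is the current mechanism: each Script Filter item carries a quicklookurl pointing at a pre-rendered HTML file on disk. A minimal sketch (the paths and titles here are hypothetical examples, not the actual workflow's):

```python
import json

# Each item points its quicklookurl at an HTML preview that must already
# exist on disk -- hence the cost of rendering every preview up front.
items = [
    {
        "title": f"Result {i}",
        "arg": f"result-{i}",
        "quicklookurl": f"/tmp/myworkflow/previews/result-{i}.html",
    }
    for i in range(3)
]
print(json.dumps({"items": items}))
```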

Edited by twang

I don't think there's much chance of this being implemented. Similar requests haven't made it, and the underlying problem—your script is too slow—can be solved in other ways.


You could generate the HTML in a background process, so the results are still shown quickly.


You could use a faster language than Python (which is very slow) for the HTML generation. Objective-C or Go would be 10–50 times faster.


Or you could start a simple HTTP server that generates the Quicklook previews on-the-fly, and use a URL to that as your quicklookurl.
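Sketched with the standard library (the `/preview?id=…` parameter and port are assumptions for illustration), the server renders a preview only when Quicklook actually fetches it:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def render_preview(result_id):
    # Stand-in for the real (e.g. Jinja) template rendering.
    return f"<html><body><h1>Details for {result_id}</h1></body></html>"

class PreviewHandler(BaseHTTPRequestHandler):
    """Render a preview on demand, only when Quicklook requests it."""
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        result_id = query.get("id", ["unknown"])[0]
        body = render_preview(result_id).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Each Script Filter item would then carry something like
# "quicklookurl": "http://localhost:8765/preview?id=result-1"
# and the server would be started with
# HTTPServer(("127.0.0.1", 8765), PreviewHandler).serve_forever()
```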


Edited by deanishe

I think it's naive to simply assume a compiled language would improve performance. Yes, Python is significantly slower than a compiled language such as Go or Objective-C. However, in this case, it's disk access that is the bottleneck.


I am actually writing a workflow to help me work with our AWS cloud infrastructure. We have, at any point in time, several hundred EC2 instances as well as several hundred to several thousand ECS containers. I am building a workflow to help me search through instance IDs, tags, and various other attributes. I plan on expanding the search to include ECS containers, as well as RDS instances, ElastiCache, S3 buckets, and more.


I would like to provide a Quicklook pane that lets me look at various attributes of the instance/container/etc. I could try to simply pass the URL of the resource's detail view in the AWS console; but that assumes I have an established browser session with their web console -- plus, we manage multiple AWS root accounts, which further complicates that strategy. Instead, I already have AWS access keys configured for the boto3 client and can simply query the AWS API for the information I need.
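To make the shape of that data concrete, here is a sketch that flattens a response in the form returned by boto3's `ec2_client.describe_instances()` into rows for the search results. The sample response is hand-written and trimmed, not real account data:

```python
def summarize_instances(response):
    """Flatten a describe_instances-style response into rows suitable
    for Alfred results. `response` follows the shape that boto3's
    ec2_client.describe_instances() returns."""
    rows = []
    for reservation in response.get("Reservations", []):
        for inst in reservation.get("Instances", []):
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            rows.append({
                "id": inst["InstanceId"],
                "name": tags.get("Name", ""),
                "state": inst["State"]["Name"],
                "type": inst["InstanceType"],
            })
    return rows

sample = {  # hand-written sample in the API's response shape
    "Reservations": [{
        "Instances": [{
            "InstanceId": "i-0abc123",
            "InstanceType": "t3.medium",
            "State": {"Name": "running"},
            "Tags": [{"Key": "Name", "Value": "web-1"}],
        }]
    }]
}
```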


The real issue is that I am generating Quicklook files for the entire set of results, when I may only actually use the quicklookurl feature on one or two of them. Hence I would like some way to hook into the moment the Quicklook pane is activated.


Nonetheless, I can understand how the feature may not be a particularly high priority for the authors of this software. The suggestion to build a simple HTTP server, while a bit heavy-handed, may be the best solution to the problem.


Not at all. I appreciated your suggestions and have taken your advice to implement a web server. I think you may be a tad defensive.


In any case, to stay on topic -- for anyone who may stumble upon this thread with similar needs -- it turns out it was rather simple to implement a small Tornado web application that runs as a daemon in the background. I could serialize the entire instance data from the AWS API (which I already had from the search query) and pass it along as urlencoded data to the app, to be rendered via Jinja templates. It pulls in a lot of dependencies, but handles like a champ.


I highly recommend using the background module in deanishe's Alfred-Workflow package to handle the background process, as it handles the double fork and properly closes the file descriptors. It also drops a pidfile and provides an easy way to determine whether the server is alive or needs to be started. I ended up using a port configured from the workflow environment variables. I could have let Tornado bind to port 0 (a random port chosen by the OS), but it seemed rather complex to figure out which port the OS assigned in a platform-independent (or at least future-proof) way. Though, in hindsight, I could easily drop a file into the workflow cachedir to be read by the main workflow process.


If you're interested in a reference implementation, you may find it here.

Edited by twang
