Use large local database for script filter: Question about caching



Hi,

 

I am working on a workflow that uses a large local database, and I am not sure about the best way to handle the situation. My script gets called every time the user changes the argument, right? (or I guess there is a small delay to avoid handling every keystroke)

It doesn't make sense to open the database every time the script is called, so my plan was to cache the relevant data in the folder suggested in the "best practice" post (non-volatile: ~/Library/Application Support/Alfred 2/Workflow Data/[bundle id]). But I still have to update the cache, and I am not sure about the best way to handle that. I am working with Python, so does it make sense to start a separate thread to handle the caching? The problem is that such a thread would end up getting started multiple times as the user changes the argument.
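
Concretely, the kind of freshness check I have in mind looks something like this (just a sketch: the Zotero path is a guess for my setup, and rebuild_cache() is a placeholder for the actual update):

    import os

    # All paths are assumptions for illustration; adjust to the real locations.
    ZOTERO_DB = os.path.expanduser('~/Zotero/zotero.sqlite')
    CACHE_DIR = os.path.expanduser(
        '~/Library/Application Support/Alfred 2/Workflow Data/my.bundle.id')
    CACHE_DB = os.path.join(CACHE_DIR, 'cache.sqlite')

    def cache_is_stale():
        # Stale if the cache doesn't exist yet, or if Zotero's database
        # has been modified since the cache was last written.
        if not os.path.exists(CACHE_DB):
            return True
        return os.path.getmtime(ZOTERO_DB) > os.path.getmtime(CACHE_DB)

    if cache_is_stale():
        if not os.path.isdir(CACHE_DIR):
            os.makedirs(CACHE_DIR)
        rebuild_cache()  # placeholder: extract the subset the workflow needs

That would avoid threads entirely: every invocation just compares modification times, which is cheap, and the rebuild only happens when the Zotero file has actually changed.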

 

Any suggestions about handling this situation would be great.

 

Thanks!

 



What type of database is it? I would think opening the database every time wouldn't really matter; I do it in several of my workflows that use local SQLite databases, and it happens so fast you don't really even notice it.
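
For example, one of my script filters boils down to roughly this (the table and column names are made up for illustration):

    import sqlite3
    import sys
    from xml.etree.ElementTree import Element, SubElement, tostring

    query = sys.argv[1] if len(sys.argv) > 1 else ''

    # Open the database fresh on every invocation; for a local SQLite
    # file this only takes a few milliseconds.
    con = sqlite3.connect('mydata.sqlite')
    rows = con.execute(
        "SELECT title, uid FROM entries WHERE title LIKE ?",
        ('%' + query + '%',)).fetchall()
    con.close()

    # Alfred 2 script filters expect XML describing the results on stdout.
    items = Element('items')
    for title, uid in rows:
        item = SubElement(items, 'item', {'arg': str(uid), 'valid': 'yes'})
        SubElement(item, 'title').text = title
    print(tostring(items).decode('utf-8'))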



It's Zotero's database, and I think it's too large to open every time; my own is over 20 MB right now. The workflow only needs a small subset of that information, so I thought about either creating a cache with separate files for each relevant entry, or a database (maybe SQLite) with only the necessary information. In both cases, the cache has to be updated when the Zotero database changes, and that is the part I am not sure about...
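
The update step would then be something like this (a sketch; the SELECT is a placeholder, because I haven't worked out Zotero's actual schema yet):

    import sqlite3

    def rebuild_cache(zotero_db, cache_db):
        # Zotero keeps zotero.sqlite locked while it is running, so it may
        # be safer to copy the file to a temp location and read the copy.
        src = sqlite3.connect(zotero_db)
        dst = sqlite3.connect(cache_db)
        dst.execute('DROP TABLE IF EXISTS entries')
        dst.execute('CREATE TABLE entries (uid TEXT, title TEXT)')
        # Placeholder query: the real one has to join Zotero's actual
        # tables (items, itemData, ...), which I still need to look up.
        for uid, title in src.execute('SELECT itemID, title FROM some_view'):
            dst.execute('INSERT INTO entries VALUES (?, ?)', (str(uid), title))
        dst.commit()
        src.close()
        dst.close()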


I will try that and see how it works performance-wise. But I guess there is another reason for caching: I might want to follow the first script filter with a second script filter showing the attachments of a particular item in the Zotero library. That would mean I have to reopen the database in the second filter, no?

 

I guess there are a couple of related general questions, which I am unsure about:

- Does Alfred kill the current process/script when a new one starts because the user changes the argument?

- Is there a way to keep variables in the memory when the user changes the argument and the script is called again?

- Or between two script filters, or between a script filter and a script that gets an argument from a script filter? (like a callback function in the script filter that is called when the user selects an item)



1. Alfred doesn't kill the previous process; it still runs to completion.

2. Keep the variables in memory? No. You could always write them to a local file, though.

3. Same applies. Whatever is set in the "arg" for the script filter output is passed to the script and is available as {query}, but anything else would have to be written to a local file or something and then read in the next piece.
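
For instance, a pair of helpers along these lines would cover it (the paths are just illustrative):

    import json
    import os

    DATA_DIR = os.path.expanduser(
        '~/Library/Application Support/Alfred 2/Workflow Data/my.bundle.id')
    STATE_FILE = os.path.join(DATA_DIR, 'state.json')

    def save_state(data):
        # Called from the script filter: persist whatever the next
        # step in the workflow will need.
        if not os.path.isdir(DATA_DIR):
            os.makedirs(DATA_DIR)
        with open(STATE_FILE, 'w') as f:
            json.dump(data, f)

    def load_state():
        # Called from the next script: read the persisted state back.
        with open(STATE_FILE) as f:
            return json.load(f)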



Thanks! I think I will just set the "arg" for the script filter to a JSON string. That solves the problem and seems like a pretty flexible way to pass an object with multiple elements to the next step...
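
Roughly like this, I mean (a sketch; on the receiving side the escaping options for {query} have to be set so the JSON's double quotes survive the shell):

    import json
    from xml.etree.ElementTree import Element, SubElement, tostring

    # Script filter side: pack several fields into the single "arg" string.
    # ElementTree escapes the quotes inside the attribute automatically.
    payload = json.dumps({'itemID': 42, 'collection': 'papers'})
    items = Element('items')
    item = SubElement(items, 'item', {'arg': payload, 'valid': 'yes'})
    SubElement(item, 'title').text = 'Some Zotero item'
    print(tostring(items).decode('utf-8'))

and on the receiving side:

    # Next step, invoked e.g. as: python action.py "{query}"
    import json
    import sys

    data = json.loads(sys.argv[1])
    item_id = data['itemID']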

