
Alfred stopped finding files on Google Drive File Stream



On 8/23/2021 at 3:11 PM, millerstevew said:

With the JSON workflow, I will find there is a brief 1-2 second delay at times when I invoke the workflow command. This is not an issue with the FIND workflow.

 

I suspected that might be the case, because Google Drive users (in this thread) seem to have a huge number of files.

 

I’m predicting the JSON version will take longer to load but be faster to change queries, while the FIND version will be the opposite. Because what one typically wants is to search once and be done with it, startup time will be more important and FIND seems to win there.

 

For those interested in the technical details, the JSON version needs to load the cache of all files and only then do the parsing, while the FIND version reads the cache until it finds 100 (by default, but I might reduce it to 50) matches, then makes the (way smaller) JSON out of that. And because it sorts by access time, those searches will be faster.
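To make the early-exit idea concrete, here is a minimal sketch of the FIND strategy. The method and variable names are illustrative assumptions, not the workflow's own code: scan the cached paths in order, stop as soon as `limit` filenames match, and only then build the much smaller Alfred JSON for those hits.

```ruby
require "json"

# Sketch of the FIND approach (names are assumptions, not the workflow's):
# stream the cached file list, stop at the first `limit` matches, then
# build the (way smaller) Alfred JSON from just those hits.
def find_matches(cache_lines, query, limit = 100)
  matches = []
  cache_lines.each do |path|
    next unless File.basename(path).downcase.include?(query.downcase)
    matches << path
    break if matches.length >= limit # early exit: the rest of the cache is never read
  end
  { items: matches.map { |p| { title: File.basename(p), arg: p } } }.to_json
end
```

Because the loop breaks at `limit`, a query that matches early never pays for the full cache, which is why sorting the cache by access time helps: recently used files match near the top.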

 

On 8/23/2021 at 3:11 PM, millerstevew said:

A question about both: If I am searching for a file titled "2021.22.08 bulletin- final.pdf" and type "2021 bulletin", this does not return the file. Is this expected behavior?

 

I ask because previously, Alfred search would return files. I'm assuming that there is something about the logic of your workflow that doesn't treat spaces as an AND operator. (My CS knowledge is limited, so please forgive me.)

 

Yes, it is expected behaviour, and for exactly that reason. For performance, a typical Alfred search may sometimes omit an obscure result you’re looking for. That’s a good tradeoff, but this workflow has to be more naive than that: it caches every file in your Google Drive. The consequence is what you noted; making it match results the way Alfred does would make it way slower.
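For illustration, here is a toy version (not the workflow's actual matching) of what treating spaces as AND would mean: every query word must appear somewhere in the filename, in any order.

```ruby
# Toy word-by-word AND match (illustrative, not the workflow's code):
# every query word must appear somewhere in the filename.
def every_word_matches?(filename, query)
  query.split.all? { |word| filename.downcase.include?(word.downcase) }
end

every_word_matches?("2021.22.08 bulletin- final.pdf", "2021 bulletin") # => true
```

A plain substring match, by contrast, would require "2021 bulletin" to appear contiguously in the name, which is why the file above is missed.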


Because this Workflow is meant to fill the gap until Google fixes Drive indexing, optimisations need to stop somewhere or you’ll never get to enjoy it.

Edited by vitor
8 minutes ago, vitor said:

I’m predicting the JSON version will take longer to load but be faster to change queries, while the FIND version will be the opposite. Because what one typically wants is to search once and be done with it, startup time will be more important and FIND seems to win there.

 

For those interested in the technical details, the JSON version needs to load the cache of all files and only then do the parsing, while the FIND version reads the cache until it finds 100 (by default, but I might reduce it to 50) matches, then makes the (way smaller) JSON out of that. And because it sorts by access time, those searches will be faster.

 

That's fascinating, the way it works. Your assessment that the FIND version is a better approach for the typical user would certainly be the case for me, though I don't have nearly as many files in Google Drive as some in this thread.

 

10 minutes ago, vitor said:

Because this Workflow is meant to fill the gap until Google fixes Drive indexing, optimisations need to stop somewhere or you’ll never get to enjoy it.

 

Agreed! Thanks for filling a huge gap. Much appreciated. 

7 hours ago, vitor said:

Because this Workflow is meant to fill the gap until Google fixes Drive indexing

 

I reckon that will take at least two more years. As far as I can tell, Google has done absolutely nothing to address the issue. They just tell everyone that it's a known problem and they'll get round to it, then do nothing about it.

 

7 hours ago, vitor said:

I’m predicting the JSON version will take longer to load but be faster to change queries, while the FIND version will be the opposite.

 

I bet an SQLite version would smash both ;)

Edited by deanishe
18 hours ago, deanishe said:

I bet an SQLite version would smash both

 

Hadn’t thought of that! I reckon you’re right. Having to worry about escaping is a bit of a pain, though. I tried to make it even faster by storing each result’s JSON string in the DB, but the code starts to get stupid (and the cache takes a lot more disk space). Generating the JSON for those 50 results will be negligible, so I went back to storing the path (with the basename as TINYTEXT).
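For the curious, a hypothetical version of that cache schema might look like the following. The table and column names are guesses pieced together from this thread, not the workflow's actual DDL; the point is that the path is stored once, with the basename broken out so matching never scans directory components, and access time kept for the "recently used first" ordering. (Note SQLite doesn't enforce types like TINYTEXT; it accepts the name and applies TEXT affinity.)

```ruby
# Hypothetical DDL for the cache table (names assumed, not the workflow's own)
SCHEMA = <<~SQL
  CREATE TABLE main (
    fullpath   TEXT NOT NULL,  -- what the workflow hands back to Alfred
    basename   TEXT NOT NULL,  -- filename only, the column matching runs against
    accesstime INTEGER NOT NULL
  );
  CREATE INDEX idx_accesstime ON main (accesstime);
SQL
```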


Google Drive SQLITE. Also updated the previous post so there are now three.

 

@millerstevew No one else replied yet, so thus far it’s in your hands. This one should be faster than FIND.

Edited by vitor

FIND - Very slow to build cache (10-15 mins), but once going it works nicely.

 

JSON - Cache notification shows within seconds (not sure if this is right), then when I try a search it locks up, so I have to force quit Alfred.

 

SQLITE - I can't seem to rebuild the cache, tried a few times but not getting the notification at all - installed and uninstalled. Going to restart and try again!

Edited by alfredpanda
5 hours ago, vitor said:

Having to worry about escaping is a bit of a pain

 

Escaping? Doesn't Ruby have a proper SQLite library instead of trying to do your own escaping?

 

Might be worth considering fulltext search: then you'll get the nice, multi-word matching.
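As a sketch of what the fulltext suggestion could look like (FTS5, with assumed table and column names): an FTS5 `MATCH` against a multi-word query implicitly ANDs the terms, which is where the nice multi-word matching would come from.

```ruby
# Hypothetical FTS5 variant (names assumed). UNINDEXED keeps the full path
# out of the fulltext index; only the basename is searched.
FTS_SCHEMA = <<~SQL
  CREATE VIRTUAL TABLE files_fts USING fts5(basename, fullpath UNINDEXED);
SQL

# In FTS5, MATCH '2021 bulletin' means 2021 AND bulletin, in any order.
FTS_QUERY = "SELECT fullpath FROM files_fts WHERE files_fts MATCH ? LIMIT ?;"
```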

 

5 hours ago, vitor said:

I tried to make it even faster by storing each result’s JSON string in the DB

 

Not worth it, imo, as you'd still have to smush all the strings back together. As long as you're LIMITing the query, JSON generation should take no appreciable time.

 

Edited by deanishe
2 hours ago, deanishe said:

Escaping? Doesn't Ruby have a proper SQLite library instead of trying to do your own escaping?

 

Yes, but it’s an external gem and I’ve been having some oddities with the default Ruby setup where it stops working for some reason.


Seems to be working now. I’ve updated the Google Drive SQLITE version.


This one seems to be buttery smooth and is likely to be the winner. I’m predicting no major changes going forward. Thank you for the SQLite suggestion; I’m such a fan of the versatility of text files that I seldom remember the DB solution.


I'm testing the SQLITE version again - it's ~30 mins since running :gdrebuildcache and nothing yet.

 

Could I be doing something wrong?

 

Thanks!

 

[Screenshot attachment: 2021-08-25]

 

EDIT: 39 minutes later I got the notification; however, I also got an error:

[Error screenshots]

Edited by alfredpanda

I modified the workflow to use fulltext search, but it kinda sucks for filenames.

 

How about this?

 

args = Query.split(" ").map { |word| "%#{word}%" }  # one LIKE pattern per query word
# Every word must match the basename (AND), most recently accessed first
sql = "SELECT fullpath FROM main WHERE " +
  Array.new(args.length, "basename LIKE ?").join(" AND ") +
  " ORDER BY accesstime DESC LIMIT ?;"
args.push(Limit)
Results = db.execute(sql, *args).flatten

 

Edited by deanishe

I'm getting this error when I run the latest version. Any ideas on how to fix this?

 

[13:50:05.698] ERROR: Google Drive[Run Script] /System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/rubygems/core_ext/kernel_require.rb:54:in `require': dlopen(/Users/cwwang/Library/Application Support/Alfred/Alfred.alfredpreferences/workflows/user.workflow.A8598191-D45E-4724-8154-42C3E0AE5A32/_licensed/sqlite3/gems/sqlite3-1.4.2/lib/sqlite3/sqlite3_native.bundle, 0x0009): could not use '/Users/cwwang/Library/Application Support/Alfred/Alfred.alfredpreferences/workflows/user.workflow.A8598191-D45E-4724-8154-42C3E0AE5A32/_licensed/sqlite3/gems/sqlite3-1.4.2/lib/sqlite3/sqlite3_native.bundle' because it is not a compatible arch - /Users/cwwang/Library/Application Support/Alfred/Alfred.alfredpreferences/workflows/user.workflow.A8598191-D45E-4724-8154-42C3E0AE5A32/_licensed/sqlite3/gems/sqlite3-1.4.2/lib/sqlite3/sqlite3_native.bundle (LoadError)


1 hour ago, deanishe said:

Are you using an M1-based Mac?

 

I am. Per a previous post, so is @alfredpanda. I’ve confirmed the binary is ARM. Not sure if I can force gem to make a universal build. Trying to make an x86 version via arch didn’t work.


I may have to make the Workflow do the gem install for each user.
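A per-user gem install could be sketched roughly like this. This is an assumption about how it might work, not the workflow's code: try to activate the gem, and if it's missing, install it into the user's own gem directory and retry.

```ruby
require "rubygems"

# Rough sketch (assumed behaviour, not the workflow's code): activate the
# gem; if it isn't present, install it for this user only and retry.
def ensure_gem(name)
  gem name
rescue Gem::LoadError
  system("gem", "install", "--user-install", name) or raise "gem install #{name} failed"
  Gem.clear_paths # make RubyGems see the freshly installed gem
  gem name
end
```

Built this way, the native extension gets compiled on the user's own machine, so it matches their CPU architecture, sidestepping the ARM-vs-x86 bundle problem above.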

3 hours ago, deanishe said:

In the data folder, right?

 

Probably best. Though I just noticed: it looks like macOS’ default Ruby installation already includes the sqlite3 gem. Try this one, @CWW.

 

This one also includes the AND addition from above.

Edited by vitor
