
h2ner
Member · Content Count: 43 · Days Won: 2

Everything posted by h2ner

  1. https://github.com/gennaios/alfred-jnana/releases — a port of Gnosis to Go. Setup code for creating the database, etc. is not included, but if you've run Gnosis at least once, move the db into the workflow folder as jnana.db and it should work. I reread your variable docs, deanishe, and figured it out. I didn't have a step at the end that deleted the variable; I had thought that if it wasn't exportable, it would automatically disappear and not show up in the variable list. Perhaps that's in the wording somewhere. Having added the step of saving a variable through AppleScript, there seems to be a noticeable delay, which may come from retrieving the variable. Where I need the variable, for filtering items according to the open file, that SQLite query is more complex and seems to take at least 20+ more ms; it could be both combined that make querying items for the currently open file seem slightly slower than an FTS query on the whole database. Is Arg and Vars then not appropriate for use before a Script Filter? It seems to automatically pass the variable as the query/argv to the following Script Filter. Or maybe there's another way to set it up. As the workflow is in general pretty fast in Go, I'm starting to notice any slight delay.
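For context on the setup described above: workflow variables set by an upstream step reach downstream scripts as environment variables. A minimal Python sketch, where the variable name `theFile` is purely illustrative (not from the workflow itself):

```python
import os

def get_open_file(default=""):
    """Read a workflow variable set by an upstream step.

    Alfred exposes workflow variables to downstream scripts as
    environment variables; "theFile" is a hypothetical name here.
    """
    return os.environ.get("theFile", default)

# Simulate Alfred having set the variable before this script runs.
os.environ["theFile"] = "/Users/me/book.epub"
open_file = get_open_file()
```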
  2. Hello Andrew, when I tried an Arg and Vars step after Run Script, which as you suggested sets an environment variable from {query}, then in the next step, a Script Filter, {query} and argv are also set to the value returned from the Run Script. Perhaps I'm missing something? The Script Filter should run with argv as whatever the input text of the Script Filter is, with the variable additionally available. So far, I'm only able to get that with the above-mentioned AppleScript inside the Run Script. I use the variable only during execution of the workflow (no need for it to be saved afterwards), and am thus not sure if there's another way, or if some alternate code is needed within the Script Filter. I don't remember if it was mentioned in the docs: are variables set permanently, or is there a way to set variables that are available only during script execution and thus aren't saved to the plist? If they are permanent, perhaps there's a step I can add at the end that will remove the variable from the environment.
  3. Hello, I was looking at my workflow and eventually figured out why one part is slow. I have a Script Filter that first finds the frontmost app and then gets the open file. I realized that this runs on each execution of the Script Filter when it doesn't need to, so I separated it into an intermediary Run Script action. How do I pass that file path along as a variable? I'm not sure if Arg and Vars can be used in this case, with the variable then accessed from the next step, an AppleScript Script Filter. Alternatively, I can save the file path as an environment variable using:

set bundleID to (system attribute "alfred_workflow_bundleid")
tell application "Alfred 3"
    set configuration "…" to value quoted form of theFile in workflow bundleID with exportable
end tell

but then the variable becomes a permanent setting, saved in the workflow plist. That's unneeded, plus if I didn't want it always showing up as a git change, I'd have to add some filter to ignore that line. Is what is set by Arg and Vars accessible to later AppleScript scripts and Script Filters?
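If I read Alfred's documentation correctly, a Run Script action can also set session-only variables, without the permanent `set configuration` write, by printing JSON with a top-level `alfredworkflow` key. A minimal Python sketch of that output format; the variable name and path here are made up for illustration:

```python
import json
import sys

def emit_with_variables(arg, variables):
    """Print the JSON a Run Script action can emit so that downstream
    workflow objects receive `arg` as input plus the given variables,
    for this run only (nothing is written to the workflow plist)."""
    out = {"alfredworkflow": {"arg": arg, "variables": variables}}
    sys.stdout.write(json.dumps(out))
    return out

# Hypothetical variable name and file path.
result = emit_with_variables("", {"theFile": "/path/to/open-file.pdf"})
```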
  4. I wasn't thinking clearly when I phrased that. Closer to what I was originally wondering: I'm trying to get an idea of the potential lowest total execution time of a script filter. I haven't looked much at the AppleScript in a while but will do. Launching a minimal Python script without imports that returns one result — I'm unsure what the overhead of launching Python is, whether Alfred needs to create a shell environment first, or whether there's anything else. Perhaps all of that doesn't add much, and the potential of a script filter to execute and return results, in Python, might be in the dozens of milliseconds? It helps to get an idea: if, say, my script execution time could be reduced by perhaps 100 ms, that is perceivable; if it's just 20 ms or so, what percentage of total time might that be? A rough estimate would be nice to know, though now that I've started to port it to Go, it's less important. Still, over time, having an idea of the potential speed and trying to reach it is a nice aim. Recently I tried to separate out the Peewee imports, such as its apsw module and the Postgres extension. The problem was that the main Peewee module imports all drivers if available, rather than on class instantiation, and I had some issue with it breaking a table field type that is extended in its extended SQLite module that uses apsw but seems to have been imported from main Peewee. I reverted since it didn't make any speed difference. Now that I'm more comfortable with SQL, Python, etc., I may try to remove Peewee, though with Go, who knows. Only in recent weeks have I started to seriously look into SQLite, and I'm pretty happy with it, so I may remove the other code and drivers. That will wait; I'll concentrate on Go. As far as SQLite goes, the load time of the apsw driver itself plus running a query, if I remember correctly, might be around 40 ms. Timing the query within sqlite3 rounds to tens of ms; it's ~10 ms with a query that returns one result. Maybe there are compile options for the driver.
… All these details are perhaps less important now, though I may try some; future efforts will go toward porting to Go. I'd also like to know SQLite better: options and pragma statements that might affect queries returning many results. There was also a mention on the sqlite-users mailing list of someone writing a ranking algorithm that adds a fair amount of Sphinx's SPH04. Little details I may look into over time. Thanks for the AppleScript tip. As for the error, on a new setup of Alfred with a different macOS user, I was unable to reproduce it. It seems like an AppleScript error; I'll keep looking into it and thinking about what it might be. Of note, and seemingly unrelated: a recent version of calibre (within the last ~2 years) is needed for EPUB use. I had asked, specifically for this workflow, and a command-line parameter was added to ebook-viewer to open a file at a TOC entry by title. Not ideal, since it might mismatch, but it's a start. I'm unaware of any other EPUB viewer with anything similar or with AppleScript support.
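The interpreter-launch overhead wondered about above can be measured directly by timing a do-nothing script; a rough standard-library sketch (absolute numbers will vary widely by machine and Python version):

```python
import subprocess
import sys
import time

def time_interpreter_launch(runs=5):
    """Average wall-clock time to launch the Python interpreter and
    execute an empty program, which bounds the minimum cost of any
    Python-based script filter before it does real work."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([sys.executable, "-c", "pass"], check=True)
        total += time.perf_counter() - start
    return total / runs

avg = time_interpreter_launch()
print(f"average interpreter launch: {avg * 1000:.1f} ms")
```

On top of this baseline come imports, the query itself, and Alfred's own parsing and rendering, which this sketch does not capture.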
  5. Indeed, removing Peewee is a source of more speed. At this point the workflow is maybe in maintenance mode — fixes, new features, etc., but no larger recoding efforts. As I've started porting it to Go, at least the most-used part that would benefit most from speed, I use both in combination and will slowly convert the rest to Go as I learn the language, available modules, etc. As for what's taking up the ~150 ms for one script filter, I was curious how that compares to the total round-trip time of a Python script filter. As is:
- Full-text relevancy-ranked search is done on an SQLite FTS5 table, sorted by relevancy across 3 columns. The DB is at ~1.2 million rows. This should be a few dozen ms at most. I read some notes the other day about FTS query syntax where two forms give the same results but one might run slightly faster; various pragma statements might also make a small difference.
- Loading three DB drivers (apsw for SQLite, psycopg2 for PostgreSQL, pymysql for Sphinx) seems to take up about two thirds of execution time: ~100 ms of the total ~150. I wasn't too familiar with SQLite when I began, so I used PostgreSQL/Sphinx. I could remove the code for optionally using a different backend, which should reduce load time, but I'll likely leave it in for those who want it. It was only recently, in getting the workflow mostly good enough for sharing, that I looked into how to configure SQLite; possibly more could be done there. Overall it seems Peewee could load and parse modules in under 50 ms if using only apsw for SQLite.
Total time includes these steps:
- Init the DB, create tables if needed, create triggers (if they don't exist) for FTS table population, and run SQLite pragma statements. All of that seems to go quickly.
- Read the cached query string, so bringing up the script filter populates the text field and results with the previous search. The query string is saved on each execution of the script filter; I'm unsure if there's a way to save it only after one has pressed return or selected a filter result. It seems this step, at least the file write, could be threaded.
- Image thumbnails are used for filter results if they exist. For each result that goes into the filter (searching indexed ebook TOC entries to open a book at a section), a check whether a thumbnail exists is done and the icon type is set. Search is limited to 100 results, so 100 checks are done; maybe that doesn't take too much time. As there can be multiple results from the same file (different sections of the same book), maybe some map could be used to set the icon path once for all entries from the same file. I'm not too strong with such tasks but will look into it; I'm unsure whether doing it before creating workflow items would be faster or slower.
In general, I'm unsure how fast all of that could run in optimal conditions. Under 60 ms? I'm also unsure of the additional overhead of Alfred initiating the script filter, running AppleScript before and after, and parsing and displaying the results. The part I most wanted to speed up was remade in Go, so perhaps I'll mostly devote effort there. As it seems mostly ready for others to start using, updated:
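The "map" idea above — doing the thumbnail-existence check once per file rather than once per result — can be sketched as a small memoized resolver. The thumbnail directory and naming scheme here are assumptions for illustration:

```python
import os

def make_icon_resolver(thumb_dir):
    """Return a function mapping a source file path to an icon path,
    caching the os.path.exists check so that repeated results from
    the same file cost a single dictionary lookup."""
    cache = {}

    def icon_for(file_path):
        if file_path in cache:
            return cache[file_path]
        # Hypothetical scheme: thumbnails named after the file's basename.
        thumb = os.path.join(thumb_dir, os.path.basename(file_path) + ".png")
        icon = thumb if os.path.exists(thumb) else file_path
        cache[file_path] = icon
        return icon

    return icon_for

resolve = make_icon_resolver("/tmp/thumbs")
# 100 results from the same book trigger only one filesystem check.
icons = [resolve("/books/novel.epub") for _ in range(100)]
```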
  6. Thank you deanishe. I started my one workflow long ago, when my programming abilities were less than ok; they're perhaps only slightly better now. Working on it here and there, I hadn't looked at it in a while. I recently tried to make some improvements and additions to get it close to sharing. As mentioned elsewhere, I used Peewee at the start for ease of use and to reduce code duplication when configuring different db backends. That takes about 0.09 seconds to load, most of which seems to be the loading of two drivers. Now, after having read up on SQLite, I'm using only that, though I haven't removed the rest of the code. A full-text search over a fairly large database takes about 150 ms total from script start to returning workflow items. I've tried to optimize it, and it's maybe as close as it can get; only so much can be done, it seems. Removing Peewee might gain me half that time, I'm unsure. You encouraged me to try Go, and as I was getting back to my workflow, I've been looking into it in recent weeks. The main part of the workflow where I'm concerned about speed is a script filter doing a full-text search over the entire db. Like with a search engine, I might refine the search terms to narrow down and explore results. Since your reply, I've tried to get just that most-used part working, and got it going earlier today. So nice: ~8–50 ms. And thank you for the wonderful alfred-workflow and awgo.
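The kind of relevancy-ranked, multi-column full-text search described in these posts can be sketched with SQLite's FTS5 extension, which modern Python builds usually include. A minimal in-memory example with made-up data, loosely mirroring the title/section/file columns mentioned above:

```python
import sqlite3

# Small in-memory FTS5 index over three searchable columns.
con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE entries USING fts5(title, section, file)")
con.executemany(
    "INSERT INTO entries VALUES (?, ?, ?)",
    [
        ("Concurrency", "Chapter 8", "go-book.epub"),
        ("Goroutines and channels", "Chapter 8 > 8.1", "go-book.epub"),
        ("Indexes", "Chapter 3", "sql-book.pdf"),
    ],
)

def search(query, limit=100):
    """Full-text search ordered by FTS5's built-in bm25-based rank
    (lower rank = more relevant)."""
    return con.execute(
        "SELECT title, section, file FROM entries "
        "WHERE entries MATCH ? ORDER BY rank LIMIT ?",
        (query, limit),
    ).fetchall()

rows = search("goroutines")
```

On an indexed table the query itself is typically fast even at millions of rows; as the posts note, driver and interpreter start-up can easily dominate the total.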
  7. It seems some here might have an idea. Does anyone have a rough estimate of how long it takes, on a modern system, for a script filter to run a minimal Python script and return results to Alfred? Perhaps somewhere in the hundreds of milliseconds? I'm wondering how much I should try to optimize my workflow; e.g., a reduction in execution time of 0.05 seconds might be perceivable but require too much effort.
  8. Sadly, no AppleScript support; nor in some other similar app whose name I forget. Any other similar apps? That would be great.
  9. Terrific. I'm used to apps like the IntelliJ IDEs, where one can access any app action and any preference through search. I have seen very few apps offer that; Sublime Text is decent at it. I hope such a feature comes someday.
  10. I'm not sure if there have been any changes or additional features since my first post that might help. To better explain what I'm hoping for: I have a script filter that searches a database and returns results. Relevant results could be at least several dozen and sometimes a hundred or more. Just like with a search engine, I often look at several to a dozen or two of the most relevant entries, perhaps refining the search and continuing. Ideally, a script filter could persist its state, so the next time I run it, it doesn't requery the database but is reinvoked with the same query text filled in (in case I wish to edit it), the same results list, and the same cursor and scroll position — so, for example, I could just scroll down one and choose the next result if there are, say, 100. At some point, the ability to call Alfred from an external script via AppleScript was added, along with persistent variables; could I perhaps set the query text as a variable and then have an action before my script filter that calls it with the previously saved query text? My preference is to use the external trigger mode bound to a hotkey, as, like a search engine, I may invoke it repeatedly to look at dozens of results. Is it worth adding a toggle so that script filters can persist state across successive invocations? I imagine some may wish to do something similar: use a script filter repeatedly to examine numerous results while refining the search.
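One simple approximation of "reinvoke with the same query", which a later post in this thread describes actually doing, is to write the current query to a cache file on every run and read it back on the next invocation. A sketch; Alfred provides a per-workflow cache directory via the `alfred_workflow_cache` environment variable, and the file name here is an assumption:

```python
import os
import tempfile

# Fall back to the system temp dir when run outside Alfred.
CACHE_DIR = os.environ.get("alfred_workflow_cache", tempfile.gettempdir())
QUERY_FILE = os.path.join(CACHE_DIR, "last_query.txt")

def save_query(query):
    """Persist the current query so the next invocation can restore it."""
    with open(QUERY_FILE, "w", encoding="utf-8") as f:
        f.write(query)

def load_query():
    """Return the previously saved query, or an empty string."""
    try:
        with open(QUERY_FILE, encoding="utf-8") as f:
            return f.read()
    except FileNotFoundError:
        return ""

save_query("sphinx ranking")
restored = load_query()
```

This restores the query text and results but not cursor or scroll position, which only Alfred itself could persist.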
  11. I realize removing Peewee would give me great gains. If it's something like 0.15–0.2+ seconds to return results for a script filter, there is a visible delay. I'm pretty novice at programming, and my workflow was meant to work out of the box with SQLite but be usable with other databases such as PostgreSQL. It does full-text searches with relevancy ranking over two full-text indices in two tables; after enough records, PostgreSQL FTS became slow, so I added Sphinx. Maybe I could figure it out without an ORM, though there'd be more code, more to maintain, and so forth. I'm not sure if I'm ready to do that yet, and maybe some lightweight ORM-like library in Go, though indeed more complicated as you say, may work better. There may be other reasons to switch too. My workflow works well enough as is for me, though performance certainly could be better, and there's a good amount left to do before release, whenever that happens.
  12. Indeed, the SQLite module would be much more responsive. A few years ago, I had tried to optimize it as much as I could using Peewee and considered it a bit slow. I've gotten used to it since then, but now that I'm revisiting it, maybe an eventual port to Go would be worth it. I'd possibly still use some ORM or ORM-ish library, though it should be better.
  13. The main workflow I use, and perhaps the only one I've spent any significant time on, uses Peewee. I agree native SQLite support might not be worth adding. It's configured to work on install using SQLite, but optionally with PostgreSQL & Sphinx for full-text search. I'm currently searching 1 million+ records with Sphinx, and running a script filter that uses your alfred-workflow under 'time' at the terminal returns results in ~0.15 seconds. I'm a novice at Python, haven't profiled it in a while, just got back to it after a break from development, and am unsure what constitutes that time: function calls, Peewee initialization, db driver overhead, etc. I'm now looking at porting it to Go for more speed. It's not too bad, but certainly faster is better. In a quick comparison of various ORMs, I found Peewee easier to use and not too bad for writing code that supports various db backends.
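To find out whether driver imports dominate that ~0.15 s, one can time each import individually (the module names below are the ones mentioned in these posts; any that aren't installed are simply skipped). CPython's `python -X importtime` flag gives a more detailed breakdown, but a rough sketch:

```python
import importlib
import time

def time_import(module_name):
    """Return seconds spent importing a module, or None if it is
    not installed. Near-zero if the module is already cached in
    sys.modules, so run this before other imports for real numbers."""
    start = time.perf_counter()
    try:
        importlib.import_module(module_name)
    except ImportError:
        return None
    return time.perf_counter() - start

# Drivers and ORM named in the posts; each may be absent.
for name in ("apsw", "psycopg2", "pymysql", "peewee"):
    elapsed = time_import(name)
    if elapsed is not None:
        print(f"{name}: {elapsed * 1000:.1f} ms")
```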
  14. Agree with deanishe and was thinking the same: cmd-return to open in Contacts. I didn't know of cmd-O until I found this thread. As the modifiers control and option offer other options, I think cmd is a good choice for perhaps the most often wanted other action.
  15. Perhaps there hasn't been much consideration. Labels/tags are possibly going to be more important and utilized; see how they occupy a primary place in Files for iOS. If they were displayed in Alfred, a bit like Gmail for iOS, right aligned in a colored box with label name on the same line as the file path, I don't think that'd be too intrusive.
  16. Given Alfred's UX, I indeed don't hope for the power possible with an entirely different UX, yet the original scenario mentioned — very much wanting to return to the previous state of a script filter, with the same results and scroll position, plus a query-string history if possible — stands. Rerunning the previous query could take time; if it were possible to cache state with some timeout and restore it, that might work. I'm not quite sure I understand, having only started workflow dev, but I'll think about it. I'm not using an External Trigger but a hotkey bound to a script filter; perhaps that too counts as ET mode. I don't know if there's any current way to export the query string to some variable or preference (that part, yes) and then prefill the script filter with the last-used string, maybe through some Run Script between the hotkey and the script filter; I may do that if possible, though a future browsable query history, or better, that plus state restore, is very much preferred. Thinking about it, I agree per-workflow history might be troublesome, but a hotkey to show some kind of history list would be great. The other app mentioned, which also permits cmd-space to fill in a previous string, could be nice as well, though I haven't thought much about whether an item history plus search-string history would be more useful, or how it might look.
  17. The info concerning the dimensions of 256px is very helpful, thank you deanishe. It's easy enough for me to experiment to find out when an icon is shown as-is in place of a filetype icon. I had only tried with large JPGs, and indeed perhaps it's determined by some size limit. If the workflow docs could someday be updated, with a link or mention in your excellent alfred-workflow as well, that'd be terrific. I'm unsure why Andrew decided to resize images without keeping the aspect ratio. If it's merely a question of not having looked into it, here's some info to save him a bit of time: https://stackoverflow.com/questions/2531812/trying-to-resize-an-nsimage-which-turns-into-nsdata
  18. Hello deanishe, I'm trying to understand what constitutes an icon, then. I tried passing (large) .jpgs and saw generic filetype icons. Looking through a few installed workflows, some use .png. If I need to generate PNGs or even icon files, I can, and likely will eventually do that. The docs, the last time I looked, weren't clear.
  19. Hello, I've tried to search previous threads but was unable to find much. I'm curious about the icon type fileicon in filter results. In the cases I've tried so far, I get only a generic icon; in limited testing, .jpg, .pdf, and .epub show only generic filetype icons. Perhaps some would depend on thumbnails from a Quick Look plugin, yet for others, if there is a Finder-generated thumbnail, is that used? If needed, I could generate and cache my own thumbnails, and that may be necessary. In default query results outside of a workflow, file icon thumbnails would be wonderful as well, though perhaps that hasn't been implemented because of caching, performance, or some other reason. There is also the question of aspect ratio: at least in some other workflows, I've noticed icons are resized to a square, perhaps related to how the image is currently generated. If I generate my own thumbnails, I could pad them if needed, though it would be possible to have Alfred resize while keeping the aspect ratio.
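For reference, the `fileicon` type under discussion is part of the Script Filter JSON item's `icon` object: with `"type": "fileicon"`, Alfred shows the icon of the file at `path`; with no `type`, `path` is loaded directly as an image. A sketch of the JSON a filter would emit (titles and paths are examples):

```python
import json

def make_item(title, file_path, use_fileicon=True):
    """Build one Script Filter result item.

    With "type": "fileicon", Alfred displays the icon of the file at
    the given path; without it, the path is loaded as an image file.
    """
    icon = {"path": file_path}
    if use_fileicon:
        icon["type"] = "fileicon"
    return {"title": title, "arg": file_path, "icon": icon}

items = [
    make_item("My Book", "/books/novel.pdf"),
    make_item("Custom thumbnail", "/thumbs/novel.png", use_fileicon=False),
]
print(json.dumps({"items": items}))
```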
  20. I'm hoping that script filters can become more robust and comparable to how one would use other search engines. There are filters that can return dozens, hundreds, or thousands of results. With many other search engines, one wants to act on numerous results while refining the query. If a filter returns a hundred results, with the filter rerun and the first result highlighted (if using the up arrow), finding one's previous place by scroll position is more difficult. Plus — what prompted this thread in the first place — when calling filters from a hotkey, unless I've missed some setting, the up arrow does nothing for me. On a related note, it would also be helpful to see a history of query strings for a particular script filter, or even a list of previously entered strings for all filters plus previous items (web searches, files, folders, …). From what I've found, the up arrow seems to be the only form of history aside from the clipboard. Workflows are what made me attempt to switch from another app, yet I continue to use the other for features such as history. In it, cmd-B shows a history of previous items, and when searching, shift-space fills in the text from a previous search — handy when wanting to enter the previously used string into another search or filter, such as when using the same string against multiple web searches.
  21. I also have reservations about including filter results in default queries, but it's an interesting idea. Even if results were cached, given the number of workflows some have, results could be in the thousands, tens or hundreds of thousands, or millions. But perhaps if there were some cache time, and if only previously actioned items (with return) were included, that might be interesting.
  22. Hello, in my current in-progress script, I might use a script filter to query some data, act on a result, and then wish to return to the same filter with the previous query string pre-filled, either to look at other results or to refine the query. I haven't yet tried to see if text can be passed to a script filter; in that case, I could save the text and fill it in to rerun the query. If that works, it could be an option for now. Currently, I'm copying and pasting. Perhaps a future option could save history for each filter, and perhaps even optionally bring up a filter in its previous state, with queried results and scroll position.
  23. Progress continues, and for a while I was almost ready to post a version for testing. It was a long way getting here: parsing PDF and ePub bookmarks, exploring ways and libraries to store them while imagining future additions, SQL full-text search across bookmark title, section, and file name, etc. Quite a bit of work has gone into it. Dependencies should be mostly added, and initial SQLite support, so it can just work without setup, should be there. Exploring DB migrations, I'm not sure how well I can support them later, or how difficult it will be, so I'm trying to finalize the schema for future uses beyond bookmarks: for instance, browsing books by author, publisher, tags, ratings, etc., auto-reimporting bookmarks if the file's date modified changes, and who knows what else. I'm not sure who else is interested; perhaps more will show themselves once they start using it and, if they use ebooks regularly, discover the power of being able to search all books by chapter/section title and go to it instantly. Clone the git repo and use deanishe's scripts to symlink or install, while also changing the git-stored environment variable for the DB from postgresql to sqlite, and it should work. Recent DB changes should be pushed soon; for anyone who wants to test, be ready to migrate manually if needed.
  24. Hello mbigras, yes, progress has been decent. It's now renamed to alfred-gnosis and on GitHub: https://github.com/gennaios/alfred-gnosis It looks the same as in the above screenshot, except that search-all shows "section | section hierarchy" in the title and the book file in the subtitle. In the past week or so, I've added ePub support, with a command-line option to recursively import a folder. ePub 2 only at the moment. calibre recently added a command-line go-to-bookmark for ePub, and it works by bookmark title; not quite precise, but a good start. There are hotkeys bound which might not suit everyone:
cmd-G — filter bookmarks of the current book in Skim, Preview, Acrobat, or the calibre viewer
control-D — filter all bookmarks
cmd-U — remove bookmarks; should be update: remove and then reimport (in case of a PDF/ePub edit), then run cmd-G again to import (might be broken)
cmd-E — edit ePub (currently hardcoded to BBEdit)
I've been refactoring in the last week, and part of it might be partially broken; if so, possibly only the part that finds the ePub NCX to parse the TOC. I've started to add tests with py.test, as I've broken something numerous times trying to rewrite or add functionality. For setup, I haven't quite got the hang of how to include dependencies (e.g., in ./lib/) and use those. peewee, python-epub, and lxml are among the requirements. Running the script from the command line — python gnosis -e to get bookmarks for the current ePub in calibre (by reading the first entry in the viewer settings' recent files), -i ~/folder to import a folder, or -g file.pdf — should complain about any missing libraries. SQLite support I haven't looked at in a while. Part of it was not being familiar enough with Python or Peewee, at the time I began, to choose one database or another at runtime, and part of it was other details, including bundling an updated SQLite for FTS5 full-text search (for search-all) and how to use that binary instead of what's included with macOS.
I might be able to figure that out now, though I haven't looked at it. For the most part, running the workflow with various command-line options should show which libraries are needed (jpdfbookmarks as well), and besides that, PostgreSQL with user postgres with create-table permissions. I should include jpdfbookmarks but was trying to keep the git repo small, and I might switch to a faster PDF bookmark parser. I tried pdfminer recently and it looks to work well; I need only rewrite the code that figures out the searchable bookmark section (e.g., section 1 > chapter 1. title) — the code for that is ugly anyway. I think that's about it for setup. Perhaps it's not too much to get it ready for release, though I had been working on other things for a while and got back to this only in the last week or so. DB_ENGINE=postgresql is in the workflow environment variables, but perhaps that's in a .plist and already saved in git. It may also be necessary to manually create some Postgres indices. As my Python and overall programming experience is limited, any contributions would be fantastic! An example of search-all for ePub (I need to change the workflow icon):
  25. I'm not too familiar with the calibre command-line tools, so I thought the SQLite database was the main and perhaps richest source of data, but if there are indeed tools to query and modify the DB, that's great.