Everything posted by smarg19

  1. True enough, but sometimes one way is all that's needed. Of course, Andrew could upgrade this facet of workflows in the future (here's to hoping).
  2. You must not have seen enough of @deanishe's comments. Otherwise, you would know that Shawn and I would never consider any of that "hostile". Dean likes to get a bit drunk and speak his mind. I'm quite fine with it.
  3. Alternatively, you can use External Triggers to link two Script Filters together. Filter 1 connects to an AppleScript node that calls the External Trigger and passes the arg. Filter 2 is triggered by the external trigger and receives the arg as the query. My Pandoctor workflow (see sig) uses this method a lot.
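For anyone who wants to wire this up from Python, the AppleScript call wrapped in subprocess looks roughly like this (the trigger name and bundle ID are placeholders, and I'm assuming Alfred 2; adjust the app name for your version):

```python
# Rough sketch: fire an Alfred External Trigger from Python via osascript.
# 'filter_two' and 'my.bundle.id' are placeholders for your own trigger
# name and workflow bundle ID.
import subprocess


def call_external_trigger(trigger, bundle_id, arg):
    """Run an External Trigger, passing `arg` as the query."""
    # Naive quoting; fine for simple arguments without double quotes.
    script = ('tell application "Alfred 2" to run trigger "{0}" '
              'in workflow "{1}" with argument "{2}"'
              .format(trigger, bundle_id, arg))
    subprocess.call(['osascript', '-e', script])


call_external_trigger('filter_two', 'my.bundle.id', 'query to hand off')
```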
  4. One other reason for my change is that it forces me to keep track of what I have installed. If I add an app or binary, or change something, I add it to the git repo. As for setting up a Mac just the way I want automatically, I'm currently trying to add custom icon support, but dots and mackup get me pretty far along.
  5. Yeah, I'm actually working on a system to nuke and pave my computer every 6 months or so. My thinking is that I can automate the parts of my setup that I have organized, then reset and try to organize the remaining parts on a clean install. So, instead of only organizing the mess itself, I'm building an automatic system to rebuild the organized parts, so I can try again to organize some other part of my setup. I just nuked my Mac a week ago, used Homebrew and Homebrew Cask to install my binaries and apps, and I'm now trying to organize my development setup. That was the impetus for this thread. Once I get it organized, I'm going to use cookiecutter to automate building new projects and add my organized old projects to my automatic build system. It's a bit extreme, but it makes my head feel good, so I'm doing it.
  6. My two cents: this discussion suggests the difficulty of finding one solution that solves every problem with great stability. Andrew's current implementation has the advantages of being immediately understood and thoroughly consistent. I, for one, have never gotten reports about slow response times, and in the workflows where I need it, all I'm doing is forcing a minimum number of characters before filtering actually begins (so, for example, ZotQuery won't start until it gets at least 3 characters) and caching results. For slow HTTP APIs (right now, only LibGen has this issue for me) I use the period hack. In either case, it's easy to code and easy to explain to users, it works the same every time, and it isn't a convoluted solution to an edge-case problem. The ultimate problem is that you can't write code to figure out when you've received a meaningful amount of input. Conceptually, you have only two options: [1] accept all input, or [2] force the user to tell you when they've entered the meaningful input. If you want to go route 1, all of these server solutions seem like slaying a dragon to kill a fly.
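To make that concrete, here is a minimal sketch of the pattern using deanishe's Alfred-Workflow library (search_zotero is just a stand-in for the real, expensive work):

```python
# Minimal sketch: require 3+ characters, then cache the expensive results.
import sys
from workflow import Workflow

MIN_QUERY_LENGTH = 3


def search_zotero():
    """Stand-in for the expensive work (reading the database, etc.)."""
    return []


def main(wf):
    query = wf.args[0] if wf.args else u''

    # Don't do any real work until the query is long enough.
    if len(query) < MIN_QUERY_LENGTH:
        wf.add_item('Keep typing...',
                    'Need at least 3 characters to search.',
                    valid=False)
        wf.send_feedback()
        return

    # Cache the expensive results for 10 minutes, then filter them.
    results = wf.cached_data('zot_results', search_zotero, max_age=600)
    for title in wf.filter(query, results):
        wf.add_item(title, valid=True)
    wf.send_feedback()


if __name__ == '__main__':
    wf = Workflow()
    sys.exit(wf.run(main))
```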
  7. It's a hack, but for my internet-service workflow (LibGen), I just require a `.` at the end of the query. The filter won't run the web request and scraping code until it gets a query that ends with a period, which means the initial filter calls die basically immediately (this check is the first code that runs when the query comes in). While the query lacks the ending period, I just return an informational, non-valid result. Not the most elegant, but it helps a lot with speed and responsiveness.
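In code, the check looks roughly like this (again using deanishe's Alfred-Workflow library; search_libgen stands in for the actual request-and-scrape code):

```python
# Sketch of the period hack: bail out until the query ends with '.'
import sys
from workflow import Workflow


def search_libgen(query):
    """Stand-in for the slow HTTP request + scraping code."""
    return []


def main(wf):
    query = wf.args[0] if wf.args else u''

    # This check is the very first thing that runs, so the calls made
    # while the user is still typing die basically immediately.
    if not query.endswith('.'):
        wf.add_item('Search LibGen for: ' + query,
                    'End your query with a period to run the search.',
                    valid=False)
        wf.send_feedback()
        return

    # Only now do the slow network work, on the query minus the period.
    for title in search_libgen(query[:-1]):
        wf.add_item(title, valid=True)
    wf.send_feedback()


if __name__ == '__main__':
    wf = Workflow()
    sys.exit(wf.run(main))
```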
  8. Can you put the workflow-build.py script up as a Gist? I see that the workflow-install.py script is already a Gist. I think that workflow makes a lot of sense; fiddling with Alfred's preferences directory and the random UID folder names is a PITA.
     Can I ask why you don't use virtualenv? I know that we talked a bit about this in the past, but it seems that building everything in the ~/Code/ directory as CLI tools and only building the .alfredworkflow file after the fact could take advantage of virtualenv. Specifically, I'm thinking that using virtualenv (and virtualenvwrapper) with Sublime Text 3 would let me create a Custom Build System within a Sublime Text project to run the code in the Sublime Text console (for simplicity and speed) while still keeping control of the environment. However, as you are much wiser than I am, I'm interested to hear your thoughts on this possible approach.
     Also (this is a general Python question), I'd love to hear your thoughts on organizing code when taking a functional (clean architecture) approach. I confess that I am a bit OCD about my code; I like things to be cleanly organized. My issue is that Python really only offers classes as containers for related functions, but AFAIK best practices define classes as objects with state. I want to use them as "function buckets": objects without state. For example, I have a class in ZotQuery called ResultsFormatter; this class contains all the methods used to format a Python dictionary containing a Zotero item's information into an Alfred-ready dictionary that plugs directly into Workflow.add_item() (see the sketch after this post). This object has no state, really; it is acting as one big pure function (input -> output) with a bunch of sub-functions to organize/decouple the pieces. Is this best practice? Is there another way to do this cleanly?
     As another example, I like to use classes to represent collections of information. I have a class right now called ZotQueryBackend; this contains all of the information for the data files ZotQuery uses (a clone of Zotero's sqlite database, a JSON representation of that data, a sqlite FTS database of that data, and an ASCII-folded FTS database of that data), as well as the methods for updating those data files. Should each of these data files have its own class, with the state of update-needed or update-not-needed? If so, how could I organize these classes cleanly?
     Finally, part of what I mean by "cleanly" is being able to use Sublime Text's code-folding functionality, which requires meaningful indentation. I know this final bit is slightly outside the purview of the original question, but my Googling isn't really helping me get at this "tribal knowledge", and you're the best Python programmer I actually know...
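A stripped-down sketch of the ResultsFormatter pattern described above (method and key names are made up purely for illustration):

```python
# A "function bucket" class: dict in -> dict out, no real state.
class ResultsFormatter(object):
    """Format a Zotero item dict into kwargs for Workflow.add_item()."""

    def __init__(self, zot_item):
        self.item = zot_item            # the only "state" is the input

    def formatted(self):
        # Acts like one big pure function, split into helper methods.
        return {'title': self._title(),
                'subtitle': self._subtitle(),
                'valid': True}

    def _title(self):
        return self.item.get('title', 'Untitled')

    def _subtitle(self):
        creators = self.item.get('creators', [])
        return ', '.join(c.get('lastName', 'xxx') for c in creators)


# usage: wf.add_item(**ResultsFormatter(item).formatted())
```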
  9. PS. My goal is to make a Cookiecutter template, so advice on structuring complex workflows would be particularly appreciated.
  10. Ok, I am in the process of re-tooling my workflow development ...erm... workflow. I use Sublime Text 3 and write all my workflows in Python (with some AppleScript occasionally thrown in). My basic question to the community is this: how is your environment set up for Alfred workflow development? As that question is perhaps overly broad, here are some specific sub-questions:
      How do you do version control? Do you make the workflow directory the repo? Do you do this in Alfred's auto-generated workflow dir (~/[parents]/Alfred.alfredpreferences/workflows/XXXXX-11111/)?
      How do you structure complex workflows? Is the structure similar to the standard Python package structure, with the workflow dir acting as the package dir? If you have multiple scripts, how do you organize them?
      How do you run/test the workflow as you are building it? In Sublime, with Python, I have problems with imports as soon as I create any structure beyond a flat collection of scripts (one common workaround is sketched after this post). I run the scripts from Sublime, not the command line, so I need a Custom Build System; does anyone use one for Python Alfred workflow development?
      There are lots of other specific questions, but my general aim is just to hear from people. Right now, I am re-writing my ZotQuery workflow, which, if anyone uses it or has looked at it, you will know is big and gnarly. I have most of the code written, but the workflow's organization is shit. It's far too big to sit in one script (and still allow me to navigate it). I need some organizational inspiration. So hit me with your setups. stephen
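The workaround mentioned above: pin sys.path at the top of each entry-point script so imports resolve the same way from Sublime, the shell, and Alfred (directory and module names below are just an example layout):

```python
# Common workaround: make imports independent of the working directory
# by putting the workflow's own folder at the front of sys.path.
import os
import sys

WORKFLOW_DIR = os.path.dirname(os.path.abspath(__file__))
if WORKFLOW_DIR not in sys.path:
    sys.path.insert(0, WORKFLOW_DIR)

# Now package-style imports (e.g. "from lib import backend") resolve the
# same way whether the script is run from Sublime, the shell, or Alfred.
```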
  11. TBH, I'm not much of a network guy, so I can't speak with great authority, but here is the issue as I basically see it. First, take the KA workflow. It's a bit odd because KA is a bit odd: it sends zipped data across the network, which the workflow then has to unzip and interpret. This is in contrast with LibGen, which just sends the data directly. So there is a contrast between a "stream" of data being sent and a "package" of data (ultimately, both are streams, but I'm playing with metaphors, so whatever). With the "stream", pieces are sent one at a time, and the receiver (the workflow in this case) needs to collect all the pieces and put them together before it can interpret the whole thing. With a "package", you get the whole thing in one call. Because KA sends zipped data, it's sending a "stream", not a "package", and the workflow is trying to interpret the whole package before it all gets there. This is ultimately a problem with the workflow, so I will have to figure it out. As for ZotQuery, I have to assume the same basic issue is the problem. Again, since I'm not a network guy, I don't really know what's going on, but I will ask around and look into it. This problem will take some time though, given my basic ignorance.
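If it helps to see it in code, the fix presumably amounts to reading the entire response before trying to unzip it, something like the following (assuming the data is gzip-compressed; the URL is a placeholder):

```python
# Sketch: collect the whole "stream" first, then decompress, then parse.
import gzip
import urllib2
from StringIO import StringIO


def fetch_and_unzip(url):
    response = urllib2.urlopen(url)
    raw = response.read()                  # wait for all of the pieces
    return gzip.GzipFile(fileobj=StringIO(raw)).read()


# html = fetch_and_unzip('https://kickass.example/usearch/some+query/')
```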
  12. Ok. TBH, I am beginning to suspect that your internet connection is wonky. The workflow runs clean for me every time. The error you are reporting says essentially that there is nothing there. Now, the request went through, so it's not that your internet is off. But the workflow isn't getting anything from the response. This might also explain the ZotQuery export errors. Can you check your internet connection? Obviously you are getting online, to post here, but the error just doesn't make sense otherwise. In the meantime, I will try to make a version with some logging so that I can see what's happening in the intermediate steps.
  13. What are you searching for? This will help me replicate the bug.
  14. All three of these reports describe something not working. The first follows the exact same pattern as the third, just at a different time: search for "anderson", try to export a full reference, get a bad request. The second one says that the server can't find the item. I will look into the errors, but network stuff is harder and will probably take me longer.
  15. Post in the KA Torrents thread here on the forum. I'll look into it. Will read over these too.
  16. The error arose with a query for "o", and it was one out of 2700+ items that had the error. But I've fixed that bug, so it won't matter; items without a last name will have xxx. there instead. OK, all this does is make the debugging trickier. Your info is correct and your account is synced, so my next guess would be another input error on my part. To test this, turn the debugger on and try to export three different items, then copy the bug report and send it to me. We will go from there. I will also read over the attachment caching code. There could very well be a bug in there; who knows.
  17. Hmmm... This is all quite interesting. The `IndexError: string index out of range` error suggests that you have some references without the family (i.e. last name) field set. Both times, that error occurs when the workflow is first run, with the first letter of the query you are entering (once with "o", another time with "a"), which returns lots of results. In iterating over these many results, the workflow hits some without last names. This is a bug, and I will need to fix it. However, it also suggests that your Zotero data has some odd entries.
      As for the export error, the problem is not with the workflow but with the Zotero API. The workflow is attempting to get the information from your Zotero library, but the Zotero server says there is a problem. This is likely caused by one of two issues. First, your authentication information might be incorrect. Do any export commands work? Short references or long references? Any item? If so, then the authentication information is fine; if all requests fail, we should look there first. Second, your local Zotero data may not be synced with your remote (i.e. cloud) Zotero data. In the Zotero client, in the top right, there is the sync button; pressing it will sync your local info with your remote account. When exporting citations, ZotQuery uses the Zotero API with your remote (cloud) data. So if you have an item in your local library, ZotQuery will find it and make it available to you, but if that item hasn't been synced to the cloud, the Zotero API doesn't know about it and spits back an error. If it is neither of these two issues, we will have to dig deeper, but I'm hoping that these two most common issues will fix the problem.
      As for the other problem, I will try to replicate it and then fix it ASAP. Again, thank you so much for working out the kinks in this beta release. I hope it's not overly frustrating to have a partially working workflow, but once I get the kinks worked out, it'll be great. As an example, when the workflow searches for one letter (like "o") using the general query, it returns 2968 results in about a second and a half. That is crazy faster than the older version. Anyway, let me know how the troubleshooting goes. stephen
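For reference, the fix for the IndexError will probably be a simple guard along these lines (the key names are my guess at the Zotero JSON, so treat this as an untested sketch):

```python
# Guard against creators with no family/last name instead of indexing
# into an empty string (which is what raises the IndexError).
def creator_last_name(creator):
    family = creator.get('lastName') or creator.get('family') or ''
    if not family:
        return 'xxx.'    # placeholder shown for items missing a last name
    return family
```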
  18. I understand that! Too little time, too much to (possibly) do. That's one of the main reasons this workflow does the bare minimum. I wrote it in an afternoon when Piratebay went down.
  19. Ah... This is why this is a beta version: you've found another bug. And, in addition to that bug, you've actually helped me see that there is some functionality I still need to finish (like the z:bug filter). So, there is a bug in the z:bug filter; I will have to update that. As for the other intermittent problem, I cannot tell what it is. First of all, there are no searches that begin with z: followed by a space, so if you were ever typing that, Alfred would fall back to the default searches. However, those default fallbacks would never show an error, so you must have typed something else to get the string-out-of-range error. If you could replicate that with the debugger on, it would be a great help. Also, feel free to ask me any questions about the API and the functionality. I know the workflow is big and has a lot of functionality. If you haven't read my blog post on all that it does, that would probably be helpful. stephen
  20. At this moment, I don't plan on expanding it, as I personally don't use any of those features. I'm a man with simple torrent tastes. Obviously, though, some people do use those features, and seeing as Piratebay just might not ever come back, working something up with KA seems like a good idea. I'm busy with other things right now, but I'm willing to help in small bits. Though, from what I remember, your Piratebay workflow was written in PHP, which I don't know. So, I guess I'm saying: you can take all of my code and build on it, and I'm willing to help when I can. Or you can make my workflow obsolete. Either way, I'll be fine.
  21. KA Torrents. Download at Packal. Version: 1.0. Unfortunately, the Piratebay is now down; whether it will stay down, and for how long, remains to be seen. I had been using Florian's fantastic Piratebay workflow to easily search and download files. With the servers down, I needed another backend and so another workflow. IMO, KickAss Torrents is the best aggregator out there now that Piratebay is gone, so I wrote a simple workflow to search and download torrents from KickAss. There is only one keyword filter: torrent. Type that, then your query. If you select a result, it will open the magnet link, meaning that your torrent client will automatically open and start downloading. Fast, easy. No whistles. Enjoy!
  22. Damn. If I had a nickel for every time I made that error, I'd probably have a dollar. I accidentally hardcoded the arguments for the workflow in the script file, but it's an easy fix. I've just uploaded version 2.1 to Packal. Download the new version and give it a whirl.
  23. Can you run the workflow with Alfred's debugger on, and then post the debugger output? I need specific line numbers and such. Also, the authorization was to download a dependency. I'm using an early version of a workflow utility that lets workflows download dependencies at runtime, instead of bundling them all with the workflow. This reduces the size of the workflow itself, and thus lightens the load on users' Dropboxes (if they are syncing).
  24. @dfay is 100% correct. When you try to run the wp command in line 97 of your script from Alfred, Alfred runs the script in a totally sanitized environment, where your installed external command-line tools aren't on the PATH. To make it work, you can try two typical approaches. First, export PATH=... before the bash /Applications/MAMP/htdocs/Dropbox/myscript.sh {query} call in Alfred. This sets the PATH variable to whatever you put there and allows the relative call to wp that you make in line 97. Second (my preference), make the call to wp explicit, so line 97 would be something like: /usr/local/bin/wp core download --force. This assumes that the wp executable lives in /usr/local/bin; wherever it actually lives, put the full explicit path. Either way, you need to avoid any $PATH magic when working with Alfred.