
Alfred Dependency Downloader Framework



I'm not sure it's a great idea to leave it up to the author. That makes the framework as much something else to worry about as a useful tool. Best to specify that errors may occur and what the bundler will do in that case (throw a PHP error they should try to catch, populate an optional $error variable).

 

It's definitely not optimal, and it's not really a good idea. But it might be the best of all the bad ideas. On the same note of bad ideas: exiting by sending a default, script-filter-readable Alfred XML error message is a bad idea because the bundler might be called from somewhere outside a script filter.

 

What I did in my Python framework (pinched the idea from a Ruby one) is to provide a wrapper function to call the workflow code. The wrapper catches any errors and displays an error in Alfred. The bundler could similarly "hijack" execution, display an error and exit.

 

That's a clever idea, and definitely one worth looking into. The main problem with the bundler is that it can't assume that it's installed. So, I'm not sure whether it would be best to display the error just so that it shows up in the workflow debugger, write it to a log file, send information back to a script filter, or show an AppleScript dialog.

 

I really wish (and this is impossible) that we could know if the call came from a script filter or a run script.

Link to comment

The get-pip.py file needs to be extracted/installed and a simple wrapper script (the pip executable) created (or maybe not—I think you can import pip and use it that way). Probably the cleanest solution is to point to a pip-installer wrapper instead of get-pip.py itself.

I'll look into it at the weekend if Stephen doesn't beat me to it.

 

What are the extract/install commands? The bundler JSON also supports installation directives.

 

So, here is the JSON to get and install CocoaDialog. It comes in a DMG file, so the JSON indicates that it downloads the dmg, mounts it, copies the files out of it, and unmounts it.

{
    "name": "cocoaDialog",
    "type": "utility",
    "versions": {
        "default": {
            "invoke": "cocoaDialog.app/Contents/MacOS/cocoaDialog",
            "files": [
                {
                    "url": "https://github.com/downloads/mstratman/cocoadialog/cocoaDialog_3.0.0-beta7.dmg",
                    "method": "direct"
                }
            ],
            "install": [
                "hdiutil attach -nobrowse -quiet '__FILE__'",
                "cp -fR /Volumes/cocoaDialog.app/CocoaDialog.app '__DATA__'",
                "hdiutil detach -quiet /Volumes/cocoaDialog.app"
            ]
        }
    }
}

The __FILE__ and __DATA__ are custom flags that the bundler understands.

 

So, __FILE__ is the file downloaded, and __DATA__ is the appropriate place in the data directory.

Link to comment

It's definitely not optimal, and it's not really a good idea. But it might be the best of all the bad ideas. On the same note of bad ideas: exiting by sending a default, script-filter-readable Alfred XML error message is a bad idea because the bundler might be called from somewhere outside a script filter.

 

 

That's a clever idea, and definitely one worth looking into. The main problem with the bundler is that it can't assume that it's installed. So, I'm not sure whether it would be best to display the error just so that it shows up in the workflow debugger, write it to a log file, send information back to a script filter, or show an AppleScript dialog.

 

I really wish (and this is impossible) that we could know if the call came from a script filter or a run script.

Sending XML is not optimal, but it's what I chose to do in my library because it's the only way to inform the user of an error. I consider a bit of XML popping up in the debugger a lesser evil than Alfred's default utter silence.

The library returns a non-zero exit code if there's an error, so the debugger is the only other place the XML can appear. It won't pop up in URLs or notifications. There is the AppleScript dialog route, but it struck me as un-Alfred-y.

When the get-pip.py script is downloaded, it needs to be called with /usr/bin/python get-pip.py --target=/path/to/install/directory. It will install the pip package in that directory. And code that wants to use it needs to add that directory to sys.path and import pip.

What path would the bundler actually return after that command?

Edited by deanishe
Link to comment

When the get-pip.py script is downloaded, it needs to be called with /usr/bin/python get-pip.py --target=/path/to/install/directory. It will install the pip package in that directory. And code that wants to use it needs to add that directory to sys.path and import pip.

 

Does it need to be that or just `python /path/to/get-pip.py --target=/path/to/install/directory`?

Link to comment

Awesome. Then, without testing, all you should need for the JSON is

{
    "name": "Pip",
    "type": "utility",
    "versions": {
        "default": {
            "invoke": "get-pip.py",
            "files": [
                {
                    "url": "https://raw.githubusercontent.com/pypa/pip/master/contrib/get-pip.py",
                    "method": "direct"
                }
            ],
            "install": [
                "/usr/bin/python '__FILE__' --target='__DATA__'"
            ]
        }
    }
}

Then you can treat it like you would any utility. It should install only when downloaded the first time.

Link to comment

Actually, if you want to have different versions of Pip available (is this necessary between Python 2.7 and 3.0?), then you'd just set one as the default, and then you'd duplicate the code above, altering the download links, and using the version numbers as the keys (in place of "default").

Link to comment

What path would the bundler return to the calling script, though? Presumably it returns the path to something within __DATA__. __DATA__ is the value the caller would need, though that shouldn't be hard to figure out in any case.

I'm not sure whether you'd need a different version for Python 2 and 3: some libraries modify themselves before installation, others handle the differences at runtime. I don't know which one pip is. It'd probably be best to keep different versions for each, however, as a best practice to avoid problems.

I don't think anyone is writing workflows with Python 3 yet: it isn't included on OS X yet, and I'm not sure any of its new features are compelling enough for workflow authors to require it.

That said, I know how to work with Unicode under Python 2, and that's what tends to bite the unwary. And Python 3 will be coming to OS X sooner or later. A lot of Python 3 coders don't really get Unicode too well.

Link to comment

It's late. I confess confusion. For now, simply point me in a direction and give me a task; that will probably be simplest. All this other noise is going over my head (I should probably sit down and study the bash bundler, but I just haven't...)

Link to comment

The __DATA__ path is constructed per the asset. So, since, in the JSON, it's defined as a utility, version default, and name Pip, __DATA__ would be:

$HOME/Library/Application Support/Alfred 2/Workflow Data/alfred.bundler-aries/assets/utility/Pip/default/

And, then, two things would be there:

(1) Everything that Pip would install under that directory, and

(2) A file called "invoke" that would have the simple content "get-pip.py", which is probably inaccurate and would need to be adjusted (we just need to swap out the "invoke": "get-pip.py" line in the JSON for whichever file is needed to call it).

 

When using the loader, you'd make the call, and what would be returned is

$HOME/Library/Application Support/Alfred 2/Workflow Data/alfred.bundler-aries/assets/utility/Pip/default/get-pip.py

While that's fairly easy, it's necessary because the invoke path isn't always standard. For reference, the invoke line from cocoaDialog above is

"invoke": "cocoaDialog.app/Contents/MacOS/cocoaDialog",

because you need to get the binary inside the cocoaDialog.app directory.

 

Basically, the bundler just appends the contents of the "invoke" file (which is created from the "invoke" line in the JSON) to this variable path:

/path/to/bundler/alfred.bundler-$major_version/assets/$TYPE/$NAME/$VERSION/

That's why those are needed in calling the asset via the __load function in the bash bundler. Some things you can call with just one argument, but you can't with utilities. If I were to restart this, I'd rearrange the order of the arguments, but I can't because that call is made within the wrapper, which means that it can be updated only with a new major version.

 

The beauty of using the __load function is that it does the extra lifting for you: check for the existence of the asset, on failure, download the asset, return the appropriate call.

 

Simple. Elegant. Could be better.

Link to comment

It's late. I confess confusion. For now, simply point me in a direction and give me a task; that will probably be simplest. All this other noise is going over my head (I should probably sit down and study the bash bundler, but I just haven't...)

 

Here's the task in parts:

 

(1) rewrite the JSON (for now it will have to be a separate file with the full contents) that I last provided for Pip to include the appropriate "invoke" command.

(2) Write a wrapper function that will take up to four arguments:

       (1) Asset Name

       (2) Asset Type

       (3) Asset Version

       (4) Optional JSON path.

(3) Have that do a system call that equates to:

sh "$HOME/Library/Application Support/Alfred 2/Workflow Data/alfred.bundler-aries/wrappers/alfred.bundler.misc.sh" "$TYPE" "$NAME" "$VERSION" "$JSON_PATH"

(4) Test it.

 

For testing, just use Terminal Notifier because it will always be there (because the Bundler downloads it so that it can use it itself).

 

(5) So, use the function

      (a) to grab the terminal notifier "command." The bash equivalent (which should work with your function's system call) is

tn=`__load Terminal-Notifier default utility`

      So, you'd replace "__load" with

sh /path/to/bundler/alfred.bundler-aries/wrappers/alfred.bundler.misc.sh 

After saving the output of that command to a variable,

      (b) call it as a system call. The bash equivalent is:

"$tn" -title 'Bundler Test' -message 'Terminal Notifier called from inside the Bash Workflow.'

So, "$tn" is the quoted path to the terminal notifier app binary which is, really,

~/Library/Application Support/Alfred 2/Workflow Data/alfred.bundler-aries/assets/utility/terminal-notifier/default/terminal-notifier.app/Contents/MacOS/terminal-notifier

(6) Use the Pip JSON that you wrote as well as the utility load function to then call

sh "$HOME/Library/Application Support/Alfred 2/Workflow Data/alfred.bundler-aries/wrappers/alfred.bundler.misc.sh" "utility" "Pip" "default" "path_to_the_json_file_you_just_wrote"

And then see if Pip is installed to the appropriate place, which would be:

/path/to/bundler/assets/utility/Pip/default 

Extra credit:

Rewrite the function you just wrote so that it needs fewer arguments. So, something like "bundler-load-utility" would be the function name (converted to appropriate Python function naming schemes).

 

Have it so that you can call it with just a utility name; so, that means filling in "utility" and "default" as, well, defaults. The JSON path should always be optional in that you needn't send it to the alfred.bundler.misc.sh script at all. If it is there, it should override any existing JSON. Basically, the bundler always looks first for a JSON file in 

/path/to/bundler/meta/defaults/AssetName.json

If I remember correctly, the JSON path, if included, circumvents that and reads the included file directly. A last note: names are case-sensitive, so calling "Pip" is different from calling "pip"; this is an artifact of not having to worry about messing with cases in Bash. It can be done without much trouble, but it's annoying. For the naming schemes, I've always tried to follow how the actual asset names itself. So, Cocoa Dialog is cocoaDialog, Terminal Notifier is terminal-notifier, Pashua is Pashua, etc.

 

Task #2: Write a function for the Python wrapper that will download and install the Bundler. This is easy. Steps:

      (1) Check to see if the bundler exists;

      (2) If it doesn't exist, then run these system commands:

mkdir "$HOME/Library/Caches/com.runningwithcrayons.Alfred-2/Workflow Data/alfred.bundler-aries"
mkdir "$HOME/Library/Caches/com.runningwithcrayons.Alfred-2/Workflow Data/alfred.bundler-aries/installer"
mkdir "$HOME/Library/Application Support/Alfred 2/Workflow Data/alfred.bundler-aries"

curl -sL "https://raw.githubusercontent.com/shawnrice/alfred-bundler/$bundler_version/meta/installer.sh" > "$HOME/Library/Caches/com.runningwithcrayons.Alfred-2/Workflow Data/alfred.bundler-aries/installer/installer.sh"

sh "$HOME/Library/Caches/com.runningwithcrayons.Alfred-2/Workflow Data/alfred.bundler-aries/installer/installer.sh"

For ease of reading, I store the data and cache directories as variables, along with the installer URL. I also store the bundler major version as a variable. Why keep the last two as variables? It's easier to rewrite for upgrades between major versions: the paths work themselves out when I change the two globally.

 

Obviously, make the directories only if they do not exist.

 

      (3) Make sure that the installation check is included at the start of basically every function that is run. Also, note that grafting onto the alfred.bundler.misc.sh script takes care of downloading the asset if it doesn't already exist.

 

For a "real world" example, download this workflow. It has an example implementation of both the Bash bundler and the PHP bundler.

 

Are these good for tasks?

Link to comment

I think the Python version can drop the type argument: it will only work with utilities, as libraries will be handled differently from the bash and PHP versions.

Something like:

 

# bundler.py

def _bootstrap():
    # Check if bundler and pip are installed.
    # If not, install bash bundler, then pip using the bundler

def utility(name, version=None, json_path=None):
    # Call _bootstrap() to set up bundler if necessary
    # Return path to specified utility, installing it first if necessary

def init():
    # Call _bootstrap() to set up bundler if necessary
    # Find `requirements.txt` and compare it against installed packages
    # Run `pip` (using `pip.main`) on `requirements.txt` if it has changed or hasn't yet been installed
    # Add appropriate library directory to `sys.path`
Link to comment

I think the Python version can drop the type argument: it will only work with utilities, as libraries will be handled differently from the bash and PHP versions.

 

Agreed. But, just make sure that the function sends those arguments to the shell script.

 

So:

def utility(name, version='default', json_path=None):
Link to comment

Ok. I appreciate the clarity. But I actually still have a few questions. As I said earlier, my own mental deficiencies make moving from specifics to generals much more difficult than moving from generals to specifics. So, I would love to get a firm grasp on what the Python bundler will accomplish and what that will look like.

First, I would love to see a full directory tree for all of the possible directories to be stored under $HOME/Library/Application Support/Alfred 2/Workflow Data/alfred.bundler-aries. This would greatly help me in conceptualizing what might be created, where it will be put, and in general how the bundler organizes assets.

Second, what exactly is actually put in a workflow to work with the bundler? What is the minimum needed and what is the maximum possible? Is it in the workflow's dir that the oft referenced meta/defaults directory resides? Is this where the bundler.py script will reside?

Third, correct me if I'm wrong in my understanding of what all the Python bundler needs to be able to download (if, when called, they don't exist):

  • download the bundler (which I'm only just now fully realizing is a separate thing from the Bash bundler wrapper, right?)
  • download any of the bundler's default assets (i.e. Pashua, etc)
  • download other non-Python assets that aren't default assets
  • download Python packages and modules
The "tasks" above are aimed at the first three, correct? And you have these functionalities already coded in your bash script, alfred.bundler.misc.sh? And I have the final functionality already written in Python. So we are merely trying to write Python wrappers for the other three types of downloads that will actually interface with your bash script?

The ultimate goal is thus to make the full functionality of bundler available to a workflow author who only wants to write in Python. So, they can use Python dependencies, or any other dependency (like my own ui-helpers.scpt Applescript utility) in their workflows, but only have to deal with the bundler in Python?

If all of this is generally right-headed, then I am confused by the decision to download a workflow's dependencies to a workflow-specific directory. This seems to defeat the first purpose of the bundler, i.e. non-redundancy of dependencies. Two workflows could need requests, but the current set-up (i.e. my current version of the Python downloading functionality) would create two copies of requests, one in each workflow's own directory. This merely moves the problem from one directory to another. So, I once again propose using a storage system that mirrors OS X's own for Python, i.e. site-packages, where all dependencies live together. This is one place where all workflows can access a single copy of a dependency.

So, these are my questions and my thoughts.

Link to comment

To get a better understanding of how it works with bash and what is created, just download that example workflow and watch what happens when you invoke it for the first time.

 

There should be two Python files: a wrapper, and a wrapper around the wrapper, basically. The double-wrappers (what you find in the "wrappers" directory) are the files that a workflow author includes with their workflow. These should be as general as possible so that we have the option to change how the bundler works in the background without breaking the wrappers. How general "as possible" is, is obviously language-dependent.

 

So, the alfred.bundler.misc.sh is one of those wrappers that anyone could include in their workflow file. The wrappers communicate with bundler.sh and bundler.php for the heavy lifting. So, in minor versions, we can change the bundler.sh and bundler.php files as long as they send the same output to the alfred.bundler.sh and alfred.bundler.php (and alfred.bundler.misc.sh) wrappers. I'm not sure if this is possible with Python.

 

Right now, the bundler.sh file does rely on some of the same files that the bundler.php files rely on (and vice versa) because, mostly, Bash can't handle JSON.

 

Bash and PHP libraries have the luxury of being, for the most part, single files that just need an "include" or "require" command that basically inserts the "library" into the script itself, creating one large script. From what I understand, Python's packaging system, with all the namespacing, etc., does not allow for such simplicity.

 

Hence the need for individual workflow directories for Python. It would be best to create each workflow directory under /path/to/bundler/assets/python/workflows/bundleid. Here, reuse makes it harder to have multiple versions of packages next to each other, so we need individual directories that each Python workflow can use. If you can figure out a way to link to Python packages that live in a single place, that would be ideal for keeping only one copy of each package on the user's system. However, I don't think that it is possible to do such a thing.

 

So, what we're left with when we use the Python bundler is that it can use utilities in a non-duplicated manner, but for Python packages, it just allows those packages to live outside the workflow (not data) directory, which is great for those of us who sync everything in Dropbox, and it makes it so that workflow authors can declare dependencies and not worry about having to write complex installation scripts. And it gives everyone reliable access to Pip.

 

If we abandon the option to have more than one version of a package installed, then we can (I think) plop them all in the same directory (/path/to/bundler/assets/python) and store them however Pip wants to.

 

Does this help clarify?

Link to comment

The way I see it, we create a Python wrapper/library (the bundler.py file that authors can include) to go with the bash and PHP ones. This would implement the Python version of the bundler install/loader API plus the Python-specific init method to (install and) call pip and twiddle sys.path.

Alternatively, it could install the bash wrapper and use that.

 

If we abandon the option to have more than one version of a package installed, then we can (I think) plop them all in the same directory (/path/to/bundler/assets/python) and store them however Pip wants to.

 

Does this help clarify?

There can't realistically be versioned libraries, but there has to be one install directory per workflow. It's not uncommon for library APIs to change, so it's very possible that different workflows will want different, incompatible versions of libraries. For example: /path/to/bundler/assets/python/my.workflow.bundle.id/. If we don't give each workflow its own install directory, the bundler will break folks' Python workflows.

Edited by deanishe
Link to comment

I'm trying to implement a Python bundler, but I'm having difficulty getting Pip installed.

 

Am I right in thinking that if I specify a path to a JSON file, the bundler drops it on the floor and throws an error? That's certainly my reading of bundler.sh. If I add a Pip.json file to the meta/defaults directory it (kinda) works.

 

Which brings me to a bigger complaint: The error reporting is awful. It's very hard to figure out why the bundler has failed because it only throws one, entirely vague error. It doesn't even exit with an appropriate error code to let calling tools know something went wrong.

Edited by deanishe
Link to comment

So, I'm pretty sure that Shawn said somewhere above that he has the PHP script handle the JSON stuff, so that may be the underlying problem when you are sending JSON to the Bash script.

Also, the logic for workflow-specific silos does make sense, but two thoughts come to mind. First, Dean's comment about library APIs is true, but I for one often just download a package and build my workflow around it, which means that I get the most current version of the package via pip and go from there. So, if I were to translate what I did to install the package when creating the workflow, I would simply put the package name in requirements.txt and nothing about the version. But this could create problems down the road when, say, someone downloads the workflow in a year, and the package has changed, but pip dumbly installs the most recent version. This scenario is a reminder that workflow authors will need to note the version of any packages that they are using and hardcode that version into requirements.txt. This is the most future-safe option. But, at least for me, this would require explicit notice; otherwise, I would have done it in too simple a manner.

Next, couldn't we add a couple of features to provide more safeguards against unnecessary redundancy? I'm thinking about myself and both of you, who write multiple workflows. Often enough there are overlapping dependencies. What if we added the ability to cross-check with other workflows by the same author? We allow an author to put a file next to requirements.txt called overlap.txt. In this file he lists any and all other workflows that have a dependency overlap. For example, I might put this in the ZotQuery workflow:

com.hackademic.wikify = htmltotext
com.hackademic.wikify = wf-helpers.scpt
com.hackademic.skimmer = wf-helpers.scpt
etc.

This file simply has a bundle id and dependency for any possible overlaps. So, if this file exists, on init() the bundler can look for the dependency in that path to see if it exists, and if so, send that path back. This would allow for some overlapping so that if someone downloaded ZotQuery and then downloaded Skimmer or Wikify, they wouldn't need multiple copies of a handful of dependencies, because I, as a single author, have mapped the overlaps for each workflow relative to the others.

This seems both a feasible and reasonable addition, does it not? Am I missing something?

Link to comment

No, the problem is most definitely with the bash script. Trust me: I've read the code :)

The bash bundler scripts accept a path to a JSON file as an argument, but completely ignore it. If you pass one, the bundler won't even try to install anything. It's definitely a problem with the bash code.

The offending code is here. As you can see, if $json is empty, it looks for $name.json in its defaults, otherwise it just falls straight off the end and returns an error message.



The problem with having an overlap.txt is that it then requires that the user keep all of an author's workflows updated or face having stuff break.

There are a few workflows that I won't update because I've edited them to change keywords/behaviour, and re-applying my changes isn't worth the benefit of any new features/bugfixes.

 

Still, it might be worth allowing authors to create their own set of shared stuff. An optional bundleid argument to init() would do it.

Edited by deanishe
Link to comment

Sent a pull request on GitHub to add JSON file handling to bundler.sh and improve the error handling a wee bit.

I've added Pip as a default utility, though the Python Bundler currently points to a URL in my fork on account of it not being available in the main one.

I wonder whether it (the JSON file) should be "bundled" with bundler.py, but it'd have to go in a docstring and be saved to a temporary file before each call in order to keep the file count to one. It seems incorrect to add it as a utility because (1) it's not a utility in the same vein as the other utilities and (2) it doesn't work like one (there's no executable: it needs to be added to sys.path and imported).


There seems to be something slightly funky happening with the Gatekeeper stuff. It occasionally pops up its dialog when installing/calling Pip, which it shouldn't, as it's pure command-line stuff. (Also, isn't there supposed to be some informational dialog that pops up first?)

I suspect the problem has to do with the existence of empty invoke files. My personal preference would be not to create these files if they're going to be empty (i.e. the utility can't be invoked). I may have misunderstood how they're used, however. (I haven't read, and don't really want to read the PHP code.)

Also, the odd syntax error appears to be being thrown by the bash code. I figure it's down to bad quoting of variables (i.e. running tests on unquoted variables that are empty), and I've added quotes around a few of them, but I'm leery of messing with the bash code too much, as I don't really get the language that well, nor is there a test suite to let me know that I haven't royally messed things up.

A couple of things I've noticed (but may have completely misunderstood because bash):

  • Is there a difference between [[ ! -z "$var" ]] (i.e. not empty) and [[ -n "$var" ]] (i.e. is set)?
  • Isn't var=$(echo `cat somefile`) exactly the same as var=$(cat somefile)?
Edited by deanishe
Link to comment

Sent a pull request on GitHub to add JSON file handling to bundler.sh and improve the error handling a wee bit.

I've added Pip as a default utility, though the Python Bundler currently points to a URL in my fork on account of it not being available in the main one.

 

Accepted the pull request. Also gave you full commit access to the repo.

 

I wonder whether it (the JSON file) should be "bundled" with bundler.py, but it'd have to go in a docstring and be saved to a temporary file before each call in order to keep the file count to one. It seems incorrect to add it as a utility because (1) it's not a utility in the same vein as the other utilities and (2) it doesn't work like one (there's no executable; it needs to be added to sys.path and imported).

 

But we basically call it with a system call, right? (My lack of knowledge of Python). If we call it with a system call, then it is a utility. Otherwise, would we actually treat it as a Python package? If so, then it can be the latter, although it might be a special case. It seems to occupy a liminal space for the way that we're using it.

 

There seems to be something slightly funky happening with the Gatekeeper stuff. It occasionally pops up its dialog when installing/calling Pip, which it shouldn't, as it's pure command-line stuff. (Also, isn't there supposed to be some informational dialog that pops up first?)

I suspect the problem has to do with the existence of empty invoke files. My personal preference would be not to create these files if they're going to be empty (i.e. the utility can't be invoked). I may have misunderstood how they're used, however. (I haven't read, and don't really want to read the PHP code.)

 

The Gatekeeper scripts used to be functioning well, but I might have accidentally done something to them that made them appear funky. They might be there for the invoke files. I'll have to check to see if any of those are empty because they shouldn't be.

 

Don't worry about the PHP there.

 

A couple of things I've noticed (but may have completely misunderstood because bash):

  • Is there a difference between [[ ! -z "$var" ]] (i.e. not empty) and [[ -n "$var" ]] (i.e. is set)?
  • Isn't var=$(echo `cat somefile`) exactly the same as var=$(cat somefile)?

 

Inconsistencies on my part that probably stemmed from what time of the day I was writing / debugging it.

 

I should get some time in a few hours, and I'll go through and fix those inconsistencies, check out gatekeeper, etc...

 

Thanks for all the work.

Link to comment

Accepted the pull request. Also gave you full commit access to the repo.

 

Ta. Not sure I want it, though. I have a nasty habit of coding drunk (like now), and I'd hate to break loads of stuff. I can't even do pull requests properly yet… (I'm guessing you didn't see all my messing around with them in real time.)

 

But we basically call it with a system call, right? (My lack of knowledge of Python). If we call it with a system call, then it is a utility. Otherwise, would we actually treat it as a Python package? If so, then it can be the latter, although it might be a special case. It seems to occupy a liminal space for the way that we're using it.

 

Essentially, it's used as a Python library (the Python bundler adds its parent directory to sys.path and does import pip).

OTOH, the standard bundler utility install mechanism makes installing pip marvellously easy (and AFAIK installing it elsewhere would require significant modifications), and it's a one-off kind of thing that wouldn't be used by other workflows. Other workflow authors (let alone users) don't have to know it's in there, soiling the purity of the assets directory.

I chose assets/python/<bundle-id> as the logical place to put the workflow-specific install directories, and I'm actually a bit worried about putting the Python bundler's files in there, too. Perhaps I should rename the directory to use the Bundler "bundle ID" (i.e. alfred-bundler-aries.python-helpers instead of just python-helpers).

 

The Gatekeeper scripts used to be functioning well, but I might have accidentally done something to them that made them appear funky. They might be there for the invoke files. I'll have to check to see if any of those are empty because they shouldn't be.

 

The pip one should be empty: there is no invokable command. pip can't be called, only imported. I mean, I could create a small callable wrapper in there and use that instead, but it's unnecessary for the functioning of pip. Most properly coded Python "executables" are thin wrappers that load and call library functions, which can just as easily be called directly as via the executable.
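The "thin wrapper" pattern looks roughly like this (the `main` function here is a generic stand-in, not pip's actual entry point):

```python
#!/usr/bin/env python
"""Hypothetical thin executable wrapper: all the real logic lives in a
library function that can be called directly with the same arguments
you'd pass on the command line."""
import sys

def main(argv=None):
    # Stand-in for a library entry point, e.g. a command dispatcher.
    argv = sys.argv[1:] if argv is None else argv
    if not argv:
        return 2  # no command given
    print('running command: %s' % argv[0])
    return 0

if __name__ == '__main__':
    sys.exit(main())
```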

 

What's more, if the Bundler is designed to support more generic "assets" than callable utilities (which is my understanding), a callable executable is not a given: I might use it to install an icon collection, for example, for which an invoke command is nonsensical.

 

Don't worry about the PHP there.

 

If you insist ;)

 

Inconsistencies on my part that probably stemmed from what time of the day I was writing / debugging it.

 

I should get some time in a few hours, and I'll go through and fix those inconsistencies, check out gatekeeper, etc...

 

Nah. It's probably down to pip not really being a callable utility, and thus not having an invokable command. It could have one, but it isn't necessary.

 

Thanks for all the work.

 

Haha! As if I'm not going to use the crap out of this!

Thank you for all the graft on the bundler.

Once we're golden master on this, I'm probably going to include it in my Alfred-Workflow Python library, and then there'll surely be all manner of neckbeards demanding I have sex with their hot sisters and shit.

 


 

What do I (we) do about the pip version? Currently, its JSON file is versioned on the basis of the Python bundler version (i.e. new bundler version, new pip version), but it'd be way better to ensure the latest version of pip is installed. 

Does the bundler have a way to ensure that, or is it up to you/me/us to keep the JSON file up-to-date?

 

Currently, I have the Python bundler pull a JSON recipe from my fork. If it's now one of the default utilities, what do I have to do in bundler.py to ensure that it's updated?

 

(Note: pip can update itself with pip install --upgrade pip. I could set the bundler to run that once a month or something.)
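That "once a month" check could be as simple as a timestamp file next to the install. A sketch, assuming the bundler can write a stamp file somewhere (the function and file name are made up for illustration):

```python
import os
import subprocess
import time

UPDATE_INTERVAL = 30 * 24 * 60 * 60  # roughly one month, in seconds

def maybe_update_pip(stamp_file, interval=UPDATE_INTERVAL):
    """Run `pip install --upgrade pip` if the stamp file is missing or
    older than `interval` seconds, then refresh the stamp file."""
    now = time.time()
    if (os.path.exists(stamp_file)
            and now - os.path.getmtime(stamp_file) < interval):
        return False  # updated recently enough, do nothing
    subprocess.check_call(['pip', 'install', '--upgrade', 'pip'])
    with open(stamp_file, 'w') as fp:
        fp.write(str(now))
    return True
```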

Edited by deanishe

Also, one more thing.

bundler.py is documented in such a way that it's possible to auto-generate its documentation with Sphinx. I've done so locally, but I don't know what to do about publishing the docs.

All the documentation for Alfred-Workflow was generated this way.

Should I add them to the GitHub repository? (There's a pydoc subdirectory and a build-pydoc.sh file in the root directory, although the latter can easily be dispensed with.)

Adding them to the gh-pages branch presents a problem, as we (I) would have to create a template that matches the other docs.

I suppose it might be worth considering creating a standalone library (in name only: it would still use bundler.sh) to publish on PyPI and use their automatic pythonhosted.org option.

(I should probably do this with Alfred-Workflow, too.)


Adding them to the gh-pages branch presents a problem, as we (I) would have to create a template that matches the other docs.

 

The template that I'm using up there sucks. So, I'm more than happy to adjust what's currently up there to match yours. Unfortunately, we'd have to do a little manual editing each time, which shouldn't be too big of a deal.

 

Don't add them to the Aries branch of the repo because that's what everyone downloads, and I'd like to keep it as clean as possible. If you want to throw them up on GH, then throw them into the master branch.

 

Get your workflows library up on PyPI first, and then we can figure out the best way to push the bundler on that one.

Edited by Shawn Rice
