
Alfred Dependency Downloader Framework



 

Ta. Not sure I want it, though. I have a nasty habit of coding drunk (like now), and I'd hate to break loads of stuff. I can't even do pull requests properly yet… (I'm guessing you didn't see all my messing around with them in real time.)

 

Well, you have commit access now. We should probably move development to another branch, though... (And I got an email for each pull request. I was a bit confused, expecting to see more and then just seeing one. I'm glad I didn't check in sooner.)

 

 

 


 

 

Essentially, it's used as a Python library (the Python bundler adds its parent directory to sys.path and does import pip).

OTOH, the standard bundler utility install mechanism makes installing pip marvellously easy (and AFAIK installing it elsewhere would require significant modifications), and it's a one-off kind of thing that wouldn't be used by other workflows. Other workflow authors (let alone users) don't have to know it's in there, soiling the purity of the assets directory.

I chose assets/python/<bundle-id> as the logical place to put the workflow-specific install directories, and I'm actually a bit worried about putting the Python bundler's files in there, too. Perhaps I should rename the directory to use the Bundler "bundle ID", i.e. alfred-bundler-aries.python-helpers instead of just python-helpers.

 

 

The pip one should be: there is no invoke-able command. It can't be called, only imported. I mean, I could create a small callable wrapper in there and use that instead, but it's unnecessary for the functioning of pip. Most properly coded Python "executables" are thin wrappers that load and call library functions that can just as easily be called with the command-line arguments as the executable itself.

 

What's more, if the Bundler is designed to support more generic "assets" than callable utilities (which is my understanding), a callable executable is not a given: I might use it to install an icon collection, for example, for which an invoke command is nonsensical.

 

Well, perhaps "invoke" was a poor word to use there because it's misleading. The "invoke" file contains the relevant information to call the libraries/asset. So, for some utilities, you just need the name for convention ("Pashua.app"); for others, you need the path ("terminal-notifier.app/Contents/MacOS/terminal-notifier") to properly call it. In order to make the wrapper as abstract as possible, the __loadAsset function needs to return the usable information so that it can be stored in a variable.
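The idea above can be sketched in Python: a hypothetical analogue of __loadAsset that reads the asset's 'invoke' file and returns the usable call path. The file layout and function name here are assumptions for illustration, not the bundler's actual API.

```python
import os

def load_asset(asset_dir):
    """Return the command needed to call an asset, per its 'invoke' file.

    The 'invoke' file holds the path, relative to the asset directory,
    that actually gets executed: just "Pashua.app" for some assets, or
    "terminal-notifier.app/Contents/MacOS/terminal-notifier" for others.
    """
    with open(os.path.join(asset_dir, 'invoke')) as fp:
        relpath = fp.read().strip()
    return os.path.join(asset_dir, relpath)
```

The caller stores the returned string in a variable and uses it to build the actual shell command, which is exactly the "return the usable information" behaviour described above.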

 

For libraries, it makes a bit less sense, but I decided to include it anyway (1) for uniformity and (2) on the off chance that someone had a strange PHP 'library' that included five different files that needed to be loaded. Hence, the PHP __load() function, if grabbing PHP libraries, actually does a foreach loop and requires each one.

 

For Python, the requirements.txt file is, basically, the equivalent of the "invoke" file as it tells the bundler what to do / what to return / what to construct.

 

In re: Pip. I was under the impression that it was a utility because I call it on my system with 'pip install....' etc., just like I do with apt or brew. And that makes sense because it's used as a command. So, why don't we just do the same here? It makes the installation _very_ easy, as you mentioned, which might be enough of a reason to treat it as such.

 

We could treat Pip as a standard Python asset, but it really isn't a good idea because Pip is something that no workflow should ever call but something that only the bundler should use. So, we should treat it specially. Now, that means we could treat it as a utility, or we could treat it as an internal utility and have it live somewhere else in the directory structure. Say: `bundler/meta/utilities/Pip`.

 

---

 

For the regular file structure, what I'd assume is that in the root bundler directory, there would be something like bundler.py. In the wrapper directory, there'd be alfred.bundler.py (these may not match up). Basically, bundler.py would be the package that does all the heavy lifting for the bundler, and alfred.bundler.py in the wrappers directory would be the file that users would include with their workflow in order to implement the bundler. (Again, not sure if this can work with Python.)

The idea is that the wrapper that the workflow author includes is abstract enough that it can make a simple call to load something and get exactly what it needs back, and it must also be able to install the bundler itself. What I like about this abstraction is that we can then change anything we want about bundler.py, and alfred.bundler.py will still work exactly the same.

 

There is an auto-updater for the bundler to be able to upgrade itself from one minor version to another. Pretty soon we're going to test whether that works in the real world, as we'll have to bump the minor version from 1 to 2 when we get all of this stuff sorted. I haven't been back through the code, but what it does is, once per week or so, it checks the contents of the minor-version file in the GH repo, and, if it differs, it downloads an update script and runs it. The update script will have all the upgrade logic from version to version, and so it should be able to do just about anything we need. Emphasis on the should.

 

For upgrading Pip, we could do something similar: create a sort of place that the bundler will check every once in a while, and upgrade Pip if it is instructed to do so. We can figure that out later, but before we push the version change. (In order not to hurt a workflow's performance, these scripts are executed and then disowned.)
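The once-a-week throttle described above might look roughly like this. It assumes a 'last-update-check' stamp file in the bundler's data directory; the real bundler's file names and locations may differ.

```python
import os
import time

WEEK = 7 * 24 * 3600

def should_check_for_update(data_dir, interval=WEEK):
    """Return True (and touch the stamp) if the weekly check is due.

    A 'last-update-check' stamp file in the bundler's data directory
    stores the time of the last check; if less than `interval` has
    passed, the caller skips the network entirely.
    """
    stamp = os.path.join(data_dir, 'last-update-check')
    try:
        with open(stamp) as fp:
            last = float(fp.read().strip())
    except (IOError, ValueError):
        last = 0.0
    if time.time() - last < interval:
        return False
    with open(stamp, 'w') as fp:
        fp.write(str(time.time()))
    return True
```

Only when this returns True would the bundler fetch the remote minor-version file, compare, and run the update script.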

---

Okay, so:

 

Fixed the Gatekeeper script (I had introduced an unexpected-character error in the AppleScript).

Bundler.php/bundler.sh now calls gatekeeper only if '.app' is found somewhere in the 'invoke' file.

 

Matched alfred.bundler.sh to alfred.bundler.misc.sh

 

Started the update-the-bundler.sh script to create the upgrade path from minor version 1 to 2.

---

Nice improvement. Gatekeeper's still a problem, however. See below.

I just pushed some changes to the bundler.py inline documentation, but my commit messages are all messed up. Sorry :(


Pip normally is installed as a runnable program, but it doesn't have to be. The way most Python utilities are installed is that you specify a script name and a function in your library, and the installer then creates a simple wrapper that calls that function when run. If you cat the pip executable, you'll see what I mean.

Instead of creating the wrapper script, I just import pip instead and call its main() function with the command line arguments.

Creating the runnable wrapper would actually make things (slightly) more complicated.
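For what it's worth, the library-style call can be sketched like this. pip < 10 exposed pip.main() for exactly this use; `python -m pip` is the equivalent, supported route on any version. The helper names and the `--target` layout here are illustrative, not the bundler's actual API.

```python
import subprocess
import sys

def pip_install_args(package, target_dir):
    """Build a `python -m pip` invocation for a workflow-local install."""
    return [sys.executable, '-m', 'pip', 'install',
            '--target', target_dir, package]

def pip_install(package, target_dir):
    """Run the install and return pip's exit status."""
    return subprocess.call(pip_install_args(package, target_dir))
```

Using `--target` keeps packages out of the system Python, which matters for the "more dangerous than anything" concern raised later in the thread.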


For the Python version, there is only the wrapper for workflow authors to include (bundler.py in the wrappers directory). It can't be called alfred.bundler.py, as dots aren't allowed in module names. It uses the bash wrapper to handle utilities and Pip for Python libraries. There's not a whole lot to it.

I'd sooner avoid re-implementing the core functionality in Python, but the performance is currently definitely unacceptable. From my machine: 

Calling `bundler.init()`
10 calls in 0.0388 s (0.0039 s/call)

Calling `bundler.utility("cocoaDialog")`
10 calls in 1.7871 s (0.1787 s/call)
(bundler.init() is the pure Python code that handles the Python libraries.)

The time it takes for one call to get a utility's path is the time I normally aim to have my entire workflow finish and return its results in…

After a bit of digging, it turns out that it's gatekeeper.sh that's taking most of the time. With gatekeeper.sh disabled: 

Calling `bundler.init()`
10 calls in 0.0409 s (0.0041 s/call)

Calling `bundler.utility("cocoaDialog")`
10 calls in 0.4689 s (0.0469 s/call)
We definitely need to do something about that. Is there some way we could cache the results of the calls to gatekeeper.sh (and the various PHP scripts, but mostly gatekeeper.sh) so they don't have to be called every time?
---

Caching might make things possible. The spctl command that Gatekeeper uses is stupidly slow (thanks Apple). I might have taken the more complicated route for the Gatekeeper script by using spctl. It appears that I might be able to just do something like

xattr -d -r com.apple.quarantine <app/path>

It's a bit harder for me to test the Gatekeeper scripts because I disabled it a while ago because it's damn annoying, and I know what I fucking downloaded. Gatekeeper, to me, really does seem like it's a move to push newer users into their Walled Garden that is the App Store. I have set up a clean VM of Mavericks to test these things, but it's a pain in the ass. Will do that, however, before we push a final.

 

For the Bash bundlers, I'll look to see if I can remove the PHP scripts as much as possible. Some of the scripts need to run every time (like the registry), but I can improve the performance by forking the process, which I thought I had done, but apparently not well enough. The rest are necessary only when installing new assets, where the extra lag isn't really a big deal: the added time is negligible next to the download itself.

---

So, I added caching of the calls to bundler.sh to bundler.py

Calling `bundler.init()`
10 calls in 0.0449 s (0.0045 s/call)

Calling `bundler.utility("cocoaDialog")`
10 calls in 0.0002 s (0.0000 s/call)
I'd say the performance problem's solved for the Python version :)
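The call caching might look roughly like this (a sketch, not the actual bundler.py code): memoize the expensive shell call so repeated lookups cost nothing within a process. The argument convention for the bash wrapper is assumed; the real version also caches to disk so the saving persists across runs.

```python
import subprocess

_cache = {}

def utility(name, bundler='bundler.sh'):
    """Return the path of a bundler utility, caching the result.

    The expensive shell call happens at most once per utility per
    process; later lookups hit the in-memory cache.
    """
    if name not in _cache:
        _cache[name] = subprocess.check_output(
            ['bash', bundler, 'utility', name]).strip()
    return _cache[name]
```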

I also added an uninstaller script. Hope that's okay.

---

I was actually planning on creating the uninstaller script as a separate workflow. Actually, it would be more of a bundler/manager script that would also work to remove regular data directories for uninstalled workflows. That's why the registry is there: if a utility has been orphaned (no installed workflow's bundle ID references it any more), the user has the option to remove the utility/asset. For Python, it would be easier because the directories are named after the bundle IDs.

 

I think having the manual uninstallation is a better idea than one that happens silently. Thoughts?

---

Caching might make things possible. The spctl command that Gatekeeper uses is stupidly slow (thanks Apple). I might have taken the more complicated route for the Gatekeeper script by using spctl. It appears that I might be able to just do something like

xattr -d -r com.apple.quarantine <app/path>
It's a bit harder for me to test the Gatekeeper scripts because I disabled it a while ago because it's damn annoying, and I know what I fucking downloaded. Gatekeeper, to me, really does seem like it's a move to push newer users into their Walled Garden that is the App Store. I have set up a clean VM of Mavericks to test these things, but it's a pain in the ass. Will do that, however, before we push a final.

 

For the Bash bundlers, I'll look to see if I can remove the PHP scripts as much as possible. Some of the scripts need to run every time (like the registry), but I can improve the performance by forking the process, which I thought I had done, but apparently not well enough. The rest are necessary only when installing new assets, where the extra lag isn't really a big deal: the added time is negligible next to the download itself.

Obviously, the PHP calls are irrelevant compared to the time it takes to install stuff.

I still have Gatekeeper on, but it seems to go quiet after you've said yes once. Do you know how to reset it?

Why does registry.php need to run every time? Can't it create a cache file that can be grepped, and only run if the entry doesn't exist?

I've noticed that the first run can take a very long time if neither the bundler nor dependencies are installed. Is there any way we can notify the user sooner? At the least, shouldn't we notify the user whenever something is being installed?

---

I was actually planning on creating the uninstaller script as a separate workflow. Actually, it would be more of a bundler/manager script that would also work to remove regular data directories for uninstalled workflows. That's why the registry is there: if a utility has been orphaned (no installed workflow's bundle ID references it any more), the user has the option to remove the utility/asset. For Python, it would be easier because the directories are named after the bundle IDs.

 

I think having the manual uninstallation is a better idea than one that happens silently. Thoughts?

I wasn't thinking of a utility for the user, more something to make testing/development easier.

Something significantly smarter would be required for users, and with some kind of UI, to boot.

So you're planning an accompanying workflow to manage the bundler?

WRT Python, uninstallation is a piece of cake: run through the workflows directory and grab all the bundle IDs, then delete any subdirectories of assets/python that aren't in the list of bundle IDs.
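That cleanup can be sketched as follows. The function name and directory layout are assumptions; gathering the installed bundle IDs (e.g. by reading each workflow's info.plist) is left out here.

```python
import os
import shutil

def remove_orphans(python_assets_dir, installed_bundle_ids):
    """Delete per-workflow library directories with no matching workflow.

    Subdirectories of assets/python are named after workflow bundle
    IDs, so any directory whose name is not in the installed set is an
    orphan. Returns the names of the directories removed.
    """
    removed = []
    for name in sorted(os.listdir(python_assets_dir)):
        path = os.path.join(python_assets_dir, name)
        if os.path.isdir(path) and name not in installed_bundle_ids:
            shutil.rmtree(path)
            removed.append(name)
    return removed
```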

Edited by deanishe
---

Well, currently it creates a file that's in JSON, which means that Bash can't really read it. I might be able to make another file, like I do with the Packal updater (the endpoints.list and endpoints.json), but I'm not sure if that would work well.
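The endpoints.list/endpoints.json pattern might carry over like this (a sketch; the 'registry.list' name and line format are assumptions):

```python
import json
import os

def write_registry(data_dir, registry):
    """Write the registry as JSON plus a flat, grep-able text file.

    The JSON file stays the canonical store; 'registry.list' holds one
    tab-separated 'asset<TAB>bundle-id' line per entry so a Bash
    script can grep it without a JSON parser.
    """
    with open(os.path.join(data_dir, 'registry.json'), 'w') as fp:
        json.dump(registry, fp)
    with open(os.path.join(data_dir, 'registry.list'), 'w') as fp:
        for asset, bundle_ids in sorted(registry.items()):
            for bundle_id in bundle_ids:
                fp.write('%s\t%s\n' % (asset, bundle_id))
```

A Bash caller could then test membership with a plain `grep` and only fall back to the PHP/JSON path when the entry is missing.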

 

But, it's fine to have it run each time if we just fork the process because, well, it wouldn't affect the workflow performance.

 

The notification is hard. For Alfred Cron, I built in a response for the script filter that just says it's setting cron up, but that's not integrated into the bundler. Since the bundler currently uses TN to notify, we have to wait until the bundler is partially set up before we can use that, which is unfortunate.

 

I suppose we could throw up an AppleScript dialog with a timer. What do you think of that?
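A sketch of such a dialog, driven from Python via osascript; the 'giving up after' clause is what gives the dialog its timer. The wrapper is a no-op where osascript is absent, and it assumes the message contains no double quotes (no escaping is done).

```python
import shutil
import subprocess

def dialog_cmd(message, timeout=10):
    """Build an osascript call for a self-dismissing AppleScript dialog.

    'giving up after' closes the dialog automatically, so a stalled
    install can't leave it on screen forever.
    """
    script = ('display dialog "%s" with title "Alfred Bundler" '
              'buttons {"OK"} giving up after %d' % (message, timeout))
    return ['osascript', '-e', script]

def show_dialog(message, timeout=10):
    # Only meaningful on OS X; silently does nothing elsewhere.
    if shutil.which('osascript'):
        subprocess.call(dialog_cmd(message, timeout))
```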

 

We could push a notification each time the bundler installs something, but since it is all contained in a separate directory, do you think that's necessary?

---

What I like about manual uninstallation is that, say, again, Pashua is installed. The workflow that uses it is deleted, but another might be installed later that would re-download it, so keeping it around makes sense, but giving the user an easy option to delete it also makes sense.

 

I could use a UI like the one for the Packal Updater, but I don't think it's necessary because a simple script filter should suffice: one with drill-down to see what's installed per workflow, which can also show orphaned assets with the option to uninstall them.

 

That workflow shouldn't be too hard to write.

---

Since we're not treating Pip as a utility, let's actually get rid of the JSON and of installing it via the utility mechanism. Can you write the installation as a function in bundler.py?

Err … I could, but the bundler just made it so easy :)

But seriously, it shouldn't be a problem if you think that's better. Thinking about it, I could try to update pip every time a workflow wants to install something. That would hide the performance hit ;)

I haven't really given any thought to a proper uninstaller: I just wanted a quick something to help me test better.

One more thing that had me scratching my head for a bit: the bundler deletes its cache directory. Could it be modified to only delete the temporary files instead (I wanted to keep some stuff in there)?

Quick note: you guys rock. I'm learning a lot just trying to follow along.

Have you looked at the Python bundler? It's mostly a rip of your code :)

The only "clever" part is the newly-added function call caching, which I largely ripped from the wiki.

---

Have you looked at the Python bundler? It's mostly a rip of your code :)

The only "clever" part is the newly-added function call caching, which I largely ripped from the wiki.

 

Yeah, I've looked and I'm glad I could help. It's just great to learn more about something closer to a "program" with distributed code and robust functionality. I'm still at the early stages of getting out of "scripting mode" where all of my code is extremely procedural. I'm learning more and more how to think at larger, more abstract levels, and you two guys have helped me the most in that area.
 
PS. Shawn, you should check out version 2.0 of Skimmer. I've finally added Annotation Export. 
 
Also, I was trying to see if I could get Alfred Python to work if I put it in the bundler directory tree. I'm still early on, but it seems that adding the ability to pass the plist file into the initializer takes care of a lot (at least all of the stuff my workflows are currently using). Just a thought... 
---

Well, I figure that we should make Pip either fully a utility or fully a Python asset. I am not okay with the in-between: something with JSON that isn't really a utility. I'm all for whatever is easiest yet still functional and performant.

 

For Pip updating itself, we can make it even better: look at the update.sh script, and you'll see that it stores a 'last checked for update' time in bundler-dir/data and won't check again for a week. You could store it the same way and check once a week.

 

The thing about the cache directory itself is that it is deleted at reboot (is this true at all?). But the update check for the bundler itself is stored in the data directory, which, to me, makes sense. If you have a more permanent cache, then just go ahead and store it in "bundler-dir/data" or somewhere in a subdir of that.

---

Well, I figure that we should make Pip either fully a utility or fully a Python asset. I am not okay with the in-between: something with JSON that isn't really a utility. I'm all for whatever is easiest yet still functional and performant.

Makes sense design-wise, but I don't see much point in creating the pip executable: it's more dangerous than anything. If it isn't called correctly, it will install stuff in the system Python, which is bad.

There isn't really such a thing as a Python asset in the bundler sense: they don't work the same way as PHP/bash.

At any rate, I've given the Python bundler its own "bundle ID" and assets/python subdirectory, and I will stick pip in there instead.

 

For Pip updating itself, we can make it even better: look at the update.sh script, and you'll see that it stores a 'last checked for update' time in bundler-dir/data and won't check again for a week. You could store it the same way and check once a week.

Yeah, I would do the same. And similarly, I would only call update.sh if other stuff is being installed: Running stuff in the background isn't quite so straightforward in Python (the main script won't exit till all subprocesses are done), and it seems prudent to only run it when a network connection is required in any case.

 

The thing about the cache directory itself is that it is deleted at reboot (is this true at all?). But the update check for the bundler itself is stored in the data directory, which, to me, makes sense. If you have a more permanent cache, then just go ahead and store it in "bundler-dir/data" or somewhere in a subdir of that.

No, the cache directory isn't deleted at reboot. It's just intended for data that can be deleted without messing things up.

 

Also, I was trying to see if I could get Alfred Python to work if I put it in the bundler directory tree. I'm still early on, but it seems that adding the ability to pass the plist file into the initializer takes care of a lot (at least all of the stuff my workflows are currently using). Just a thought...

What is "Alfred Python" (there are several libraries of that name), which "initializer" and why would you want to pass a plist to it?

WRT the Python bundler, you should be able to install any package on PyPI and any package on GitHub that has a setup.py file.

I'm probably going to add my own workflow library to PyPI soon. I haven't decided yet whether I should include the bundler in the workflow library or recommend installing the library via the bundler…

I suppose I could do both.

Edited by deanishe
---

Also, I added code to the bundler to notify the user when Python dependencies are being installed.

It takes a good few seconds in the best case, so I figured it makes more sense than dumb silence and an unresponsive workflow.

Should I leave that up to the workflow developer instead, or set a flag so they can disable it?

Edited by deanishe
---

What is "Alfred Python" (there are several libraries of that name), which "initializer" and why would you want to pass a plist to it?
 
WRT the Python bundler, you should be able to install any package on PyPI and any package on GitHub that has a setup.py file.
 
I'm probably going to add my own workflow library to PyPI soon. I haven't decided yet whether I should include the bundler in the workflow library or recommend installing the library via the bundler…
 
I suppose I could do both.

 

Left a word out. I meant your Alfred Workflow. And I mean the init for the Workflow class. Here's the version:
def __init__(self, input_encoding='utf-8', normalization='NFC',
             default_settings=None, capture_args=True, libraries=None,
             plist=None):

    self._default_settings = default_settings or {}
    self._input_encoding = input_encoding
    self._normalization = normalization
    self._capture_args = capture_args
    self._workflowdir = None
    self._settings_path = None
    self._settings = None
    self._bundleid = None
    self._name = None
    self._info = None
    self._info_loaded = False
    self._logger = None
    self._items = []
    self._search_pattern_cache = {}
    if plist:
        self._info_plist = plist
    else:
        # info.plist should be in the directory above this one
        self._info_plist = self.workflowfile('info.plist')
    if libraries:
        sys.path = libraries + sys.path
This allows me to put Alfred Workflow in the bundler directory (instead of in my workflow's actual directory) and use one copy of it for all my workflows, using Alfred Workflow the way I would something like requests. I was toying with this since it's something that is in all of my workflows. But you have to pass plist, since typically workflow.py would search its own directory tree for the plist. By passing it in on initialization, I can put workflow.py in the bundler dir. That's all I meant.
 

 

Edited by smarg19
---

At any rate, I've given the Python bundler its own "bundle ID" and assets/python subdirectory, and I will stick pip in there instead.

 

I like it.

 

Yeah, I would do the same. And similarly, I would only call update.sh if other stuff is being installed: Running stuff in the background isn't quite so straightforward in Python (the main script won't exit till all subprocesses are done), and it seems prudent to only run it when a network connection is required in any case.

 

Use nohup and redirect all output and errors to /dev/null. The process will be considered done, so things can go on. There should be a check for a network connection as well. Running update.sh only when things are installed isn't optimal -- at least for bundler updates -- because if someone's installed workflows already have all their dependencies, then nothing new ever gets installed, and the bundler would never update itself. The best way is to make sure the update check runs once per week or so, so that things stay up to date.

 

 

No, the cache directory isn't deleted at reboot. It's just intended for data that can be deleted without messing things up.

 

Makes sense, but let's keep it in the data directory anyway just because those files should always exist. Let's just use the cache directory for very ephemeral storage.

 

 

I'm probably going to add my own workflow library to PyPI soon. I haven't decided yet whether I should include the bundler in the workflow library or recommend installing the library via the bundler…

I suppose I could do both.

 

Let's get a release going before you do it. It makes great sense to have the bundler bundled with the library. I've been writing my own PHP library, and I figure I'll do the same with it, but we should figure out how to make them 'self-upgradable' between at least minor versions, if not major versions. What's pretty awesome about something like that is that the bundler/library would have a co-dependent updating relationship for changes that don't alter how they work, so, basically, you can ship fixes for each without breaking any workflows.

 

Should I leave that up to the workflow developer instead, or set a flag so they can disable it?

 

If it's up to the workflow developer, then we need to provide copy/paste sample code. Otherwise, notify each time in the least intrusive way. You can rely on the existence of TN because the bundler installs it for itself.

 

Also, I added code to the bundler to notify the user when Python dependencies are being installed.

How? TN or Applescript?

---

 

 

Left a word out. I meant your Alfred Workflow. And I mean the init for the Workflow class. Here's the version:
def __init__(self, input_encoding='utf-8', normalization='NFC',
             default_settings=None, capture_args=True, libraries=None,
             plist=None):

    self._default_settings = default_settings or {}
    self._input_encoding = input_encoding
    self._normalization = normalization
    self._capture_args = capture_args
    self._workflowdir = None
    self._settings_path = None
    self._settings = None
    self._bundleid = None
    self._name = None
    self._info = None
    self._info_loaded = False
    self._logger = None
    self._items = []
    self._search_pattern_cache = {}
    if plist:
        self._info_plist = plist
    else:
        # info.plist should be in the directory above this one
        self._info_plist = self.workflowfile('info.plist')
    if libraries:
        sys.path = libraries + sys.path
This allows me to put Alfred Workflow in the bundler directory (instead of in my workflow's actual directory) and use one copy of it for all my workflows, using Alfred Workflow the way I would something like requests. I was toying with this since it's something that is in all of my workflows. But you have to pass plist, since typically workflow.py would search its own directory tree for the plist. By passing it in on initialization, I can put workflow.py in the bundler dir. That's all I meant.
 

 

 

Dean might have to make these changes, but remember that, because of how Python (in my opinion stupidly) handles packages, you'll get a copy of the library per workflow.

 

The only thing negative that I've ever really seen/heard/understood about Python is the way it handles packages. Why is PHP better at this than Python? This just seems wrong. Am I right?

Edited by Shawn Rice
---

Left a word out. I meant your Alfred Workflow. And I mean the init for the Workflow class. Here's the version:

 

def __init__(self, input_encoding='utf-8', normalization='NFC',
             default_settings=None, capture_args=True, libraries=None,
             plist=None):

    self._default_settings = default_settings or {}
    self._input_encoding = input_encoding
    self._normalization = normalization
    self._capture_args = capture_args
    self._workflowdir = None
    self._settings_path = None
    self._settings = None
    self._bundleid = None
    self._name = None
    self._info = None
    self._info_loaded = False
    self._logger = None
    self._items = []
    self._search_pattern_cache = {}
    if plist:
        self._info_plist = plist
    else:
        # info.plist should be in the directory above this one
        self._info_plist = self.workflowfile('info.plist')
    if libraries:
        sys.path = libraries + sys.path
This allows me to put Alfred Workflow in the bundler directory (instead of in my workflow's actual directory) and use one copy of it for all my workflows, using Alfred Workflow the way I would something like requests. I was toying with this since it's something that is in all of my workflows. But you have to pass plist, since typically workflow.py would search its own directory tree for the plist. By passing it in on initialization, I can put workflow.py in the bundler dir. That's all I meant.

The "proper" way to do it would probably be to change the workflowdir() method to start looking in the current working directory instead of its own location.

I wrote it the way it is in order that the code would still work if called from outside the workflow directory (in Terminal, basically). There may be a way to figure out the directory of the calling script—that would be optimal.

If you just pass in a path to the plist and have the library installed outside of your workflow's directory, workflowdir() and dependent methods will be broken.
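One possible shape for that change, as a sketch: walk upwards from the working directory until an info.plist turns up. Alfred normally launches workflow scripts with the workflow root as the working directory, so the walk usually ends on the first step; calling from Terminal inside the workflow tree also works.

```python
import os

def find_info_plist(start=None):
    """Search upwards from `start` (default: the working directory).

    Returns the path to the first info.plist found, or None if the
    walk reaches the filesystem root without finding one.
    """
    path = os.path.abspath(start or os.getcwd())
    while True:
        candidate = os.path.join(path, 'info.plist')
        if os.path.isfile(candidate):
            return candidate
        parent = os.path.dirname(path)
        if parent == path:  # hit the filesystem root
            return None
        path = parent
```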

 

Dean might have to make these changes, but remember that, because of how Python (in my opinion stupidly) handles packages, you'll get a copy of the library per workflow.

 

The only thing negative that I've ever really seen/heard/understood about Python is the way it handles packages. Why is PHP better at this than Python? This just seems wrong. Am I right?

It isn't. Python works this way because it has implicit namespaces (so you don't have to do silly stuff like start your functions with __ or some other prefix to—hopefully—avoid name conflicts). It's really not a problem once you understand how it works (i.e. sys.path). Also, as discussed elsewhere, it allows Python to bundle a huge number of libraries by default without having to load them all every time.

Admittedly, it would be great if Python had some concept of versioned libraries.

 

I like it.

 

 

Use nohup and redirect all output and errors to /dev/null. The process will be considered done, so things can go on. There should be a check for a network connection as well. Running update.sh only when things are installed isn't optimal -- at least for bundler updates -- because if someone's installed workflows already have all their dependencies, then nothing new ever gets installed, and the bundler would never update itself. The best way is to make sure the update check runs once per week or so, so that things stay up to date.

 

 

Yeah. This is kinda frowned upon. Forking is the "proper" way to do it, and also rather excessive in this case (IMO).
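For reference, a sketch of detaching a child in Python without nohup: a new session plus redirected streams is roughly what `nohup ... & disown` does in the shell, and the parent can exit immediately. `start_new_session` is POSIX-only; the real bundler may do this differently.

```python
import os
import subprocess

def run_detached(argv):
    """Launch `argv` fully detached from the current process.

    The child runs in its own session with stdin/stdout/stderr
    pointed at /dev/null, so it keeps running (and holds no terminal)
    after the parent exits.
    """
    with open(os.devnull, 'r+b') as devnull:
        return subprocess.Popen(
            argv,
            stdin=devnull, stdout=devnull, stderr=devnull,
            start_new_session=True,  # detach into a new session (POSIX)
        )
```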

 

Makes sense, but let's keep it in the data directory anyway just because those files should always exist. Let's just use the cache directory for very ephemeral storage.

Well, what I wanted to store there was most definitely cache data, but it's no biggie.

 

Let's get a release going before you do it. It makes great sense to have the bundler bundled with the library. I've been writing my own PHP library, and I figure I'll do the same with it, but we should figure out how to make them 'self-upgradable' between at least minor versions, if not major versions. What's pretty awesome about something like that is that the bundler/library would have a co-dependent updating relationship for changes that don't alter how they work, so, basically, you can ship fixes for each without breaking any workflows.

Oh, I won't be doing it any time soon. I'm just wondering if I should bundle the bundler or recommend bundler as a way to install the workflow library.

Probably the former, tbh.

 

If it's up to the workflow developer, then we need to provide copy/paste sample code. Otherwise, notify each time in the least intrusive way. You can rely on the existence of TN because the bundler installs it for itself.

 

How? TN or Applescript?

I'm using TN (seeing as it's already there). I think setting a module-level flag is the simplest solution (though I don't know why a developer would want to turn it off).

Edited by deanishe
---
