
Is there any disadvantage to using self-call and delay to apply the same action every so often?




I am using the Python package "Alfred-Workflow" to write a workflow to manage downloads: it first reads from an RSS feed, and then I can send items to aria2c to download.

However, when there are many items in the feed, updating takes some time, so I need to cache the results for 20 minutes.

 

I know that I can run the update process with "run_in_background", but to start it I need to trigger the workflow at least once.

It would be nice if the update could run in the background every 20 minutes automatically, without being triggered.

 

I read the tutorial and found the Delay utility. If I wrap an action with a self-call and a Delay, the whole procedure can be viewed as a scheduled task.

However, I haven't found anyone else using this approach, so I wonder whether there is any performance drawback.

Link to comment

Hi Joshua, welcome to the forum. I'm going to move this thread to the "Workflow Help & Questions" forum.

 

No, that approach isn't common because Alfred is designed to sit in the background doing as little as possible until the user activates it. In line with that approach, the common Alfred-Workflow idiom is to update any cached data in the background (using run_in_background) only when the user is actually using the workflow. While an update is running, you show the old cached data and set rerun, so Alfred keeps re-running the workflow until the cache update is done and the user is looking at the latest data.
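For concreteness, here's a minimal sketch of that idiom using Alfred-Workflow's documented API (cached_data, run_in_background, is_running, rerun). The cache key "feed" and the update.py script are placeholders: update.py would fetch the RSS and write the results with wf.cache_data('feed', ...).

```python
import sys

from workflow import Workflow3
from workflow.background import run_in_background, is_running


def main(wf):
    # Show whatever is cached, however old (max_age=0 ignores age).
    items = wf.cached_data('feed', max_age=0)

    # Start a background update if the cache is stale (> 20 minutes)
    # and no updater is already running.
    if not wf.cached_data_fresh('feed', max_age=1200) and not is_running('update'):
        run_in_background('update',
                          ['/usr/bin/python', wf.workflowfile('update.py')])

    # While the updater runs, ask Alfred to re-run this script so the
    # results refresh as soon as the cache is rewritten.
    if is_running('update'):
        wf.rerun = 0.5

    for item in items or []:
        wf.add_item(item['title'], subtitle=item['url'],
                    arg=item['url'], valid=True)

    wf.send_feedback()


if __name__ == '__main__':
    sys.exit(Workflow3().run(main))
```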


If the update takes so long that this isn't feasible, you should use a Launch Agent to run it via AppleScript and an External Trigger, rather than creating a loop in Alfred that keeps it actively running. (You should always use External Triggers to call your workflow rather than simulating its Keyword/Hotkey.) I use LaunchControl to manage Launch Agents, but you can also generate the configuration files online.
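For reference, a Launch Agent for this could look something like the plist below. The label, the trigger name "update", and the workflow bundle ID are placeholders for your own values; the AppleScript command is Alfred 3's documented External Trigger API. Save it to ~/Library/LaunchAgents and load it with launchctl load.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.rss-update</string>
    <!-- Call the workflow's External Trigger via Alfred's AppleScript API -->
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/osascript</string>
        <string>-e</string>
        <string>tell application "Alfred 3" to run trigger "update" in workflow "com.example.rss-downloads"</string>
    </array>
    <!-- Every 20 minutes, plus once when the agent is loaded at login -->
    <key>StartInterval</key>
    <integer>1200</integer>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```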


It's just a much better way to do it. It will start automatically on boot and not need you to kick-start it by running the workflow, and won't keep Alfred doing stuff when it wants to be quietly idling in the background.


If the reason for the slowness of the update is due to feedparser, which is terribly slow, you might want to check out speedparser—it can be nearly 100x faster. It needs lxml, though, and only works with valid feeds.
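If you want to try it, the swap looks roughly like this (unlike feedparser, speedparser doesn't fetch URLs itself, so you download the feed first; the feed URL is a placeholder, and I'm assuming speedparser's feedparser-style parse()):

```python
import requests
import speedparser

# speedparser parses feed content you've already downloaded.
resp = requests.get('https://example.com/feed.xml')
feed = speedparser.parse(resp.content)  # result mimics feedparser's output

for entry in feed['entries']:
    print(entry['title'], entry['link'])
```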


Thanks for the help. A Launch Agent solves my problem.

In my case I used neither feedparser nor speedparser, because I have to fetch the pages linked from the RSS items and extract some extra information.

So I used beautifulsoup4 together with multi-threaded fetching to speed things up, and I also do incremental updates to avoid fetching the same pages multiple times.
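Roughly, the fetching part looks like this (a simplified sketch; the function names and the details I extract are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

import requests
from bs4 import BeautifulSoup


def fetch_details(url):
    """Fetch one linked page and pull out the extra information."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, 'html.parser')
    return url, soup.title.string if soup.title else ''


def update(urls, seen):
    # Incremental: only fetch pages we haven't processed before.
    new_urls = [u for u in urls if u not in seen]
    with ThreadPoolExecutor(max_workers=8) as pool:
        for url, title in pool.map(fetch_details, new_urls):
            seen[url] = title
    return seen
```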

As my feed only has about 10 new posts a day on average, the current workflow works perfectly, and I think performance will be fine as long as there's no burst of new items.

