Stephen_C Posted August 9, 2023

Save a selected URL to a text file (appending if the file exists)

Introduction

Sometimes when browsing you want to save a URL quickly for future reference without creating a bookmark in your browser. That is what this workflow does. For simplicity it uses a plain text file (which will be created for you if it doesn't already exist) to which saved URLs are appended. You select the folder for that file in the workflow configuration.

Usage

Using your Universal Action hotkey on a selected URL, select Save URL to links file from the list and press ⏎. You will then be prompted for a description of the URL (which may be a useful reminder). Type the description and press ⏎. If you wish, you can leave the description blank by simply pressing ⏎. The result is a text file (which you can open in your default text editor—I'm using CotEditor here and added the "Saved links" heading manually when creating the file).

Notes

1. In the workflow configuration you can choose the keyword you wish to use to open the Links.txt file.
2. If (quite understandably 😀) you loathe the sound effect you can, of course, mute it in the workflow configuration.

GitHub download link

Stephen
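The append-or-create behaviour described above can be sketched in a few lines of Python. This is only an illustration of the idea, not the workflow's actual code: the function name, file layout, and heading are assumptions.

```python
import os

def append_link(path, url, description=""):
    """Append a URL (with an optional description line) to a plain text
    links file, creating the file with a heading if it doesn't yet exist."""
    is_new = not os.path.exists(path)
    with open(path, "a", encoding="utf-8") as f:
        if is_new:
            f.write("Saved links\n\n")  # heading (assumed; added manually in the post above)
        if description:
            f.write(description + "\n")
        f.write(url + "\n\n")

append_link("Links.txt", "https://www.alfredapp.com", "Alfred home page")
```

Opening the file in append mode (`"a"`) means repeated saves accumulate rather than overwrite, which is the behaviour the workflow describes.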
Stephen_C Posted August 9, 2023

With grateful acknowledgement to @Acidham for the ideas, I've released version 2.0 which:

- enables you to choose markdown or plain text for the file where you save the links; and
- when pressing ⌘ after typing the keyword to open the links file, allows you to replace the links file with an empty file (thus deleting previously saved links).

Stephen
Stephen_C Posted August 9, 2023

…and with apologies for the plethora of releases, version 2.1 adds error trapping for occasions when files do not exist. I dislike silent failures. 😀

Stephen
Stephen_C Posted August 10, 2023

Version 2.2 corrects some carelessness in the conditional logic when opening or replacing the links file. Thanks to @Acidham for pointing it out.

Stephen
Stephen_C Posted August 11, 2023

Version 2.3 improves creation of a new links file (adding a heading) and expands the ReadMe to cover opening of a links file and creation of a new links file.

Stephen
Stephen_C Posted August 13, 2023

Version 2.4 represents a significant re-write of the workflow:

- The first time you run the workflow the relevant plain text or markdown file is created with the heading Saved URLs.
- It is now obligatory to include a description of the URL when saving to a markdown links file, but that remains optional when saving to a plain text links file.
- The ReadMe has been updated and expanded.
- The grammar in a couple of the dialog boxes has been improved.
- I have added a warning when you choose to create a new links file (potentially deleting any previously saved links).

Stephen
Stephen_C Posted August 23, 2023

Version 3.0 is a significant update and adds the ability to search a Links.txt file and open any found URL directly from Alfred. Note that this ability does not currently extend to a links file saved in markdown format.

Configuration options

You can choose:

- the keyword you wish to use to trigger a search of the Links.txt file; and
- whether you wish the selected URL to open in your default browser or (if you use Firefox) in a Firefox private window.

Searching URLs in a Links.txt file

Simply type your search keyword and the relevant URLs will display in Alfred's window. (Note that the search is case insensitive.) Press ⏎ to display the selected URL in your chosen browser. Here is an example of the result when I have chosen the configuration option to open a URL in a Firefox private window:

You will be warned if:

- the Links.txt file does not exist; or
- your search term is not found.

Notes

1. I am indebted to @vitor for huge help with the script filter.
2. If anyone more skilled than I is interested in contributing amendments to the script filter to detect use of a markdown links file, and to grep for searched links within that file to extract the URLs, all help will be gratefully received and acknowledged!

Stephen
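A possible starting point for the markdown-link extraction mentioned in note 2 above, as a hedged sketch: it assumes the links file uses standard `[description](url)` markdown syntax, which may not match the workflow's exact file layout.

```python
import re

def extract_markdown_links(text):
    """Return (description, URL) pairs from markdown links like [desc](url)."""
    return re.findall(r"\[([^\]]*)\]\((https?://[^)\s]+)\)", text)

sample = (
    "- [Alfred forum](https://www.alfredapp.com/community/)\n"
    "- [Alfred help](https://www.alfredapp.com/help/)"
)
for desc, url in extract_markdown_links(sample):
    print(desc, url)
```

A script filter could then filter those pairs against the user's query and emit one Alfred result item per matching URL.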
Stephen_C Posted August 27, 2023

Although version 3.2 is a point release it contains one significant enhancement and a significant bug fix. Version 3.1 (which was not released) extended to markdown links files the ability to search for URLs and open a selected link (with thanks to @vitor for conducting me to a missing exit 😀). This version also fixes an irritating and persistent bug which, when searching a links file, on occasion led to Alfred showing default fallback searches as the search result.

Stephen
Acidham Posted September 25, 2023

@Stephen_C In order to extract the title of a webpage as a prefill for the prompt, I added the following Python script as a WF step. It would be great to add it to your workflow…

Python 3 script:

```python
import urllib.request
import re
import os

def fetch_webpage_title(url):
    try:
        # Fetch the webpage content
        response = urllib.request.urlopen(url)
        html = response.read().decode('utf-8', errors='ignore')
        # Use regex to find the title tag
        title_match = re.search('<title>(.*?)</title>', html, re.IGNORECASE)
        # Extract the title if found
        if title_match:
            return title_match.group(1).strip()
        else:
            return ""
    except Exception as e:
        return f"An error occurred: {e}"

url = os.getenv('theURL')
print(fetch_webpage_title(url))
```
Stephen_C Posted September 25, 2023

@Acidham thank you for that. I've been testing it for a while. It's potentially really useful. However, I'd prefer a rather more elegant failure fallback. 😁 By way of example, the following sites fail, leaving the error message as the description (and I'm not sure that's ideal: would it not be better simply to leave the field blank for the user to complete, and explain that in the configuration?):

https://www.macbartender.com/Bartender5/
https://support.captureone.com/hc/en-us/community/topics
https://webbtelescope.org/images
https://apod.nasa.gov/apod/astropix.html

This link simply produces a blank with no error in the description field (perhaps better?):

https://support.mozilla.org/en-US/kb/getting-started-thunderbird-main-window-supernova#w_2-unified-toolbar

This site produces a rather odd result:

http:// https://www.stclairsoft.com/blog/default-folder-x-6-new-features/

If possible, I'd prefer a more uniform approach in respect of uncooperative sites (however they may be defined) before introducing and releasing this (which is not to detract from the fact that it has great potential). (I apologise for the fact that I've not used Python for very many years so am now rather too rusty to tackle any re-programming myself!)

Stephen
Acidham Posted September 25, 2023 Posted September 25, 2023 (edited) @Stephen_C uups sorry forgot to return an empty string instead of returning the error. And shame on me, I did not test it enough. i changed the line return f"An error occurred: {e}" to return "" import urllib.request import re import os def fetch_webpage_title(url): try: # Fetch the webpage content response = urllib.request.urlopen(url) html = response.read().decode('utf-8',errors='ignore') # Use regex to find the title tag title_match = re.search('<title>(.*?)</title>', html, re.IGNORECASE) # Extract the title if found if title_match: return title_match.group(1).strip() else: return "" except Exception as e: return "" url = os.getenv('theURL') print(fetch_webpage_title(url)) Edited September 25, 2023 by Acidham
Stephen_C Posted September 25, 2023

@Acidham thanks, that is better but there are still problems with some sites. Is there any way of dealing more neatly with the URL for this Alfred page (i.e., the one you are on now), for example, where there are certain punctuation marks in the URL? Also, please try these two URLs and note that an extra line appears to be added to the description field:

https://e-life.co.uk/login
https://www.eyecarepartners.co.uk/

Stephen
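The extra line is quite possibly because a raw `<title>` tag can contain internal newlines, surrounding whitespace, and HTML entities. A small normalisation step could be applied to the extracted title before it is used as the description. This is a guess at the cause, not a confirmed fix, and `clean_title` is a hypothetical helper rather than anything in the workflow:

```python
import html
import re

def clean_title(raw):
    """Collapse runs of whitespace (including newlines) to single spaces,
    decode HTML entities, and trim the result."""
    return html.unescape(re.sub(r"\s+", " ", raw)).strip()

print(clean_title("  E-life \n  login &amp; portal  "))  # → "E-life login & portal"
```

The same cleanup also tidies titles containing `&amp;`, `&#39;` and similar entities, which may account for some of the "odd" results.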
Stephen_C Posted September 25, 2023

To be quite clear, I'm adding this ability to the workflow as an option, but I'd just like to get the extraction as robust as reasonably possible before releasing the update. I much appreciate your assistance.

Stephen

Edited September 25, 2023 by Stephen_C
Acidham Posted September 25, 2023

We could improve it over time; I did this quick and dirty for the time being. Since no errors were thrown, I suggest putting it into the next release and improving it over time.
Stephen_C Posted September 25, 2023

Thanks. I'll work on that basis and include an appropriate warning in the ReadMe. Recovering from dentistry just at the moment 😑 so it will probably be tomorrow before I'm up to releasing version 4.0 including this option. Thanks again for your programming.

Stephen
Stephen_C Posted September 26, 2023

Version 4.0 has been released, with credit to @Acidham for the major new feature:

- If you check the relevant box in the user configuration, the workflow will attempt to extract the title of the web page for which you are saving the URL and put that title in the description field. If the workflow is unable for any reason to retrieve the title, the description field will simply be left blank for you to complete.
- If you prefer to complete the description field yourself, simply leave the relevant check box unchecked in the user configuration.

I have also updated the ReadMe and ensured both it and the user configuration are now in a rather more logical order.

Stephen
Acidham Posted September 26, 2023

@Stephen_C I just shared a fixed code snippet via direct message, but I am uncertain whether the message was ever sent. Therefore, here it is again. The new code should fix most of the issues when reading the title of a webpage. Just replace the code in the WF with the code below.

```python
import urllib.request
import re
import os
import http.cookiejar

def fetch_webpage_title(url):
    try:
        # Use a cookie jar and a browser-like User-Agent so that sites
        # which reject plain urllib requests respond normally
        cj = http.cookiejar.CookieJar()
        opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.3'}
        request = urllib.request.Request(url, headers=headers)
        response = opener.open(request)
        # response = urllib.request.urlopen(url)
        html = response.read().decode('utf-8', errors='ignore')
        title_match = re.search('<title>(.*?)</title>', html, re.IGNORECASE)
        # Extract the title if found
        if title_match:
            return title_match.group(1).strip()
        else:
            return ""
    except Exception:
        return ""

url = os.getenv('theURL')
print(fetch_webpage_title(url))
```
Stephen_C Posted September 26, 2023

For those following: a further update will follow after incorporation of the new code and testing…and some breakfast here! 😀 And, again, thanks to @Acidham for all the work.

Stephen
Stephen_C Posted September 26, 2023

Version 4.1 is now released, containing the improved Python code for retrieving the website title for the description field.

Stephen
Stephen_C Posted February 26, 2024

Version 4.5 requires Alfred 5.5, so if you are not using that please stay on your earlier version. The new version allows preview of plain text and markdown links files using Alfred's new Text View. Editing of plain text links files also uses Alfred's Text View. Full instructions for use of the workflow are contained in the ReadMe.

Stephen