Posts posted by iandol
-
To answer my own question (after trying to do this in VSCode; there are so many JS tools it's hard to find what you want):
Open Safari, enable the Develop menu in its settings, then enable the option to automatically show the Web Inspector for JSContexts:
Now add a `debugger;` statement in your JXA script at the point where you want execution to pause.
Now use Alfred as you normally would; when Alfred runs the script, the remote JavaScript debugger will pop up. Cool!
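To make the placement concrete, here is a minimal sketch of a JXA Script Filter with a `debugger;` statement (the `run` entry point is standard JXA; the item shape follows Alfred's Script Filter JSON format, and the rest is purely illustrative):

```javascript
// Illustrative JXA Script Filter. When Safari's remote Web Inspector is
// attached, execution pauses at the debugger statement below.
function run(argv) {
  const query = argv[0] || "";

  debugger; // the Web Inspector pops up here while Alfred runs the script

  // Build the usual Script Filter JSON so Alfred can display results.
  return JSON.stringify({ items: [{ title: query }] });
}
```

Without an attached inspector the `debugger;` statement is a no-op, so it is safe to leave in place while experimenting.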
-
I usually learn best when I can get code into a debugger and breakpoint my way through it. I do this with Ruby and Lua plugins by using remote debugging. The docs for Ruby are here: https://github.com/ruby/debug#remote-debugging and the Lua tool is here: https://github.com/pkulchenko/MobDebug — remote debugging allows you to "hook in" to any code run by a different app (i.e. Alfred could run Ruby without any explicit debug command in place, but with a breakpoint command a remote debugger could hook into that script).
JavaScript (JXA) seems to be a useful language for macOS automation; I see @vitor uses it for his great ChatGPT workflow, for example. So my question is: is there a remote debugging interface that would enable me to run an Alfred workflow, breakpoint a JavaScript script, and inspect it as it runs?
-
15 hours ago, vitor said:
@iandol @outcoldman @llityslife Please try this version. Instructions are at the bottom of the About, in the Advanced Configuration section. The update is to be considered experimental and things can change, but this method aims to allow you to use the local models you have set up more easily and not worry about the endpoint being overridden on updates, while at the same time not overwhelming other users.
Thanks so much, I think this setting is a nice compromise (env variables are hidden away for most users). I am having problems with the setting, though:
{"error":"Unexpected endpoint or method. (POST //v1/chat/completions)"}
The variable does seem to be sent, as adding a Debug node shows:
[13:33:35.534] ChatGPT / DALL-E[Debug] 'what is the elvish scripting language?', { chatgpt_api_endpoint = "http://localhost:4891" chatgpt_keyword = "chatgpt" dalle_image_number = "1" dalle_images_folder = "/Users/ian/Desktop/DALL-E" dalle_keyword = "dalle" dalle_model = "dall-e-2" dalle_quality = "standard" dalle_style = "vivid" dalle_write_metadata = "1" gpt_model = "gpt-4" init_question = "what is the elvish scripting language?" openai_api_key = "sk-xxxxxxxxxxx" system_prompt = "" }
The problem is the `//`. I get the same error if I use curl directly:
▶︎ curl -s -X POST http://localhost:4891//v1/chat/completions {"error":"Unexpected endpoint or method. (POST //v1/chat/completions)"}⏎
There is no trailing `/` on my variable, so I'm not sure where the extra `/` is coming from.
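For what it's worth, a doubled slash typically appears when the endpoint variable and the hard-coded path each contribute a `/` during concatenation. A defensive join (a hypothetical `joinEndpoint` helper, not code from the workflow) would produce the same clean URL either way:

```javascript
// Hypothetical helper: join an API base URL and a path without ever
// producing a double slash, whichever way the base is spelled.
function joinEndpoint(base, path) {
  return base.replace(/\/+$/, "") + "/" + path.replace(/^\/+/, "");
}

// Both spellings of the base produce the same clean URL:
joinEndpoint("http://localhost:4891", "/v1/chat/completions");
joinEndpoint("http://localhost:4891/", "v1/chat/completions");
// → "http://localhost:4891/v1/chat/completions"
```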
-
9 hours ago, outcoldman said:
As an example, I am running LM Studio with Mistral on my mac, and I was able to modify chatgpt script to replace https://api.openai.com to http://localhost:1234 and can communicate with my local model.
Also would be nice to easily switch between pre-configured prompts/models/urls.
Right, I did the same. Vitor's workflow can work for local use with a simple change. I use both GPT4All and LMStudio, and as both support the same API they can be swapped without any changes other than the API base URI. But Vitor wants to focus on OpenAI services only, so we either need to make local changes to his workflow, or someone can release a fork that adds local use.
-
Moderator’s note: The ChatGPT / DALL-E workflow offers the ability to change the API endpoint in the Workflow Environment Variables, enabling the possibility of using local models. This is complex and requires advanced configuration, not something we can officially provide support for. This thread was split from the main one so members of the community can help each other set up their own specific models.
Thanks vitor! There are many open source models with performance equivalent to GPT-3.5 or better, without the privacy concerns, the dependency on an internet connection for each question, or the costs. And there are a number of macOS apps that manage them, which unlocks the power of LLMs for everyone. The good news is that most of these tools offer a local API server that is compatible with the OpenAI API, so all one needs to do is change the URI to switch from the commercial OpenAI service to a privacy-respecting, open source, and free alternative:
https://github.com/nomic-ai/gpt4all — more basic UI; the model is selected via the `model` key of the API request.
https://lmstudio.ai — more advanced UI; API requests are served by the model selected in the UI.
Checking the code, the JS can be tweaked to redirect the URI to localhost: http://localhost:4891/v1/chat/completions — for GPT4All the model file needs to be specified, while for LMStudio the model is chosen in the UI and that is what the API serves. So a feature request: an option to specify the API address, so this workflow can run locally if LMStudio or GPT4All (or several others) are installed. DALL-E is a harder problem, as while there are open source models like Stable Diffusion (and amazing macOS apps like Draw Things to use them), I don't know of a tool that offers an API that would be hot-swappable for the OpenAI commercial APIs...
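As a sketch of what that feature could look like in the JS, the endpoint could fall back to the official service unless a variable overrides it (the `apiBase` helper is hypothetical, and a plain `env` object stands in for the process environment a real JXA script would read):

```javascript
// Hypothetical sketch: pick the chat-completions URL from a workflow
// environment variable, falling back to the official OpenAI endpoint.
function apiBase(env) {
  const base = env["chatgpt_api_endpoint"] || "https://api.openai.com";
  return base + "/v1/chat/completions";
}

apiBase({});                                                // official service
apiBase({ chatgpt_api_endpoint: "http://localhost:4891" }); // e.g. GPT4All
apiBase({ chatgpt_api_endpoint: "http://localhost:1234" }); // e.g. LM Studio
```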
-
-
It is the new year for sure, on both the solar and lunar calendars — @Vero any updates?
I would also like to mention GPT4All — a local-only tool that enables multiple model workflows without internet access, offers an API, and can explore local docs: https://gpt4all.io/index.html — it would be great if ChatFred or its sequel could access this too!
-
Chris, you are on a workflow roll!!! Thank you for yet another great workflow 😍
-
Hi @Andrew -- thanks for this tweak, I'll definitely try it and see how it works.
I understand the tension you have with latching duration, and I would still like to see something like "pinning" to give some terms special privilege, as you could do in Quicksilver. The algorithm would still work as usual, but individual pinned items would override the latch expiry, which would overcome your issues with latching. This does require some sort of UI, so I understand it is a non-trivial feature request. The easiest would be a keyboard trigger: let's say I type sn, which I always want connected to "Find your Snippets"; when that is focussed in the Alfred window, I press ⌘⌥P and that pairing is pinned. This alone would really help without any further work, but the question is what happens if I later need to reassign sn; the easiest way would be to focus a new entry and press ⌘⌥P again. Perhaps also a button to erase all pins and reset back to the default?
-
Thanks for the link to the (well-written) documentation, that is helpful! I'll symlink to one of those paths, as the static builds from https://ffmpeg.martin-riedl.de are more up-to-date and contain some optimisation patches, while Homebrew's ffmpeg pulls in too many dependencies for my tastes. I realise I am swimming against the recommendations 🤪
EDIT: I can confirm symlinking allowed yt-dlp to use my own ffmpeg builds.
-
I found that after not using this workflow for a few months, downloads were initiated the first time, then disappeared. The log shows a couple of errors:
[14:03:46.259] Download Media[Script Filter] Passing output 'https://youtu.be/FEyVqA-DVwg?si=WuUSr5LWuC6cpnt5' to Run Script
[14:03:57.095] STDERR: Download Media[Run Script] WARNING: You have requested merging of multiple formats but ffmpeg is not installed. The formats won't be merged
xattr: No such file: /Users/ian/Downloads/Girl in China reasons with dad for more play time.mp4
55:228: execution error: Alfred 5 got an error: Cannot find workflow with Id 'com.vitorgalvao.alfred.watchlist' (-2)
You do not have WatchList installed. Download it at https://github.com/vitorgalvao/alfred-workflows/tree/master/WatchList
I assume the ffmpeg error is the important one, though I didn't know we also needed to install another workflow (WatchList). I see there is a setting in Download Vid to disable this, but it was enabled for me; my memory is fuzzy and I don't remember activating it, but I probably trust you more than I trust myself 😆.
I install ffmpeg myself to my ~/bin folder which is on my shell path, but perhaps that is not picked up by Alfred? How can I symlink so DV can find my ffmpeg?
EDIT: Using yt-dlp in the shell works fine...
-
-
On 12/20/2022 at 6:55 PM, vitor said:
Thank you for the kind words. Yes, it will make the transition.
Hi Vitor, this is just a gentle ping not to forget your super synant in the Gallery! Best wishes, and thanks for the great workflows and all your work on the Gallery!
-
Hi @Andrew, I still find the rolling loss-of-latch the biggest issue I have using Alfred. As mentioned above I have a few core terms I really want to always "bind" to a result, things I use infrequently but want to always be prioritised. I keep my fingers crossed this is still floating somewhere in that cooking pot of yours!
-
Well, the whole Apple ecosystem is not focussed on backwards compatibility (top-down, from Apple really). If you develop free tools, I do understand adopting the new APIs that Apple introduces; there is a lot of new shiny every year, and some of it is useful. Those who develop commercial Mac apps spend a *lot* of time dealing with compatibility across macOS releases; it accounts for a significant chunk of development time and is utterly frustrating (dealing with the vast undocumented underbelly of the macOS ecosystem and the utterly opaque Apple bug tracking system...).
-
These have been lossy-optimised (retaining alpha channel) using ImageOptim… I think Chris is better placed to decide the instructions.
At least for the introduction sentence, the one from GitHub is better than the forum post:
A citation picker and minimalistic reference manager for Alfred. Uses a BibTeX File and supports various formats like Pandoc Markdown, Multi-Markdown or LaTeX.
Author + year search
A citekey search using @
A keyword search using #
This one I couldn't capture as a full window as it had non-relevant entries in the search results, but just in case:
-
Hi @vitor — sorry for the ping but as you had processed Chris's other workflow (Neovim one), I had assumed you would also do this one. I realise you are doing amazing work on curating the gallery, please take this as the gentlest of reminders. 🥰
-
I also see the same behaviour, though as DeepL looks like it is some sort of Electron app, that is not surprising. If you just do ⌘A ⌘C (select all and copy), then it does show up in Alfred's clipboard manager…
Somewhat off-topic, but this free app is a modern replacement for the Xcode clipboard viewer: https://sindresorhus.com/pasteboard-viewer
-
I use a bunch of Linux workstations regularly for my research. There is nothing that comes close to Alfred on Linux, and I end up with a hodge-podge of utilities (launcher, clipboard manager, clipping host etc.) that cannot in any way come near the pleasure of using Alfred. While I can replace most of the general utility of Alfred, I cannot in any way replace the general "workflow" (let alone the power of workflows!).
And with Alfred Gallery, this has been kicked up a whole notch!
-
Thank you @gingerbeardman for the repo and @vitor for the nice updates!!! 😍
-
3 hours ago, vitor said:
Instead of manually adding ~, try resetting the Search Scope. If the default doesn’t work for you, “Applications and Home” should.
Alfred Defaults does not work, even though ~/Library/CloudStorage is included. I also tried adding ~/Library/CloudStorage/Dropbox explicitly but it doesn't work. Applications and Home does work. This is macOS 13.1 (22C65), Dropbox V166.2.1015 & Alfred 5.0.6 [2110].
Dropbox itself is currently buggy as hell (default app reassigned and many quarantine errors for synced files), but in this instance at least Spotlight finds my Dropbox files when Alfred doesn't (when set with Alfred Defaults).
-
I have also not been able to use `file` prefix or [spacebar] search with any of my Dropbox files. And indeed explicitly adding `~` to the Search Scope works. As far as I know, my settings for Search Scope were the defaults before I added `~` — it does include `~/Library/CloudStorage`:
@Vero — is this expected behaviour or a bug?
-
Yes, the workflow itself is unlikely to change much at all, so perhaps a manual solution would be enough. But I think @gingerbeardman could add this as a public github repo, and this may make things a bit easier down the line?
-
OK, so in this case IINM the workflow is not contained in a release, but is an attachment: https://github.com/glushchenko/fsnotes/files/10324668/FSNotes-AlfredWorkflow-22.zip — i.e. there is no release page or atom feed that can be used?
Using alternative and local models with the ChatGPT / DALL-E workflow
in Workflow Help & Questions
It seems the script got stuck at line 200 (https://github.com/alfredapp/openai-workflow/blob/main/Workflow/chatgpt#L200) and kept returning the first error from stream.txt, so I didn't see any change when editing the code. 🤪 I deleted the files in the workflow data folder and it seems to be working now, though sometimes the model response is slow (when it first loads into memory), and there is a stream error.
Anyway, I can confirm that LM Studio + the Hermes 7B model works well with your modified script, with the caveat that you must NOT append a / to the endpoint. I don't know why, once it errors, it cannot recover without manually deleting the files (possibly ⌘↵ would have done this; I didn't try it).
GPT4All fails to work, as it doesn't use a streaming API (stream=false). Non-streaming mode is easier to work with (a blocking response is trivial to handle), but your code is optimised for streaming...
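To illustrate the difference, here is a rough sketch of handling the two response shapes, assuming the standard OpenAI formats (a streaming reply arrives as `data: {...}` SSE lines carrying `delta` fragments, while a non-streaming one is a single JSON object carrying a `message`); `extractText` is a hypothetical helper, not the workflow's code:

```javascript
// Hypothetical helper: pull the answer text out of either response shape.
function extractText(body) {
  if (body.startsWith("data:")) {
    // Streaming: concatenate the delta fragments from each SSE event,
    // skipping the terminating "data: [DONE]" sentinel.
    return body
      .split("\n")
      .filter(l => l.startsWith("data:") && !l.includes("[DONE]"))
      .map(l => JSON.parse(l.slice(5)).choices[0].delta.content || "")
      .join("");
  }
  // Non-streaming: the whole answer arrives in one message object.
  return JSON.parse(body).choices[0].message.content;
}
```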