
iandol

Member
  • Posts

    166
  • Joined

  • Last visited

  • Days Won

    7

iandol last won the day on October 8 2023

iandol had the most liked content!

Recent Profile Visitors

1,778 profile views

iandol's Achievements

Member

Member (4/5)

41

Reputation

  1. I at least got the notion that @vitor wanted to keep that workflow "streamlined". It would certainly be possible to switch to Claude automatically based on the model name (Claude model names begin with `claude`; that is how Kiki from @gloogloo does it), so it could be done without any new UI, but it would require more complex code on the backend. You could make a pull request on GitHub, and @vitor could then evaluate whether he is willing to make this change. Off-topic: your µBib looks awesome!
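The model-name switch described above could be sketched like this. This is only a sketch with hypothetical function names, not code from @vitor's workflow; the `claude` prefix check is how Kiki reportedly decides, and the endpoints are the standard public ones for each API:

```javascript
// Hypothetical sketch: pick the backend from the model name alone,
// so no new UI is needed. Not code from the actual workflow.
function isClaudeModel(model) {
  // Claude model names all begin with "claude" (e.g. claude-3-opus-20240229).
  return model.startsWith("claude");
}

function apiEndpointFor(model) {
  // Anthropic and OpenAI use different endpoints (and different auth
  // headers and payload shapes, which is where the extra backend work is).
  return isClaudeModel(model)
    ? "https://api.anthropic.com/v1/messages"
    : "https://api.openai.com/v1/chat/completions";
}
```

The prefix check keeps the UI unchanged, but the request-building and stream-parsing code would still need to branch on the result.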
  2. There is no "free" version of the API according to the pricing page: https://openai.com/pricing There is a free web interface (https://chat.openai.com), but that is not the API. For the API, if I remember correctly, they give you some tokens at the start; after that you pay per use. If you want free, either use a local LLM (the model runs on your Mac, with no costs and no privacy concerns, as OpenAI hoovers up all your data), or use a wrapper tool like https://openrouter.ai (which can utilise OpenAI, Claude, or many different open-source free models through a single unified API). Sadly there is a small API incompatibility between this workflow and OpenRouter (@vitor may accept a pull request to fix it, but I haven't had time...), so that leaves you with: give your credit card details to OpenAI, or use a local model...
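To illustrate the "single unified API" point: OpenRouter exposes an OpenAI-compatible chat endpoint, so roughly the same request shape works against either service, with only the base URL, key, and model name changing. A minimal sketch (function name and model are illustrative, not from the workflow):

```javascript
// Hypothetical sketch: build one OpenAI-style chat request that works for
// OpenAI or OpenRouter just by swapping baseURL and apiKey.
function buildChatRequest(baseURL, apiKey, model, userText) {
  return {
    url: `${baseURL}/v1/chat/completions`,
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model, // e.g. "gpt-4" on OpenAI, "mistralai/mistral-7b-instruct" on OpenRouter
      messages: [{ role: "user", content: userText }],
      stream: true,
    }),
  };
}
```

The remaining incompatibility mentioned above is in how the two services terminate their response streams, not in the request shape.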
  3. Note: this workflow also supports OpenRouter as well as local LLM tools like https://lmstudio.ai and https://gpt4all.io/index.html and https://ollama.com. This means you are not tied to OpenAI and its (in my personal opinion) problematic "hijack" of LLMs into a paid corporate tool. LLMs are based on academically "open" technology, and OpenAI was originally started as a way to democratise these tools, before profit and corporate battles turned ChatGPT into a closed, walled garden...
  4. Well, what a good advert for the utility of LLMs then! I made a pull request to your GitHub...
  5. @gloogloo, I think you've done an amazing job: the workflow is not too complex for the user, and you've provided nice documentation (given this is something you did mostly for yourself). The feature set makes this workflow useful in its own right. Supporting OpenRouter, for example, means many more Alfred users who dislike OpenAI as a company (that's me), don't have a credit card, or can't afford or don't want to pay rolling fees can use the open-source models easily. I can add a pull request for a custom API endpoint if you want; as I understand from a quick look, the main request is made using a bash script and curl, right? I'm fine with bash, though I never learnt JavaScript, so I am more wary of tweaking that code... It might help if you made the bash script a file rather than embedding it in the workflow, as I think that makes it easier to contribute to via GitHub. Perhaps a thread on these forums for your workflow would raise visibility and maybe garner some help for future feature updates. I personally don't mind your use of dialogs for the UI, but I'm sure some questions to @vitor would help in implementing the super cool new views that Alfred 5 enables.
  6. Do you see a log of your individual request activity in the OpenAI accounts page? With OpenRouter (screenshot below), each request is shown with the tokens sent and received and then the costs (in this case Mistral is an open-source and therefore free-to-use model); that could help you work out whether pruning would help.
  7. Kiki has now added Claude API support: https://github.com/afadingthought/kiki-ai-workflow. @gloogloo also added Whisper support, which could be incredibly useful and helpful to many people...
  8. Don't forget there are local LLMs, which, while less powerful, are totally free to use (and easier to customise), and also other "wrapper" tools which give you more flexibility in what to use as the model backend (like https://openrouter.ai/). I don't use OpenAI myself, but I assume costs increase as the message context grows? If so, one way to keep costs down could be to "prune" the previous message context, though that does make the chat less accurate. A way to do that already is to start a new chat whenever you don't really need the previous message context.
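A pruning strategy like the one above could be sketched as follows. This is a hypothetical helper, not workflow code, and it uses character counts as a crude stand-in for real token counting:

```javascript
// Hypothetical sketch: drop the oldest turns once the context exceeds a
// budget, while always preserving the system prompt. Character length is
// used here instead of a real tokenizer, purely for illustration.
function pruneContext(messages, maxChars) {
  const system = messages.filter(m => m.role === "system");
  const rest = messages.filter(m => m.role !== "system");
  const kept = [];
  let total = 0;
  // Walk backwards so the newest turns survive.
  for (let i = rest.length - 1; i >= 0; i--) {
    total += rest[i].content.length;
    if (total > maxChars) break;
    kept.unshift(rest[i]);
  }
  return [...system, ...kept];
}
```

Starting a fresh chat is just the extreme case of this: pruning everything except the system prompt.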
  9. @Alfred0, you can have a look at Kiki (https://github.com/afadingthought/kiki-ai-workflow), which supports different profiles and, I think, is designed more for your kind of workflow. I also use different system prompts and variables to "guide" the LLM (in my case using BetterTouchTool, but the underlying idea is the same: have an LLM for editing, one for creative writing, one for coding support or geek stuff, etc.). I believe @vitor wants to keep this workflow "lean'n'clean" in terms of the core feature set, and the great thing about Alfred is how easy it is to modify workflows for specific purposes.
  10. Wow, Kiki has a great feature set (and is well documented), thanks @gloogloo! I expect the local LLM tool LM Studio will also work with Kiki without any changes (it is great to have a free LLM running without the need for internet or accounts). Do you use stream=true in your code? You should add this to the gallery if it isn't already there.
  11. +1 for currency conversion (even via an API download list), like this one:
  12. Hi @giovanni, I am getting this error at present:

    Traceback (most recent call last):
      File "/Users/ian/Library/CloudStorage/Dropbox/Assorted/Alfred Settings/Alfred.alfredpreferences/workflows/user.workflow.FA895A69-DD0A-4EDF-AA0A-3934D0B00C79/convert.py", line 19, in <module>
        from pint import UnitRegistry, UndefinedUnitError, DimensionalityError
      File "/Users/ian/Library/CloudStorage/Dropbox/Assorted/Alfred Settings/Alfred.alfredpreferences/workflows/user.workflow.FA895A69-DD0A-4EDF-AA0A-3934D0B00C79/pint/__init__.py", line 17, in <module>
        import pkg_resources
      File "/Users/ian/Library/CloudStorage/Dropbox/Assorted/Alfred Settings/Alfred.alfredpreferences/workflows/user.workflow.FA895A69-DD0A-4EDF-AA0A-3934D0B00C79/pkg_resources/__init__.py", line 57, in <module>
        from pkg_resources.extern import six
    ImportError: cannot import name 'six' from 'pkg_resources.extern' (/Users/ian/Library/CloudStorage/Dropbox/Assorted/Alfred Settings/Alfred.alfredpreferences/workflows/user.workflow.FA895A69-DD0A-4EDF-AA0A-3934D0B00C79/pkg_resources/extern/__init__.py)

    I have to admit I don't quite understand how the Python dependencies are packaged in the workflow. My Python is 3.12.1 (installed with pyenv).
  13. @vitor's ChatGPT workflow would "almost" work for Claude. The Claude API (https://docs.anthropic.com/claude/reference/messages-streaming) looks like:

    curl https://api.anthropic.com/v1/messages \
      --header "content-type: application/json" \
      --header "x-api-key: $ANTHROPIC_API_KEY" \
      --data \
    '{
        "model": "claude-3-opus-20240229",
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 256,
        "stream": true
    }'

    And the responses look like:

    event: message_start
    data: {"type": "message_start", "message": {"id": "msg_1nZdL29xx5MUA1yADyHTEsnR8uuvGzszyY", "type": "message", "role": "assistant", "content": [], "model": "claude-3-opus-20240229", "stop_reason": null, "stop_sequence": null, "usage": {"input_tokens": 25, "output_tokens": 1}}}

    event: content_block_start
    data: {"type": "content_block_start", "index": 0, "content_block": {"type": "text", "text": ""}}

    It is similar to the OpenAI API, but there are enough differences that it won't work even with some of the advanced env variables the current test release has. You could use something like OpenRouter (https://openrouter.ai), which "wraps" Claude's API with the OpenAI one, but the current workflow doesn't handle the stream end markers, and so can be buggy.
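To show where the two streaming formats diverge, here is a minimal sketch of extracting the text delta from one server-sent-event line of each. The function is hypothetical; the payload shapes follow each vendor's streaming docs (OpenAI puts text in `choices[0].delta.content` and ends with `data: [DONE]`, while Anthropic delivers text in named `content_block_delta` events):

```javascript
// Hypothetical sketch: pull the incremental text out of one SSE line,
// branching on which backend produced it. Not code from the workflow.
function extractDelta(line, backend) {
  if (!line.startsWith("data: ")) return ""; // skip "event:" lines and blanks
  const payload = line.slice("data: ".length);
  if (payload === "[DONE]") return ""; // OpenAI's stream end marker
  const json = JSON.parse(payload);
  if (backend === "anthropic") {
    // Text arrives in content_block_delta events; other event types
    // (message_start, content_block_start, ...) carry no text.
    return json.type === "content_block_delta" ? (json.delta.text || "") : "";
  }
  // OpenAI-style chunk.
  return json.choices?.[0]?.delta?.content || "";
}
```

It is exactly this kind of branching, plus the differing end-of-stream markers, that the current env variables cannot paper over.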
  14. Couldn't the system prompt help with that? At least for my local LLM workflow, I set up different system prompts which "guide" the LLM to answer in a particular way (as an English editor, as a computer specialist, etc.); in my case BetterTouchTool triggers these with different keystrokes. The issue is how to manage these flexibly in an Alfred workflow. Currently you support a single system prompt; if you could have a set of these, and a way to specify which one to use, that would at least help in guiding the LLM to respond in a specific way. Each time the system prompt changed, you would reset the conversation too.
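The "set of system prompts" idea could look something like this. Names and prompt texts are purely illustrative, not part of any workflow:

```javascript
// Hypothetical sketch: named system-prompt profiles; selecting one starts
// a fresh conversation so the new instructions apply from a clean context.
const SYSTEM_PROMPTS = {
  editor: "You are an English editor. Improve grammar and clarity.",
  coder: "You are a programming assistant. Answer with concise code.",
  writer: "You are a creative writing partner.",
};

function startChat(profile) {
  const prompt = SYSTEM_PROMPTS[profile];
  if (!prompt) throw new Error(`Unknown profile: ${profile}`);
  // Changing the system prompt resets the conversation, as suggested above.
  return [{ role: "system", content: prompt }];
}
```

In Alfred, the profile name could be a keyword argument or a workflow variable; the point is only that switching profiles and resetting the chat are one operation.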
  15. @vitor: I don't understand this change on line 200 of chatgpt:

    const apiEndpoint = envVar("dalle_api_endpoint") || "https://api.openai.com"

    This seems to force the DALL·E API endpoint? How does chatgpt_api_endpoint get used?