
iandol (Member · 166 posts · 7 days won)
Reputation Activity

  1. Like
    iandol got a reaction from CharlesVermeulen in ChatGPT / DALL-E - OpenAI integrations   
    There is no "free" version of the API according to the pricing page:
     
    https://openai.com/pricing
     
    There is a free web page interface (https://chat.openai.com), but that is not the API. For the API they give you some tokens at the start if I remember correctly, then you must pay-per-use.  
     
     
    If you want free: use a local LLM (the model runs on your Mac, with no costs and no privacy concerns, as OpenAI hoovers up all your data), or use a wrapper tool like https://openrouter.ai (which can route to OpenAI, Claude, or many different open-source free models through a single unified API). Sadly there is a small API incompatibility between this workflow and OpenRouter (@vitor may accept a pull request to fix it, but I haven't had time...), so that leaves you with: give your credit card details to OpenAI, or use a local model...
     
  2. Like
    iandol got a reaction from MRR in "Selection in macOS" as a Workflow Action?   
    I often want to get a selection from the current app to pass as argv to a script. Using the [hotkey] trigger this is trivial, but I normally want to be able to use both a [hotkey] and/or a [keyword]. The issue is that [keyword] as an input does not support passing a "Selection in macOS" argument. The way I currently handle this: the [keyword] triggers an AppleScript action that copies the selection to the clipboard (via System Events), and I then keep two scripts, one that handles stdin via [hotkey] and one that reads the clipboard from [keyword]. But this seems a bit fussy (and fills up the clipboard for one trigger but not the other). I also sometimes use [hotkey] to trigger an AppleScript-to-clipboard step so a single unified script can follow, but then we are still using the clipboard when we don't really have to. If we had a new dedicated [action] which simply passed the current selection to the next block, it would elegantly solve this (my [keyword] would then trigger this new [action] to pass into a single argv script).
     
    I know this is fairly minor and there are workarounds and perhaps I'm missing a better way to do all this, but if not please consider adding this action, thank you! 
  3. Like
    iandol reacted to zeitlings in Request for Claude.ai workflow   
    The integration is not seamless enough to leave the existing code unbroken, though. However, I forked the repo, pushed all the changes, and applied some cosmetics.
    The code and standalone workflow are now up for grabs and can be integrated or not: https://github.com/zeitlings/alfred-anthropic
     
     
    Thanks!
  4. Like
    iandol got a reaction from gloogloo in Kiki - AI Powered Chat & Text Tools   
    Note: this workflow also supports OpenRouter as well as local LLM tools like https://lmstudio.ai, https://gpt4all.io/index.html, and https://ollama.com — this means you are not tied to OpenAI and its (in my personal opinion) problematic "hijack" of LLMs into a paid corporate tool. LLMs are based on academically "open" technology, and OpenAI was originally started as a way to democratise these tools before profit and corporate battles turned ChatGPT into a closed, walled garden...
     
  5. Like
    iandol reacted to gloogloo in Kiki - AI Powered Chat & Text Tools   
    In case anyone stumbles upon this workflow: for some reason I can't edit my original post, but a quick note to mention that Kiki now supports Anthropic's models (Claude) and Whisper AI, and can also be set to use a custom API endpoint, among a few more things. Updates and full documentation can be found over at GITHUB
  6. Like
    iandol reacted to gloogloo in Kiki - AI Powered Chat & Text Tools   
    Kiki
    AI-Powered Chat & Text Tools
    For Chat GPT and OpenRouter's Models
     

    A short video showcasing some features.
     
    REQUIREMENTS:
    • OpenAI or OpenRouter API token
    • jq: can be installed from Homebrew
     
    This workflow is specifically designed with the following features in mind:
    • Quick chats initiated from Alfred’s command bar: these chats start in the command bar of Alfred and continue as AppleScript dialogs.
    • Create and use presets for selected text or user input: customize prompts, system role, temperature, and other settings per preset. This feature can help with grammar correction, translations, rephrasing, tone adjustment, smart text transformations, idea generation, and much more.
    • Chat initiation options with the use of modifiers from Alfred’s command bar: these options include selecting an alternative model, an alternative system role or "persona," pasting results in the frontmost window, and preserving or resetting existing context.
    • Easy continuation of previous conversations: seamlessly continue previous conversations by using Alfred Universal Actions on existing context files.
    • Trigger presets on text using hotkeys, snippets, or external triggers: activate presets on text for faster results, without needing to use Alfred's command bar.
    • Markdown Chat: enjoy the convenience of making AI requests directly in your preferred markdown text editor. Customize the chat settings through presets included in the header of your notes according to your preferences.
    SCREENSHOTS

    (images omitted)

    A few things I must mention:
    • I have written extensive documentation over at GitHub. I may have gone a bit overboard, but this workflow can be as simple or as complicated as users want it to be.
    • My coding skills are very basic, so most of the code used to create this workflow came out of Kiki itself.
    • There are some workflows that are very powerful in using all that OpenAI's API has to offer, but Kiki doesn't try to do everything. This is a tool that shines as a utility for everyday text-related tasks. Because of that, I currently do not have plans to integrate it with image generation, vision, etc. Instead, I welcome any ideas on how to make this better at what it does.
    • I have tried to work around limitations, and I've tried to make this workflow as customizable as possible, but I know it could probably be better. If you encounter any bugs that you can reproduce, please feel free to comment or let me know. Honestly, I'm a total beginner at this, but I will be happy to help if I can.
    • Lastly, I hope I am not asking too much by requiring users to install jq. I know that Alfred workflows that make it to the Gallery do not have to deal with this, but it seems gallery submissions are closed for the time being. I also know that not every user wants to deal with the creation of presets using JSON files; do you have any ideas on how to improve this?
    It's my first workflow submission to this forum. I'm not sure of what I'm doing here (or on Github), but I hope you give this a try and hopefully find it useful.
     
    GITHUB | DOWNLOAD
  7. Like
    iandol got a reaction from gloogloo in Request for Claude.ai workflow   
    Well what a good advert for the utility of LLMs then! I made a pull request to your github...
  8. Like
    iandol got a reaction from Gold in ChatGPT / DALL-E - OpenAI integrations   
    Don't forget there are local LLMs, which while less powerful, are totally free to use (and easier to customise), and also other "wrapper" tools which give you more flexibility in terms of what to use as the model backend (like https://openrouter.ai/).
     
    I don't use OpenAI myself but I assume costs increase as message context grows? If so, one way to keep costs down could be to "prune" the previous message context, but that does make the chat less accurate. One way to do that already is to start new chats whenever you don't really need the previous message context?
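     The pruning idea above can be sketched in a few lines of Python (illustrative only, not from any of the workflows discussed here): keep the system prompt, drop all but the most recent messages, and the context sent — and billed — on each request stays bounded.

    ```python
    # Illustrative sketch: prune an OpenAI-style message list to the system
    # prompt plus the last `keep` non-system messages, bounding the context
    # (and therefore the tokens billed) on each request.
    def prune_context(messages, keep=4):
        """Keep the system message (if any) and the last `keep` other messages."""
        system = [m for m in messages if m["role"] == "system"]
        rest = [m for m in messages if m["role"] != "system"]
        return system + rest[-keep:]

    history = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Q1"},
        {"role": "assistant", "content": "A1"},
        {"role": "user", "content": "Q2"},
        {"role": "assistant", "content": "A2"},
        {"role": "user", "content": "Q3"},
    ]

    # The system prompt survives; only the two most recent messages remain.
    pruned = prune_context(history, keep=2)
    ```

    As noted, the tradeoff is accuracy: the model forgets whatever was pruned, which is why starting a fresh chat when the old context isn't needed amounts to the same saving.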
  9. Like
    iandol reacted to vitor in ChatGPT / DALL-E - OpenAI integrations   
    Thank you.
     
     
    That is correct.
     
     
    Make a new workflow with a Keyword Input and connect it to an Arg and Vars Utility with your custom text plus {query} for the new input from the Keyword. Then connect it to a Call External Trigger Output set to open continue_chat from this workflow.
  10. Like
    iandol reacted to gloogloo in Request for Claude.ai workflow   
    I made Kiki, a workflow that works not only with OpenAI but also with OpenRouter models (which include Claude). I have actually already incorporated access to Anthropic's API but haven't gotten around to uploading this latest version (I need to update the documentation, since I also incorporated Whisper). The update should be up sometime this coming week. It's not as pretty as Alfred's new ChatGPT workflow (I still haven't figured out how to incorporate the new Text View, or whether I should incorporate it at all), but it's super versatile.
  11. Like
    iandol reacted to giovanni in Offline Unit Conversion Workflow   
    @iandol yes, not sure what has changed but other pythons can now take precedence over the system Python. You just need to add `/usr/bin/` before `python3` in the script filter (see also this issue). I will update this and other workflows with the same issue. 
  12. Like
    iandol reacted to vitor in Using alternative and local models with the ChatGPT / DALL-E workflow   
    That’s a copying mistake. Should have been chatgpt_api_endpoint. Fixed. Thank you.
  13. Like
    iandol got a reaction from vitor in Using alternative and local models with the ChatGPT / DALL-E workflow   
    I'll have a look. The OpenAI API is pretty simple to be honest, taking their simple guide:
     
    https://platform.openai.com/docs/guides/text-generation/chat-completions-api
     
    curl https://api.openai.com/v1/chat/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -d '{
            "model": "gpt-3.5-turbo",
            "messages": [
              { "role": "system", "content": "You are a helpful assistant." },
              { "role": "user", "content": "Hello!" }
            ]
          }'
    We have: API address, API key, the model name & the messages as the core components. Messages are obvious, but the address, key and model name are essential and also required for online alternatives like openrouter.ai and local tools like LM Studio. The hard coded model names for OpenAI do not work for any other alternative, so a way to override it is needed. These are definitely "if there was only one more version, what should be included" options... I think having the standard drop-down for models hard coded is great for beginners (your UI is clean and simple), and the env variable as a text field is perfect for more advanced use.
     
    There are a bunch of other parameters for fine tuning the model response: temperature, max_tokens, n, top_p etc. — of these I think none are really essential, though if I was forced to pick I'd have temperature (guiding the fidelity vs. creativity of the model responses) and max_tokens (as at least local models have specific token count limits):
     
    https://platform.openai.com/docs/api-reference/chat/create
     
    These options are certainly very specialist. I agree that stream=off is not worth supporting, as it adds substantial backend complexity for you with minimal gain (while I love GPT4All, I just won't use it with Alfred, and LM Studio, Ollama and others can take its place...)
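     The override scheme described above can be sketched as follows (Python, illustrative only: chatgpt_api_endpoint is the variable name mentioned elsewhere in this thread, while chatgpt_model_override and the default values are assumptions, not the workflow's actual configuration):

    ```python
    # Sketch: read endpoint and model overrides from the environment, falling
    # back to the OpenAI defaults, and build the standard chat-completions
    # payload with the two tuning parameters discussed above.
    import json
    import os

    api_base = os.environ.get("chatgpt_api_endpoint", "https://api.openai.com/v1")
    model = os.environ.get("chatgpt_model_override", "gpt-3.5-turbo")

    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello!"},
        ],
        # Optional parameters accepted by OpenAI-compatible servers:
        "temperature": 0.7,   # fidelity vs. creativity of responses
        "max_tokens": 512,    # respects local models' token-count limits
    }

    url = f"{api_base}/chat/completions"
    body = json.dumps(payload)
    ```

    With no environment variables set, this targets OpenAI unchanged; setting the two variables is all that is needed to point the same request at openrouter.ai or a local server.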
     
  14. Like
    iandol reacted to vitor in Using alternative and local models with the ChatGPT / DALL-E workflow   
    That would add complexity and foot guns for a feature the overwhelming majority will never take advantage of. Most people don’t know (nor should they have to) the exact string of characters representing each model. That configuration is a popover button because it’s what makes the most sense to cover most cases.
     
     
    I’m open to the idea of allowing more overrides and even have several ideas in mind on how to do it. But adding hidden variables piecemeal to support an advanced feature for a handful of users isn’t a good way to develop stable software. So please first investigate thoroughly what are the exact customisations that would be beneficial for custom models and then we can make a decision on all of them at once. Some things can go in (like the custom API endpoint) while others probably won’t (e.g. a non-streaming method) but they should be evaluated in bulk so the workflow doesn’t end up like a brittle Frankenstein-type construction. Think in terms of “if there were going to be just one more version, what would be necessary to cover the bases?”
  15. Like
    iandol reacted to ThanhD in ChatGPT / DALL-E - OpenAI integrations   
    I hope a similar workflow for Gemini, its API is free of charge 😀.
  16. Like
    iandol reacted to vitor in ChatGPT / DALL-E - OpenAI integrations   
    Frequently Asked Questions (FAQ)

    How do I set up an alternative AI model?

    The workflow offers the ability to change the API endpoints and override model names in the Workflow Environment Variables. This requires advanced configuration and is not something we can provide support for, but our community is doing it with great success and can help you on a different thread.

    How do I access the service behind a proxy?

    Add a new https_proxy key in Workflow Environment Variables. Or configure the proxy for all workflows under Alfred Preferences → Advanced → Network.

    Why can’t I use the workflow with a ChatGPT Plus subscription?

    The ChatGPT Plus subscription does not include access to the ChatGPT API; the two are billed separately.

    Is there a video which shows how to use the workflow?

    Yes, on YouTube.

    How do I report an issue?

    Accurate and thorough information is crucial for a proper diagnosis. When reporting issues, please include your exact installed versions of:
    • The Workflow.
    • Alfred.
    • macOS.
    In addition to:
    • The debugger output. Perform the failing action, click “Copy” on the top right and paste it here.
    • Details on what you did, what happened, and what you expected to happen.
    A short video of the steps with the debugger open may help to find the problem faster.
    Why do I keep getting [Connection Stalled]?
     
    This happens when the workflow takes too long to receive a reply from the API. It indicates a problem either with your connection or OpenAI’s service.

    Open a terminal and run the following (replace the YOUR_API_KEY text within the quotes with your API key):
     
    openai_key="YOUR_API_KEY"
    time /usr/bin/curl "https://api.openai.com/v1/chat/completions" \
      --header "Authorization: Bearer ${openai_key}" \
      --header "Content-Type: application/json" \
      --data '{ "model": "gpt-3.5-turbo", "messages": [{ "role": "user", "content": "What is red?" }], "stream": true }'
    It should provide a clue as to what is happening. Include the result in your report.
  17. Like
    iandol reacted to vitor in Using alternative and local models with the ChatGPT / DALL-E workflow   
    As this is an advanced option which won’t be relevant to most users and can be tricky to set up correctly, I’ve split the conversation into a different thread (this one). Please continue the discussion on local models here. A moderator’s note at the top explains the situation, but the post is otherwise unchanged.
  18. Like
    iandol reacted to vitor in Using alternative and local models with the ChatGPT / DALL-E workflow   
    @iandol @outcoldman @llityslife Please try this version. Instructions are at the bottom of the About, in the Advanced Configuration section. The update is to be considered experimental and things can change, but this method aims to allow you to use the local models you have set up more easily and not worry about the endpoint being overridden on updates, while at the same time not overwhelming other users.
  19. Like
    iandol got a reaction from Chris Messina in Using alternative and local models with the ChatGPT / DALL-E workflow   
    Moderator’s note: The ChatGPT / DALL-E workflow offers the ability to change the API endpoint in the Workflow Environment Variables, enabling the possibility of using local models. This is complex and requires advanced configuration, not something we can officially provide support for. This thread was split from the main one so members of the community can help each other set up their own specific models.
     
     
    Thanks vitor! There are many open source models with performance equivalent to GPT3.5 or better, without the privacy concerns, dependency on an internet connection for each question, or costs. And there are a number of macOS apps that manage them. This unlocks the power of LLMs for everyone. The good news is that most of these tools offer a local API server that is compatible with the OpenAI API. Therefore all one needs to do is change the URI and you can switch from the commercial OpenAI service to a privacy-respecting, open source, and free alternative:
     
    https://github.com/nomic-ai/gpt4all — more basic UI, model selected using model key of API
     
    https://lmstudio.ai — more advanced UI, uses UI selected model for API requests.
     
    Checking the code, the JS can be tweaked to redirect the URI to localhost: http://localhost:4891/v1/chat/completions — for GPT4All, the model file needs to be specified, but for LM Studio the model is chosen in the UI and that is what the API serves. So a feature request is the option to specify the API address, so this workflow can run locally if LM Studio or GPT4All (or several others) are installed. DALL-E is a harder deal: while there are open-source models like Stable Diffusion (and amazing macOS apps like Draw Things to use them), I don't know of a tool that offers an API that would be hot-swappable for the OpenAI commercial APIs...
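     To illustrate how little changes between backends, here is a small Python sketch (not the workflow's code): the request shape is identical, only the base URL and, for GPT4All, the model name differ. Port 4891 is the GPT4All endpoint quoted above; the LM Studio port and the model filename are assumptions for illustration, so check your local server's settings.

    ```python
    # Sketch: build an OpenAI-compatible chat request against a local server.
    # Only the base URL and model name change relative to api.openai.com.
    import json
    import urllib.request

    def chat_request(base_url, model, prompt):
        """Return an urllib Request for an OpenAI-compatible /chat/completions endpoint."""
        body = json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode()
        return urllib.request.Request(
            f"{base_url}/chat/completions",
            data=body,
            headers={"Content-Type": "application/json"},
        )

    # GPT4All: the model file must be named explicitly (filename is hypothetical).
    req_gpt4all = chat_request("http://localhost:4891/v1", "mistral-7b.gguf", "Hello!")
    # LM Studio: the model is selected in the UI, so the field is effectively
    # ignored by the server (port is an assumption; check the server settings).
    req_lmstudio = chat_request("http://localhost:1234/v1", "local-model", "Hello!")
    ```

    Because both servers speak the OpenAI wire format, a single endpoint override in the workflow is enough to swap between the hosted and local options.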
     
  20. Like
    iandol reacted to vitor in Using alternative and local models with the ChatGPT / DALL-E workflow   
    As with most things in computing, using local models has tradeoffs. You’ve mentioned some of the positives, but some of the challenges include having to download multi-GB files, requiring better machines, and being harder to set up. None of that is insurmountable, but it is an extra hurdle that can confuse most users. There are so many different knobs and dials, even when using one specific service, that an early explicit goal of this workflow remains to avoid configuration fatigue. We’re aware alternative models exist and are certainly not averse to them, but for this workflow right this moment the operative word is focus.
     
    The great news is that everything in the workflow is built on top of the new Text View, which is content agnostic. In other words, as you’ve noticed, there’s nothing tying Alfred to a particular approach and anyone can build their own!
  21. Like
    iandol got a reaction from Vero in ChatFred: OpenAI's GPT-model workflow   
    Awesome and congratulations! Woah, the changelog is epic!!!! Downloading now!
  22. Like
    iandol reacted to Andrew in Just checking in   
    @luckman212 DING... fresh out the oven: https://www.alfredapp.com/whats-new/
  23. Like
    iandol reacted to Vero in ChatFred: OpenAI's GPT-model workflow   
    @TomBenz Keep an eye out for exciting news in the new year 👀 That's as much as I can say for now...
  24. Like
    iandol reacted to pseudometa in Writing Assistant: Autocorrection, synonym suggestion, and rephrasing of text.   
    Autocorrection and synonym suggestions for the word under the cursor. Rephrasing of the selected text. 
     
    The workflow offers four hotkeys:
    1. Autocorrect the word under the cursor.
    2. Suggest synonyms for the word under the cursor.
    3. Rephrase the selected text via ChatGPT, improving its language (requires an OpenAI API key).
    4. Same as 3, but use Markdown markup to show the changes: additions are displayed as ==highlights==, deletions as ~~strikethroughs~~. There is a workflow configuration option to format the changes as Critic Markup instead.
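     The ==highlight==/~~strikethrough~~ markup could be produced along these lines with Python's standard difflib (a sketch of the idea only, not the workflow's actual implementation):

    ```python
    # Sketch: render word-level edits between two texts in the Markdown markup
    # described above — additions as ==highlights==, deletions/replacements as
    # ~~strikethroughs~~.
    import difflib

    def markdown_diff(original, revised):
        """Return `revised` with word-level changes from `original` marked up."""
        a, b = original.split(), revised.split()
        out = []
        for op, a1, a2, b1, b2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
            if op == "equal":
                out.extend(a[a1:a2])
                continue
            if op in ("replace", "delete"):
                out.append("~~" + " ".join(a[a1:a2]) + "~~")
            if op in ("replace", "insert"):
                out.append("==" + " ".join(b[b1:b2]) + "==")
        return " ".join(out)

    result = markdown_diff("the quick brown fox", "the fast brown fox")
    # → "the ~~quick~~ ==fast== brown fox"
    ```

    A Critic Markup variant would only need different wrapper strings ({++ ++} and {-- --}) around the same opcode spans.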
    Unfortunately, the forum keeps on failing when I try to upload the demo .gif file. The demo is available at the workflow's GitHub page though:
     
    ➡️ https://github.com/chrisgrieser/alfred-writing-assistant
  25. Like
    iandol got a reaction from pseudometa in Writing Assistant: Autocorrection, synonym suggestion, and rephrasing of text.   
    Chris, you are on a workflow roll!!! Thank you for yet another great workflow 😍