zeitlings Posted May 29 (edited)

ChatGPT, Claude, Perplexity, and Gemini integrations for chat, real-time information retrieval, and text processing tasks such as paraphrasing, simplifying, or summarizing. With support for third-party proxies and local LLMs.

1 / Setup

Configure Hotkeys to quickly view the current chat, the archive, and the inference actions. For instance ⌥⇧A, ⌘⇧A, and ⌥⇧I (recommended).
Install the SF Pro font from Apple to display icons.
Enter your API keys for the services you want to use.
Configure your proxy or local host settings in the Environment Variables (optional). For example configurations, see the wiki.
Install pandoc to pin conversations (recommended).

2 / Usage

2.1 Ayai Chat

Converse with your Primary via the ask keyword, Universal Action, or Fallback Search.

↩ Continue the ongoing chat.
⌘↩ Start a new conversation.
⌥↩ View the chat history.

Hidden Option
⌘⇧↩ Open the workflow configuration.

2.1.1 Chat Window

↩ Ask a question.
⌘↩ Start a new conversation.
⌥↩ Copy the last answer.
⌃↩ Copy the full conversation.
⇧↩ Stop generating an answer.
⌘⌃↩ View the chat history.

Hidden Options
⇧⌥↩ Show configuration info in the HUD.
⇧⌃↩ Speak the last answer out loud.
⇧⌘↩ Edit a multi-line prompt in a separate window. There:
 ⇧↩ Switch between Editor and Markdown preview.
 ⌘↩ Ask the question.
 ⇧⌘↩ Start a new conversation.

2.1.2 Chat History

Search: Type to filter archived chats based on your query.

↩ Continue the archived conversation.
⇥ Open the conversation details.
⌃ View message count, creation and modification date.
⇧ View message participation details.
⇧⌥ View available tags or keywords.
⌘↩ Reveal the chat file in Finder.
⌘L Inspect the unabridged preview as Large Type.
⌘⇧↩ Send the conversation to the trash.

!Bang Filters
Type ! to see all currently available filters and hit ↩ to apply one.

2.1.3 Conversation Details

Conversations can be marked as favorites, pinned, or both. Pinned conversations always stay on top, while favorites can be filtered and searched further via the !fav bang filter.

Same as above, except:
⇥ Go back to the chat history.
⌥↩ Open a static preview of the conversation.
⌘Y (or tap ⇧) Quicklook the conversation preview.
⌘L Inspect message and token details as Large Type.

2.2 File Attachments

You can use Ayai to chat with your documents or attach images to your conversations.

2.2.1 Universal Action: Attach Document

By default, when starting a new conversation or attaching a file to an ongoing chat, a summary will be created. You can also enter an optional prompt that will be taken into account. Currently supported are PDF, docx, and all plain text and source code files. To extract text from docx files, Ayai will use pandoc if it is installed; otherwise a crude workaround is used.

↩ (or ⌘↩) Summarize and ask with an optional prompt, starting a new chat.
⌥↩ Summarize and ask with an optional prompt, continuing the chat.
⌘⇧↩ Don't summarize; ask with the prompt, starting a new chat.
⌥⇧↩ Don't summarize; ask with the prompt, continuing the chat.
⌃↩ Edit a multi-line prompt in a separate Text View. There, the same options are available.

2.2.2 Universal Action: Attach Image

2.3 Inference Actions

Inference Actions provide a suite of language tools for text generation and transformation. These tools enable summarization, clarification, concise writing, and tone adjustment for selected text. They can also correct spelling, expand and paraphrase text, follow instructions, answer questions, and improve text in other ways. Access a list of all available actions via the Universal Action or by setting the Hotkey trigger.

↩ Generate the result using the configured default strategy.
⌘↩ Paste the result and replace the selection.
⌥↩ Stream the result and preserve the selection.
⌃↩ Copy the result to the clipboard.

Tip: Write a detailed prompt directly into the Inference Action filter to customize your instruction or question.

Inference Action Customization

The inference actions are generated from a JSON file called actions.json, located in the workflow folder. You can customize existing actions or add new ones by editing the file directly, or by editing actions.config.pkl and then evaluating that file with pkl.

Edited November 19 by zeitlings
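For anyone wiring up a custom action, the snippet below is only a sketch: the JSON field names are hypothetical (check the bundled actions.json for the real schema), and the pkl invocation assumes Apple's Pkl CLI is installed and on your PATH.

```shell
# Hypothetical shape of a custom inference action entry; the real
# field names used by actions.json may differ.
cat > custom-action.json <<'EOF'
{
  "title": "Translate to German",
  "prompt": "Translate the following text into German:"
}
EOF

# Regenerating actions.json from the Pkl config could look like this
# (assumes the `pkl` CLI is installed):
# pkl eval -f json actions.config.pkl -o actions.json

grep '"title"' custom-action.json
```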
zeitlings Posted May 29 (Author)

I want to call attention to some aspects that are easily overlooked and share some tips.

Note the hidden options, i.e. the modifiers mentioned in the documentation, to make use of:
- Multi-line prompt editing
- Vocalization of the most recent answer (see here how to change the voice)
- The HUD, to quickly check your most important current settings

Some third-party proxies allow you to use certain models for free. For example, Groq currently grants access to Llama 3 70B, which is on par with Gemini 1.5 Pro according to some benchmarks. OpenRouter curates a list of free models that can be used with their API.

The new API for Google's Gemini is currently in beta. If your API requests are detected as coming from the US, Gemini comes with a substantial free-tier quota for both Gemini 1.5 Flash and Pro.

I also encourage you to explore local LLMs. Meta recently released their Llama 3 model, and the "tiny" Llama 3 8B performs beautifully on newer Macs. According to some benchmarks, the 8B version competes with GPT-3.5 Turbo, and I can confirm that I was at least very positively surprised by its capabilities. Related: Ollama Workflow.
rudraadavee Posted May 31

The "continue with existing chat" action leads to nothing in the workflow editor. Overall, pretty nice work.
rudraadavee Posted May 31

Hey, I messed with the workflow for a while and found that the "new chat" function has stopped working. Also, could you help me with the path in "openai_alternative_local_url_path" in the case of local LLMs? Thanks!
zeitlings Posted May 31 (Author, edited)

6 hours ago, rudraadavee said:
"hey messed with the workflow for a while and found that the "new chat" function has stopped working"

I've just checked all the possible ways to start a new chat, and they all work on my end. (Without more information, I can't infer where it stopped working for you or why.) Don't forget to keep an eye on the debugger to potentially identify any problems more easily.

6 hours ago, rudraadavee said:
"also could you help me with the path in "openai_alternative_local_url_path" in case of local LLMs?"

Sure, what's the problem?

Edited May 31 by zeitlings
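For reference, with an OpenAI-compatible local server such as Ollama, the pieces usually split as shown below. This is a sketch: the host variable name and the endpoint are assumptions based on Ollama's OpenAI-compatible API, not the workflow's documented defaults.

```shell
# Sketch: splitting a local OpenAI-compatible endpoint into a base URL
# and a path. The route below is Ollama's OpenAI-compatible chat
# endpoint; the variable names only mirror the workflow's Environment
# Variables and are assumptions.
openai_alternative_local_url="http://localhost:11434"
openai_alternative_local_url_path="/v1/chat/completions"

# Full endpoint the workflow would contact:
echo "${openai_alternative_local_url}${openai_alternative_local_url_path}"
# → http://localhost:11434/v1/chat/completions
```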
dood Posted June 6

Hi @zeitlings, I'm curious about the integration with Exa.ai. If I add an API key, does every message I send through the workflow get filtered through Exa.ai? Or are only certain messages, based on how the prompt is formulated? Thanks!
zeitlings Posted June 6 (Author)

Only messages that the model deems to require real-time information will elicit a detour through Exa.ai. OpenAI dubbed this tool calling, or function calling. I caution the model to be extremely conservative about calling the function, so ideally it should only happen when really necessary. Whenever it does happen, you will notice that a "Searching" notification is injected into the conversation! This is when the external API is contacted; otherwise Exa sees nothing, and even then it doesn't see your message, but only a search query constructed by the model to get the information it needs.
dood Posted June 6

The Exa.ai integration works great! I'm unfortunately unable to start a new chat, though. The following error appears in the debug log:

[16:11:00.309] STDERR: Ayai · GPT Nexus[Run Script]
mkdir: /Users/*/Library/Application Support/Alfred/Workflow Data/com.zeitlings.gpt.nexus: No such file or directory
mv: rename /Users/*/Library/Caches/com.runningwithcrayons.Alfred/Workflow Data/com.zeitlings.gpt.nexus/chat.json to /Users/*/Library/Application Support/Alfred/Workflow Data/com.zeitlings.gpt.nexus/archive/20240606-1611-b910df14f9c640c983679ff663ed008e.json: No such file or directory
zeitlings Posted June 6 (Author)

Thanks for the error message. For a quick fix, you can try replacing this

[[ -d "$archive" ]] || mkdir "$archive"

with this

[[ -d "$archive" ]] || mkdir -p "$archive"

here.
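The difference between the two: plain mkdir fails if any parent directory in the path is missing, while mkdir -p creates every missing intermediate directory (and doesn't complain if the directory already exists). A minimal illustration:

```shell
# Plain mkdir vs. mkdir -p when parent directories are missing.
tmp="$(mktemp -d)"

# Fails: the intermediate "Workflow Data" level does not exist yet.
mkdir "$tmp/Workflow Data/com.zeitlings.gpt.nexus/archive" 2>/dev/null \
  || echo "plain mkdir failed"

# Succeeds: -p creates the whole chain of directories.
mkdir -p "$tmp/Workflow Data/com.zeitlings.gpt.nexus/archive" \
  && echo "mkdir -p succeeded"

rm -rf "$tmp"
```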
iandol Posted June 7

Wow, amazing set of supported interfaces, thank you @zeitlings -- beautiful icon too!!! On GitHub, alpha.1 shows as newer than alpha.2; I assume alpha.2 is the better one to download, though?
zeitlings Posted June 7 (Author)

3 hours ago, iandol said:
"Wow, amazing set of supported interfaces, thank you @zeitlings -- beautiful icon too!!!"

Thanks 🤗

3 hours ago, iandol said:
"On github, alpha.1 is newer than alpha.2, I assume alpha.2 is the better one to download though?"

Is it? I noticed that the sorting looks wrong when viewing all releases — do you mean this? But alpha.2 should be set as the latest release. So yes, alpha.2 has some fixes and changes over alpha.1. However, mind that the problem with the data directory not being created is not fixed yet.
philocalyst Posted August 17

Love the utility! Thank you for putting this together. I use a dark theme and like larger text in general, but I don't see options to edit the font size or colors of the interface. Is this something that could be implemented?
zeitlings Posted August 17 (Author)

Hey @philocalyst, thanks! Glad to hear it. At the bottom of the configuration you can find a section called "Accessibility" where the text size can be changed. The available sizes are enforced by Alfred's text view and cannot be customized further. Generally, the look of the text view is determined by your current theme, i.e. changes in color or font size there will be reflected in the Ayai chat window. Unfortunately, ad hoc changes to the theme of a specific object (such as the text view used for the chat window), without affecting the theme everywhere else, are currently not possible.
zeitlings Posted August 21 (Author)

Ayai GPT Nexus alpha.5

- Added Inference Actions 🥳
  - Requires accessibility access
  - Inference Actions are evaluated with pkl and can be freely modified or extended
- Added configuration option: Generate
- Added Environment Variable openai_alternative_shared_finish_reasons to handle additional stop sequences #2
- Updated models
- Updated annotated chat service images
- Updated workflow documentation

Inference Actions provide a suite of language tools for text generation and transformation. These tools enable summarization, clarification, concise writing, and tone adjustment for selected text. They can also correct spelling, expand and paraphrase text, follow instructions, answer questions, and improve text in other ways.

The results will be either streamed, pasted, or copied to the clipboard. The default behaviour can be set via the Generate configuration option or selected en passant via the key modifier combinations. Ayai will make sure that the frontmost application accepts text input before streaming or pasting anything, and will simply copy the result to the clipboard if it does not. This requires accessibility access, which you may need to grant in order to use Inference Actions.

↩ Generate the result using the configured default strategy.
⌘↩ Paste the result and replace the selection.
⌥↩ Stream the result and preserve the selection.
⌃↩ Copy the result to the clipboard.
philocalyst Posted August 24

Just a general concern, which caused another problem as I tried to fix it: for some reason, when I send in a query through the workflow, it will somehow use 40,000+ tokens?! Then I tried to shorten my context window (even though I'm not within the actual chat view, so it shouldn't matter), and now it's saying:

messages: first message must use the "user" role

Is this expected behavior? I don't know what to do. I find it very convenient, just not worth the 20 cents per query.
zeitlings Posted August 24 (Author)

Hey @philocalyst. Keep in mind:

Quote: "If you opt in to using more expensive models, be aware of their prices and consider starting new conversations often to avoid paying for unnecessarily large chat histories included in the context window."

If you enable live search (and use ChatGPT), there will be an additional ~500 tokens used per request. The system prompt also comes on top each time. You can also limit the number of messages that are included in the prompt via the configuration, effectively shortening the context window.

4 hours ago, philocalyst said:
"messages: first message must use the "user" role"

How exactly did you try to shorten the context window? If you tried to shorten it via the configuration, and the chat history contains a tool call ("Searching"), I have a hunch that the truncation could be the problem. In that case, try changing the range a bit for now (+1, +2, +3, or -1, -2, ...). If that solves the problem, then the way the context is truncated might actually need to be refined.
philocalyst Posted August 24

Got it. So my history was growing exponentially, and that was what was reflected in the tokens. For some reason I thought that if I wasn't in the chat window, it was a new conversation each time. The context option is still misleading to me; I wish it were clearer that those are the messages included. It was my first assumption, but I feel it's fair to say that "context" in AI engineering is usually dealt with in tokens, not chat messages. Thank you for dealing with these issues so kindly; it's generous of you.
zlebandit Posted August 30

Hi all. I use this wonderful workflow, but since this morning I've been getting this error:

[File System Error] Chat file does not exist.

I don't know how to handle it; can someone help?
zeitlings Posted August 30 (Author, edited)

Hey @zlebandit. I'd be interested in the sequence of events that led you to this scenario, if you recall. Which version of the workflow are you using? As a first-aid measure, try starting a new conversation or canceling response generation (this will clean up some of the control-critical files that may be lingering due to some unknown exception, although they shouldn't be).

Edited August 30 by zeitlings
zlebandit Posted August 31

Hello, and thank you for your answer. I use the alpha.5 version, and I was doing an integration in a new workflow in order to make an "action menu" for the Universal Action with prompts that I often need.
zeitlings Posted September 1 (Author)

Hey @zlebandit, I see. Make sure you use the External Trigger object with the identifier "chat.new" when you are sending the argument (prompt) and variables to the Nexus from an external workflow! That way your previous conversations will be preserved and the chat file won't go missing.
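To make that concrete: from an external workflow, such a trigger can be fired through Alfred's AppleScript interface. The sketch below only builds and prints the command; the bundle identifier is the one that appears in the debug log earlier in the thread, and the prompt is a placeholder. Uncomment the osascript line to actually fire it.

```shell
# Build the AppleScript call that fires the Nexus "chat.new" External
# Trigger from another workflow. The bundle id matches the one seen in
# the debug log earlier in this thread; the prompt is a placeholder.
prompt="Summarize my clipboard"
script="tell application id \"com.runningwithcrayons.Alfred\" to run trigger \"chat.new\" in workflow \"com.zeitlings.gpt.nexus\" with argument \"$prompt\""

echo "$script"
# Uncomment to actually fire the trigger (requires Alfred to be running):
# osascript -e "$script"
```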