Gold Posted April 12

14 hours ago, Vero said:
@Gold As @iandol said in his earlier reply, cost increases as the chat grows, so if you're using a long-term chat window, extra tokens will be used because the previous conversation is included as context with each message. If you don't need the context of your previous questions, use Cmd + Return to clear the chat and start with a fresh slate, which ensures you're not using additional tokens in the background for context. In terms of costs, GPT-4 Turbo is $10/million input tokens, whereas GPT-3.5 Turbo is $0.50/million input tokens, so the bill depends not only on how much context is included, but also on which model the other services you use are calling.

Thanks @Vero, but I've been doing exactly that: starting fresh for each ChatGPT session. Unfortunately, I still find Alfred's API costs far higher than those of the app I've been using for over a year, MacGPT. My inputs have been short, and while MacGPT gives significantly longer responses, Alfred’s are always brief. Alfred's ChatGPT still costs significantly more and I'm trying to figure out why. I use the same GPT-4 Turbo model in both.
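To make the quoted point about context cost concrete, here is a rough back-of-the-envelope sketch. The per-turn token count and the per-million-token prices are illustrative assumptions, not measured values:

```javascript
// Rough illustration only: each turn resends the whole conversation, so input tokens
// grow roughly quadratically with the number of turns. Prices are USD per million
// input tokens and are assumptions for this sketch.
const pricePerMillionTokens = { "gpt-4-turbo": 10.0, "gpt-3.5-turbo": 0.5 };

function estimateInputCost(model, tokensPerTurn, turns) {
  let history = 0;
  let totalInputTokens = 0;
  for (let turn = 0; turn < turns; turn++) {
    totalInputTokens += history + tokensPerTurn; // prior history plus the new question
    history += tokensPerTurn;                    // the new exchange joins the history
  }
  return (totalInputTokens / 1_000_000) * pricePerMillionTokens[model];
}

console.log(estimateInputCost("gpt-4-turbo", 500, 10).toFixed(4));       // one long 10-turn chat
console.log((estimateInputCost("gpt-4-turbo", 500, 1) * 10).toFixed(4)); // ten fresh one-off chats
```

In this sketch a single ten-turn chat costs roughly five times as much in input tokens as ten independent one-off questions, which is why clearing the chat matters whenever the earlier context isn't needed.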
vitor (Author) Posted April 12

I don’t know what to tell you more than what’s been said. The Alfred workflow follows the API. We’re not privy to your usage (nor do we want to be; we firmly believe in privacy and not collecting user data), but it does not seem accurate to say Alfred’s responses are always brief, because the text comes 100% from the API response. There’s nothing that would cause shorter replies (and there have been no other reports of it) unless you’ve set up a system prompt telling it to give brief answers (you can confirm that in the workflow’s configuration).

Again, we don’t have (and don’t want to have) any control over how OpenAI charges you. The workflow follows the API as it is documented, and this can be verified by looking at the code, which is available in its entirety. If anyone spots an opportunity for optimisation (and I know lots of people have been looking at the code), they are welcome to suggest it. Anything further than that regarding your usage, only OpenAI can say.
CharlesVermeulen Posted April 12

This is a noob question, but then again, I'm a noob. I'm running the free version of ChatGPT, but when I try to run the workflow, I get the error message: [You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.]

I'm confused, since it's free and I haven't exceeded my quota. Any advice? Thanks a lot.
iandol Posted April 13 (edited)

4 hours ago, CharlesVermeulen said:
This is a noob question, but then again, I'm a noob. I'm running the free version of ChatGPT

There is no "free" version of the API according to the pricing page: https://openai.com/pricing

There is a free web interface (https://chat.openai.com), but that is not the API. For the API they give you some tokens at the start, if I remember correctly, then you must pay per use. If you want free: use a local LLM (the model runs on your Mac, with no costs and no privacy concerns, whereas OpenAI hoovers up all your data), or use a wrapper tool like https://openrouter.ai (which can use OpenAI, Claude, or many different open-source free models through a single unified API). Sadly there is a small API incompatibility between this workflow and OpenRouter (@vitor may accept a pull request to fix it, but I haven't had time...), so that leaves you with: give your credit card details to OpenAI, or use a local model...
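To illustrate the local-LLM route: most local runners expose an OpenAI-compatible endpoint, so the same chat-completions request shape works with only the base URL swapped. The URL and model name below are assumptions for the sketch (Ollama's default local port and a generic llama3 model), not anything this workflow ships with:

```javascript
// Minimal sketch of an OpenAI-style chat completion sent to a local server instead of
// api.openai.com. Assumes a runner such as Ollama is listening on localhost:11434 with
// a "llama3" model already pulled; no API key is needed and no data leaves the machine.
// Run with Node 18+ as an ES module, or adapt to your environment.
const response = await fetch("http://localhost:11434/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3",
    messages: [{ role: "user", content: "Say hello from a local model" }],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```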
Gold Posted April 13

18 hours ago, vitor said:
I don’t know what to tell you more than what’s been said. The Alfred workflow follows the API. […]

Got it. It's no real problem to me as long as it's expected behavior. I appreciate the attention to the matter.
Vero Posted April 13

10 hours ago, CharlesVermeulen said:
I'm running the free version of ChatGPT

@CharlesVermeulen To expand on @iandol's response: OpenAI offer some free API credit on account creation (now $5, I believe), which is valid for three months and then expires. If your credit has expired, or you've used it up, you'll see the error message you received.

Reiterating what has been said before: the ChatGPT web-based interface is separate from the API, so while the website is free, there is a cost to using the API, however small, so you'll likely need to add a few dollars of credit here: https://platform.openai.com/account/billing/overview

Cheers,
Vero
jpg2webp Posted April 13

The search chat history feature doesn't work on macOS 10.15.7 Catalina.

Alfred: 5.5
Workflow: v2024.7

Debugging information as follows:

[22:58:03.942] Logging Started...
[22:58:11.886] ChatGPT / DALL-E[Keyword] Processing complete
[22:58:11.890] ChatGPT / DALL-E[Keyword] Passing output '' to Script Filter
[22:58:11.900] ChatGPT / DALL-E[Script Filter] Queuing argument '(null)'
[22:58:11.970] ChatGPT / DALL-E[Script Filter] Script with argv '(null)' finished
[22:58:11.972] ERROR: ChatGPT / DALL-E[Script Filter] Code 1: /Users/jpg2webp/Library/Caches/com.runningwithcrayons.Alfred/Workflow Scripts/A3768CAA-7E3C-4D74-AA12-7030A4958E13: execution error: Error: TypeError: dirContents(archiveDir) .filter(file => file.endsWith(".json")) .toReversed is not a function. (In 'dirContents(archiveDir) .filter(file => file.endsWith(".json")) .toReversed()', 'dirContents(archiveDir) .filter(file => file.endsWith(".json")) .toReversed' is undefined) (-2700)

After some searching I found that this `toReversed` function was introduced in ES2023. According to the MDN Web Docs, the minimum compatible Safari version for this function is 16+, while the last supported version of Safari on macOS Catalina is 15.6.1 🤣

Since Alfred is still compatible with Catalina, can this problem be solved?

(This comment was translated using ChatGPT because my native language is not English.)
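For reference, `toReversed()` is the only ES2023 piece in that call chain, and it has a drop-in replacement that older JavaScriptCore versions understand: copy the array, then reverse the copy. This is a sketch of the equivalent change rather than the workflow's actual code; `dirContents` and `archiveDir` are taken from the error message above:

```javascript
// Before (ES2023, fails on Catalina's JavaScriptCore):
// const archives = dirContents(archiveDir)
//   .filter(file => file.endsWith(".json"))
//   .toReversed();

// After: .slice() makes a copy and .reverse() reverses that copy in place,
// which yields the same result as .toReversed() without requiring ES2023.
const archives = dirContents(archiveDir)
  .filter(file => file.endsWith(".json"))
  .slice()
  .reverse();
```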
dan22 Posted April 15

Hi,

Loving the new ChatGPT workflow. Could anyone let me know how to keep the Alfred window open with the query results? I usually need to keep the ChatGPT output on screen and would like to avoid it going away when clicking on a different window or app.

Thanks,
Daniel
CarREFuse Posted April 15

Is there any plan to allow users to use a different base URL instead of OpenAI's? I am using a third-party ChatGPT API which has the same request format as OpenAI's, and I hope I can plug it into this workflow.
vitor (Author) Posted April 15

26 minutes ago, dan22 said:
Could anyone let me know how to keep the Alfred window open with the query results?

That’s not available, but you can copy the output (or just the last answer) with the shortcuts listed at the bottom.

20 minutes ago, CarREFuse said:
Is there any plan to allow users to use a different base URL instead of OpenAI's?

See the first item in the FAQ.
dan22 Posted April 15

4 hours ago, vitor said:
That’s not available, but you can copy the output (or just the last answer) with the shortcuts listed at the bottom.

OK, thanks for your reply. I do use the shortcuts to copy the last answer and then paste it into a text editor so I can keep it on screen, but I find that frustrating. It would be great to be able to "pin the Alfred window", which would also be useful in other workflows.
guppy16 Posted April 19

I've not used Alfred in some years, but my colleague just convinced me to come back to it after sharing this workflow with me. Workflow installed... but I'm not getting any results. After enabling debug, I'm getting:

[10:39:48.632] Logging Started...
[10:39:53.412] ChatGPT / DALL-E[Keyword] Processing complete
[10:39:53.422] ChatGPT / DALL-E[Keyword] Passing output 'testing 123' to Arg and Vars
[10:39:53.423] ChatGPT / DALL-E[Arg and Vars] Processing complete
[10:39:53.424] ChatGPT / DALL-E[Arg and Vars] Passing output '' to Run Script
[10:39:53.489] ChatGPT / DALL-E[Run Script] Processing complete
[10:39:53.496] ChatGPT / DALL-E[Run Script] Passing output '' to Automation Task
[10:39:53.498] ERROR: ChatGPT / DALL-E[Automation Task] Task not found 'com.alfredapp.automation.core/files-and-folders/path.exists'

Any pointers? I literally just installed Alfred before this workflow, and it's a "clean" machine that hasn't seen Alfred before.
vitor (Author) Posted April 19

Welcome @guppy16,

You have to install the Automation Tasks.
guppy16 Posted April 19

29 minutes ago, vitor said:
Welcome @guppy16, You have to install the Automation Tasks.

AHHHHHHH... that did it! Thanks!
gatto Posted April 22

Hello!! Marvelous workflow, but a question: is it possible to have a config field to set the API endpoint? As has been suggested already in this topic, I'm using a locally run LLM that has the same API hooks as OpenAI, so it's only a question of being able to point the workflow to my server instead of ChatGPT's. Thanks.
vitor (Author) Posted April 22

@gatto It is indeed possible. See the second post for instructions on how to accomplish that.
gatto Posted April 22

7 minutes ago, vitor said:
@gatto It is indeed possible. See the second post for instructions on how to accomplish that.

Found: environment variables.
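For anyone else who lands here: the override lives in Alfred's Workflow Environment Variables (the [x] button in the workflow editor). The variable name in this sketch is illustrative (check the workflow's own instructions for the exact names it reads), but the pattern a JXA script typically uses to pick up such an override looks like this:

```javascript
// Sketch of the usual JXA pattern for an optional endpoint override.
// "chatgpt_api_endpoint" is an illustrative name, not guaranteed to match
// what the workflow actually reads.
ObjC.import("stdlib");

function envOrDefault(name, fallback) {
  try {
    return $.getenv(name); // $.getenv throws when the variable is not set
  } catch (error) {
    return fallback;
  }
}

const apiEndpoint = envOrDefault(
  "chatgpt_api_endpoint",
  "https://api.openai.com/v1/chat/completions"
);
```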
vitor (Author) Posted May 2

Updated to 2024.10.

Make context window configurable.
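For anyone wondering what that setting means in practice: a context window here is a cap on how much of the prior conversation gets resent with each request. A minimal sketch of the idea, assuming a standard messages array (this is not the workflow's actual implementation):

```javascript
// Minimal sketch: keep the system prompt plus only the most recent N messages when
// building the request body, so a long-running chat stops inflating the token bill.
function trimContext(messages, maxMessages) {
  const system = messages.filter(message => message.role === "system");
  const rest = messages.filter(message => message.role !== "system");
  return [...system, ...rest.slice(-maxMessages)];
}

const conversation = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "First question" },
  { role: "assistant", content: "First answer" },
  { role: "user", content: "Second question" },
];

console.log(trimContext(conversation, 2)); // system prompt + the two most recent messages
```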
RF3D Posted May 6

On 4/13/2024 at 3:03 AM, Vero said:
OpenAI offer some free API credit on account creation (now $5, I believe), which is valid for three months and then expires. […]

Might be worth pointing this out in the documentation for this workflow! For weeks, I was scratching my head over why I got the "Quota exceeded" message when I had no problem sending ChatGPT requests on the ChatGPT website, which showed me plenty of credits there. Of course, credits were at $0 on the API website...
wjb Posted May 7

Hey, I've paid for GPT-4, but after downloading the Alfred extension and trying to post something, it tells me: [The model gpt-4 does not exist or you do not have access to it.]

Can you please help?
vitor (Author) Posted May 7

Welcome @wjb,

You need to use the API; the ChatGPT Plus subscription is something separate. See the FAQ for more information. If that’s not the issue, please provide the information requested under “How do I report an issue?”.
otrai Posted May 9

Is it possible to copy some of the features from ChatFred into the ChatGPT / DALL-E - OpenAI integrations workflow?

ChatFred used to provide the same function as this workflow, yet it came with additional and extremely useful features:

- automatic copy to clipboard for ChatGPT responses and DALL-E images
- text transformation
- combined prompts
- aliases
- universal actions integration

among others. Unfortunately, the developer was unable to maintain it for personal reasons (I tried to help, but I don't have the experience yet), and the features it provided could be used in this workflow. I really appreciate the Alfred team making this workflow, as it has made me almost as efficient as I was with ChatFred, and I think that if the team could integrate some of these features it would be very useful for Alfred PowerPack users.
magic Posted May 10

I got an error at the Text View step:

{"error":{"code":null,"message":"You must provide n=1 for this model.","param":null,"type":"invalid_request_error"}}

Any advice?
vitor (Author) Posted May 11

You’re using DALL-E 3 but telling it to generate more than one image. That model can only do one.
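In API terms, the images endpoint only accepts n=1 when the model is dall-e-3 (DALL-E 2 allows up to 10 per request), so generating several images means sending several requests. A minimal sketch of a valid request; the placeholder API key is obviously an assumption:

```javascript
// Minimal sketch: dall-e-3 accepts exactly one image per request, so n must be 1.
// Run with Node 18+ as an ES module, or adapt to your environment.
const apiKey = "YOUR_OPENAI_API_KEY"; // placeholder

const response = await fetch("https://api.openai.com/v1/images/generations", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${apiKey}`,
  },
  body: JSON.stringify({
    model: "dall-e-3",
    prompt: "A watercolour lighthouse at dusk",
    n: 1, // any other value returns "You must provide n=1 for this model."
    size: "1024x1024",
  }),
});

const data = await response.json();
console.log(data.data[0].url); // URL of the generated image
```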
vitor (Author) Posted May 13

Updated to 2024.11.

Add GPT-4o model.