
ChatGPT / DALL-E - OpenAI integrations


Recommended Posts

14 hours ago, Vero said:

@Gold As @iandol said in his earlier reply, cost increases as the chat grows, so if you're using a long-running chat window, extra tokens are used because the previous conversation is included as context in each request.

 

If you don't need the context of your previous questions, use Cmd + Return to clear the chat and start with a fresh slate, which will ensure you're not using additional tokens in the background for context.

 

In terms of costs, GPT-4 Turbo is $10 per million input tokens, whereas GPT-3.5 Turbo is $0.50 per million input tokens, so cost doesn't only depend on how much context is being included; it also depends on which model is being used by the other services you're using.
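To make the context point concrete, here is roughly what a Chat Completions request looks like once a conversation has some history (a sketch, not the workflow's actual code; the model name is only illustrative):

```javascript
// Every request resends the earlier turns as context, so they are billed as
// input tokens again on each follow-up question.
const history = [
  { role: "user", content: "What is Alfred?" },
  { role: "assistant", content: "Alfred is a macOS launcher and productivity app." },
]

const request = {
  model: "gpt-4-turbo", // illustrative model name
  messages: [...history, { role: "user", content: "How do workflows fit in?" }],
}

console.log(JSON.stringify(request, null, 2))
```

Clearing the chat starts again from an empty history, which is why it keeps token usage down.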

 

Thanks @Vero, but I've been doing exactly that: starting fresh for each ChatGPT session. Unfortunately, I still find the API costs through Alfred far higher than with the app I'd been using for over a year, MacGPT. My inputs have been short, and while MacGPT gives significantly longer responses, Alfred's are always brief. Alfred's ChatGPT still costs significantly more, and I'm trying to figure out why. I use the same GPT-4 Turbo model in both.

(Attached screenshot: api-cost-comparison.png)


I don’t know what to tell you more than what’s been said. The Alfred workflow follows the API.

 

We’re not privy to your usage (nor do we want to be, we firmly believe in privacy and not collecting user data), but it does not seem accurate to say Alfred’s responses are always brief because the text comes 100% from the API response. There’s nothing that would cause shorter replies (and there have been no other reports of it) unless you’ve set up a system prompt to tell it to make brief answers (you can confirm that in the workflow’s configuration).

 

Again, we don’t have (and don’t want to have) any control over how OpenAI charges you. The workflow follows the API as it is documented and this can be verified by looking at the code which is available in its entirety. If anyone spots an opportunity for optimisation (and I know lots of people have been looking at the code) they are welcome to suggest it. Anything further than that regarding your usage only OpenAI can say.
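If it helps narrow down where the difference comes from, the API response itself reports how many tokens each request consumed. A minimal sketch (not part of the workflow; it assumes your key is in an OPENAI_API_KEY environment variable and runs under Node 18+):

```javascript
// Send one short request and print the token counts the request is billed on.
fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4-turbo", // illustrative model name
    messages: [{ role: "user", content: "Say hello in five words." }],
  }),
})
  .then(response => response.json())
  .then(data => console.log(data.usage)) // { prompt_tokens, completion_tokens, total_tokens }
```

Comparing that usage output between the two apps for the same question would show whether one of them is sending more context or producing longer answers.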


This is a noob question, but then again, I'm a noob. 

I'm running the free version of ChatGPT, but when I try to run the workflow, I get the error message:

[You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.]

I'm confused, since it's free and I haven't exceeded my quota. 

Any advice?

 

thanks a lot

4 hours ago, CharlesVermeulen said:

This is a noob question, but then again, I'm a noob. 

I'm running the free version of ChatGPT

 

There is no "free" version of the API according to the pricing page:

 

https://openai.com/pricing

 

There is a free web interface (https://chat.openai.com), but that is not the API. For the API they give you some free tokens at the start, if I remember correctly; after that you must pay per use.

 

 

If you want free: use a local LLM (the model runs on your Mac, so there are no costs and no privacy concerns about OpenAI hoovering up your data), or use a wrapper tool like https://openrouter.ai (which can route to OpenAI, Claude, or many different open-source free models through a single unified API). Sadly there is a small API incompatibility between this workflow and OpenRouter (@vitor may accept a pull request to fix it, but I haven't had time...), so that leaves you with: give your credit card details to OpenAI, or use a local model...
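To expand on how the OpenAI-compatible route works: these services accept essentially the same Chat Completions request, so only the base URL, API key, and model name change. A rough sketch (not this workflow's code; the endpoints and model name are examples of what such services typically expose, and OPENROUTER_API_KEY is an assumed environment variable):

```javascript
// The same Chat Completions request shape works against different
// OpenAI-compatible endpoints; only the URL, key, and model name change.
const endpoints = {
  openai:     "https://api.openai.com/v1/chat/completions",
  openrouter: "https://openrouter.ai/api/v1/chat/completions",
  local:      "http://localhost:11434/v1/chat/completions", // e.g. a local Ollama server
}

fetch(endpoints.openrouter, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
  },
  body: JSON.stringify({
    model: "mistralai/mistral-7b-instruct", // provider-specific model naming
    messages: [{ role: "user", content: "Hello!" }],
  }),
})
  .then(response => response.json())
  .then(reply => console.log(reply.choices?.[0]?.message?.content))
```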

 

18 hours ago, vitor said:

I don’t know what to tell you more than what’s been said. The Alfred workflow follows the API.

 

We’re not privy to your usage (nor do we want to be, we firmly believe in privacy and not collecting user data), but it does not seem accurate to say Alfred’s responses are always brief because the text comes 100% from the API response. There’s nothing that would cause shorter replies (and there have been no other reports of it) unless you’ve set up a system prompt to tell it to make brief answers (you can confirm that in the workflow’s configuration).

 

Again, we don’t have (and don’t want to have) any control over how OpenAI charges you. The workflow follows the API as it is documented and this can be verified by looking at the code which is available in its entirety. If anyone spots an opportunity for optimisation (and I know lots of people have been looking at the code) they are welcome to suggest it. Anything further than that regarding your usage only OpenAI can say.

 

Got it. It's no real problem to me as long as it's expected behavior. I appreciate the attention to the matter.

10 hours ago, CharlesVermeulen said:

I'm running the free version of ChatGPT

 

@CharlesVermeulen To expand on @iandol's response: OpenAI offer some free API credit on account creation ($5 now, I believe), which is valid for 3 months and then expires.

 

If your credit has expired, or you've used it up, you'll see the error message you've received.

 

Reiterating what has been said before: the ChatGPT web interface is separate from the API. While the website is free, the API has a cost, however small, so you'll likely need to add a few dollars of credit here:

https://platform.openai.com/account/billing/overview

 

Cheers,
Vero


The search chat history feature doesn't work on macOS 10.15.7 Catalina

 

Alfred: 5.5
Workflow: v2024.7

 

Debugging information as follows:

 

[22:58:03.942] Logging Started...
[22:58:11.886] ChatGPT / DALL-E[Keyword] Processing complete
[22:58:11.890] ChatGPT / DALL-E[Keyword] Passing output '' to Script Filter
[22:58:11.900] ChatGPT / DALL-E[Script Filter] Queuing argument '(null)'
[22:58:11.970] ChatGPT / DALL-E[Script Filter] Script with argv '(null)' finished
[22:58:11.972] ERROR: ChatGPT / DALL-E[Script Filter] Code 1: /Users/jpg2webp/Library/Caches/com.runningwithcrayons.Alfred/Workflow Scripts/A3768CAA-7E3C-4D74-AA12-7030A4958E13: execution error: Error: TypeError: dirContents(archiveDir)
    .filter(file => file.endsWith(".json"))
    .toReversed is not a function. (In 'dirContents(archiveDir)
    .filter(file => file.endsWith(".json"))
    .toReversed()', 'dirContents(archiveDir)
    .filter(file => file.endsWith(".json"))
    .toReversed' is undefined) (-2700)

 

After some searching, I found that the `toReversed` function was introduced in ES2023. According to the MDN Web Docs, the minimum Safari version that supports it is 16, while the last Safari version supported on macOS Catalina is 15.6.1 🤣

Since Alfred is still compatible with Catalina, can this problem be solved?
(This comment is translated using ChatGPT because my native language is not English.)
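In case it helps while this is unresolved, the call in the error log can be written with pre-ES2023 array methods. A sketch based only on the snippet above (the variable name is mine, not the workflow's):

```javascript
// Array.prototype.toReversed() is ES2023 and missing from Catalina's JavaScriptCore.
// filter() already returns a fresh array, so reversing a copy gives the same result
// without mutating anything the rest of the script relies on.
const archives = dirContents(archiveDir)     // names taken from the error log above
  .filter(file => file.endsWith(".json"))
  .slice()                                   // copy, as toReversed() would
  .reverse()                                 // newest archive first
```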


Hi,
Loving the new ChatGPT workflow.
Could anyone let me know how to keep the Alfred window open with the query results?
I usually need to keep the ChatGPT output on screen and would like to avoid it going away when I click on a different window or app.
Thanks,
Daniel


Is there any plan to allow users to use a different base URL instead of OpenAI's?

I am using a third-party ChatGPT API that has the same request format as OpenAI's, and I hope I can use it in this workflow.

26 minutes ago, dan22 said:

Could anyone let me know how to keep the Alfred window open with the query results?

 

That’s not available, but you can copy the output (or just the last answer) with the shortcuts listed at the bottom

 

20 minutes ago, CarREFuse said:

Is there any plan to allow users to use a different base URL instead of OpenAI's?

 

See the first item in the FAQ.

4 hours ago, vitor said:

That’s not available, but you can copy the output (or just the last answer) with the shortcuts listed at the bottom

 

OK, thanks for your reply.
I do use the shortcuts to copy the last answer and then paste it into a text editor so I can keep it on screen, but I find that frustrating. It would be great to be able to "pin the Alfred window"; that would also be useful in other workflows.


I've not used Alfred in some years, but my colleague just convinced me to come back to it after sharing this workflow with me. Workflow installed... but I'm not getting any results. After enabling debug, I'm getting:

 

[10:39:48.632] Logging Started...
[10:39:53.412] ChatGPT / DALL-E[Keyword] Processing complete
[10:39:53.422] ChatGPT / DALL-E[Keyword] Passing output 'testing 123' to Arg and Vars
[10:39:53.423] ChatGPT / DALL-E[Arg and Vars] Processing complete
[10:39:53.424] ChatGPT / DALL-E[Arg and Vars] Passing output '' to Run Script
[10:39:53.489] ChatGPT / DALL-E[Run Script] Processing complete
[10:39:53.496] ChatGPT / DALL-E[Run Script] Passing output '' to Automation Task
[10:39:53.498] ERROR: ChatGPT / DALL-E[Automation Task] Task not found 'com.alfredapp.automation.core/files-and-folders/path.exists'

 

Any pointers?  I literally just installed Alfred before this workflow and it's a "clean" machine that hasn't seen Alfred before.


Hello!! Marvelous workflow, but a question: is it possible to have a config field to set the API endpoint?

 

As has been suggested already in this topic, I'm running a local LLM that exposes the same API hooks as OpenAI's, so it's only a question of being able to point the workflow to my server instead of ChatGPT's.

Thanks.
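For what it's worth, the change being asked for is small in principle. A sketch of the idea (not the workflow's actual code; the openai_base_url setting name and the localhost port are made up for illustration):

```javascript
// Sketch only: derive the endpoint from an assumed "openai_base_url" setting,
// falling back to OpenAI's own API when nothing is configured.
function chatCompletionsURL(settings) {
  const base = (settings.openai_base_url || "https://api.openai.com/v1").replace(/\/+$/, "")
  return `${base}/chat/completions`
}

// Pointing at a hypothetical local OpenAI-compatible server:
console.log(chatCompletionsURL({ openai_base_url: "http://localhost:8080/v1" }))
// → http://localhost:8080/v1/chat/completions

// Default behaviour stays the same for everyone else:
console.log(chatCompletionsURL({}))
// → https://api.openai.com/v1/chat/completions
```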

