ChatGPT / DALL-E - OpenAI integrations


Recommended Posts

Hi Vitor,

 

Congrats on this workflow; ChatGPT is a game changer in my daily tasks 👍

 

Unfortunately, I cannot make DALL-E work 😞

 

macOS: 13.6.6 (22G630)

Alfred: 5.5

Extension: v2024.5

 

Here is a video showing the bug: https://www.dropbox.com/scl/fi/k9vqtqml3t0bevx59nyd6/Screenshot-2024-03-31-13-08-49.mp4?rlkey=naxx4lpi1e3pnltqewa4revoy&dl=0

 

Here is the console debug:

 

```

[13:09:09.065] ChatGPT / DALL-E[Keyword] Passing output 'a blue house' to Text View
[13:09:09.084] ChatGPT / DALL-E[Text View] Running with argument 'a blue house'
[13:09:09.123] ChatGPT / DALL-E[Text View] Script with argv 'a blue house' finished
[13:09:09.139] ERROR: ChatGPT / DALL-E[Text View] Code 1: /Users/potsky/Dropbox/Alfred/Alfred.alfredpreferences/workflows/user.workflow.4EC95523-5715-421D-ABB4-3D09789FBFDF/dalle: execution error: Error: TypeError: undefined is not an object (near '....js.map(p => p.path.js)...') (-2700)

```

 

Reverting to `dalle` instead of `ci` does not change anything...

 

Any idea? Thanks!

Edited by potsky
Link to comment

Love the workflow!

 

I have a question about how to add some custom prompts to text selected with Universal Actions. What is the best way to do this? Basically, I want to set up some Universal Actions that prepend a command like "summarise this" or "break this into tasks" to the selected text before it is submitted.

 

Is there an easy way to do this?

Link to comment
3 hours ago, Textdriven said:

Love the workflow!

 

Thank you.

 

3 hours ago, Textdriven said:

Basically, I want to set up some Universal Actions that prepend a command like "summarise this" or "break this into tasks" to the selected text before it is submitted.

 

That is correct.

 

3 hours ago, Textdriven said:

Is there an easy way to do this?

 

Make a new workflow with a Keyword Input and connect it to an Arg and Vars Utility with your custom text plus {query} for the new input from the Keyword. Then connect it to a Call External Trigger Output set to open continue_chat from this workflow.
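
For example (purely illustrative, the wording is up to you), the Argument field of the Arg and Vars Utility could contain something like:

```
Summarise this:

{query}
```

Alfred substitutes {query} with whatever you typed after the Keyword, so that combined text is what the Call External Trigger sends to continue_chat.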

Link to comment

Hi. Thanks for the workflow! 

I am getting this error: 

 

We could not parse the JSON body of your request. (HINT: This likely means you aren't using your HTTP library correctly. The OpenAI API expects a JSON payload, but what was sent was not valid JSON. If you have trouble figuring out how to fix this, please contact us through our help center at help.openai.com.

 

Content of chat.json:

 

[{"role":"user","content":"write a python snippet to read json from file"},{"role":"user","content":"write a python snippet to read json from file"},{"role":"user","content":"write a python snippet to read json from file"},{"role":"user","content":"We could not parse the JSON body of your request. (HINT: This likely means you aren't using your HTTP library correctly. The OpenAI API expects a JSON payload, but what was sent was not valid JSON. If you have trouble figuring out how to fix this, please contact us through our help center at help.openai.com."},{"role":"user","content":" "}]

 

Alfred: 5.5

Extension: v2024.5

macOS: 14.3 (23D56)

Link to comment

[08:59:18.157] ChatGPT / DALL-E[Keyword] Processing complete

[08:59:18.160] ChatGPT / DALL-E[Keyword] Passing output '' to Arg and Vars

[08:59:18.161] ChatGPT / DALL-E[Arg and Vars] Processing complete

[08:59:18.161] ChatGPT / DALL-E[Arg and Vars] Passing output '' to Automation Task

[08:59:18.161] ChatGPT / DALL-E[Automation Task] Running task 'Does Path Exist?' with arguments (

    "/Users/dpatlazh/Library/Caches/com.runningwithcrayons.Alfred/Workflow Data/com.alfredapp.vitor.openai/chat.json"

)

[08:59:18.253] ChatGPT / DALL-E[Automation Task] Processing complete

[08:59:18.255] ChatGPT / DALL-E[Automation Task] Passing output 'true' to Conditional

[08:59:18.255] ChatGPT / DALL-E[Conditional] Processing complete

[08:59:18.256] ChatGPT / DALL-E[Conditional] Passing output 'true' to Arg and Vars

[08:59:18.256] ChatGPT / DALL-E[Arg and Vars] Processing complete

[08:59:18.256] ChatGPT / DALL-E[Arg and Vars] Passing output '' to Text View

[08:59:18.275] ChatGPT / DALL-E[Text View] Running with argument ''

[08:59:18.425] ChatGPT / DALL-E[Text View] Script with argv '' finished

[08:59:18.426] ChatGPT / DALL-E[Text View] {"response":"### write a python snippet to read json from file\n\n[Answer Interrupted]\n\n","behaviour":{"scroll":"end"}}

[08:59:31.481] ChatGPT / DALL-E[Text View] Processing complete

[08:59:31.483] ChatGPT / DALL-E[Text View] Passing output '' to Run Script

[08:59:31.544] ERROR: ChatGPT / DALL-E[Run Script] /Users/dpatlazh/Library/Caches/com.runningwithcrayons.Alfred/Workflow Scripts/7EFC000B-37E3-418B-A17E-BA7BFF2398E9: execution error: Error: TypeError: undefined is not an object (evaluating 'readChat(chatFile).findLast(message => message["role"] === "assistant")["content"]') (-2700)

[08:59:31.545] ChatGPT / DALL-E[Run Script] Processing complete

[08:59:31.545] ChatGPT / DALL-E[Run Script] Passing output '' to Transform

[08:59:31.545] ChatGPT / DALL-E[Transform] Processing complete

[08:59:31.545] ChatGPT / DALL-E[Transform] Passing output '' to Copy to Clipboard

[08:59:37.569] ChatGPT / DALL-E[Keyword] Processing complete

[08:59:37.573] ChatGPT / DALL-E[Keyword] Passing output '' to Arg and Vars

[08:59:37.574] ChatGPT / DALL-E[Arg and Vars] Processing complete

[08:59:37.574] ChatGPT / DALL-E[Arg and Vars] Passing output '' to Automation Task

[08:59:37.574] ChatGPT / DALL-E[Automation Task] Running task 'Does Path Exist?' with arguments (

    "/Users/dpatlazh/Library/Caches/com.runningwithcrayons.Alfred/Workflow Data/com.alfredapp.vitor.openai/chat.json"

)

[08:59:37.587] ChatGPT / DALL-E[Automation Task] Processing complete

[08:59:37.590] ChatGPT / DALL-E[Automation Task] Passing output 'true' to Conditional

[08:59:37.590] ChatGPT / DALL-E[Conditional] Processing complete

[08:59:37.590] ChatGPT / DALL-E[Conditional] Passing output 'true' to Arg and Vars

[08:59:37.590] ChatGPT / DALL-E[Arg and Vars] Processing complete

[08:59:37.590] ChatGPT / DALL-E[Arg and Vars] Passing output '' to Text View

[08:59:37.638] ChatGPT / DALL-E[Text View] Running with argument ''

[08:59:37.707] ChatGPT / DALL-E[Text View] Script with argv '' finished

[08:59:37.708] ChatGPT / DALL-E[Text View] {"response":"### write a python snippet to read json from file\n\n[Answer Interrupted]\n\n","behaviour":{"scroll":"end"}}

[08:59:45.173] ChatGPT / DALL-E[Text View] Running with argument 'write a python snippet to read json from file'

[08:59:45.325] ChatGPT / DALL-E[Text View] Script with argv 'write a python snippet to read json from file' finished

[08:59:45.325] ChatGPT / DALL-E[Text View] {"rerun":0.1,"variables":{"streaming_now":true,"stream_marker":true},"response":"### write a python snippet to read json from file\n\n[Answer Interrupted]\n\n### write a python snippet to read json from file\n\n"}

[08:59:45.432] ChatGPT / DALL-E[Text View] Running with argument 'write a python snippet to read json from file'

[08:59:45.562] ChatGPT / DALL-E[Text View] Script with argv 'write a python snippet to read json from file' finished

[08:59:45.563] ChatGPT / DALL-E[Text View] {"rerun":0.1,"variables":{"streaming_now":true},"response":"…","behaviour":{"response":"append"}}

[08:59:45.672] ChatGPT / DALL-E[Text View] Running with argument 'write a python snippet to read json from file'

[08:59:45.813] ChatGPT / DALL-E[Text View] Script with argv 'write a python snippet to read json from file' finished

[08:59:45.815] ChatGPT / DALL-E[Text View] {"response":"[We could not parse the JSON body of your request. (HINT: This likely means you aren't using your HTTP library correctly. The OpenAI API expects a JSON payload, but what was sent was not valid JSON. If you have trouble figuring out how to fix this, please contact us through our help center at help.openai.com.)]  \n(Mon, 01 Apr 2024 15:59:45 GMT)","behaviour":{"response":"replacelast"}}

[09:46:45.180] ChatGPT / DALL-E[Keyword] Processing complete

[09:46:45.191] ChatGPT / DALL-E[Keyword] Passing output '' to Arg and Vars

[09:46:45.193] ChatGPT / DALL-E[Arg and Vars] Processing complete

[09:46:45.193] ChatGPT / DALL-E[Arg and Vars] Passing output '' to Automation Task

[09:46:45.194] ChatGPT / DALL-E[Automation Task] Running task 'Does Path Exist?' with arguments (

    "/Users/dpatlazh/Library/Caches/com.runningwithcrayons.Alfred/Workflow Data/com.alfredapp.vitor.openai/chat.json"

)

[09:46:45.208] ChatGPT / DALL-E[Automation Task] Processing complete

[09:46:45.209] ChatGPT / DALL-E[Automation Task] Passing output 'true' to Conditional

[09:46:45.209] ChatGPT / DALL-E[Conditional] Processing complete

[09:46:45.210] ChatGPT / DALL-E[Conditional] Passing output 'true' to Arg and Vars

[09:46:45.210] ChatGPT / DALL-E[Arg and Vars] Processing complete

[09:46:45.211] ChatGPT / DALL-E[Arg and Vars] Passing output '' to Text View

[09:46:45.223] ChatGPT / DALL-E[Text View] Running with argument ''

[09:46:45.308] ChatGPT / DALL-E[Text View] Script with argv '' finished

[09:46:45.316] ChatGPT / DALL-E[Text View] {"response":"### write a python snippet to read json from file\n\n[Answer Interrupted]\n\n### write a python snippet to read json from file\n\n[Answer Interrupted]\n\n","behaviour":{"scroll":"end"}}

[09:46:54.937] ChatGPT / DALL-E[Text View] Running with argument 'write a python snippet to read json from file'

[09:46:55.089] ChatGPT / DALL-E[Text View] Script with argv 'write a python snippet to read json from file' finished

[09:46:55.097] ChatGPT / DALL-E[Text View] {"rerun":0.1,"variables":{"streaming_now":true,"stream_marker":true},"response":"### write a python snippet to read json from file\n\n[Answer Interrupted]\n\n### write a python snippet to read json from file\n\n[Answer Interrupted]\n\n### write a python snippet to read json from file\n\n"}

[09:46:55.193] ChatGPT / DALL-E[Text View] Running with argument 'write a python snippet to read json from file'

[09:46:55.310] ChatGPT / DALL-E[Text View] Script with argv 'write a python snippet to read json from file' finished

[09:46:55.320] ChatGPT / DALL-E[Text View] {"rerun":0.1,"variables":{"streaming_now":true},"response":"…","behaviour":{"response":"append"}}

[09:46:55.417] ChatGPT / DALL-E[Text View] Running with argument 'write a python snippet to read json from file'

[09:46:55.557] ChatGPT / DALL-E[Text View] Script with argv 'write a python snippet to read json from file' finished

[09:46:55.565] ChatGPT / DALL-E[Text View] {"response":"[We could not parse the JSON body of your request. (HINT: This likely means you aren't using your HTTP library correctly. The OpenAI API expects a JSON payload, but what was sent was not valid JSON. If you have trouble figuring out how to fix this, please contact us through our help center at help.openai.com.)]  \n(Mon, 01 Apr 2024 16:46:55 GMT)","behaviour":{"response":"replacelast"}}

[10:21:13.170] Logging Started...

Link to comment

Could you provide a short video with the debugger open? In it, please also make sure to start a new conversation with ⌘↩ so I can see it happen with a clean slate and better see the timings of the actions, to see if I can gather the cause.

Link to comment

Please try this:

  1. Delete the workflow.
  2. Delete ~/Library/Application Support/Alfred/Workflow Data/com.alfredapp.vitor.openai
  3. Delete ~/Library/Caches/com.runningwithcrayons.Alfred/Workflow Data/com.alfredapp.vitor.openai
  4. Install this version.

Let me know how you get on. If it still fails, I’ll need the new debugger output and the chat.json file (not the contents, but the actual file).

Link to comment

Funnily enough, I don’t know. Your report was the first one I ever got for this issue. But what I have locally already has a ton of changes from the current release version, so trying to reproduce your issue would be extra hard since it’s effectively a different workflow. So I finished up the changes I was making today, cleaned it up a bit, and gave you an early version, thinking that whatever it was might’ve been addressed already. Apparently it was! Deleting the directories was to be extra sure there wasn’t some other interim thing causing the problem (that’s still a possibility). So whatever you were facing ends up no longer mattering, since the new release will be closer to what you have now.

Link to comment
On 4/1/2024 at 7:00 AM, vitor said:

Make a new workflow with a Keyword Input and connect it to an Arg and Vars Utility with your custom text plus {query} for the new input from the Keyword. Then connect it to a Call External Trigger Output set to open continue_chat from this workflow.

Hi Vitor, thanks for creating this workflow - opens up so many new ways to leverage AI!!

 

I have a question about this idea. I'm aiming to create several workflows that apply specific instructions to the given text, e.g. I'm trying to create a workflow that takes the selected text and improves the writing based on a custom prompt/instructions.

 

Using your guidance, I've managed to get it working. However, I wanted to ask if there's a way to display only the OpenAI output, without showing the entire prompt I entered in the Args and Vars step? I've included what I'm seeing below.

 

[Bonus] Is there also a way to include some text to display before the output from ChatGPT? e.g.

"Here's the improved text:

<chatgpt response>"

 

 

[Screenshots attached: Alfred Preferences.png, chat window with prompt.png]

Edited by Alfred0
Link to comment

I'm really enjoying this ChatGPT workflow so far.

 

However, I'm curious if anyone else is concerned about the cost of usage? I've noticed that my prompt queries in Alfred's ChatGPT workflow are costing me 3x to 5x more than what I experienced with my former ChatGPT app(s).

Link to comment
On 4/7/2024 at 4:00 AM, Alfred0 said:

I have a question about this idea. I'm aiming to create several workflows that apply specific instructions to the given text e.g. I'm trying to create a workflow that takes the selected text and improves the writing based on a custom prompt/instructions.

 

@Alfred0 — you can have a look at Kiki (https://github.com/afadingthought/kiki-ai-workflow), which supports different profiles and I think is designed more for your use case. I also use different system prompts and variables to "guide" the LLM (in my case using BetterTouchTool, but the underlying idea is the same: have an LLM for editing, one for creative writing, one for coding support or geek stuff, etc.). I believe @vitor wants to keep this workflow "lean'n'clean" in terms of the core feature set, and the great thing about Alfred is how easy it is to modify workflows for specific purposes 😍

Edited by iandol
Link to comment
4 hours ago, Gold said:

I'm really enjoying this ChatGPT workflow so far.

 

However, I'm curious if anyone else is concerned about the cost of usage? I've noticed that my prompt queries in Alfred's ChatGPT workflow are costing me 3x to 5x more than what I experienced with my former ChatGPT app(s).

 

Don't forget there are local LLMs, which, while less powerful, are totally free to use (and easier to customise), and also other "wrapper" tools which give you more flexibility in terms of what to use as the model backend (like https://openrouter.ai/).

 

I don't use OpenAI myself, but I assume costs increase as the message context grows? If so, one way to keep costs down could be to "prune" the previous message context, though that does make the chat less accurate. A way to do that already is to start a new chat whenever you don't really need the previous message context.
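
As a rough sketch of the pruning idea (hypothetical; I don't know how this workflow stores or sends its context, but assuming a chat.json-style array of {role, content} messages like the one posted above):

```
// Hypothetical sketch: keep only the most recent turns as context before sending.
// Assumes `messages` has the [{"role": …, "content": …}, …] shape seen in chat.json.
function pruneContext(messages, keep = 6) {
  // Preserve a leading "system" prompt if there is one, then the last `keep` turns.
  const system = messages.length && messages[0].role === "system" ? [messages[0]] : [];
  const rest = system.length ? messages.slice(1) : messages;
  return system.concat(rest.slice(-keep));
}
```

Whatever gets pruned is simply no longer available as context, which is the accuracy trade-off mentioned above.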

Edited by iandol
Link to comment

Thanks for your insight @iandol. Those are both sensible approaches, but I'm not interested in local LLMs at this time. It hadn't occurred to me to 'prune' with new chats, as I usually just allow the chat logs to accumulate. It's possible that the extended chat logs have been a factor in the increased API costs. I'll experiment with this method for a few weeks and observe whether it helps reduce expenses.

Link to comment

Do you see a log of your individual request activity on the OpenAI accounts page? With OpenRouter (screenshot below), each request is shown with the tokens sent and received and the costs (in this case Mistral is an open-source and therefore free-to-use model). That could help you work out whether pruning may help.

 

[Screenshot: OpenRouter activity log]

Link to comment
On 4/7/2024 at 9:20 PM, Gold said:

However, I'm curious if anyone else is concerned about the cost of usage? I've noticed that my prompt queries in Alfred's ChatGPT workflow are costing me 3x to 5x times higher than what I experienced with my former ChatGPT app(s).

 

@Gold If you open the workflow's configuration, which models are you using? You can see the pricing of OpenAI's different models on this page:

https://openai.com/pricing#language-models

Link to comment

Workflow Version: v2024.7

Alfred Version: 5.5 [2257]

macOS Version: Sonoma 14.4.1

 

I installed the workflow and put in my API key. When I toggle Alfred and type the keyword (chatgpt), nothing happens. Pressing Enter doesn't change anything.

 

[Screenshot attached]

Link to comment

I am unable to use GPT-4 despite having API access to it.

 

[09:04:37.890] ChatGPT / DALL-E[Text View] Script with argv 'is this gtp-4?' finished
[09:04:37.893] ChatGPT / DALL-E[Text View] {"rerun":0.1,"variables":{"streaming_now":true},"response":"As of my last update in April 2023, I am based on GPT-3 technology. If there have been new developments or releases after that time, I would not be able to provide information or features related to those updates, including a potential GPT-4 model. OpenAI periodically updates their models and offerings, so for the most current information on what technology is","behaviour":{"response":"replacelast","scroll":"end"}}
[09:04:37.996] ChatGPT / DALL-E[Text View] Running with argument 'is this gtp-4?'

 

Does anyone have any advice? 

Link to comment

@naukc You need to install the Automation Tasks.

 

@3point ChatGPT (and any other LLM) doesn’t “know” anything; it strings text together. Check OpenAI’s own models page: GPT-4’s cut-off date is April 2023, while GPT-3’s cut-off date is September 2021. So the answer you were given is contradictory: it can’t both have a cut-off date of April 2023 and be GPT-3. In other words, you are using GPT-4. Asking it what model it is is particularly unreliable; I’ve been reproducing that wrong answer for quite a while.

Link to comment
On 4/9/2024 at 1:10 AM, Vero said:

 

@Gold If you open the workflow's configuration, which models are you using? You can see the pricing of OpenAI's different models on this page:

https://openai.com/pricing#language-models

 

Thanks @Vero. Yes, I'm aware of their pricing page and token costs. I'm just still a bit confused as to why it's costing significantly more to use ChatGPT through Alfred compared to other apps. The model I'm using on Alfred is GPT-4 Turbo (preview).

 

Link to comment

@Gold As @iandol said in his earlier reply, cost increases as the chat grows, so if you're using a long-term chat window, extra tokens will be used because the previous conversation is included as context in the chat.

 

If you don't need the context of your previous questions, use Cmd + Return to clear the chat and start with a fresh slate, which will ensure you're not using additional tokens in the background for context.

 

In terms of costs, GPT-4 Turbo is $10 per million input tokens, whereas GPT-3.5 Turbo is $0.50 per million input tokens, so it may not only depend on how much context is being included, but also on which model is being used by the other services you're using.
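
As a rough illustration (counting input tokens only, at the prices above): a chat that has accumulated 10,000 tokens of context costs roughly 10,000 × $10 ÷ 1,000,000 = $0.10 per message on GPT-4 Turbo, versus about $0.005 on GPT-3.5 Turbo, and that cost is incurred again on every follow-up because the whole context is resent.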

Edited by Vero
Corrected my wording, pricing is on a tokens basis, not a queries basis
Link to comment
