Posts posted by yohasebe

  1. Thanks!

     

    Currently, the closest thing to your suggestion is the Tab -> Enter sequence. Pressing the Tab key shifts focus from the text area to the "Send Message" button; once the focus is on the button, pressing Enter sends the query.

  2. An Alfred workflow that removes duplicate Finder tabs and windows and arranges them into a single or dual-pane 👓 layout for a cleaner desktop experience

     

    https://github.com/yohasebe/finder-unclutter

     

    finder-unclutter@2x.png

     

    Finder Unclutter does the following all at once (a rough sketch of the merge step follows the list):

    • Unminimize all Finder windows
    • Unduplicate all Finder tabs
    • Merge all Finder windows and tabs
    • Organize Finder in a single/dual pane layout
    • Position Finder in a specified area of the desktop
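
    To give an idea of what is happening under the hood (this is not necessarily how the workflow itself is implemented), the merge step can be reproduced from a shell by clicking Finder's own Window > Merge All Windows menu item through System Events. This assumes the English menu item name and requires Accessibility permission for the calling app:

    # One way to trigger Finder's built-in "Window > Merge All Windows" command
    osascript \
      -e 'tell application "Finder" to activate' \
      -e 'tell application "System Events" to tell process "Finder" to click menu item "Merge All Windows" of menu "Window" of menu bar 1'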

     

    finder-unclutter.gif

     

    screenshot.png

    dual-pane-position.png

     

    single-pane-position.png

     

    config.png

  3. That is correct if you have installed only "mpv" and "sox" and not the others listed on the GitHub page.

     

    If you have installed all of the dependencies listed on the workflow's GitHub README page, the following will remove them all:

     

    brew uninstall pandoc mpv sox jq duti
    brew autoremove
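
    If you are not sure which of these you actually installed, you can check first (Homebrew prints one formula per line when its output is piped):

    brew list --formula | grep -E '^(pandoc|mpv|sox|jq|duti)$'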

     

    Thank you for your kind words!

  4. Hi, thank you for your questions. I'm sorry for not getting back to you sooner!

     

    Quote

    However, I don't use speech-to-text and text-to-speech. As such I prefer not to install the additional dependencies if given a chance. Are these dependencies a must-install or are they optional?

     

    Among the dependencies, only `pandoc`, `jq`, and `duti` are required; `mpv` and `sox` are optional and can be skipped if you do not use speech-to-text and text-to-speech. In version `2.8.6`, an option has been added in the settings to hide the speech-related buttons on the web UI.
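
    In that case, a minimal setup is just the three required Homebrew formulae (add `mpv` and `sox` later only if you decide to use the speech features):

    brew install pandoc jq duti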

     

    Quote

    The other thing I'd like to explore is how to save the chat history into Drafts. I understand that there is an export button which exports history.json, but I am not sure how this can be imported into Drafts. I don't suppose this history is saved into OpenAI chat history, right? Most API calls are not captured by the OpenAI website.

     

    Exporting chat data from this workflow only saves the data to the history.json file, which is simply a plain-text file in JSON format.

     

    I have not thought about using history.json for purposes other than loading chat data at a later time. However, since the file consists of just a simple JSON object, other software programs can likely handle it. Please give it a try!
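
    For example, you could inspect the file and pull the text out on the command line with `jq`. The `.messages[]` filter below is only a guess at the structure; check the actual keys in your history.json first:

    # Inspect the structure of the exported file
    jq . history.json | head -n 40

    # Hypothetical extraction -- adjust the filter to the real keys
    jq -r '.messages[] | "\(.role): \(.content)"' history.json > chat.txt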

  5. Hi, I am the author of this workflow for using OpenAI's API from Alfred 5. Here are the features:

     

    • Interactive query/chat with GPT-3.5 / GPT-4
    • Image generation using DALL-E API
    • Voice-to-text using Whisper API

     

    GitHub repo: https://github.com/yohasebe/openai-chat-api-workflow
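
    For context, the chat feature is built on OpenAI's chat completions endpoint. Stripped of the workflow's plumbing, a single query boils down to something like the following (the model name and prompt are placeholders, OPENAI_API_KEY is your own key, and the trailing jq call is one example of the kind of JSON handling jq is used for):

    curl -s https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello!"}]}' \
      | jq -r '.choices[0].message.content'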

     

    Dependencies (a Homebrew install one-liner follows the list):

    • OpenAI API key
    • Pandoc (available on Homebrew) for converting Markdown to HTML
    • Sox (available on Homebrew) for recording audio from a microphone
    • jq (available on Homebrew) for processing JSON data
    • duti (available on Homebrew) for using Google Chrome instead of Safari (optional)
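
    To install the Homebrew tools in one go (leave out anything you do not need, e.g. sox if you skip voice input):

    brew install pandoc sox jq duti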

    image.png

     

    web-interface.png

    chat.png

    prompt-enhancement.png

  6. @vitor Thank you for the great suggestions. I updated the workflow accordingly.

     

    1. Set default search path (`~`).

    2. Fixed typo in the description.

    3. Removed unnecessary Ruby files.

    4. Set `~/Library` to be ignored for better performance.

     

    The intermediary step before the Script Filter is left as is for now. It may not be essential, but I wanted to use the action modifiers with ⌘ and ⌥.

     

    The latest version 1.4.0 has been pushed to the GitHub repo. Thanks again for your great work!

     

  7. @Andrew It does (or appears to) happen no matter what is on the screen. I am attaching an MP4 of what it looks like. The result text is read out 10 seconds after the start, but the messages above are also read out before and after it.

     

    https://user-images.githubusercontent.com/18207/222877765-5a67a1d5-6348-48ce-8bd7-7be084f6c0b5.mp4

     

    I tried sleep times of 1.5, 2.0, and 2.5 seconds on my M1 MacBook Air and found that 1.5 seconds was too short, so I think I'll go with 2.5 seconds. The choice of sleep time also does not seem to affect the extra announcements.

  8. @Andrew Thank you for your valuable advice. I have confirmed that adding a delay workflow object works. I'm sorry for filing this topic in the bug category even though it is not a bug!

     

    There is one more thing that is not clear to me. VoiceOver reads the following a few times and finally speaks the result text:

     

    "Currently you are on a system dialog."

    "In system dialog, content is empty."

     

    This is another problem reported by the blind user of the workflow. Any ideas?

  9. Hi,

     

    I have developed a workflow for using OpenAI's text completion API in Alfred.

     

    OpenAI Text-Completion Workflow for Alfred

    https://github.com/yohasebe/openai-text-completion-workflow

     

    A blind user of this workflow has reported that its "speak" feature does not work. I have checked this myself and found that when VoiceOver is activated, neither VoiceOver nor the default speech synthesizer in macOS will perform speech synthesis.

     

    Could you please take a look at this option and fix the problem? When VoiceOver is not activated, the "speak" feature works as expected.

     

    PS: It is great that Alfred can be a very useful tool for the blind.

     

    2023-02-28_19-27-40.png
