
PromptLab
PromptLab is a Raycast extension for creating and sharing powerful,
contextually-aware AI commands using placeholders, action scripts, and
more.

PromptLab allows you to create custom AI commands with prompts that
utilize contextual placeholders such as {{selectedText}}, {{todayEvents}},
or {{currentApplication}} to vastly expand the capabilities of Raycast AI.
PromptLab can also extract information from selected files, if you choose,
so that it can tell you about the subjects in an image, summarize a PDF,
and more.

PromptLab also supports "action scripts" -- AppleScripts which run with
the AI's response as input, as well as experimental autonomous agent
features that allow the AI to run commands on your behalf. These
capabilities, paired with PromptLab's extensive customization options,
open a whole new world of possibilities for enhancing your workflows with
AI.

 - Raycast Store Page
 - My Other Extensions
 - View On GitHub
 - Support Development

Feature Overview

 - Create, Edit, Run, and Share Custom Commands
 - Multiple Command Types (Detail, List, Chat, No-View, and Dialog)
 - Numerous Contextual Placeholders
   - AppleScript, JXA, Shell Script, and JavaScript Placeholders
   - External Data Placeholders (APIs, Websites, and Applications)
   - Custom Placeholders Specified in JSON 
 - Extract Text, Subjects, QR Codes, and more from Images and Videos
 - Quick Access to Commands via Menu Bar Item
 - Import/Export Commands
 - Save & Run Commands as Quicklinks
 - Run Scripts Upon Model Response
 - Execute Siri Shortcuts and use Output in Prompts
 - PromptLab Chat + Autonomous Command Execution Capability
 - Multiple Chats, Chat History, and Chat Statistics
 - Chat-Specific Context Data Files
 - Upload/Download Commands to/from PromptLab Command Store
 - Use Custom Model Endpoints
 - Favorite Commands, Chats, and Models
 - Option to Speak Responses
 - Option to Use Spoken Input 


Top-Level Commands

 - New PromptLab Command
 - My PromptLab Commands
 - PromptLab Command Store
 - PromptLab Chat
 - Manage Models
 - PromptLab Menu Item
 - Import PromptLab Commands


Create Your Own Commands

You can create custom PromptLab commands, accessed via the "My PromptLab
Commands" command, to execute your own prompts acting on the contents of
selected files. A variety of useful defaults are provided, and you can
find more in the PromptLab Command Store.


Placeholders

When creating custom commands, you can use placeholders in your prompts
that will be substituted with relevant information whenever you run the
command. These placeholders range from simple information, like the
current date, to complex data retrieval operations such as getting the
content of the most recent email or running a sequence of prompts in
succession and amalgamating the results. Placeholders are a powerful way
to add context to your PromptLab prompts.

A few examples of placeholders are:

 - {{clipboardText}} - The text content of your clipboard
 - {{selectedFiles}} - The paths of the files you have selected
 - {{imageText}} - Text extracted from the image(s) you have selected
 - {{lastNote}} - HTML of the most recently modified note in Notes.app
 - {{date format="d MMMM, yyyy"}} - The current date with optional formatting
 - {{todayEvents}} - Events scheduled for today
 - {{youtube:[URL]}} - Transcript of YouTube video at URL
 - {{url:[URL]}} - The visible text at the specified URL
 - {{as:...}} - The result of the specified AppleScript code
 - {{js:...}} - The result of the specified JavaScript code
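
For example, a hypothetical prompt that combines a few of these
placeholders (the exact wording is only an illustration) might look
like:

 Today is {{date format="d MMMM, yyyy"}}. Summarize the key points of
 the following text in three bullet points: {{selectedText}}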


Action Scripts

When configuring a PromptLab command, you can provide AppleScript code to
execute once the AI finishes its response. You can access the response
text via the response variable in AppleScript. Several convenient handlers
for working with the response text are also provided, as listed below.
Action Scripts can be used to build complex workflows using AI as a
content provider, navigator, or decision-maker.

Provided Variables:

 - input - The selected files or text input provided to the command.
 - prompt - The prompt component of the command that was run.
 - response - The full response received from the AI.

Provided Handlers:

 - split(theText, theDelimiter)
 - trim(theText)
 - replaceAll(theText, textToReplace, theReplacement)
 - rselect(theArray, numItems)
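
For example, a minimal action script might trim the response, strip
Markdown bold markers, copy the result to the clipboard, and display
it. This is just a sketch that assumes the provided handlers behave as
their names suggest:

 -- Clean up the AI's response and surface it to the user
 set cleanedResponse to trim(response)
 set cleanedResponse to replaceAll(cleanedResponse, "**", "")
 set the clipboard to cleanedResponse
 display dialog cleanedResponse with title "PromptLab Result"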


Custom Configuration Fields

When creating a command, you can use the Unlock Setup Fields action to
enable custom configuration fields that must be set before the command can
be run. You'll then be able to use actions to add text fields, boolean
(true/false) fields, and/or number fields, providing instructions as you
see fit. In your prompt, use the {{config:fieldName}} placeholder,
camel-cased, to insert the field's current value. When you share the
command to the store and others install it, they'll be prompted to fill
out the custom fields before they can run the command. This is a great way
to make your commands more flexible and reusable.
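
For example, for a hypothetical text field named "Target Language", the
camel-cased placeholder would be {{config:targetLanguage}}, and a
prompt could reference it like so:

 Translate the following text into {{config:targetLanguage}}:
 {{selectedText}}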


Chats

Using the "PromptLab Chat" command, you can chat with AI while maki
of features like placeholders and selected file contents. Chat are 
preserved for later reference or continuation, and you can customiz
chat's name, icon, color, and other settings. Chats can have "Conte
Data" associated with them, ensuring that the LLM stays aware of th
files, websites, and other information relevant to your conversatio
Within a chat's settings, you can view various statistics highlight
you've interacted with the AI, and you can export the chat's conten
(including the statistics) to JSON for portability.


Autonomous Agent Features

When using PromptLab Chat, or any command that uses a chat view, you can
choose to enable autonomous agent features by checking the "Allow AI To
Run Commands" checkbox. This will allow the AI to run PromptLab commands
on your behalf, supplying input as needed, in order to answer your
queries. For example, if you ask the AI "What's the latest news?", it
might run the "Recent Headlines From 68k News" command to fulfill your
request, then return the results to you. This feature is disabled by
default, and can be enabled or disabled at any time.


Custom Model Endpoints

When you first run PromptLab, you'll have the option to configure a custom
model API endpoint. If you have access to Raycast AI, you can just leave
everything as-is, unless you have a particular need for a different model.
You can, of course, adjust the configuration via the Raycast preferences
at any time.

To use any arbitrary endpoint, put the endpoint URL in the Model Endpoint
preference field and provide your API Key alongside the corresponding
Authorization Type. Then, specify the Input Schema in JSON notation, using
{prompt} to indicate where PromptLab should input its prompt.
Alternatively, you can specify {basePrompt} and {input} separately, for
example if you want to provide content for the user and system roles
separately when using the OpenAI API. Next, specify the Output Key Path of
the output text within the returned JSON object. If the model endpoint
returns a string, rather than a JSON object, leave this field empty.
Finally, specify the Output Timing of the model endpoint. If the model
endpoint returns the output immediately, select Synchronous. If the model
endpoint returns the output asynchronously, select Asynchronous.

OpenAI API Example:

 Model Endpoint: https://api.openai.com/v1/chat/completions
 Authorization Type: Bearer Token
 API Key: Your API key
 Input Schema: { "model": "gpt-4", "messages": [{"role": "user", 
"content": "{prompt}"}], "stream": true }
 Output Key Path: choices[0].delta.content
 Output Timing: Asynchronous

Anthropic API Example:

 Model Endpoint: https://api.anthropic.com/v1/complete
 Authorization Type: X-API-Key
 API Key: Your API Key
 Input Schema: { "prompt": "\n\nHuman: {prompt}\n\nAssistant: ", "model":
"claude-instant-v1-100k", "max_tokens_to_sample": 300, "stop_sequences":
["\n\nHuman:"], "stream": true }
 Output Key Path: completion
 Output Timing: Asynchronous


Troubleshooting

If you encounter any issues with the extension, you can try the following
steps to resolve them:

 1. Make sure you're running the latest version of Raycast and PromptLab.
I'm always working to improve the extension, so it's possible that your
issue has already been fixed.

 2. If you're having trouble with a command not outputting the desired
response, try adjusting the command's configuration. You might just need
to make small adjustments to the wording of the prompt. See the Useful
Resources section below for help with prompt engineering. You can also try
adjusting the included information settings to add or remove context from
the prompt and guide the AI towards the desired response.

 3. If you're having trouble with PromptLab Chat responding in unexpected
ways, make sure the chat settings are configured correctly. If you are
trying to reference selected files, you need to enable "Use Selected Files
As Context". Likewise, to run other PromptLab commands automatically, you
need to enable "Allow AI To Run Commands". To have the AI remember
information about your conversation, you'll need to enable "Use
Conversation As Context". Having multiple of these settings enabled can
sometimes cause unexpected behavior, so try disabling them one at a time
to see if that resolves the issue.

 4. Check the PromptLab Wiki to see if a solution to your problem is
provided there.

 5. If you're still having trouble, create a new issue on GitHub with a
detailed description of the issue and any relevant screenshots or
information. I'll do my best to help you out!


Output Examples


Useful Resources

 - Placeholders Guide
 - Best Practices for Prompt Engineering with OpenAI API
 - Brex's Prompt Engineering Guide
 - Techniques to Improve Reliability