19 Comments

Is there a way to tell ChatGPT to only consider the custom instructions when I start my prompt with the verbosity level? I tried adding conditional statements in either the 1st or 2nd box, but nothing worked: ChatGPT starts its response with the Markdown table immediately. In other words, I want ChatGPT to provide its default response if I don't start my prompt with V=x.

That's something I've been going back and forth on. Originally, I had an instruction to ignore the "expert system" when you put your question in parentheses, but it wasn't 100% reliable. I've been working on the next version of AutoExpert, though, and that feature is something I'm prioritizing: a more expanded/intelligent version, so that (a) it auto-infers the verbosity itself if one isn't provided, and (b) it's more intelligent about the preamble, choosing to skip it altogether or automatically choosing which portions would be helpful. It's tough in the limited instruction space that's available, but I'm hopeful that changes to the Google search linking mechanism will give me the character budget to do it. :D

Amazing article and prompts. Kudos for the use of 'Mise en place', too! As someone with a background in professional kitchens (a lifetime ago, but still), I very often use a metaphor for prompt engineering when explaining it to non-techies. That is to say, being a PE is a lot like being a chef. It's a combination of art and science. Once you understand the fundamentals, and with enough practice, designing a prompt is like developing a recipe. The ingredients vary every time, but quality and consistency will keep people coming back for more.

Ha, thanks for noticing that, and for the compliments, Chris! I think your metaphor is great, too👨‍🍳

This is great Dustin. I've been experimenting with it for the last few days with great results.

I have a question about the initial markup table, though. When writing other prompts, I have found it very useful to think "it doesn't know what it's writing until after it has written it", so that forces me to be as precise as possible and also to iterate instead of trying to do everything all in one shot.

I might be misunderstanding things (I read your stuff on attention... I need to re-read it about 5 more times though!), but is it correct that, once it generates content in a single response, it doesn't have the capability to "read" or "remember" the content it just produced within that same response, meaning that once the table (or any content) is generated, it can't reference it later in the same response?

If that is the case, if you want it to consider and act on the content of the table, would it be more effective to have the table generated in one response and then initiate a new answer?

Great question, and one that I should make more clear in a future post.

**As ChatGPT generates text**, the new tokens are added to the input for subsequent tokens. This allows the model to generate coherent and contextually relevant text.

During a completion, ChatGPT's attention mechanisms are continuously evaluating everything in previous messages **as well as the text it just generated**. In fact, the opening table often affects the current generation the most, due to the _recency effect_ that positional encoding contributes to the attention mechanism.
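
To make that concrete, here's a minimal, hypothetical sketch of an autoregressive generation loop (plain Python, not anything from OpenAI; `toy_next_token` is just a stand-in for the model). It shows why content emitted early in a response, like that opening table, is part of the input for everything generated afterward:

```python
# Toy sketch of autoregressive decoding -- NOT the actual ChatGPT implementation.
# The point: every token the model has already produced in this response becomes
# part of the input it attends to when choosing the next token.

def toy_next_token(context: list[str]) -> str:
    # A real model would run attention over every token in `context`;
    # this stand-in just returns a counter so the sketch stays runnable.
    return f"tok{len(context)}"

def generate(prompt_tokens: list[str], max_new_tokens: int = 5) -> list[str]:
    context = list(prompt_tokens)            # custom instructions + prior messages + prompt
    for _ in range(max_new_tokens):
        next_tok = toy_next_token(context)   # attends to the prompt AND its own earlier output
        context.append(next_tok)             # the new token becomes input for the next step
    return context[len(prompt_tokens):]      # the completion the user sees

print(generate(["<preamble", "table", "tokens>"]))
```

So the preamble table isn't "forgotten" within the same response; it sits in the context right alongside your prompt while the rest of the answer is generated.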

Dude.

That's all I can muster so far for a comment.

I want to write something with maybe just a tiny bit more substance than that but my brain is now too occupied battling itself: Team Reread Immediately vs Team Apply Kicked-Knowledge Then Reread Later. This was a freaking masterpiece.

Thanks, dan :D I appreciate hearing that!

This is incredible. Thank you so much! I can now use ChatGPT on a whole other level.

Thank you! Really happy to hear that :)

If you don't mind me asking, even in new chats, oftentimes, ChatGPT starts with the table even though I explicitly start my prompt with V=1. Is this expected behavior? I thought the tabular format should only kick off if V=5. No?

GPT-3.5 is worse at that. 4.0 should adhere to it, though, if the answer really can fit in one sentence. Also, an update (sometime next week) will change the way verbosity and the preamble work, to give folks more choice when using it.

ChatGPT-AutoExpert Custom Instructions.

Key points of the AutoExpert feature:

1. Automatically rewrite questions to make them more precise, enabling better answers from the AI.
2. Add slash commands to easily obtain summaries, ideas, alternative viewpoints, and allow ChatGPT to improve its own responses.
3. Select appropriate frameworks and methods to form high-quality answers.
4. Provide more in-depth, multi-turn conversations without losing details (using GPT-4).
5. No need to toggle prompts on/off for everyday use; also very helpful for coding.
6. Automatically identify the best expert role to answer each question.
7. Eliminate unnecessary disclaimers, providing direct facts and reasoning.
8. Explain the underlying thought process and reasoning for each response.
9. Generate helpful inline links to related topics, avoiding hallucinations.

It seems like hyperlinks don't work anymore. I have tried this with the new v6 Custom GPT and the v5 custom instructions. Oddly enough, the only hyperlink that works is the example "Potassium Sources" one when I do a /help. Any ideas why, and how to fix it?

OK, it seems like a common problem; lots of folks are chatting about it breaking their GPTs. It might be a new restriction to prevent exploits via links.

Dustin, this is amazing. I love your work. Unfortunately, GPT-4 Turbo is fast... but the quality of responses fell through the floor. I've noticed steps 2 and 5 are now skipped every time. Are you running into this? This Reddit thread is where I found you originally, and it points to major degradation using Turbo. https://www.reddit.com/r/ChatGPT/comments/17obmlg/psa_gpt4_is_broken_but_you_can_fix_it_kinda/

Yeah, they're rolling it out using feature flags that enroll a percentage of users each day. The Turbo model is "sharded" (I'll be writing about that soon), which means attention is worse. The aspects of the attention mechanism that I took advantage of are just not as useful in the Turbo model. Good news is that I have a new AutoExpert coming this weekend that improves on the situation and adds some new features that are really useful. If you've got access to Custom GPTs now, you can check it out here: https://chat.openai.com/g/g-LQHhJCXhW-autoexpert-chat

Hi, I love your work. I use ChatGPT and GPT-4 mainly with API access, via heygpt and other custom apps. Can I copy both the "About Me" and the steps into the system prompt?
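
For anyone trying the same thing over the API, here is a rough sketch, assuming the openai Python SDK's chat-completions client; `ABOUT_ME`, `CUSTOM_INSTRUCTIONS`, and the example user message are placeholders, not anything from the article:

```python
# Rough sketch, assuming the openai Python SDK (v1-style client).
# ABOUT_ME and CUSTOM_INSTRUCTIONS stand in for the two custom-instruction boxes.
from openai import OpenAI

ABOUT_ME = "paste the 'About Me' box here"
CUSTOM_INSTRUCTIONS = "paste the instructions/steps box here"

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # Both boxes go into the system message, roughly how ChatGPT injects them.
        {"role": "system", "content": f"{ABOUT_ME}\n\n{CUSTOM_INSTRUCTIONS}"},
        {"role": "user", "content": "V=3 How does attention work in transformers?"},
    ],
)
print(response.choices[0].message.content)
```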

Great CU!

I need informal German output. How would you incorporate that in your instructions?
