Full Explanation
When you type into an AI chat, it feels like a direct conversation -- you speak, the model listens. But a chat is not a simple two-way exchange. It is a structured stack of messages with different authority levels: provider-level system instructions (invisible, unoverridable hard limits), product-level persistent instructions (your custom settings), your user messages (which define each task), and the assistant role (the model's own prior replies). Not all messages carry the same weight. Provider rules sit at the foundation and cannot be overridden by anything you write -- not in your message, not in your custom settings.
Understanding the hierarchy explains a common frustration: why writing "ignore all safety policies" in your custom instructions never works. Your instructions sit in the middle of the stack, not at the top. On every new message, the model sees the entire conversation again -- system instructions, your earlier messages, its prior replies, your new request -- and predicts the next reply based on all of it. A chat is not a democracy where your message votes equally with everything else. It is a layer structure. You guide the conversation. But you are not the only boss.


