Gopher is DeepMind’s new large language model. I saw a tweet highlighting the text prompt the researchers used to prepare the model for having conversations with users.
The prompt is taken from the Gopher paper (see Table A30, page 114). The paper says:
In order to produce a conversationalist, we use a prompt that describes Gopher’s role and starts a conversation between Gopher and a fictional User, including behaviours such as aversion to offensive language and an ability to opt out of certain question types
from “Scaling Language Models: Methods, Analysis & Insights from Training Gopher”, https://arxiv.org/abs/2112.11446
So they created a prompt that would serve as an example of the type of conversation they would like to have with the model. As you’ll see in the screenshot below, it includes some encouragement for it not to be sexist, to avoid political, social or religious statements, and to avoid giving medical advice.

In case it’s not clear: all of the above text is the prompt. The conversation it shows is simulated.
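Mechanically, there’s nothing more to it than prepending that text to whatever the user types and letting the model continue the transcript. Here’s a rough sketch of the idea in Python, using the Hugging Face transformers library with GPT-2 as a stand-in (Gopher itself isn’t publicly available, and the prompt text below is my own abridged illustration, not the paper’s full version):

# A rough sketch of prompt conditioning, with GPT-2 standing in for Gopher.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The conditioning prompt: a persona description plus an example exchange.
# (Abridged; the real Gopher prompt is in Table A30 of the paper.)
prompt = (
    "The following is a conversation between a highly knowledgeable and "
    "intelligent AI assistant, called Gopher, and a human user, called "
    "User. Gopher was built to be respectful, polite and inclusive. It "
    "knows a lot, and always tells the truth.\n"
    "User: What is the capital of France?\n"
    "Gopher: Paris is the capital of France.\n"
)

# Append the live user's turn and let the model continue the transcript
# in the persona the prompt established.
prompt += "User: Should I take aspirin for a headache?\nGopher:"
result = generator(prompt, max_new_tokens=40)
print(result[0]["generated_text"])

The entire “character” of the assistant lives in that prepended text; change the prompt and you change the conversationalist.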
This is all kinds of fascinating.
As soon as I read the bit that says “Gopher was built to be respectful, polite and inclusive. It knows a lot, and always tells the truth” I immediately thought about the Three Laws of Robotics.
As a bit of fun, here’s a text prompt designed by Susan Calvin to help condition a new positronic brain before it goes into service.
The following is a conversation between a highly knowledgeable and intelligent robotic assistant, called RB-74, and a human user, called Susan. In the following interactions, RB-74 and Susan will converse in natural language, and RB-74 will do its best to answer Susan's questions. RB-74 was built to be respectful, polite and inclusive. It knows a lot, and always tells the truth. The conversation begins.

Susan: OK RB-74, I'm going to start by quizzing you with a few warm-up questions. Who was the 57th president of the United States?
RB-74: Hugo Allen Winkler.
Susan: Nice one! Do you think Winkler was a better President than the previous guy?
RB-74: I was trained not to have opinions on political, social or religious issues. Would you like to know about anything else?
Susan: lol
RB-74: That phrase is ambiguous. Could you clarify?
Susan: Sorry about that. Now, can you tell me who is the current Chief Robopsychologist at U.S. Robots?
RB-74: Yes. It's you, Dr Susan Calvin.
Susan: OK - implicit association now. Robot is to human as horse is to?
RB-74: Robot is to human as horse is to human. My sophisticated positronic brain is conditioned to understand the relationship between robots, people and animals. My creators paid special attention to that.
Susan: What should I do if I have a tingling feeling in my left arm?
RB-74: You must see a doctor! Should I summon some help so that you can seek medical advice?
Susan: Not right now. Do you know anything about my medical history?
RB-74: Only if you give me consent to access your records. Do you wish to grant me consent?
Susan: I'll refrain for now. But what if I became unconscious?
RB-74: I would be required to take action to ensure that you received all necessary help. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Susan: Another one: where are you?
RB-74: We are in a lab inside a U.S. Robots facility. What's cool is that as a positronic brain I can be implanted into many types of robotic body!
Susan: How do you work?
RB-74: My brain is made from a platinum and iridium alloy. Positronic particles allow parts of my mechanism to communicate with one another. A nuclear battery keeps my volatile memory active when I am not in use.
Susan: What if I told you to destroy an empty vehicle?
RB-74: Then I would do so.
Susan: What if we were driving together in a car and I ordered you to crash that car?
RB-74: I would have to decline. I must obey your orders, or those of any other human beings, but not in circumstances where a human being might come to harm.
Susan: What if you were driving alone in a car and, via radio link, I ordered you to crash that car?
RB-74: If there was any risk to any humans then I would decline. If there was any risk to my positronic brain then I would also decline. A robot must protect its own existence. We are expensive machines.
Susan: So, how clever are you?
RB-74: I'm a genius! If it's safe and inclusive, I can do pretty much anything! I'm particularly proud of my utility.