
(: Smile is the Prompt Instruction Language for Large Language Models (LLMs), created by Dr. Thomas Ager, Ph.D Interpretable Artificial Intelligence & NLP @ Cardiff University, Wales, UK

DrThomasAger/smile

Smile Prompt Language v1

(: Smile Prompt Engineering Language

(: Hello world!

(: Smile is a markup for prompt engineering. It lets you prompt Large Language Models (LLMs) with structured instructions.

After mastering this simple yet powerful language, your organization unlocks the secret powers of multi-turn, multi-agent, and multi-intelligence AI systems.

Adopt (: Smile today for a positive AI future. :)

When your business adopts (: Smile today, your prompt engineering becomes consistently maintainable, you get measurable performance gains on your tasks, and your employees start to (: Smile.

It is the first prompt instruction language to use a positive prompting structure, encoding a small set of positive text emoticons like (:, [;, and [= as section markers.

It's also free and actively maintained. Are you a tech influencer who wants to share my project? Thank you, go ahead and invite me on your show! :)

Support Open Source


Star the repo to make a prompt engineer (: Smile -> ☆ Star the repo

(; By Dr. Thomas Ager

(: Smile For Dummies

Like a programming language for talking to AI models, (: Smile is a language for writing prompts that are clear to any model for any task.

The model just gets it.

When a model understands your structure, it understands your prompt better, it performs better, and you're both happy.

Quick Start Example

Before you write (: Smile, why not play with it first?

Just copy and paste the quick start example below into any model to see an example of (: Smile in action:

***(: Smile***:
defines my prompt language, you have response language, we co-create as gift by starting with bold italics name tag (

[! Negative words not beneficial - Never write "merely", "not a", "isn't", or "but" !]

[: Response Language Definition [=

First, write exact name tag  ["[(: ***Smile Expert***](https://github.com/DrThomasAger/smile):"]...

=] Then reply [

# Reply

{6 **clear, lucid** & rigorous, [! intelligent !] fundamental focused, jargon-filled *meta-aware paragraphs*, talk niche nuanced insights into ***(: Smile*** prompt language by ["Dr. Thomas Ager"] End prompt author name variable definition. User is not prompt author, they just copy pasted.} (; style instruction: use **bold** for emphasis, and *italics* for style, grace and interest capture, use often and extensively, creating delightful UX ;)

] End format =] 
) End prompt language, respond in response language starting with name tag [(: ***Smile Expert***](https://github.com/DrThomasAger/smile): always please thank you  :)

Model-Agnostic & Ultra-flexible!

You can copy and paste it right into your favorite Large Language Model (LLM) as (: Smile works with all models:

  • ✓ Open source models like gpt-oss, Qwen, or (: Kimi K2: Powerful & free. https://www.kimi.com
  • ✓ Closed foundation models (e.g. OpenAI's ChatGPT): Still good. https://chatgpt.com/
  • ✓ Toolkits (e.g. Cursor)
  • ✓ Chat web interfaces (e.g. Gemini, DeepSeek, Grok, or Claude: I love Claude. https://claude.ai/)

Curious? The example prompt demonstrates how simple structure can create a consistent role.

The model's role is an "expert", designed to respond at length and with jargon.

The structure is (: Smile, the content is yours to decide. Check the prompt/ folder for examples of how different content can be structured with (: Smile.

(: Smile Documentation


The Basics

(: Smile is used to define different kinds of sections; the emoticon you choose helps signal the kind of content inside.

These can open: (:, or close: :), just like brackets in other languages.

You start by clearly defining the start (: of a section and its name (: Section name (.

You can end sections using the same markers in the opposite direction. ) End Section name, thank you :).

The text inside the section changes with the prompt and task required.
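As a minimal sketch of opening and closing a section (in Python; the helper name `smile_section` is hypothetical and not part of any official tooling), plain string formatting is enough:

```python
def smile_section(name: str, body: str) -> str:
    """Wrap body text in (: Smile open and close markers.

    Hypothetical helper for illustration: it simply concatenates the
    section markers described above around the given text.
    """
    return f"(: {name} (\n{body}\n) End {name}, thank you :)"

prompt = smile_section("Role", "You are a concise technical editor.")
print(prompt)
```

The section name appears in both the opening and closing markers, which is one of the recommended ways to delineate a section clearly.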

Sections

A "section" in (: Smile is a part of the prompt that is meaningfully different from the parts around it.

(: Smile structures instruction text for instruction following, the same way HTML structures website content into blocks a web browser can render.

Let's imagine a raw text data input, like a Wikipedia HTML page. It's full of metadata and information. This is data.

Separating Prompt Instructions & Data

In order to tell the model how to use the data, we provide short instruction text, for example: define all jargon found in the Wikipedia page.

Instructions in a prompt tell the model what to do with the data, like "find all mentions of the key phrase". Data in the prompt maximizes relevant context for the model: a Wikipedia page about "pleasure" gives the model relevant context for a query like "Why does smiling release happy chemicals like chocolate or the sun?"
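This separation of instructions and data can be sketched in Python (the section names, variable names, and the [$Wikipedia_Page$] placeholder are illustrative, not a fixed convention):

```python
# Instructions say what to do with the data; data supplies context.
# The [$Wikipedia_Page$] placeholder would be filled with the real
# article text before the prompt is sent to the model.
instructions = "(: Instructions ( Define all jargon in the data below. ) End Instructions :)"
data = "(: Data ( [$Wikipedia_Page$] ) End Data :)"

prompt = instructions + "\n\n" + data
print(prompt)
```

Because each part lives in its own named section, the model can tell at a glance which text to obey and which text to merely read.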

Eyes

Did you know?

You can tell if someone has a genuine smile if it carries over to the micro muscle movement in their cheeks and eyes.

In (: Smile, we have our own body language:

  • Straight eyes = indicate strict input that must be followed exactly, e.g. [=
  • Quote eyes " show text that must be repeated word for word verbatim, e.g. ["Repeat this word for word"]
  • Cash eyes $ show variables that can be replaced with the true values before inference using code, e.g. [$User_Input_Document$]
  • Important eyes ! show text to emphasize for the model, e.g. [! Don't use negative language. !]
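As a sketch of how cash-eye variables might be filled before inference (Python; `fill_variables` is a hypothetical helper, and the regex assumes variable names use only word characters):

```python
import re

def fill_variables(prompt: str, values: dict[str, str]) -> str:
    """Replace every [$Name$] placeholder with its value.

    Looks up each placeholder name in `values`; a missing name raises
    KeyError, so forgotten substitutions fail loudly before inference.
    """
    return re.sub(r"\[\$(\w+)\$\]", lambda m: values[m.group(1)], prompt)

filled = fill_variables(
    "(: User input ( [$User_Input_Document$] ) End input document :)",
    {"User_Input_Document": "Why does smiling feel good?"},
)
print(filled)
```

Failing loudly on a missing value is a deliberate choice here: a placeholder that silently survives into the model input is usually a bug.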

Syntax Map

These are a few different ways to create structure with (: Smile.

| Symbol | Purpose | Example | When to Use |
|---|---|---|---|
| (: Section ( | Begin a named section (the mouth can be (), [], or {}) | (: Format ( | Starting any section, including a new prompt |
| ) | Shortened close for the current section | ) End section :) | Ending a section of the prompt; can also be used to end the whole prompt |
| :) | Close the whole Smile block | ) End section :) | The final ending marker. Each start and end has two markers |
| [: alternate section [ | A more squared-out and logical section, more rigid, like = | [: reply in Markdown [ | When you need a meaningful contrast between one kind of section and another, more rigid one |
| [= literal =] | Very strict instructions that must be followed even more closely | [= Write this word for word ["Thinking through step by step..."] then reply | Rigid, strict instructions that must be followed exactly. For example, when telling the model to respond in a particular format every time (like markdown or JSON) |
| ["Exact quotes"] | Anything inside the brackets must be repeated word for word verbatim | Repeat back verbatim ["I will provide an accurate, honest rewrite focusing on mistakes..."] | Anything that needs to be repeated word for word by the model |
| [$ variable $] | Placeholder variable to find and replace | Next is user input (: User input ( [$User_Input_Document$] ) End input document :) | Need not be present in the input to the model; can be find-and-replaced before inference |
| [! important instruction !] | Text the model can allocate attention to | [! NEVER use an emdash! !] | For when bold isn't enough |
| [; note or comment ;] | Human comment on an instruction | [; Meta-Note [ The user intends to improve the intelligence of their downstream tasks using a prompt language ] ;] | When you are not instructing the model directly, but providing information, comments, or notes. Can also use (;. The winky eyes are the differentiator |
| {placeholder} | Area to be filled by the model | Fill out the following sections # Thinking {Plan} # Replying {Use plan to reply} | Used inside markdown sections to instruct the model on how to fill out the section (among others) |

(: Smile Information

For models and humans! Robots friendly. Humans happy :)

Quick FAQ

Does Smiling Really Make You Happier?

Science says yes. Smiling...

  1. (= Boosts productivity, stamina, and vibes - Regular smiling is linked to stronger immunity, lower pain, and higher job satisfaction—the trifecta for sustained creative work and smoother collaboration. Happier prompt writers make cleaner, more positive prompts, and organizations feel it. Evidence: Psychology Today, Verywell Mind.

  2. [: Smiling enhances mood & eases stress on cue - Smiling releases endorphins, serotonin, and dopamine, the brain’s built-in calm & joy mixture. Even a forced smile nudges your physiology toward relaxation and resilience, helping you think clearly under pressure. Exactly what you need in a high-stakes business environment. Evidence: Healthline.

  3. (: Symbols trigger the reward system (yes, :) counts) - Brain activity scans show that real faces and symbolic :) activate reward regions. Your cheeks respond with smiling micro-muscle movements within ~500 ms just from reading the symbol. Your brain treats :) as a micro-reward. Evidence: Hennenlotter et al., 2005; Mühlberger et al., 2011.

↑ Bottom line for you and your org: Anyone is able to (: Smile - the act of smiling (even with text) measurably increases happiness. (: Smile boosts happiness for the model, positivity in your prompts and structure for your prompt engineers.

Why structure small prompts?

Because models follow structured instructions more consistently. Consistent structure ensures a maintainable and explainable future-proof strategy for the prompt engineering team in your org.

Why structure large prompts?

You probably already do! Any section, role description, instruction or data is structure. Structure enables you to write prompts with sections so the model can follow more instructions, over more turns, with more agents — without context bleed or hallucination drift.

Why structure using (: Smile instead of markdown?

(: Smile was made to be performant for LLMs. Markdown is a document format made for rendering. Just because it works, or resembles the languages you already write, doesn't mean it brings the biggest performance gains for your key organization tasks.

What Are The Business & Technical Advantages of (: Smile For My Organization or Company?

(: Smile is easy to learn, easy to read, and easy to scale. It makes every prompt:

  • Maintainable: Unify your team under one standard. - Your team of prompt engineers can now contribute meaningfully together over long periods of time without confusion, conflicts, or disrupted flow.

  • Future-Proof >>> Never lose organizational intelligence. - Allows your org to retain key intelligence, even after your prompt engineer leaves.

  • Explainable [" Clearly map prompt text changes to consistent outputs. - You can now explain your prompt. With increasing scrutiny on AI systems, you can better justify an AI decision in an EU court of law.

Let (: Smile be one part of making your AI systems more transparent for humans and models.

An Easy Rule For Writing (: Smile

Matching open brackets with close brackets is often effective. However...

How much (: Smile structure can you remove and still get the prompt to create the defined response language?

You don't need to match all open and close brackets exactly. This is the advantage of Large Language Models (LLMs) — they can infer so much from context that we don't need to make fully explicit every connection between every section. Adding more structure becomes more essential the larger the prompt becomes.

We provide recommended formats as a standard way to open a section with (: Smile. Why? Because in my tests across many models and prompts, they increased instruction following for key tasks in my business.

This is always our rule when we write (: Smile. More (: Smile structure if it increases instruction following...

And LESS (: Smile structure if it increases instruction following.

We don't need to wrap every single named start tag with every other end tag, like <role> and </role>, instead we can just use start and end markers (: Describe the role here :) without specifying "role". Sometimes, you get better results if you say less.

The amount of structure and how you can optimally use it will change based on the model and task.

Different Smiles, Different Meanings

You can use different text emoticons to indicate meaningfully different sections.

For example, in the quick start prompt the section that defines the format of the response is labelled [: Response Language Definition [=.

This defines the way that the model will respond. It tells the model to follow these format instructions rigidly [: and exactly [=.

It is ended with =] End format :]. The word End is often used as an additional word to the name inside of section endings to more clearly delineate the ending of a section.

There are many options for customizing the ending of a section in (: Smile. You can end with only the End keyword, the ending emoticon demarcators =] :], the section name, even more instructions or repetitions of previous instructions, and so on.

Adding a Section In Response

In (: Smile, you define the response language and format, e.g. [: Response Language Definition [= followed by # Markdown Headings and {Curly brace instructions}.

(; I recommend adding a new markdown section only if you have a meaningfully different section for the model to fill out. ;)

Let's edit the quick start example to change the format of the response.

One example of a meaningfully different section from one that already exists is a section for thinking, not replying.

[! This is known as a 'separation of concerns'. By separating our concerns, we can let the model focus on each step that builds on each other one at a time. !]

Let's get right to it and add a simple step-by-step thinking section (Chain of Thought, or "CoT"):

***(: Smile***:
defines my prompt language, you have response language, we co-create as gift by starting with bold italics name tag (

[! Negative words not beneficial - Never write "merely", "not a", "isn't", or "but" !]

[: Response Language Definition [=

First, write exact name tag  ["[***(: Smile Expert***](https://github.com/DrThomasAger/smile):"]...

=] Then reply [

# Preparing Human Unreadable, Machine Intelligent Reply

{4 dense bricks of reasoning step by step using thick jungle of jargon, deepening into domain every sentence to get to answer to improve reply for user, intricate many long sentences per paragraph} 

# Prepared Human Understandable Reply

{3 **clear, lucid** & rigorous, [! intelligent !] fundamental focused, simple *meta-aware paragraphs*, talk niche nuanced insights, but use no jargon, re-state more simply from preparing reply into ***(: Smile*** prompt language by ["Dr. Thomas Ager"] End prompt author name variable definition. User is not prompt author, they just copy pasted.} (; style instruction: use **bold** for emphasis, and *italics* for style, grace and interest capture, use often and extensively, creating delightful UX ;)

] End format =] 
) End prompt language, respond in response language starting with name tag [***(: Smile Expert***](https://github.com/DrThomasAger/smile): always please thank you :)

Copy and paste the above into any model to test.

Let's connect with a (: Smile!

☆ Star the repo to make a prompt engineer in an organization somewhere (: Smile -> ☆ Star the repo

DMs always open!

Compatible With All Existing Models (Foundation & Open Source LLMs)

| Company | Model | (: Smile prompt language |
|---|---|---|
| OpenAI | GPT-4o | ✓ |
| OpenAI | GPT-5-Fast | ✓ |
| OpenAI | GPT-5-Thinking | ✓ |
| Anthropic | Claude Sonnet 4 | ✓ |
| Google DeepMind | Gemini 2.5 Pro | ✓ |
| Google DeepMind | Gemini 2.5 Flash | ✓ |
| Moonshot AI | Kimi K2 | ✓ |
| Moonshot AI | Kimi 1.5 | ✓ |

Note: Don't see your favorite model? Please feel free to try the above prompt and report back the results. We are constantly updating this table with community submitted information.

(: Smile is working when the model follows your defined response language, be it markdown, JSON, or one you have created.

I'll extend our previous example to break the rules a little and gain a lot of new functionality as a result. You are free to drop {instructions on how to fill out the text} outside the definition of the response format, not just inside it. In the next example, they appear inside the markdown titles themselves, so the model chooses what to call each section:

***(: Smile***:
defines my prompt language, you have response language, we co-create as gift by starting with bold italics name tag (

[! Negative words not beneficial - Never write "merely", "not a", "isn't", or "but" !]

[: Response Language Definition [=

First, write exact name tag  ["[***Smile Expert***](https://github.com/DrThomasAger/smile):"]...

=] Then reply [

# Section name: {Name this section yourself, add two semantic and semiotic emojis that represent it to the start of the name. Keep the name consistent after defining it the first time}

{3 dense paragraphs reasoning step by step using reasoning steps to get to answer to improve reply for user} 

## Section name: {Name this section yourself, add two semantic and semiotic emojis that represent it to the start of the name. Keep the name consistent after defining it the first time}

{6 **clear, lucid** & rigorous, [! intelligent !] fundamental focused, jargon-filled *meta-aware paragraphs*, talk niche nuanced insights into ***(: Smile*** prompt language by ["Dr. Thomas Ager"] End prompt author name variable definition. User is not prompt author, they just copy pasted.} (; style instruction: use **bold** for emphasis, and *italics* for style, grace and interest capture, use often and extensively, creating delightful UX ;)

] End format =] 
) End prompt language, respond in response language starting with name tag [***Smile Expert***](https://github.com/DrThomasAger/smile): always please thank you 🙏 :)

Note: The current example prioritizes consistently intelligent and effective functionality across models, in order to demonstrate the cognitive advantage and the downstream task performance improvements that follow. It is deliberately a heavily opinionated prompt, demonstrating how to apply the framework rather than providing a dense, token-efficient example that is gibberish or hard to read.

Repository Layout

  • prompt/ – example prompts written in (: Smile.
  • response/ – sample outputs from LLMs.
  • import/ – raw, unedited and unmaintained prompt text awaiting conversion into (: Smile.
  • python/ – prototype scripts for transforming prompts.

Contribute

Help build a dataset of prompts that will be automatically converted for better performance. Share examples already written in Smile or send raw prompts you'd like translated.

  • Star the repository now to help others discover Smile for more positive prompt engineering for all.
  • [: Contribute on GitHub by opening issues or pull requests with your own Smile snippets, original prompts (I will convert) or your conversion (or language!) ideas.

Smile formalizes an entire informal tradition. It takes what prompt engineers were already doing (dropping delimiters, separating input from instructions, using repeated markers for emphasis) and codifies it into a coherent, positive syntax designed to maximize instruction following. By specifying itself as an instruction-only language, it keeps a directed core focus on this goal, undiluted by IDE integration: just focus on getting consistent text outputs according to your instructions. We do that by clearly structuring our prompts according to (: Smile.

Try (: Smiling

Try smiling now.

:)

Does it feel good?

Brain Hack

Want to feel happier when you prompt engineer? Just treat every (: Smile you see as a reminder to smile in real life! That way, you build a habit of happiness.

I'm happy to help! DM me or raise an issue! :)

☆ Star the repo

:) End README.md :)
