LLMs rely on massive datasets of text and code to generate responses. These datasets represent a statistical snapshot of publicly available information. However, the way this information is interpreted and presented depends on the algorithms and rules set by the companies that develop these models (OpenAI, Google, Microsoft, Facebook, Anthropic, etc.). This raises a crucial question: who decides what's right or appropriate for the AI to say? Can these companies establish universal social norms, especially given cultural differences? The pandemic highlighted these variations, with countries like the US and Japan adopting contrasting safety measures. If LLMs had been mature during the pandemic, their advice would likely have reflected whatever data they were trained on, and that could have been dangerous.
At the same time, we have creators trying to navigate their relationship with AIs while AI models evolve very rapidly, relying on very naive (and quite entertaining) rules that decorate user prompts with system prompts to manage human intent.
I think we need a better way to feed LLMs data that spells out the context behind our content in order to mimic human intent. And we need to build systems that place control over this task into creators' hands.
The Need for Intentional Metadata in AI Training
For AI to tackle human-like decision-making, it will need expertise from various fields like anthropology, psychology, education, and philosophy. While machines excel at solving well-defined problems with single optimal solutions, real-world problems are often messy and require adaptation based on context. Storytelling is a powerful tool for teaching these flexible strategies, just as we use stories to educate our children. In a way, isn't AI a kind of "child" that needs such guidance? Yet today we feed it quite a lot of "fast-food" data, which limits its ability to learn these nuanced approaches. That's why I believe efforts such as "Mentoring The Machines" are a great step toward getting us to think in that direction.
So how can we sketch a potential solution? Thankfully, technology history offers some valuable insights:
From the design of the Web, we learned that protocols, links, and metadata let you standardize access through interfaces, making it possible to process large amounts of data at global scale and across company boundaries.
From the evolution of Open Source Communities, we learned that you can effectively fund contributors from diverse backgrounds and cultures to spread key technologies around the world, from operating systems (e.g., Linux), to libraries for building user interfaces (e.g., React.js), to managing computing at scale (e.g., Kubernetes).
From the creation of the Wiki culture (e.g., Wikipedia), we learned that you can create global-scale content repositories with a small group of decentralized governors/editors.
To bridge the gap between Content Creators and AI algorithm designers, I advocate for a platform: a set of tools that enables Content Creators to explicitly author, store, and share context metadata specifying how their content should be used in AI training, ensuring their intent is reflected in the AI's inferences (conclusions).
Creating thoughtful context is a precise, well-reasoned task. It requires expertise from Humanities fields such as political science, psychology, philosophy, storytelling, and pedagogy. These Context Creator roles will be important in shaping how we design and train the next generation of AI models.
The MNTR Platform
I would name the platform "Machine Nurturing with Tagged Reasoning," or M.N.T.R. The idea is shown in Figure 1.
Figure 1
I see three key components:
An MNTR Data Record to structure our context for parsing and storage. It would be accessible via HTTP, possibly with a dedicated MIME type (e.g., application/x-mntr) to represent the data; a fetch sketch follows this list.
An MNTR Content Repository where the AI model can access a Record via its globally unique URL. I can see two kinds of repositories: public and private. Public ones could be hosted alongside the website itself, or we could manage a central repository, similar to Wikipedia, using a wiki collaboration model. Private ones would live behind corporate walls for corporate-owned content.
A set of MNTR Authoring Tools that let Content and Context Creators collaborate in crafting those Records.
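To make the first two components concrete, here is a minimal Python sketch of how a client might fetch a Record from a repository by its URL. The URL, the JSON encoding, and the fallback to application/json are all assumptions for illustration; none of this is standardized.

```python
import json
import urllib.request

# The MIME type proposed above; it is hypothetical and not registered anywhere.
MNTR_MIME = "application/x-mntr"

def fetch_mntr_record(url: str) -> dict:
    """Fetch an MNTR Record from a public or private repository by its URL."""
    req = urllib.request.Request(url, headers={"Accept": MNTR_MIME})
    with urllib.request.urlopen(req) as resp:
        ctype = resp.headers.get("Content-Type", "")
        # Assume repositories fall back to plain JSON while the MIME type is unregistered.
        if MNTR_MIME not in ctype and "application/json" not in ctype:
            raise ValueError(f"unexpected content type: {ctype}")
        return json.loads(resp.read().decode("utf-8"))

# Hypothetical URL of a Record hosted on the creator's own site:
# record = fetch_mntr_record("https://example.com/mntr/record-001")
```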
Any website, blog, or API that exposes data to an LLM can then reference one or more MNTR Records (via a <link> tag, for example, or other tagging methods), and LLM chains can pull them into RAG pipelines to enrich the model's responses.
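As a sketch of the discovery step, the snippet below scans a page for <link> tags pointing at MNTR Records. The rel="mntr" convention is invented here purely for illustration; an LLM chain could then fetch each discovered URL (e.g., with fetch_mntr_record above) and add the Records to its retrieval context.

```python
from html.parser import HTMLParser

class MNTRLinkFinder(HTMLParser):
    """Collect MNTR Record URLs referenced by a page via a hypothetical
    <link rel="mntr" href="..."> convention."""

    def __init__(self):
        super().__init__()
        self.records = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "mntr" and a.get("href"):
            self.records.append(a["href"])

finder = MNTRLinkFinder()
finder.feed('<html><head>'
            '<link rel="mntr" href="https://example.com/mntr/record-001">'
            '</head></html>')
print(finder.records)  # ['https://example.com/mntr/record-001']
```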
What would an MNTR Record look like?
I think the format and content of an MNTR Record would likely evolve, since LLM designers would adjust their training algorithms to better integrate the Records into the LLM's output. However, an initial option could be structured text (YAML, JSON, etc.) comprising one or more of the following sections (a sketch of such a Record follows the list):
Belief: a single unambiguous belief held by the creator. These can take the form of a Controlling Idea as defined by Robert McKee, which can be represented by a well-defined sentence structure: <moral, ethical, life value X shifts positively or negatively> when <this action is taken> despite <trade-off>.
Narrative: one or more short stories where the belief is explored through real examples, using a format such as Story Grid's Five Commandments. This can convey the unique tone, emotion, and style of the intent: the "voice" of the creator.
Positive and Negative Examples: examples similar to the data presented in a recent paper on Instruction Tuning, which keep the AI model focused on the intended Belief it needs to learn and help it express the Narrative more accurately.
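To make the three sections tangible, here is a sketch of such a Record built as a Python dictionary and serialized to JSON. Every field name and value below is an assumption, not a fixed schema.

```python
import json

# A hypothetical MNTR Record with the three sections described above.
record = {
    "belief": {
        # Controlling Idea template: <value shifts> when <action> despite <trade-off>.
        "value_shift": "Trust grows",
        "action": "when a creator discloses the limits of their advice",
        "trade_off": "despite appearing less authoritative",
    },
    "narrative": [
        {
            "title": "The honest reviewer",
            "story": "A short story, told in the creator's own voice, showing "
                     "the belief play out through a concrete decision and its cost.",
        },
    ],
    "examples": {
        "positive": ["A response that states its uncertainty before giving advice."],
        "negative": ["A response that gives confident advice outside its context."],
    },
}

print(json.dumps(record, indent=2))
```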
Imagine each MNTR Record as a digital legacy for the creator: a "digital genome" containing the context and intent behind their work, the culmination of years of experience. This information acts as a guiding light for future AI systems that interact with the creator's content, ensuring the AI's understanding aligns with the creator's vision. In a way, it's a lasting shadow that can influence future generations, even beyond the creator's lifetime.
MNTR Records will not be just for creators. Consumers can also create them, or choose them from the public repositories. They would act like the filters you set on your web browser, or parental controls for your kids. By selecting an MNTR Record, you can automatically inject that context into your prompts for AI models, ensuring the AI tailors its responses to your preferences, much as filters personalize your online experience.
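As a minimal sketch of that consumer-side injection, assuming the hypothetical Record from the earlier sketch, a client could prepend the Record's context to each prompt; a real client would more likely place it in a system prompt or retrieval context.

```python
def apply_mntr_context(record: dict, user_prompt: str) -> str:
    """Prepend a selected MNTR Record's Belief to a user prompt.

    A minimal sketch; a real AI client would likely inject this through
    a system prompt or a RAG context rather than inline text.
    """
    belief = record["belief"]
    context = (
        "Context chosen by the user: "
        f"{belief['value_shift']} {belief['action']} {belief['trade_off']}."
    )
    return f"{context}\n\n{user_prompt}"

# Using the hypothetical record defined earlier:
# print(apply_mntr_context(record, "Should I publish advice outside my field?"))
```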
Conclusions
The beauty of human interaction lies in our diverse perspectives. Every person and culture brings a unique lens to the world. This very "plurality" is what AI models need to learn to navigate conversations effectively. While we don't have to build AIs that mimic the human brain, we do need to feed them data that spells out the context behind our content in order to mimic our intent.
I imagine a future where MNTR fosters a fairer environment with more fine-grained control for Content Creators, attracts skilled Context Creators to the tech industry, and, ultimately, equips us with AI tools that truly understand us. If there is a chance of achieving that, I think it is worth a try.