There is a new paradigm in the field of Natural Language Processing, known as Prompt Engineering, that enables anyone to create AI without writing code. With the Mantium no-code AI building platform, you can write instructions for a large language model and essentially program AI in natural language. This gives you the power to build and deploy text applications without writing a single line of code.
The field of natural language processing (NLP) is rapidly changing as new approaches to solving language tasks emerge. For example, instead of gathering a dataset large enough to train a model from scratch, we can now use just a few data points to interact with pre-trained deep learning models in zero-shot or few-shot learning settings.
This paradigm shift is called Prompt Engineering, and it is all about how we interact with these deep learning models. Through examples and patterned formats, called prompts, we can solve a wide variety of language-based tasks.
Prompt: A piece of text that contains a description of a language task, or examples and patterns for it. A prompt guides the language model to give a favorable, contextual response as a solution to the task. You can think of it like a game of charades, where you provide the players with information (the prompt) so they can guess the secret word correctly. A prompt can take two forms: zero-shot and few-shot.
Zero-shot – A prompt with no examples; we simply tell the model what to do. For example:
Write a blog post about a topic: [topic]
Few-shot – A prompt containing n examples (n-shot). For example:
Write a blog post about [topic]:
Blog Post: [content]
###
Write a blog post about [topic]:
Blog Post: [content]
###
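The two prompt shapes above can be sketched as plain string templates in Python. The helper names below are illustrative only, not part of any Mantium API:

```python
def zero_shot_prompt(topic):
    # Zero-shot: a bare instruction, no examples.
    return f"Write a blog post about a topic: {topic}\nBlog Post:"

def few_shot_prompt(examples, topic):
    # Few-shot: n worked examples separated by "###", then the new
    # task left open-ended for the model to complete.
    parts = []
    for example_topic, content in examples:
        parts.append(
            f"Write a blog post about {example_topic}:\n"
            f"Blog Post: {content}\n###"
        )
    parts.append(f"Write a blog post about {topic}:\nBlog Post:")
    return "\n".join(parts)
```

Either string would then be sent to a language model, which continues the text from where the prompt leaves off.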
AI developers and innovators can use these approaches to build natural language applications and test ideas quickly. This shift has changed how complex text applications are prototyped: with very little effort, we can now build innovative applications in a fraction of the time.
You might be wondering what exactly a large language model is. This is an important concept that explains how it is possible to build text applications without writing code.
GPT-3, by OpenAI, is probably the most well known pre-trained large language model (LLM), since it was the largest and most powerful when it was released. GPT-3 was trained on a vast amount of text information – Wikipedia, books, blogs, articles from the internet – and has about 175 billion machine learning parameters. Because of the size of the model and the scope of its training, LLMs can predict the continuation of a text sequence extremely well, which extends to some very sophisticated text generation. As we guide GPT-3 and other large language models with prompts, we can set them up to solve a variety of language-based tasks.
There are several more large language models on the market, both proprietary and open source, with the incredible ability to generate text just like GPT-3. For example, AI21 Labs’ Jurassic-1 model has 178 billion parameters, 3 billion more than GPT-3. For Jurassic-1, the training dataset included 300 billion tokens from English-language websites such as Wikipedia, news magazines, StackExchange, and OpenSubtitles.
Pre-training a model means that the training is done once; the model can then be used again and again without ever having to be retrained. You don’t have to build the model from scratch or retrain it when your intended task changes. By accessing GPT-3 via API and leveraging prompt engineering, we can solve a wide variety of tasks such as text generation, classification, semantic search, and sentiment analysis, to name a few.
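To make the API idea concrete, here is a minimal sketch of how a completion request for a sentiment task might be assembled. The body shape, model name, and parameter defaults are assumptions modeled on completion-style APIs, not an exact Mantium or OpenAI contract:

```python
import json

def completion_request(prompt, model="text-davinci-002",
                       max_tokens=64, temperature=0.0):
    # Build the JSON body of a completion-style API call. The model
    # name and the parameter defaults here are illustrative.
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    })

# Sentiment analysis framed as prompt engineering: describe the task,
# then let the model continue the text with a label.
body = completion_request(
    "Classify the sentiment of the review as Positive or Negative.\n"
    "Review: The setup took five minutes and it just worked.\n"
    "Sentiment:"
)
```

The same request shape, with a different prompt, covers classification, semantic search, or text generation; only the prompt changes, not the model.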
The Mantium no-code AI platform provides access to these models and others, so that you can test the performance of various models from one place. Mantium also hosts an open-source model, GPT-J, allowing for quick and easy development.
While LLMs are now widely accessible, they can still be difficult to use. Mantium eliminates that barrier, offering a significant contribution to the democratization of AI. As a result, the future of AI development will be different from the highly technical process of the past, and will allow everyone to quickly build AI applications irrespective of their background.
Writing prompts and setting parameters is how you’ll build no-code AI text applications and deploy prototypes to a customizable web page application. The AI Manager in the Mantium platform houses all the tools needed to create your AI applications. There are Prompts, Revision History, Intelets, Files, and Fine-Tuned Models in the AI Manager Menu. We’ll cover these below.
You can explore prompt changes with Revision History, taking the worry out of making prompt adjustments.
As complexity increases, you might need to chain more than one prompt together and execute them sequentially. With Mantium’s Intelets, the output of Prompt 1 can be fed in as the input of Prompt 2. This creates an execution pipeline that handles complex workflows.
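The chaining idea can be sketched as a simple loop. Here `call_model` is a placeholder for a real provider call, and the function names are illustrative, not Mantium’s API:

```python
def call_model(prompt):
    # Stand-in for a real LLM call; a production pipeline would send
    # the prompt to a provider endpoint and return its completion.
    return f"[model output for: {prompt}]"

def run_pipeline(templates, user_input):
    # Execute prompts sequentially: each prompt's output becomes the
    # next prompt's input, forming an execution pipeline.
    text = user_input
    for template in templates:
        text = call_model(template.format(input=text))
    return text
```

For example, a two-step pipeline might first summarize a document and then translate the summary, without the user ever seeing the intermediate output.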
As an improvement on the few-shot learning approach, fine-tuning lets you train a model with more custom data than can fit into a prompt. This approach helps you save costs while obtaining better responses for many tasks.
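Fine-tuning starts with preparing training examples. Below is a minimal sketch using the widely used JSON Lines format, one prompt/completion pair per line; exact field names vary by provider, so treat these as an assumption:

```python
import json

def to_jsonl(pairs):
    # Serialize (prompt, completion) pairs as JSON Lines: one
    # training example per line, a common fine-tuning data format.
    return "\n".join(
        json.dumps({"prompt": prompt, "completion": completion})
        for prompt, completion in pairs
    )

training_data = to_jsonl([
    ("Classify: The app crashed twice today. ->", " Negative"),
    ("Classify: Support resolved my issue fast. ->", " Positive"),
])
```

Because the examples live in a training file rather than in the prompt, each request stays short, which is where the cost savings come from.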
With Mantium, you can quickly access these LLMs. GPT-J is an open-source large language model that is available to use right now inside every account. For the other models, you’ll start by adding an API key. Here is a detailed tutorial on how to get and add API keys.
Building AI applications on the Mantium platform without writing code is all about creating prompts. To start creating, you’ll navigate to the Prompts section in the AI Manager menu and click the Prompt button. On the Prompts page, you’ll input the necessary information for creating a prompt, along with the parameter settings. Then you can deploy with a click of a button and test out your application.
The Prompts page is divided into three sections: Basic Information, Provider, and Testing.
Identify your application by giving it a descriptive name and a detailed description. With these, you can quickly spot your text applications on the Prompts page, which lists the prompts you have created. Other fields include Tags, Security Policies, and Intelets; these are optional, and you can build applications without them.
After connecting to an AI provider, you can select one for your use case. This section lists the available AI providers, including the Mantium model, and the remaining sub-sections are determined by the provider you select. Each provider gives access to its model through endpoints, so after choosing a provider you’ll choose the endpoint that fits your use case. Endpoints represent natural language tasks such as text generation, classification, question answering, and many more. Another important sub-section is Endpoint Settings, where you will provide prompt information determined by your use case. We recommend reading our detailed guide to understand how to set provider information for each use case.
It’s time to measure the performance of your application. Enter your input in the input field of the Test Prompt section. You can test your prompt without providing anything in the input field, but your results will depend on the structure of the prompt text and the type of task.
After testing is done, you can quickly deploy your application with the One-Click Deploy feature. Mantium allows users to prototype AI applications rapidly. Prototypes may be immediately tested, reviewed, and improved, whether on your business’s public shared prompts page or privately. With single-click deployment, Mantium can reduce the production timeline from months to weeks or even days.
The above is a high-level guide to building AI text applications without writing code. For step-by-step directions, we recommend checking out our Developer Docs, where you will find more examples of how to build with the Mantium platform.
Mantium is constantly working on leading innovation by building an ecosystem of products that will make it easy for everyone to build scalable and secure AI applications. From AI-driven process automation to stringent safety and compliance settings, our complete platform provides all of the tools necessary to develop and manage robust, responsible AI applications at scale.