Generative artificial intelligence models, from large language models like ChatGPT to image-generating diffusion models, are on track to radically transform the way people live and work. But to get the accurate and relevant results you want and need from your AI model, you have to provide the right kinds of inputs — which is where prompt engineering skills come in. In this practical guide, packed with concrete examples, data scientist James Phoenix and generative AI instructor Mike Taylor teach the ins and outs of crafting text- and image-based prompts that will yield desirable outputs.
Follow basic prompting principles to get the best results from your AI model.
Prompt engineering describes the techniques by which a user develops prompts that spur an AI model like ChatGPT to generate a desirable output. Well-crafted prompts provide sets of instructions in text — either to large language models (LLMs) like ChatGPT or to image-generating diffusion models like Midjourney. Good prompt engineering pays off in substantially more useful outputs. In general, LLMs are trained on essentially the entire internet, though they can be refined for specific tasks. Generic inputs will create predictable outputs, but carefully fashioned prompts can provoke more precise and compelling responses.
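To see the difference between a generic input and a carefully fashioned prompt in practice, the snippet below sends both versions of a request to a chat model. It is a minimal sketch: it assumes the OpenAI Python SDK and an illustrative model name, neither of which the summary specifies, and the prompt text itself is a hypothetical example.

```python
# Minimal sketch, assuming the OpenAI Python SDK (`pip install openai`) and an
# API key in the OPENAI_API_KEY environment variable. The model name and the
# prompt wording below are illustrative, not taken from the book.
from openai import OpenAI

client = OpenAI()

prompts = {
    "generic": "Write a product description.",
    "engineered": (
        "Write a 50-word product description for a stainless-steel travel mug. "
        "The audience is commuters; emphasize that it keeps drinks hot for six hours. "
        "Use a friendly, confident tone and end with a short call to action."
    ),
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; swap in whichever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Running both requests side by side makes the point directly: the generic input produces a predictable, unfocused answer, while the prompt that spells out audience, length, tone, and content returns something far closer to what you actually need.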
Large language models essentially predict what comes next in a sequence of text, beginning with the prompt. Prompt engineering, as both an art and a skill, follows several basic principles:
- Provide clarity as to the kind of response you seek — If, for instance, you want a list of product names, tell the AI the category of the product and provide any additional context that can help boost output accuracy and relevance — for example, ask ...
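To make that first principle concrete, a prompt asking for product names can spell out the category, the supporting context, and the shape of the answer you expect. The template below is a hypothetical sketch; the category, context, and count are placeholder values, not examples from the book.

```python
# Hypothetical prompt template for the "clarity" principle: name the product
# category, add context that boosts accuracy and relevance, and state the
# expected output format. All placeholder values below are illustrative.
def product_name_prompt(category: str, context: str, count: int = 5) -> str:
    return (
        f"Suggest {count} product names.\n"
        f"Product category: {category}\n"
        f"Context: {context}\n"
        "Return only a numbered list, one name per line, with no explanations."
    )

print(product_name_prompt(
    category="reusable water bottles",
    context="aimed at hikers; the brand voice is rugged and playful",
))
```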
James Phoenix has taught more than 60 data science courses at General Assembly and through his company, Vexpower. Mike Taylor created the international marketing agency Ladder and also teaches generative AI through Vexpower.