SMT007 Magazine

SMT007-Jan2025

What Are the Impacts of Prompt Engineering?

Prompt engineering is an emerging field and a new skill. Prompt engineers program in English instead of computer programming languages such as Python. They use plain words to achieve results, for example, assisting researchers in abstracting essential content from literature, or helping businesses analyze large quantities of documents to summarize them, pull out key points, and highlight company earnings call transcripts. They also fine-tune the prompts that go into an LLM to extract valuable information, and they can analyze and create prompt tools. Prompt engineers can also determine how to evaluate different models via a given prompt or a series of prompts about applications.

Prompt Engineering vs. Fine-tuning

Fine-tuning is primarily based on supervised learning and requires labeled data with specific datasets to improve model performance. It is an expensive process. Prompt engineering pursues a similar goal, but without the need for labeled data. It uses prompting techniques to guide a pre-trained LLM/FM toward more relevant and accurate answers by interacting with the model in natural language through a series of instructions, questions, and statements. Well-curated prompt engineering is a more cost-effective way to adapt pre-trained LLMs and FMs.

Prompt Engineering Techniques and Approaches

One technique is chain-of-thought prompting, which uses a series of well-thought-out questions in a logical or strategic sequence to interact with the model. It breaks a question down step by step and asks the model to check its work as it goes. It works as follows:

Request → Answer → Feedback → Request → Answer → Feedback → Continue fine-tuning

Another technique is the "persona prompt," which tells the model to assume a role. Additionally, one can use new-information prompts, adding information that the LLM might not already know. Through question-refinement prompts, we can ask the model to suggest improved or alternative questions to achieve more refined answers, and we can write elaborate prompts to achieve the desired output, such as in aesthetic imag-
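To make these techniques concrete, the following is a minimal Python sketch of how a persona prompt, a chain-of-thought request, and a feedback/question-refinement step might be composed. The ask_llm helper and the prompt wording are illustrative assumptions, not part of the article or of any particular vendor's API.

```python
# Minimal sketch of the prompting techniques described above. "ask_llm" is a
# hypothetical stand-in for whatever LLM/FM client is actually in use; it is
# not a real library call.
def ask_llm(prompt: str) -> str:
    """Placeholder: send the prompt to a pre-trained model and return its reply."""
    return "(model reply would appear here)"

# Persona prompt: tell the model which role to assume.
persona = "You are a financial analyst reviewing an earnings call transcript."

# Chain-of-thought prompt: break the request into steps and ask the model to
# check its own work as it goes.
request = (
    "Work step by step: first list the key topics in the transcript, "
    "then pull out the figures quoted for each topic, and finally verify "
    "that every figure you report appears in the source text."
)
answer = ask_llm(persona + "\n\n" + request)

# Feedback step in the Request -> Answer -> Feedback loop, combined with a
# question-refinement prompt: ask the model to improve both its answer and
# the original question.
feedback = (
    "Review your previous answer. Flag anything not supported by the "
    "transcript, and suggest a sharper version of my original request."
)
refined = ask_llm(
    persona + "\n\n" + request + "\n\nPrevious answer:\n" + answer + "\n\n" + feedback
)
print(refined)
```

In practice the same loop would simply be repeated, feeding each refined answer and refined question back into the next request, mirroring the Request → Answer → Feedback cycle described above.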
