Editor’s Note: As enterprises accelerate their AI adoption journeys, prompt engineering is becoming a critical capability for driving consistent and business-aligned outcomes. In this thought piece, Murali Krishnan, Senior Business Analyst at McLaren Strategic Solutions, outlines a structured approach to prompt design, from defining objectives to crafting effective AI queries. His framework blends practical examples with enterprise context, making it a valuable guide for organizations looking to unlock measurable value from AI.
AI is here to stay and, hype or reality, it is bringing sweeping changes to the way we work. Careers are being remodelled around using AI as an assistant and, increasingly, around autonomous agentic AI. Prompt Engineer, AI Output Validator, and Expert Reviewer are emerging as new keywords and new career paths.
This article reviews the importance of structuring the prompts given to LLMs so that the output meets the user's objectives. At first glance, the user appears to be the actor with a free hand, deciding and creating the structure of the prompt. This is true: each user has a specific way of thinking and a specific set of objectives to accomplish. Yet we can adopt a standard prompt structure that helps derive the required output and serves as a guide for beginners using AI.
This structure will also keep evolving as we learn more about how LLMs produce the desired output using different techniques and complex back-end processing.
Context:
A private, mid-level, tech-savvy bank specializing in retail banking in India is looking to offer a new wealth management app for its retail customer segment, restricted to individual customers. The bank already has several other software products and apps serving its customers in the Indian geography. The app should provide self-service features for the customer. The technology envisaged for the app is React on the front end, APIs for integration, and MySQL as the database.
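Since every prompt in this exercise shares the same business context, it helps to capture that context once and reuse it across prompts. The Python sketch below is purely illustrative; the name CONTEXT_BLOCK is an assumption, not part of any tool or library.

```python
from textwrap import dedent

# Reusable context block describing the business scenario; prepended to every prompt
# so the model does not have to re-learn the setting in each conversation.
CONTEXT_BLOCK = dedent("""\
    Context:
    - Client: private, mid-level, tech-savvy bank specializing in retail banking in India.
    - Initiative: new self-service Wealth Management app for individual retail customers.
    - Technology: React front end, API-based integration, MySQL database.
    """)

print(CONTEXT_BLOCK)
```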
1. Define Objectives and Outcomes for AI Assistance
The user can provide an elaborate introduction covering the scope of and expectations from the AI assistant / LLM, along with personas.
Suggested format:
a. As a Product Owner, I am looking for features and sub-features to be implemented in the newly conceptualized Wealth Management app.
b. As a tester, I am looking to evaluate an existing software app with specified features in Wealth Management.
c. As a security tester, I am looking to evaluate the security aspects of a Wealth Management app.
d. As a compliance officer, I am looking to evaluate Compliance aspects in a banking software app catering to Wealth Management services.
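The persona and objective can be made explicit and repeatable with a simple template. A minimal sketch in Python, using illustrative names rather than any standard:

```python
# Illustrative template: state who is asking and what outcome is expected.
OBJECTIVE_TEMPLATE = (
    "As a {persona}, I am looking for {outcome} "
    "for the newly conceptualized Wealth Management app."
)

# Example instantiation for the Product Owner persona from the list above.
objective = OBJECTIVE_TEMPLATE.format(
    persona="Product Owner",
    outcome="features and sub-features to be implemented",
)
print(objective)
```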
2. Establish the Scope of Work in Prompt Design
Suggested format:
a. Provide only the customer-centric features to be developed for customer self-service in the Wealth Management app.
b. Perform system testing for a software app that is being developed, given the specified features.
c. Evaluate web security and network security aspects for the app.
d. Evaluate external compliance requirements for the Wealth Management app with respect to KYC and AML.
e. List the out-of-scope items, for example: the app will be used by individual customers only, in the Indian geography, and in Indian currency.
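The scope reads most clearly when in-scope and out-of-scope items are listed side by side. A minimal sketch of how that section of the prompt could be assembled, mirroring the suggested formats above:

```python
# In-scope and out-of-scope items mirroring the suggested formats above.
in_scope = [
    "Customer-centric, self-service features only",
    "Web security and network security aspects",
    "External compliance requirements: KYC and AML",
]
out_of_scope = [
    "Use by non-individual (corporate) customers",
    "Geographies other than India",
    "Currencies other than INR",
]

# Build the scope section line by line so it can be pasted into any prompt.
scope_section = (
    "Scope:\n"
    + "\n".join(f"- In scope: {item}" for item in in_scope)
    + "\n"
    + "\n".join(f"- Out of scope: {item}" for item in out_of_scope)
)
print(scope_section)
```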
3. Define Tasks and Expected Deliverables
Suggested format:
a. Identify and provide a list of feature descriptions, functional specifications, and User Stories for the new app.
b. Derive and provide test scenarios and test cases for the software app features and user stories.
c. Provide the security features of the Wealth Management app that are to be evaluated.
d. Provide a compliance feature list / Compliance testing checklist.
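The expected deliverable usually depends on the persona chosen in step 1, so a persona-to-deliverable mapping keeps this section of the prompt consistent. An illustrative sketch, with assumed names:

```python
# Each persona from step 1 maps to the deliverable expected from the assistant.
DELIVERABLES_BY_PERSONA = {
    "Product Owner": "Feature descriptions, functional specifications and user stories",
    "Tester": "Test scenarios and test cases for the features and user stories",
    "Security Tester": "List of security features to be evaluated",
    "Compliance Officer": "Compliance feature list / compliance testing checklist",
}

def deliverables_section(persona: str) -> str:
    """Return the deliverables line for the persona driving this prompt."""
    return f"Deliverable: {DELIVERABLES_BY_PERSONA[persona]}"

print(deliverables_section("Product Owner"))
```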
4. Specify Data Depth and Output Format
Suggested format:
a. Provide the requirements in user story format, with acceptance criteria, in PDF format.
b. Provide test cases that are easy to automate with the xx tool, in a format compatible with the xx test management tool.
c. Provide the security aspects to be considered, grouped by app, network, and other security types, in a checklist format.
d. Provide Compliance features in a checklist format.
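Output format instructions are easier for the model to follow when the expected structure is spelled out, for example as a fixed set of columns it must return. A sketch, assuming the user story deliverable from step 3:

```python
# Columns the model is asked to fill for each user story; requesting a fixed,
# machine-readable structure simplifies later import into requirement- or
# test-management tools.
USER_STORY_COLUMNS = [
    "Feature ID", "Feature", "Sub Feature ID", "Sub Feature",
    "User Story ID", "User Story", "Acceptance Criteria",
]

format_section = (
    "Output format:\n"
    "Return the user stories as a pipe-separated table with exactly these columns:\n"
    + " | ".join(USER_STORY_COLUMNS)
)
print(format_section)
```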
5. Allow for Refinement and Iteration in Prompts (Optional)
Suggested format:
a. There is a possibility of enhancing the app to serve other customer segments and to include a few more asset types.
b. Test cases should be automatable and extensible later to cover integration with other external systems.
c. Other types of security aspects could also be listed.
d. If other technology enhancements are possible for the compliance aspects, those details can also be mentioned.
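Refinement normally happens as follow-up turns in the same conversation rather than as a fresh prompt. The sketch below keeps the conversation as a plain list of messages; send_to_llm is a placeholder for whichever LLM API or SDK is in use, not a real call.

```python
# Conversation kept as a list of role/content messages; each refinement is appended
# as a new user turn so the model always sees the full history.
messages = [
    {"role": "user", "content": "Provide customer self-service features for the Wealth Management app."},
]

def send_to_llm(history):
    """Placeholder for a call to whichever LLM API or SDK is in use."""
    raise NotImplementedError

refinements = [
    "Extend the feature list to cover a few more asset types.",
    "Indicate which features would change if other customer segments are added later.",
]

for follow_up in refinements:
    # In real use: reply = send_to_llm(messages), then append the assistant reply
    # before adding the next refinement. Skipped here to keep the sketch self-contained.
    messages.append({"role": "user", "content": follow_up})
```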
6. Define Personas to Guide AI Interaction
Defining personas gives the model more clarity on the scope and improves the relevance of the output.
Suggested format:
a. Provide the end users of the application – customers, bank users, admin users.
b. Provide the role the requesting user will play in the project – for example, Product Owner, Business Analyst, Tester, or Compliance Officer.
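Both kinds of personas, the end users of the application and the role the requester is playing, can be declared up front in the prompt. A small illustrative sketch:

```python
# End users of the application versus the role of the person writing the prompt;
# both help the model pitch the output at the right level of detail.
END_USER_PERSONAS = ["Customer", "Bank user", "Admin user"]
REQUESTING_ROLE = "Business Analyst"  # could also be Product Owner, Tester, Compliance Officer

persona_section = (
    f"You are assisting a {REQUESTING_ROLE}.\n"
    "The application will be used by: " + ", ".join(END_USER_PERSONAS) + "."
)
print(persona_section)
```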
7. Provide Examples for Consistency in AI Outputs
Examples help to demonstrate the expected format, style, or structure of the output.
Suggested format:
a. Provide the user stories for each feature, persona-wise, with numbering and in the mapped format – Feature ID > Feature > Sub Feature ID > Sub Feature > User Story ID > User Story.
b. Provide the test cases tagged to the user stories and acceptance criteria, classified by priority, along with a test coverage matrix.
c. Provide a compliance checklist in Excel format with unique IDs, prioritized from a criticality standpoint.
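Finally, the example and all of the earlier sections can be assembled into a single prompt. The sketch below is illustrative only: the sample user story inside EXAMPLE_OUTPUT exists purely to demonstrate the mapped format from 7(a), and build_prompt is an assumed helper, not a library function.

```python
from textwrap import dedent

# One worked example in the mapped format requested in 7(a); giving the model a
# concrete sample is usually more reliable than describing the format in prose.
EXAMPLE_OUTPUT = dedent("""\
    Example of the expected format:
    F01 > Portfolio Dashboard > F01.1 > Holdings Summary > US001 >
    As a Customer, I want to view a summary of my holdings so that I can track my investments.
    """)

def build_prompt(context, objective, scope, deliverable, output_format, personas, example):
    """Concatenate the prompt sections in a fixed, predictable order."""
    sections = [context, objective, scope, deliverable, output_format, personas, example]
    return "\n\n".join(s.strip() for s in sections if s)

# Example usage with placeholder strings standing in for the earlier sections.
prompt = build_prompt(
    context="Context: private mid-level Indian bank; new self-service Wealth Management app.",
    objective="As a Product Owner, I am looking for features and sub-features to implement.",
    scope="Scope: customer-centric self-service features only; India, INR, individuals only.",
    deliverable="Deliverable: feature list with user stories and acceptance criteria.",
    output_format="Output format: Feature ID > Feature > Sub Feature ID > Sub Feature > User Story ID > User Story.",
    personas="End users: Customer, Bank user, Admin user.",
    example=EXAMPLE_OUTPUT,
)
print(prompt)
```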
Structuring prompts gives LLMs the clarity to generate the specific outputs desired. However, that structure can also steer the output and introduce bias into what is generated. As we take the assistant through multiple iterations of input and output, the models adapt further and fine-tune their outputs to user needs; we can already observe LLMs providing user-customized outputs.
Murali’s perspective highlights that prompt engineering is no longer just a technical skill, but a strategic enabler for business outcomes. At McLaren Strategic Solutions, we see enterprises increasingly adopting structured prompt frameworks to improve efficiency, compliance, and customer-centric innovation in AI-driven projects.
By aligning prompt engineering best practices with enterprise needs, organizations can move beyond experimentation and unlock measurable business value from generative AI.