Project Management

Project Management Central


Topics: Artificial Intelligence
When using AI systems, what are some best practices for ensuring the results you receive are accurate, relevant, and aligned with your original goals?
Sarah Philbrick
PMI Team Member
Product Leader | AI Training Portfolio | PMI | Asheville, NC, USA

Validating and checking outputs is critical when working with AI systems like Generative AI. Such validation approaches may include establishing clear criteria, implementing strong testing protocols, and continuous refinement.

In your experience with AI, what are some best practices for ensuring the results you receive are accurate, relevant, and aligned with your original goals?

Sergio Luis Conte Helping to create solutions for everyone| Worldwide based Organizations Buenos Aires, Argentina
AI is a broad term. Generative AI is actually an old class of model, but everything "exploded" when Google published the new architecture called the transformer in 2017. With that said, take into account that generative AI is, simplifying the model, just "predictive text on steroids". Two key points have to be taken into account when somebody works with AI: 1) human in the loop; 2) AI without data (today called data science, big data, or whatever) is like living without oxygen. Talking about generative AI, everything related to technology has almost no impact compared with everything related to non-technological roles and activities. What you stated about accuracy and the like is easy to implement, because there is a lot available inside disciplines like statistics, most of it applied "a priori" to prevent rather than cure. Few organizations take into account that when generative AI environments are put in place, almost a new business unit has to be created, where roles like lawyers, linguists, and diversity and inclusion specialists must be hired to help put it in place.
4 replies by Ashley Villegas, Booma Pugazhenthi, Joey Perugino, and Rup Kumar BK
Jul 12, 2024 5:00 PM
Ashley Villegas
Yes, I agree with the phrase "predictive text on steroids". ChatGPT is now considered "old", and depending on its business structure, an organization may or may not need additional human resources for a new business unit. At my previous organization, the entire staff was challenged to conduct prompt engineering as it related to their individual departments, as opposed to creating a new arm of the business. More experienced developers were responsible for model training.
Sep 04, 2024 9:14 PM
Booma Pugazhenthi
AI requires human oversight and quality data to be effective. While generative AI has revolutionized text prediction, its impact on non-tech sectors is still limited. Implementing AI systems demands a multidisciplinary approach, involving specialists from various fields to ensure responsible and effective deployment.
Oct 01, 2024 7:35 AM
Joey Perugino
Very good analogy, Sergio:

"AI without data (today called data science, big data, or whatever) is like living without oxygen."

I like it :-)

Oct 02, 2024 11:02 AM
Rup Kumar BK
Putting the content to the test under expert scrutiny has been my approach to using AI-generated responses. We tend to fall victim to AI hallucination if we don't verify the data and responses generated by AI, which could sometimes be catastrophic if major project decisions are made merely on the basis of AI output.
Oliver Chitsamatanga Head ICT Projects| Zimbabwe Power Company (ZPC) Harare, Zimbabwe
A very good question, and a difficult one to answer as well. You have to go back to basics and ask yourself: how well versed are you in the subject at hand? The AI will generate facts, and the more of them you can verify, the more reliable the generated response will be. The fewer the verifiable facts, the further the generative AI response is from meeting your original goals. It then becomes very critical to review the accuracy, relevancy, and alignment of the response with your original need. Unfortunately, there are no clearly defined metrics one can use as a model to evaluate an AI-generated response. So from my personal experience, I basically restrict AI to areas where I have sound knowledge; otherwise it becomes almost impossible to verify details generated by an AI if you venture into uncharted territory. With long usage and exposure, however, your confidence tends to increase.
The best practice and protocol to follow would be to consult subject matter experts to validate the AI-generated response before making critical decisions based on it, to avoid any inherent associated risks you might not be aware of.
Giorgos Sioutzos Business Analyst| Netcompany Athens, Greece
Providing the specific context in a clear and concise way is essential.
1 reply by Jose Quintal-Aviles
Aug 23, 2024 11:56 AM
Jose Quintal-Aviles
Yes, Giorgos. When working with AI we have to be more specific, describing the persona, the context, the task/request, etc. A vague prompt will produce general answers, not the information we really want.
Also, it is important to evaluate the response through the eyes of a project manager, who seems to be a forgotten actor in this whole AI trend.
Keith Novak Tukwila, WA, USA
Like with any new tool, you need to test the results before you scale up.

Think about if you were to manually model a very complex problem in a spreadsheet. You don't build all the links and formulas first and then evaluate your final output. You build and test sections of the bigger solution first and then add on layers once you have validated the functionality.
1 reply by Don La Faso
Sep 26, 2024 12:05 PM
Don La Faso
Hmmm... that sounds pretty efficient, concise, and... agile. Totally agree, even though we do know things aren't always so simple.
Elmar Sänger Managing Director| Saenger & Partner Unternehmensberater Habichtswald, Germany
That's a very good question. In my response, I am assuming that the question refers to an LLM-based chatbot.
From my experience, the best results are achieved the more context I provide to the LLM. This means providing as much information as possible that describes both the project itself and the project context.
A second very important step is the quality of the request, also known as the prompt for the LLM. This is similar to human communication, where the quality of the question determines the quality of the answer. Therefore, a good prompt strategy is required, for example:
1. Data and context about the project
2. The goal of the request
3. The task that the LLM should fulfill
4. The format in which the output should be delivered.

In subsequent requests, it is possible to build on the context and results of the previous request. It is important that this process takes place within a chat, as otherwise the context is lost.
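The four-part prompt strategy above can be sketched as a small helper. This is a minimal illustration; the function name and field wording are my own assumptions, not any particular tool's API:

```python
def build_prompt(project_context: str, goal: str, task: str, output_format: str) -> str:
    """Assemble a prompt following the four-part strategy:
    project context, goal of the request, task, and output format."""
    return "\n\n".join([
        f"Project context:\n{project_context}",
        f"Goal of this request:\n{goal}",
        f"Task:\n{task}",
        f"Output format:\n{output_format}",
    ])

# Illustrative usage with made-up project details.
prompt = build_prompt(
    project_context="ERP migration, 12-month schedule, 8-person team.",
    goal="Identify schedule risks early.",
    task="List the top five schedule risks with mitigations.",
    output_format="A numbered list, one risk per item.",
)
```

Keeping the four sections in a fixed order makes follow-up requests easier to compare within the same chat.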
2 replies by Angel Romero and Vanessa Eldridge
Sep 09, 2024 3:22 PM
Vanessa Eldridge
Yes, the old saying: "garbage in, garbage out." If you want clear, concise outputs, you have to be sure that what you are putting in is accurate and your goals are clearly defined.
Sep 10, 2024 7:54 PM
Angel Romero
I agree with Elmar's response. You need to be specific. I would use the CREATE prompt framework to ensure I get the best response, and continue to refine until I have what I am looking for from the AI.
Hakam Madi Independent Consultant Amman, Jo, Jordan

This can range from fine-tuning the chat context to fine-tuning the model itself, using several strategies such as providing examples (few-shot or many-shot prompting).
I'm currently working on a project. In my system instruction (which could be the scoping prompt if you are not accessing the API), I include the request together with the verification method and criteria, so at the end of each output I receive the confidence level the AI achieved.

With some training, I developed it further to output only results with at least an 85% confidence level, or else provide an explanation or ask for clarification. This, by the way, surprisingly eliminated all the previous hallucinations.
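A confidence gate of this kind can be sketched on the client side. This assumes the system instruction makes the model end each answer with a line like `Confidence: 92%`; the line format, function name, and threshold here are illustrative, not Hakam's actual setup:

```python
import re

CONFIDENCE_THRESHOLD = 85  # percent, matching the 85% level described above

def gate_response(model_output: str) -> str:
    """Accept the model's answer only if the confidence it reports
    meets the threshold; otherwise ask for clarification.
    Assumes the output ends with a line like 'Confidence: 92%'."""
    match = re.search(r"Confidence:\s*(\d+)\s*%", model_output)
    if match and int(match.group(1)) >= CONFIDENCE_THRESHOLD:
        return model_output
    return "Confidence below threshold: please clarify or explain."
```

Outputs with no confidence line at all are rejected too, which is the safer default when the model forgets the instruction.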

Omar Jabbar Project Management and Technology Transformation Consultant| OGreen IT Service Inc. Ontario, Canada
I don't disagree with the answers above, but I keep it very simple. Make sure your data is clean, ask specific questions, and review the outcome. All of this will depend on the AI tools you are using and your needs for using them. Once you have this figured out, you will be good to go.
Continuous review and improvement are essential in this case.
I hope that helps.
Regards,
Mashhood Ahmed Project Manager - PMO| PMAssistant.ai Edmonton, Canada
Have a well-structured prompt, and understand prompt injection, drifting, leaking, and AI hallucination. Here are some common elements of a well-structured prompt.

●Instruction - a specific task or instruction you want the model to perform
●Context - external information, Persona or additional context that can steer the model to better responses
●Input Data - the input or question that we are interested to find a response for
●Output Indicator - the type or format of the output
●Response Tone – Tone of the response
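The five elements above can be assembled programmatically. A minimal sketch with illustrative names (not a standard library), skipping any element left empty:

```python
def compose_prompt(instruction: str, context: str = "", input_data: str = "",
                   output_indicator: str = "", response_tone: str = "") -> str:
    """Combine the common prompt elements listed above, omitting empties."""
    parts = [
        ("Instruction", instruction),
        ("Context", context),
        ("Input data", input_data),
        ("Output indicator", output_indicator),
        ("Response tone", response_tone),
    ]
    return "\n".join(f"{label}: {text}" for label, text in parts if text)

# Illustrative usage; input data is omitted, so that line is dropped.
status_prompt = compose_prompt(
    instruction="Summarize the project status report.",
    context="You are reporting to an executive steering committee.",
    output_indicator="Three bullet points.",
    response_tone="Formal and concise.",
)
```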
1 reply by anonymous
Jul 24, 2024 11:37 AM
anonymous
Nice formula, Mashhood! Thanks.
Some of my items may be redundant, but the most important things in my experience so far are:

Be precise and clear.
Be sure you explain jargon or specialized terminology.
Provide the context for all of your requests.
Be sure you provide the outcomes you are expecting.
Experiment and refine as you go.

I've found that big problems can be better refined by chunking the whole into natural sections, working to refine each section, and then putting the pieces back together.
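That chunk-refine-reassemble loop can be sketched as follows; the section delimiter (blank lines) and the pass-through `refine` callable are illustrative stand-ins for however you split your document and call the model:

```python
def refine_in_chunks(document: str, refine) -> str:
    """Split a large document into natural sections (blank-line
    separated here), refine each one independently, then reassemble."""
    sections = [s for s in document.split("\n\n") if s.strip()]
    return "\n\n".join(refine(s) for s in sections)

# The lambda stands in for a per-section LLM refinement call.
tidied = refine_in_chunks("scope notes\n\n\n\nrisk notes ", lambda s: s.strip())
```

Refining sections one at a time keeps each request focused, which is the point of the chunking advice above.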
Jabin Geevarghese George Global Service Delivery Leader - Enterprise Architect and Fintech Transformation| Tata Consultancy Services Ltd.

When using AI systems, it is very hard to assess the precision or accuracy of the responses. I love bringing in the Agile mindset here: imagine you are mentoring someone. You do a Q&A and, based on your mentee's responses, you give feedback so that the mentee can align his/her thoughts in the direction you hint at. Similarly, review the AI responses using your rational judgement:

1. Give feedback to the AI system.
2. Rework your prompt and be specific about what is expected.
3. Keep it short and concise, gauge the responses, and slowly tune the AI system to get the best output.
4. The technical solution that comes in for accuracy is having a specific set of APIs that talk to real and accurate data sources, or using the outputs of 2-3 LLMs and then analyzing them to bring out the best output.
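Comparing 2-3 LLM outputs and keeping the best can be sketched with simple majority agreement; the normalisation and selection here are an illustrative assumption (a real comparison would be more semantic):

```python
from collections import Counter

def pick_best(outputs: list[str]) -> str:
    """Choose among several LLM outputs by majority agreement after
    light normalisation (lowercase, collapsed whitespace)."""
    normalised = [" ".join(o.lower().split()) for o in outputs]
    winner, _ = Counter(normalised).most_common(1)[0]
    return outputs[normalised.index(winner)]

# Illustrative usage: two of three outputs agree, so that answer wins.
best = pick_best([
    "Budget risk is highest.",
    "budget risk is highest.",
    "Scope creep is the main risk.",
])
```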

