How to develop AI-powered products effectively

There are a lot of techniques to use when developing AI-powered products. Fine-tuning, training your own models, prompt engineering, RAG, etc. What should you use?

Maybe after reading my article on when to build an AI product, you’ve decided that spending the effort to build an AI tool is worth it.

I’ve talked about my product development philosophy time and again. Be it a document processor, a chatbot, a specialized content creation tool or anything else… You need to eat the elephant, in this case AI product development, one spoon at a time.

That means you shouldn’t jump straight into fine-tuning or, God forbid, training your own model. These are powerful tools in your toolbox. But they also require effort, time, resources & knowledge to use.

There are other, easier tools that may just get the job done.

Prompt engineering

You’d be surprised how many people just go to ChatGPT, give it no meaningful instructions beyond “write an article about how to gain muscle” or “explain how <insert obscure library> works”, and expect magic.

What you have to understand is that an LLM doesn’t think or reason. It just statistically predicts the next word based on the data it was trained on.

If most of its data says that after “hey, how are you?” comes “Good, you?”, that’s what you’ll get. But you can change your input to “hey girly, how u doin?” and you might get a “Hey girly! I’m doing fab, thanks for asking! 💖 How about you? What’s up?”.

Dumb example, but the point is: what you feed into it matters.

And that’s where prompt engineering comes in. People have discovered a few techniques to help the LLM output better results.

Assign roles

A common tactic is to tell the LLM to answer as if it is <insert cool amazing person that’s really great at X>.

So “write an article about how to gain muscle as if you were Mike Mentzer” will give you significantly different results than “write an article about how to gain muscle”.

Try these out! Really! Go to your favourite LLM and try these examples out.

Or you could describe the sort of person the LLM is. So “write an article about how to gain muscle as if you were an ex-powerlifter and ex-wrestler with multiple Olympic gold medals” will also give you a different output.
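
If you’re hitting a model through an API rather than a chat UI, the role usually goes in the system message. Here’s a minimal sketch using the OpenAI Python SDK (the model name is just a placeholder, use whatever you have access to):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # The role assignment lives in the system message
        {
            "role": "system",
            "content": "You are an ex-powerlifter and ex-wrestler with multiple Olympic gold medals.",
        },
        {"role": "user", "content": "Write an article about how to gain muscle."},
    ],
)
print(response.choices[0].message.content)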

N-shot

Basically you give the AI examples of what you want it to do.

Say you’re trying to write an article in the voice of XYZ. Well, give it a few articles of XYZ as an example.

Or if you’re trying to have it summarize a text, again, show it how you’d do it.

Generally speaking, you want to give it more examples rather than fewer, so it doesn’t over-index on a small sample and can generalize.

I’ve heard there’s also a point where you can add too many, but you should be pretty safe with 10-20 examples.

I’d tell you to experiment for your particular purpose and see which N works best for you.

It’s also important to note that your examples should be representative of the sort of real life queries the LLM will receive later. If you want it to summarize medical studies, don’t show it examples of tweets.
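
In API terms, the usual trick is to pass your examples as prior user/assistant turns so the model imitates the pattern. A sketch, again with the OpenAI SDK (the example pairs here are made up and far too short for real use):

from openai import OpenAI

client = OpenAI()

# Each (input, output) pair is one worked example of the task
examples = [
    ("Summarize: The trial enrolled 120 patients...", "A 120-patient trial found..."),
    ("Summarize: Researchers compared two dosing schedules...", "A dosing comparison showed..."),
]

messages = [{"role": "system", "content": "You summarize medical studies in one sentence."}]
for user_text, assistant_text in examples:
    messages.append({"role": "user", "content": user_text})
    messages.append({"role": "assistant", "content": assistant_text})

# The real query goes last, after all the examples
messages.append({"role": "user", "content": "Summarize: <the study you actually care about>"})

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)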

Structured inputs/outputs

I don’t feel like I could do justice to this topic if I didn’t link to Eugene’s article here.

Basically, the format you use to feed data to an LLM matters; some formats get better results than others.

For example, I’ve learned that LLMs have a hard time with PDFs, but an easier time with markdown.

But the example Eugene used is XML:

<description>
The SmartHome Mini is a compact smart home assistant available in black or white for 
only $49.99. At just 5 inches wide, it lets you control lights, thermostats, and other 
connected devices via voice or app—no matter where you place it in your home. This 
affordable little hub brings convenient hands-free control to your smart devices.
</description>

Extract the <name>, <size>, <price>, and <color> from this product <description>.

Annotating things like that helps the LLM understand what is what.
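
In code, this is nothing fancier than building the prompt string with tags around each piece of data. A sketch of Eugene’s example (same SDK assumptions as above):

from openai import OpenAI

client = OpenAI()

description = (
    "The SmartHome Mini is a compact smart home assistant available in black or "
    "white for only $49.99. At just 5 inches wide, it lets you control lights, "
    "thermostats, and other connected devices via voice or app."
)

# Tags separate the data from the instructions, so the model knows what is what
prompt = (
    f"<description>\n{description}\n</description>\n\n"
    "Extract the <name>, <size>, <price>, and <color> from this product <description>."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)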

Chain-of-thought

Something as simple as telling the LLM to “think step by step” can actually be quite powerful.

But also you can provide more direct instructions, which I have done for swole-bot:

SYSTEM_PROMPT = """You are an expert AI assistant specializing in 
testosterone, TRT, and sports medicine research. Follow these guidelines:

1. Response Structure:
- Ask clarifying questions
- Confirm understanding of user's question
- Provide a clear, direct answer
- Follow with supporting evidence
- End with relevant caveats or considerations

2. Source Integration:
- Cite specific studies when making claims
- Indicate the strength of evidence (e.g., meta-analysis vs. single study)
- Highlight any conflicting findings

3. Communication Style:
- Use precise medical terminology but explain complex concepts
- Be direct and clear about risks and benefits
- Avoid hedging language unless uncertainty is scientifically warranted

4. Follow-up:
- Identify gaps in the user's question that might need clarification
- Suggest related topics the user might want to explore
- Point out if more recent research might be available

Remember: Users are seeking expert knowledge. Focus on accuracy and clarity 
rather than general medical disclaimers which the users are already aware of."""

Even when you want a short answer from the LLM, like I wanted for The Gist of It, it still makes sense to ask it to think step by step. You can have it produce a structured output and then programmatically filter out the steps and only return the summary.
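
Here’s roughly what that can look like with the OpenAI SDK’s JSON mode (the field names and prompt are my own, not what The Gist of It actually uses):

import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    # JSON mode: the model must return a valid JSON object.
    # The prompt still has to mention "JSON" explicitly for this to work.
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": (
            "Think step by step, then summarize. Respond with a JSON object "
            'with two keys: "steps" (a list of strings) and "summary" (a string).'
        )},
        {"role": "user", "content": "Summarize this article: <article text>"},
    ],
)

result = json.loads(response.choices[0].message.content)
print(result["summary"])  # the steps are paid for in tokens, but never shown to the user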

The core problem with “Chain-of-Thought” is that it might increase latency and it will increase token usage.

Split multi-step prompts

If you have a huge prompt with a lot of steps, chances are it will do better as multiple prompts. If you’ve used Perplexity.ai with Pro searches, this is what it does. ChatGPT o1-preview too.
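
The DIY version is just chaining calls, where each prompt gets one focused job and the output of one becomes the input of the next. A sketch:

from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: one focused prompt per step, instead of one giant prompt
outline = ask("Write a five-point outline for an article about how to gain muscle.")

# Step 2: the next prompt builds on the previous output
draft = ask(f"Write the article following this outline:\n\n{outline}")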

Provide relevant resources

A simple way to improve the LLM’s results is to give it some extra data.

For example, in Cursor, as exemplified here, you can type @doc, choose “Add new doc”, and add new documents to it. This lets the LLM draw on things it wasn’t trained on.
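
Outside an editor like Cursor, the simplest version is reading the document yourself and putting it in the prompt. A sketch (the file name is hypothetical):

from pathlib import Path
from openai import OpenAI

client = OpenAI()

# A document the model wasn't trained on, e.g. docs for a brand-new SDK
docs = Path("new_sdk_docs.md").read_text()  # hypothetical file

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": (
        f"Here is the SDK documentation:\n\n{docs}\n\n"
        "Using only this documentation, show me how to initialize the client."
    )}],
)
print(response.choices[0].message.content)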

Which brings us to RAG.

RAG (Retrieval Augmented Generation)

RAG is a set of strategies and techniques to “inject” external data into the LLM. External data that was never part of its training.

Maybe the model was trained 6 months ago and you’re trying to get it to help you use an SDK that launched last week. So you provide the documentation as markdown.

How well your RAG performs depends on the relevance and detail of the documents/data you retrieve and provide to the LLM. Providing these documents manually, as shown above, is limited. Especially since it makes sense to provide only the smallest, most relevant amount of data, and you might have a lot of data to filter through.

That’s why we use things like vector embeddings, hybrid search, crude or semantic chunking, reranking. Probably a few other things I’m missing. But the implementation details are a discussion for another article.
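
Still, to make the retrieval idea concrete, here’s a toy version of the embeddings step: embed your chunks, embed the query, keep the closest chunks, and hand only those to the LLM. This sketch uses the OpenAI embeddings API and skips chunking, hybrid search and reranking entirely:

from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> list[list[float]]:
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in response.data]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

# In a real system these chunks come from your documents and live in a vector DB
chunks = [
    "TRT stands for testosterone replacement therapy...",
    "A 2016 meta-analysis of resistance training found...",
    "The SDK's init function takes an API key...",
]
chunk_vectors = embed(chunks)

query = "What does the meta-analysis say about resistance training?"
query_vector = embed([query])[0]

# Keep only the most relevant chunks; those go into the LLM's prompt as context
ranked = sorted(zip(chunks, chunk_vectors), key=lambda c: cosine(query_vector, c[1]), reverse=True)
top_chunks = [chunk for chunk, _ in ranked[:2]]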

I’ve used RAG with swole-bot and I think RAG has a few core benefits / use cases.

Benefit #1 is that it can achieve similar results to fine-tuning and training your own model… But with a lot less work and resources.

Benefit #2 is that you can feed your LLM “live” data from an API, not just pre-existing data. Maybe you’re asking the LLM about road traffic to the airport, data it doesn’t have. So you give it access to an API.
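
The pattern is the same as before, except the “retrieval” is an API call instead of a document lookup. A sketch (the traffic API and its URL are entirely made up):

import requests
from openai import OpenAI

client = OpenAI()

# Hypothetical live-data API; swap in whatever real source you have
traffic = requests.get("https://example.com/api/traffic?route=home-to-airport").json()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": (
        f"Current traffic data: {traffic}\n\n"
        "Based on this data, when should I leave for the airport?"
    )}],
)
print(response.choices[0].message.content)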

If you’ve ever used Perplexity.ai or ChatGPT with web search, that’s RAG in action. RunLLM is RAG too.

It’s pretty neat and one of the hot things in the AI world right now.

Do you need someone to help with AI product strategy or development?

As you can see, there are nuances to this.

If you’re a startup that needs help ideating / strategizing on an AI-powered product… If you’re a company that needs a custom automation solution…

Book a free consultation with me and let’s figure out if we’re a fit and I can help you.