I want the full code for this!
Can you please provide the full code for new learners?
Great post. One small suggestion on returning the JSON response using pyyaml.
Thanks! Yeah I could try that.
The full code is not available; please provide the full code.
The YAML template you specified in "template" can be simplified by converting it from multi-line to a single line, for example with an online converter. That makes a big difference, since every space and line break adds to the token count. You don't want to exceed the token limit or spend money unnecessarily.
Please also try to flatten the resume input. Converting it from multi-line to a single line will save a ton of tokens, and therefore cost, with the OpenAI GPT-based models.
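To illustrate the flattening the two comments above suggest, here is a minimal sketch using only the standard library; the `flatten` function name and the sample resume text are illustrative, not from the article:

```python
import re

def flatten(text: str) -> str:
    """Collapse every run of whitespace (newlines, tabs, indentation) into a single space."""
    return re.sub(r"\s+", " ", text).strip()

# Hypothetical multi-line resume snippet for demonstration.
resume = """John Doe
Software Engineer

Skills:
  - Python
  - SQL
"""

print(flatten(resume))  # "John Doe Software Engineer Skills: - Python - SQL"
```

The same trick applies to the YAML prompt template: fewer whitespace characters generally means fewer tokens billed per request.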
Sorry, I am not sure of the exact reason why you decided to go with the conversation-memory-based approach. The good news is that it isn't required for this use case. The point of the optimization is to save cost by reducing token count, and to improve overall performance by asking only for what is required.
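A quick way to see why a memory-based approach costs more: each turn re-sends the accumulated history, so total tokens grow much faster than with a stateless chain. The sketch below is illustrative only, counting whitespace-delimited words as a stand-in for real tokens:

```python
def tokens(text: str) -> int:
    """Crude token proxy: whitespace-delimited word count."""
    return len(text.split())

# Hypothetical prompt of ~200 words, like a flattened resume plus instructions.
prompt = "Extract name and skills from this resume: " + "word " * 200

# Stateless chain: each of 5 calls sends only the prompt.
stateless = 5 * tokens(prompt)

# Memory-based chain: call i re-sends all previous prompts and replies too.
history, with_memory = "", 0
for _ in range(5):
    with_memory += tokens(history + prompt)
    history += prompt + " (model reply here) "

print(stateless, with_memory)  # the memory-based total is far larger
```

For a one-shot extraction task like this, the history buys nothing, so dropping the memory is a pure win.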
OpenAIChat is deprecated; we need to go with ChatOpenAI instead:

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain

chatopenai = ChatOpenAI(model_name="gpt-3.5-turbo")
llmchain_chat = LLMChain(llm=chatopenai, prompt=prompt)
```
Excellent article - illustrated in a very simple manner and very practical as well. I liked it.
The code provided is not complete. For example, you extract the text from the PDF, but you have not included the call to that function in your code. For completeness of the article, and for first-time learners, I would suggest sharing the whole code or providing a link to the complete, structured code.