OpenAIChat is deprecated; use ChatOpenAI instead:

from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain

chatopenai = ChatOpenAI(model_name="gpt-3.5-turbo")
llmchain_chat = LLMChain(llm=chatopenai, prompt=prompt)
@ranjancse
AI Innovator, Builder
Sorry, I am not sure why you decided to go with the conversation-memory-based approach. The good news, however, is that it is not required for this use case. The point of the optimization is to save cost by reducing the token count, and to improve overall performance by asking only for what is required.
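As a rough illustration of why dropping the conversation memory shrinks the prompt, here is a minimal sketch. The ~4-characters-per-token ratio is a common rule of thumb, not an exact count; use a real tokenizer for billing-accurate numbers.

```python
def approx_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token (rule of thumb)."""
    return max(1, len(text) // 4)

# A prompt that replays the whole conversation history on every call:
history = "\n".join(
    f"User: question {i}\nAssistant: a fairly long answer to question {i}."
    for i in range(20)
)
with_memory = history + "\nUser: Summarize the candidate's skills."

# A prompt that asks only for what is required:
without_memory = "Summarize the candidate's skills."

print(approx_tokens(with_memory), "vs", approx_tokens(without_memory))
```

The difference scales with conversation length, so on long sessions the memory-based approach keeps getting more expensive per call.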
Sorry, but this specific use case of generating fake resumes is very expensive, especially if you are considering the OpenAI models. The best thing you can do is go with a fake-resume builder based on the JSON Resume schema. Please take a look at: github.com/jsonresume/jsonresume-fake
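To show the idea, here is a minimal stdlib-only sketch that generates a fake resume without calling a paid model at all. The top-level section names ("basics", "work", "education") follow the JSON Resume schema; the name and company lists are placeholder data, and the linked project does this far more thoroughly.

```python
import json
import random

# Placeholder data pools (illustrative only).
FIRST = ["Alice", "Bob", "Carol", "Dave"]
LAST = ["Smith", "Jones", "Lee", "Patel"]
COMPANIES = ["Acme Corp", "Globex", "Initech"]


def fake_resume(seed: int = 0) -> dict:
    """Build a fake resume using JSON Resume top-level sections."""
    rng = random.Random(seed)  # seeded for reproducible output
    name = f"{rng.choice(FIRST)} {rng.choice(LAST)}"
    return {
        "basics": {
            "name": name,
            "label": "Software Engineer",
            "email": name.lower().replace(" ", ".") + "@example.com",
        },
        "work": [
            {
                "name": rng.choice(COMPANIES),
                "position": "Developer",
                "startDate": "2019-01-01",
            }
        ],
        "education": [
            {"institution": "Example University", "studyType": "Bachelor"}
        ],
    }


print(json.dumps(fake_resume(), indent=2))
```

Generating thousands of resumes this way costs nothing, whereas each LLM-generated resume consumes paid tokens for both the prompt and the completion.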
The YAML template you have specified in "template" can be simplified by converting the multi-line form to a single line, say by using an online converter. It makes a lot of difference, as every space and newline adds to the token count. You don't want to exceed the context limit, nor spend money unnecessarily.
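A crude stdlib alternative to an online converter is sketched below. It only removes blank lines and trailing spaces, which is safe for YAML, since leading indentation is significant and must be kept; re-dumping through a YAML library in flow style would compress further. The template content here is a made-up example.

```python
# Hypothetical multi-line template with padding that wastes tokens.
template = """\
basics:
  name: "{name}"

  label: "{label}"

work:
  - position: "{position}"
"""


def trim_template(text: str) -> str:
    """Drop blank lines and trailing whitespace; keep leading indentation."""
    lines = [line.rstrip() for line in text.splitlines()]
    return "\n".join(line for line in lines if line)


trimmed = trim_template(template)
print(len(template), "->", len(trimmed), "characters")
```

Since every character you strip is paid for on every request, even a small per-call saving compounds quickly at volume.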