Tinz Twins · Mar 18, 2024
Great read. Bookmarked. Thanks for sharing.

Author · Mar 18, 2024
Thanks for reading, Tinz Twins! Glad you found this insightful.

Mar 19, 2024
Wow! You're back! It felt like ages!

Author · Mar 20, 2024
Ahh, thank you!

Ashish · Mar 24, 2024
Amazing, man! I really wanted to ask how you handled the latency of this. How long did a user have to wait to get the response?

Author · Mar 27, 2024
Thanks for your comment, Ashish!
As I mentioned in the "Notes and Recommendations" section, I used the ElevenLabs eleven_turbo_v2 model since it’s more suitable for tasks demanding low latency, intentionally set the OpenAI max_tokens parameter to 100, and used a custom prompt to limit the response to 100 characters. This worked perfectly for my use case.
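
For reference, here's a minimal sketch of that setup (assuming the OpenAI Python SDK and the elevenlabs 0.x generate() helper; the gpt-3.5-turbo model name, the voice, the prompt wording, and the environment variable are illustrative placeholders, not the exact code from the article):

```python
import os

from openai import OpenAI
from elevenlabs import generate, play, set_api_key  # assumes the elevenlabs 0.x SDK

set_api_key(os.environ["ELEVEN_API_KEY"])  # hypothetical env var name
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment


def short_reply(user_text: str) -> str:
    """Ask for a short answer so the TTS step has less text to synthesize."""
    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; any chat model works here
        max_tokens=100,         # hard cap on the completion length
        messages=[
            # Hypothetical prompt wording; the point is capping the reply
            # at roughly 100 characters so synthesis finishes sooner.
            {"role": "system", "content": "Answer in at most 100 characters."},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content


text = short_reply("How was the latency handled?")

# eleven_turbo_v2 trades a little quality for much lower latency.
audio = generate(text=text, voice="Rachel", model="eleven_turbo_v2")  # example voice
play(audio)
```

Capping both the token count and the character count keeps the text-to-speech payload small, which is what drives the end-to-end wait time down.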