5 comments
API rate limits cause most of the problems when working with LLM service providers. Did you try load balancing?
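For context, here is a minimal sketch of what load balancing across rate-limited keys could look like: round-robin rotation over several API keys, moving to the next key on a 429 response. It assumes an OpenAI-style chat completions endpoint; the environment variable names and model are placeholders, not something from the article.

```ts
// Hypothetical round-robin rotation across multiple LLM API keys.
// Key names and model are placeholders.
const API_KEYS = [process.env.LLM_KEY_A!, process.env.LLM_KEY_B!];
let next = 0;

async function chat(prompt: string): Promise<string> {
  // Try each key once, rotating whenever a key is rate-limited (HTTP 429).
  for (let attempt = 0; attempt < API_KEYS.length; attempt++) {
    const key = API_KEYS[next];
    next = (next + 1) % API_KEYS.length;

    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${key}`,
      },
      body: JSON.stringify({
        model: "gpt-4o-mini",
        messages: [{ role: "user", content: prompt }],
      }),
    });

    if (res.status === 429) continue; // rate-limited: move on to the next key
    if (!res.ok) throw new Error(`LLM request failed: ${res.status}`);

    const data = await res.json();
    return data.choices[0].message.content;
  }
  throw new Error("All API keys are rate-limited");
}
```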
Thanks for the article. Liked the style. Unfortunately I got stuck: when I try to run note, zsh returns command not found.
The CLI setup worked on my Mac. On Windows I could not get it registered globally; I just got command not found. ChatGPT plus myself could not get it to work. :(
Words of praise fail to do justice to the cleverness, thoughtfulness and brilliance of this piece of content! A great primer for anyone wanting to explore the world of design, delineating the intricate boundary between art and design! I also loved how you clearly lay out the interconnected network of art, design and engineering, emphasizing how each is distinct yet individually important for a complete and appealing application! P.S. The last P.S. just blew my mind as a clever engagement strategy! Very happy to read such content!
Well justified. 👍
Very informative. I really could not understand why my Next.js app is slow despite using server actions and caching. Looks like if you are building a portal, it's unlikely to scale well with server actions.
"Error handling is done by returning an error in the catch block of a try-catch statement.
That's then sent up to your nearest error UI boundary which you define like error.tsx."
I'm pretty sure the error is sent up only if you throw?
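That matches how the App Router behaves: error.tsx catches errors that are thrown during rendering, not Error objects returned from a catch block. A minimal sketch, assuming an App Router project; the app/dashboard route and fetch URL are placeholders:

```tsx
// app/dashboard/page.tsx (placeholder route)
export default async function Page() {
  const res = await fetch("https://example.com/api/data");
  if (!res.ok) {
    // Throwing is what propagates to the nearest error.tsx boundary;
    // returning an Error object here would not trigger it.
    throw new Error("Failed to load data");
  }
  const data = await res.json();
  return <pre>{JSON.stringify(data, null, 2)}</pre>;
}
```

```tsx
// app/dashboard/error.tsx — error boundaries must be Client Components
"use client";

export default function Error({
  error,
  reset,
}: {
  error: Error & { digest?: string };
  reset: () => void;
}) {
  return (
    <div>
      <p>Something went wrong: {error.message}</p>
      <button onClick={() => reset()}>Try again</button>
    </div>
  );
}
```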