LLMs appear to have boundless possibilities, and it seems they can be used for almost any task. We can craft prompts for composing emails in a given context, as well as creative prompts to generate code in any programming language. Imagine a situati...
This is really cool and detailed.
Thanks for sharing this Ritobroto Seth and for answering a similar question I had about the optimization or robustness of the query. 🙌🏽
An insightful demonstration of querying SQL databases using GPT prompts, showcasing its potential applications in various tasks.
Thanks Mayuri for your comment. Actually, querying a DB with an LLM is still in the experimental phase, and there are still a lot of open threads here.
A lot of things need to be standardized and perfected before making it production ready. Additionally, there are security concerns too that need to be addressed to prevent unwanted data leakages.
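One common mitigation for the data-leakage concern is to validate generated SQL before it ever touches the database. The sketch below is only an illustration of that idea (the `run_generated_sql` helper and the toy schema are hypothetical, not from the article): reject anything that is not a single read-only SELECT statement.

```python
import sqlite3

def run_generated_sql(conn, sql):
    """Guardrail sketch: execute LLM-generated SQL only if it is a
    single read-only SELECT statement."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:
        raise ValueError("multiple statements are not allowed")
    if not stripped.lower().startswith("select"):
        raise ValueError("only SELECT statements are allowed")
    return conn.execute(stripped).fetchall()

# Toy in-memory database standing in for a real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

print(run_generated_sql(conn, "SELECT name FROM users"))  # [('alice',)]
try:
    run_generated_sql(conn, "DROP TABLE users")
except ValueError as e:
    print(e)  # only SELECT statements are allowed
```

A string prefix check like this is only a first line of defense; production setups would typically also use a read-only database role and a proper SQL parser.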
To answer your question about how optimized the query is: right now there is no benchmarking tool to measure it. Also, the responses from LLMs are non-deterministic; if you ask the LLM the same question twice, it may return two different answers. So in our case, it is possible that the first time it returns an optimized query and the second time an unoptimized one.
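The non-determinism comes from how LLMs pick each token: they sample from a probability distribution rather than always taking the most likely choice. A minimal sketch of that mechanism (toy logits, not a real model; `sample_token` is a hypothetical helper): with temperature 0 the sampling collapses to argmax and becomes deterministic, while higher temperatures allow different outputs on repeated runs.

```python
import math
import random

def sample_token(logits, temperature):
    """Pick a token index from raw scores (logits).
    Temperature 0 means greedy decoding (argmax); higher
    temperatures sample from the softmax distribution."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = random.random()
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r <= cum:
            return i
    return len(logits) - 1

logits = [2.0, 1.5, 0.5]  # toy scores for three candidate tokens
greedy = {sample_token(logits, 0) for _ in range(10)}
print(greedy)  # {0}: greedy decoding always picks the same token
sampled = [sample_token(logits, 1.0) for _ in range(100)]
print(sorted(set(sampled)))  # usually more than one distinct token
```

This is why two runs of the same prompt can yield two different SQL queries; setting the API's temperature to 0 makes the output much more repeatable, though providers generally do not guarantee strict determinism even then.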
Wow, I really loved going through it. But it made me wonder how optimized, or rather how robust, the query output is. I would love to know your opinion.
karthik ale
good