@ainielsen
Bike shop mechanic
Hey Karen! Thanks for the comment! You are certainly right that defining "relevant knowledge" is critical to the discussion; without that definition there is no way to contemplate the topic rigorously. Coming up with the definition would be a thesis in its own right. In the end, while I think my overall premise is valid, I would not be surprised or disappointed if a rigorous study found out I was wrong. Ultimately, for me, it doesn't matter whether the details or conclusion are entirely accurate. Just the mental exercise of thinking about why we are (or at least, I am) inclined to assume that being older means knowing more has been helpful to me. And, in full honesty, I feel I learn a lot more when I keep this internal discussion at the front of my mind.
No extra security concerns at all. In fact, that's one reason I decided to write the function. The function uses bind variables and, although it does concatenate unescaped column names, those column names are defined by the developer at design time, not at run time, and they can't be changed at run time. The only security concern rests with the developer, which is the same as it is for a hard-coded query.
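To illustrate the pattern described above, here is a minimal, hypothetical sketch (the function name, table, and column are illustrative, not taken from the original function): the column name is a constant fixed at design time, and the runtime value travels only through a bind variable.

```sql
create or replace function get_emp_value (
    p_empno in number
) return varchar2
as
    -- Column name chosen by the developer at design time; it can never be
    -- influenced by user input at run time.
    c_column constant varchar2(30) := 'ENAME';
    l_result varchar2(4000);
begin
    execute immediate
        'select ' || c_column || ' from emp where empno = :b1'
        into l_result
        using p_empno;  -- :b1 is a bind variable, so the value is never concatenated
    return l_result;
end;
```

Because only the developer-controlled constant is concatenated, the injection surface is the same as for a hard-coded query.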
Jeleeb, the short answer is that without the scalar subquery the function will run for every row. If it is in the "select" portion it will run for every row returned; if it is in the "where" portion it will run for every row evaluated. That means a SQL-to-PL/SQL context switch for every row, which is very expensive.

When you use a scalar subquery, the database caches the result of the function (up to a limit depending on your database version). If the same bind value is used, it won't rerun the function; it will just take the value from the cache. It is significantly faster because it avoids the context switch AND it doesn't have to run the function.

In the case of (select my_function(empno) from dual), the cache won't actually help because the value of empno will be different for every row, so the function will need to run every time. There is no benefit, but it doesn't really hurt either. So, if you are just in the habit of ALWAYS doing it, then you can decide to not do it when you are sure it won't help.

Regarding point 8, it's incredibly complicated to come up with rules as to when one way or another is faster. It's easier to have a representative data set and just test it out. I suspect that with 23c the database may automatically rewrite the query based upon the data profile to get the best performance. We'll see if that happens, but for now, I suggest following point 1 and then using the method that gives you the best (fastest) result.
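The contrast above can be sketched as follows (my_function, emp, deptno, and empno are illustrative names; only my_function(empno) appears in the comment itself):

```sql
-- No scalar subquery: the function runs, with a SQL-to-PL/SQL context
-- switch, once per row returned.
select my_function(deptno) from emp;

-- Scalar subquery: the result is cached per distinct input value, so the
-- function runs only once for each distinct deptno.
select (select my_function(deptno) from dual) from emp;

-- No benefit when the argument is unique per row (e.g. a key column):
-- the cache never hits, so the function still runs for every row.
select (select my_function(empno) from dual) from emp;
```

As noted, the third form doesn't hurt either; it simply gains nothing from the cache.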
Thanks for the great comment. I have not tested it, but I think wwv_flow_id.next_val would have the same benefits as an identity column: it is ever-increasing, highly sparse, globally unique, and extremely hard to guess. I greatly value all of these characteristics. I just want Oracle to make it easier to use wwv_flow_id.next_val in QuickSQL :)
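One way to use wwv_flow_id.next_val today, since a PL/SQL function call can't be a plain column default, is a before-insert trigger. This is a hedged sketch (the table and trigger names are hypothetical, and it assumes your schema can call the APEX-provided wwv_flow_id package):

```sql
create or replace trigger demo_items_bi
    before insert on demo_items
    for each row
begin
    -- Populate the key from APEX's internal ID generator when none is supplied.
    if :new.id is null then
        :new.id := wwv_flow_id.next_val;
    end if;
end;
```

An identity column achieves the same auto-population without the trigger, which is why easier QuickSQL support for this alternative would be welcome.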
sssuarez Just one more tip... OADG is accessible via APIs: https://docs.oracle.com/en/database/oracle/apex/22.1/aeapi/APEX_DG_DATA_GEN.html In particular, if you want to generate data into just one table, you can use APEX_DG_DATA_GEN.GENERATE_DATA: https://docs.oracle.com/en/database/oracle/apex/22.1/aeapi/GENERATE_DATA-Procedure-Signature-1.html Use p_blueprint_table to specify the table name.
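A minimal sketch of the call: only p_blueprint_table is confirmed by the comment; the blueprint-name parameter (assumed here to be p_blueprint) and the values are illustrative, so check the linked GENERATE_DATA signature before use.

```sql
begin
    apex_dg_data_gen.generate_data (
        p_blueprint       => 'MY_BLUEPRINT',  -- assumed parameter name; verify in the docs
        p_blueprint_table => 'EMPLOYEES'      -- generate rows for this one table only
    );
end;
```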
Great blog post, Sandra! I have a little hack for you. If you want to re-sequence a column, just right-click on the "read only" sequence number and remove readonly="true". Supported? Definitely not, but it works just fine :) Also, you should be able to generate directly into tables even if you create a blueprint from scratch. You just need to make sure the table names, columns, constraints, etc. all work.