
PL/SQL Bulk Bind / Faster Update Statements

I'm having problems using bulk bind in PL/SQL. Basically, what I want is for a table (Component) to update its fieldvalue depending on the Component_id and fieldname. All of these are passed in as parameters (the type varchar2_nested_table is effectively an array of strings, one element for each update statement that needs to occur). So, for instance, if Component_id = 'Compid1' and fieldname = 'name', then fieldvalue should be updated to 'new component name'.

I typed up the code below based on http://www.oracle.com/technetwork/issue-archive/o14tech-plsql-l2-091157.html . The code works, but it is no faster than a simple loop that performs an update for every element in the IN parameters, so if the parameters have 1000 elements, 1000 update statements are executed. I also realise I'm not using BULK COLLECT INTO, but I didn't think I needed it, as I don't need to select anything from the database, just update it.

At the moment both approaches take 4-5 seconds for 1000 updates. I assume I'm using the bulk bind incorrectly or have a misunderstanding of the subject, as in the examples I can find, people are executing 50,000 rows in 2 seconds and so on. From what I understand, FORALL should improve performance by reducing the number of context switches. I have tried another method I found online using cursors and bulk binds, but had the same outcome. Perhaps my performance expectations are too high? I don't think so, judging from others' results. Any help would be greatly appreciated.

create or replace procedure BulkUpdate(
  sendSubject_in IN varchar2_nested_table_type,
  fieldname_in   IN varchar2_nested_table_type,
  fieldvalue_in  IN varchar2_nested_table_type) is


TYPE component_aat IS TABLE OF component.component_id%TYPE
  INDEX BY PLS_INTEGER;
TYPE fieldname_aat IS TABLE OF component.fieldname%TYPE
  INDEX BY PLS_INTEGER;
TYPE fieldvalue_aat IS TABLE OF component.fieldvalue%TYPE
  INDEX BY PLS_INTEGER;

fieldnames fieldname_aat;
fieldvalues fieldvalue_aat;
approved_components component_aat;


PROCEDURE partition_eligibility
IS
BEGIN
  FOR indx IN sendSubject_in.FIRST .. sendSubject_in.LAST
  LOOP
    approved_components(indx) := sendSubject_in(indx);
    fieldnames(indx):= fieldname_in(indx);
    fieldvalues(indx) := fieldvalue_in(indx);
  END LOOP;
END;


PROCEDURE update_components
IS
BEGIN
  FORALL indx IN approved_components.FIRST .. approved_components.LAST
    UPDATE Component
      SET Fieldvalue = fieldvalues(indx)
      WHERE Component_id = approved_components(indx)
      AND Fieldname = fieldnames(indx);
END;

BEGIN
  partition_eligibility;
  update_components;
END BulkUpdate;

Whenever we submit a PL/SQL block to the Oracle server, the SQL statements are executed by the SQL engine, while the procedural statements are executed by the procedural statement executor, which lives in the PL/SQL engine. So whenever we process a large amount of data through mixed SQL and PL/SQL statements, the Oracle server executes these statements separately through these two engines.

This execution methodology causes context switching between the two engines, which degrades the performance of the application. To overcome this problem, Oracle introduced the "bulk bind" process using collections, i.e. in this method the Oracle server binds a whole collection to a single SQL statement and executes it for all elements at once.

Bulk Collect:

Whenever we use this clause, the Oracle server automatically selects the data and stores it into collections. The BULK COLLECT clause can be used in:

  1. SELECT ... INTO clauses
  2. cursor FETCH statements
  3. DML RETURNING clauses
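As a minimal sketch of those three forms (assuming a Component table like the one in the question; this is illustrative, not the asker's code):

```sql
DECLARE
  TYPE value_tab IS TABLE OF component.fieldvalue%TYPE;
  l_values value_tab;
  CURSOR c IS SELECT fieldvalue FROM component;
BEGIN
  -- 1. SELECT ... BULK COLLECT INTO: one fetch for all rows
  SELECT fieldvalue BULK COLLECT INTO l_values FROM component;

  -- 2. Cursor fetch, with LIMIT to cap memory use per batch
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_values LIMIT 1000;
    EXIT WHEN l_values.COUNT = 0;
    -- process l_values here
  END LOOP;
  CLOSE c;

  -- 3. DML RETURNING clause: collect the values touched by an UPDATE
  UPDATE component
     SET fieldvalue = 'updated'
   WHERE fieldname = 'name'
  RETURNING fieldvalue BULK COLLECT INTO l_values;
END;
/
```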

Also see PL/SQL Bulk Collect And Bulk Bind

There is something else going on; I suspect your individual updates are each taking a lot of time, maybe because there are triggers or inefficient indexes. (Note that if each statement is expensive individually, using bulk updates won't save you a lot of time, since the context switches are negligible compared to the actual work.)
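To check that hypothesis, the standard data-dictionary views show whether triggers fire on the table and which indexes could support the WHERE clause (a diagnostic sketch; adjust the table name to your schema):

```sql
-- Any triggers that fire on DML against Component?
SELECT trigger_name, triggering_event, status
  FROM user_triggers
 WHERE table_name = 'COMPONENT';

-- Which indexes exist, and do any cover (component_id, fieldname)?
SELECT index_name, column_name, column_position
  FROM user_ind_columns
 WHERE table_name = 'COMPONENT'
 ORDER BY index_name, column_position;
```

If no index leads with component_id, every one of the 1000 updates is a full table scan, which would easily explain the 4-5 second runtime.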

Here is my test setup:

CREATE TABLE Component (
  Component_id NUMBER,
  fieldname    VARCHAR2(100),
  Fieldvalue   VARCHAR2(100),
  CONSTRAINT component_pk PRIMARY KEY (component_id, fieldname)
);

-- insert 1 million rows
INSERT INTO component 
  (SELECT ROWNUM, to_char(MOD(ROWNUM, 100)), dbms_random.string('p', 10) 
     FROM dual 
   CONNECT BY LEVEL <= 1e6);

CREATE OR REPLACE TYPE varchar2_nested_table_type AS TABLE OF VARCHAR2(100);
/

SET SERVEROUTPUT ON SIZE UNLIMITED FORMAT WRAPPED
DECLARE
   l_id    varchar2_nested_table_type;
   l_names varchar2_nested_table_type;
   l_value varchar2_nested_table_type;
   l_time  NUMBER;
BEGIN
   SELECT rownum, to_char(MOD(rownum, 100)), dbms_random.STRING('p', 10) 
     BULK COLLECT INTO l_id, l_names, l_value
     FROM dual
   CONNECT BY LEVEL <= 100000;
   l_time := dbms_utility.get_time;
   BulkUpdate(l_id, l_names, l_value);
   dbms_output.put_line((dbms_utility.get_time - l_time) || ' cs elapsed.');
END;
/

100000 rows updated in about 1.5 seconds on an unremarkable test machine. Updating the same data set row by row takes about 4 seconds.
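For reference, the row-by-row version used for comparison would look something like this (a sketch with a hypothetical name, not the exact code that was timed; it issues one UPDATE per element instead of one bulk-bound FORALL):

```sql
CREATE OR REPLACE PROCEDURE RowByRowUpdate(
  sendSubject_in IN varchar2_nested_table_type,
  fieldname_in   IN varchar2_nested_table_type,
  fieldvalue_in  IN varchar2_nested_table_type) IS
BEGIN
  -- One context switch per iteration: PL/SQL engine -> SQL engine and back
  FOR indx IN sendSubject_in.FIRST .. sendSubject_in.LAST LOOP
    UPDATE Component
       SET Fieldvalue = fieldvalue_in(indx)
     WHERE Component_id = sendSubject_in(indx)
       AND Fieldname = fieldname_in(indx);
  END LOOP;
END RowByRowUpdate;
/
```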

Can you run a similar script with a newly created table?
