Is row-by-row processing really slow-by-slow? Emphatically: YES

Often I see code that uses cursor loops to process rows one at a time. When performance is an issue, I always check whether the loop can be replaced with a single multi-row SQL statement. This (admittedly unscientific) test shows the elapsed time for updating a set of rows being cut by more than half when the processing model is changed:

orclz>
orclz> alter system flush buffer_cache;

System altered.

Elapsed: 00:00:00.27
orclz>
orclz> DECLARE
  2    CURSOR c_sales IS
  3      SELECT * from sales FOR UPDATE;
  4  BEGIN
  5    FOR row IN c_sales
  6    LOOP
  7      UPDATE sales SET amount_sold = amount_sold+1 WHERE CURRENT OF c_sales;
  8    END LOOP;
  9  END;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:58.28
orclz>
orclz> rollback;

Rollback complete.

Elapsed: 00:00:22.18
orclz> alter system flush buffer_cache;

System altered.

Elapsed: 00:00:00.18
orclz>
orclz> UPDATE sales SET amount_sold = amount_sold+1;

918843 rows updated.

Elapsed: 00:00:25.66
orclz> rollback;

Rollback complete.

Elapsed: 00:00:20.92
orclz>
When one considers the maintenance of indexes (not touched in my example) and the possibilities of parallel processing, the benefits of simple SQL become even more obvious. Of course there are times when cursor loops are necessary, but the default position must be "don't use them unless you have to".
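As a sketch of the parallel possibility mentioned above: on a suitably resourced system, the single-statement version can be parallelized with standard Oracle parallel DML, something the row-by-row loop can never exploit. The degree of 4 here is an arbitrary choice for illustration, not a recommendation:

```sql
-- Parallel DML must be enabled explicitly at the session level
ALTER SESSION ENABLE PARALLEL DML;

-- The PARALLEL hint requests a degree of parallelism of 4 (arbitrary)
UPDATE /*+ PARALLEL(sales 4) */ sales
   SET amount_sold = amount_sold + 1;

-- A commit (or rollback) is required before the session can
-- query the table again after a parallel DML operation
COMMIT;
```

Whether this actually helps depends on the size of the table, the I/O capacity of the system, and the Database Edition in use; the point is only that the option exists for the set-based statement.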
--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com