Wednesday, December 30, 2009

SQL Injection?

Ever heard of SQL Injection? Sounds like something medical but unfortunately not as benign as that.

Apparently it is a technique attackers use to slip their own SQL into a website's database queries, typically through unvalidated input fields, which lets them redirect visitors or read and tamper with our data. I just read an interesting article about this on the eWeek Security Watch site and thought I'd share. Here's a link to the article:

http://securitywatch.eweek.com/sql_injection/an_unpleasant_anniversary_eleven_years_of_sql_injection.html?kc=EWKNLDAT12242009STR2
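
To give a flavor of the problem, here is a simplified PL/SQL sketch. The employees table, last_name column, and the input value are purely illustrative; the point is the difference between concatenating user input into a statement and binding it.

-- If the user-supplied value arrives as:  x' OR '1'='1
-- the concatenated statement matches every row instead of one employee.
declare
  p_name  varchar2(100) := 'x'' OR ''1''=''1';
  v_count number;
begin
  -- Vulnerable: the input is concatenated straight into the SQL text, so it becomes
  --   SELECT COUNT(*) FROM employees WHERE last_name = 'x' OR '1'='1'
  execute immediate
    'select count(*) from employees where last_name = ''' || p_name || ''''
    into v_count;

  -- Safer: the value is passed as a bind variable and cannot change the statement.
  execute immediate
    'select count(*) from employees where last_name = :name'
    into v_count using p_name;
end;
/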

Thursday, December 17, 2009

How 11g Tries To Improve the Use of Bind Variables

I'm sure that all of us with just about any Oracle experience have encountered the enigmatic SQL query that ran in 2 seconds yesterday but takes 102 seconds today. What can cause this unpredictable behavior?

The answer is that many factors can cause this kind of degradation, but one of the most common is an execution plan that was built for one set of bind values and then reused for another. On the first invocation of a cursor, the optimizer peeks at the values of the user-defined bind variables; this is called "bind peeking". It usually has a positive effect on performance because it allows the optimizer to determine the selectivity of any WHERE clause condition as if literals had been used instead of bind variables, which improves the quality of the execution plan generated for statements that use bind variables.

Bind peeking becomes a problem when a bind variable is used against a table column with skewed data. For example, suppose we have a 20,000-row table with a part_num column. Part number "A110" occurs in only 3 rows, while part number "B235" occurs in 8,000 rows. An index is created on the part_num column, and when statistics are gathered the skew in the data distribution is detected and a histogram is generated.

When a cursor containing a query that uses a bind variable in its WHERE clause is first invoked, the optimizer peeks at the bind value, chooses an execution plan accordingly, and stores that plan in the shared pool. If the bind variable for part_num is set to "A110" at first invocation, the optimizer peeks at that value and sets the access path accordingly. However, if the bind variable is later set to "B235" and the statement is executed again, the cached plan is still in place and will be reused. An access path that is far from optimal for the new bind value is used, and performance degrades because of the difference in buffer gets required. Before 11g, one way to deal with this was to flush the shared pool so that the query would be reparsed with the new bind variable values.
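
To make the scenario concrete, here is a rough SQL*Plus sketch. The PARTS table, its part_num values, and the histogram bucket count are just the hypothetical pieces from the example above:

-- Gather statistics with a histogram on part_num so the optimizer can see the skew.
begin
  dbms_stats.gather_table_stats(
    ownname    => user,
    tabname    => 'PARTS',
    method_opt => 'FOR COLUMNS part_num SIZE 254');
end;
/

-- First invocation: the optimizer peeks at 'A110' (3 rows) and builds an index-driven plan.
variable pn varchar2(10)
exec :pn := 'A110'
select * from parts where part_num = :pn;

-- Later invocation: 'B235' matches 8,000 of the 20,000 rows, where a full scan would be
-- cheaper, but before 11g the cached plan built for 'A110' is simply reused.
exec :pn := 'B235'
select * from parts where part_num = :pn;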

In 11g, Oracle has taken a step to address this problem by introducing Adaptive Cursor Sharing (ACS). ACS allows more than one child cursor, and therefore more than one execution plan, for the same query containing bind variables. In other words, the optimizer will allow multiple execution plans to be used for a single statement that uses bind variables, so that the plan chosen suits the bind value actually supplied. Taking the example above, 11g would allow two plans to be generated to cover the difference in selectivity between the two part_num bind values.
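
You can watch ACS at work in 11g: once the statement has been executed with both bind values, V$SQL shows whether the cursor has been marked bind-sensitive and bind-aware and whether additional child cursors (plans) have been created. A quick sketch, assuming the hypothetical query from the example above:

select sql_id, child_number, is_bind_sensitive, is_bind_aware, executions
from   v$sql
where  sql_text like 'select * from parts where part_num%';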

Friday, November 13, 2009

Oracle 11g and Automatic Memory Management (AMM)

I have been very busy for the last few months and haven't posted much on this blog, but I thought I would share some notes on recent experiences installing and upgrading to Oracle 11gR1 and 11gR2 on Decipher, LLC projects. I plan to make this a series of postings.



Carrying forward the theme of improving and automating memory management that began with Oracle 9i, Oracle has included Automatic Memory Management (AMM) in 11g.
The purpose of AMM is to provide dynamic management of both the SGA and the PGA. It does this by allocating one large chunk of memory at database configuration time, sized with two new initialization parameters: MEMORY_TARGET and MEMORY_MAX_TARGET. From the Oracle 11g documentation:

"you set only a target memory size initialization parameter (MEMORY_TARGET) and optionally a maximum memory size initialization parameter (MEMORY_MAX_TARGET). The instance then tunes to the target memory size, redistributing memory as needed between the system global area (SGA) and the instance program global area (instance PGA). Because the target memory initialization parameter is dynamic, you can change the target memory size at any time without restarting the database. The maximum memory size serves as an upper limit so that you cannot accidentally set the target memory size too high, and so that enough memory is set aside for the Oracle Database instance in case you do want to increase total instance memory in the future. Because certain SGA components either cannot easily shrink or must remain at a minimum size, the instance also prevents you from setting the target memory size too low."

Before you configure your Oracle 11g instance, you should first make sure that you have enough shared memory on your server. On a Linux server, you do this by issuing:

# df -k /dev/shm
which reports the size of the shared memory filesystem in 1K blocks, the amount used and available, the usage percentage, and the mount point (/dev/shm).

# umount tmpfs
# mount -t tmpfs shmfs -o size=1200m /dev/shm
will remount the shared memory filesystem at the required size (1200 MB in this example). These commands must be run as root, and /dev/shm must not be in use when it is unmounted.

To make this change permanent you need to add the adjustment to the fstab file. To do this just edit the file /etc/fstab and add the line:

none /dev/shm tmpfs size=3000m 0 0
In this case, we set the size to 3 GB.

The next step is to optimally set your MEMORY_TARGET and MEMORY_MAX_TARGET initialization parameters.

The view V$MEMORY_TARGET_ADVICE provides tuning advice for the MEMORY_TARGET initialization parameter:

SQL> select * from v$memory_target_advice order by memory_size;

MEMORY_SIZE MEMORY_SIZE_FACTOR ESTD_DB_TIME ESTD_DB_TIME_FACTOR    VERSION
----------- ------------------ ------------ ------------------- ----------
        180                 .5          458               1.344          0
        270                .75          367              1.0761          0
        360                  1          341                   1          0
        450               1.25          335               .9817          0
        540                1.5          335               .9817          0
        630               1.75          335               .9817          0
        720                  2          335               .9817          0
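
Based on output like the listing above, the parameters can then be set. A minimal sketch, assuming an spfile is in use and with purely illustrative sizes:

-- MEMORY_MAX_TARGET is static, so it only takes effect after a restart.
alter system set memory_max_target = 720M scope = spfile;
alter system set memory_target = 450M scope = spfile;

After the instance is restarted, MEMORY_TARGET can be adjusted online at any time, up to the MEMORY_MAX_TARGET ceiling.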

While installing Oracle 11g, or when using DBCA to create a new database, you are given the option of enabling AMM.

The main point of this post is to inform you that Oracle 11g will require quite a bit more shared memory if you choose to use the new Automatic Memory Management feature.

Saturday, July 25, 2009

ORA-01000: Too many open cursors

You are running a long process, such as an export/import or a SQL*Loader script, and get the "ORA-01000: maximum open cursors exceeded" error from your Oracle database.

This problem can be encountered because Oracle keeps closed cursors cached, and those cached cursors are included in the total open cursor count. One way to determine how many cursors are actually open in a database is to submit the following SQL (selecting the statistics by name is safer than hard-coding statistic# values, which can change between releases):

select name, value from v$sysstat where name in ('opened cursors cumulative', 'opened cursors current');

The standard recommended solution to the ORA-01000 is to increase the OPEN_CURSORS database initialization parameter. However, there is a workaround:

Break your process down into smaller pieces. For example, instead of issuing a request for a full database export, do your export on a schema or table basis. You will have the inconvenience of executing more than one script or command, but you will get around the error without having to change database initialization parameters or bounce the database.
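
If you do decide to raise the limit instead, OPEN_CURSORS is a dynamic parameter, so on an instance that uses an spfile it can be changed without a restart. The value below is only an example:

SQL> show parameter open_cursors
SQL> alter system set open_cursors = 1000 scope = both;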

Monday, July 13, 2009

ORA-06401 NETCMN: invalid driver designator

While recently issuing a command to connect to a remote database, I encountered the Oracle ORA-06401 error.

Oracle documentation states the following cause, along with what it suggests to be the sole solution:

ORA-06401 NETCMN: invalid driver designator

Cause: The login (connect) string contains an invalid driver designator.

Action: Correct the string and re-submit.


This usually means that the command syntax is incorrect. However, other causes exist:

1. improper authorization for the user issuing the command
2. invalid connection string for the target database version
3. ^M (carriage return) characters present in the tnsnames.ora file. These can be seen by viewing the file with a tool that displays control characters (for example, cat -v) or by opening it in a UNIX/Linux editor such as vi. They are usually created by editing the file with a non-Linux/Unix editor and then transferring it to the server by FTP in binary mode.
Use the following vi editor command to remove all of these characters from the file (type the ^M by pressing Ctrl-V and then Ctrl-M; alternatively, run dos2unix against the file):

:%s/^M$//
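
For cause 2, it also helps to compare the connect string you are using against the tnsnames.ora entry it refers to. A typical entry looks something like the following; the alias, host, port, and service name here are purely illustrative:

MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = mydb.example.com)
    )
  )

A connection would then be made with, for example, sqlplus scott/tiger@MYDB.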

Thursday, July 9, 2009

A Recovery Manager Setup Obscure Point

When granting privileges to the RMAN catalog owner account, make sure that you grant the RECOVERY_CATALOG_OWNER role and that it is enabled as a default role for that account.
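
A minimal sketch of the setup, assuming a catalog owner account named rman_cat and a tablespace named rman_ts (both names are illustrative):

create user rman_cat identified by some_password
  default tablespace rman_ts
  quota unlimited on rman_ts;

grant create session to rman_cat;
grant recovery_catalog_owner to rman_cat;

-- Make sure the role is enabled by default for the account.
alter user rman_cat default role all;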