

IMPDP, however, just has to apply a simple XML transformation to accomplish the same: FOO, when it refers to a TABLESPACE, would be surrounded by <TABLESPACE>FOO</TABLESPACE> tags (or some other representation). The fact that XML is used has allowed the EXPDP and IMPDP tools to literally leapfrog the old EXP and IMP tools with regard to their capabilities. In Chapter 15, we'll take a closer look at these tools in general. Before we get there, however, let's see how we can use this Data Pump format to quickly extract some data from database A and move it to database B. We'll be using an “external table in reverse” here.

External tables, originally introduced in Oracle9i Release 1, gave us the ability to read flat files (plain old text files) as if they were database tables. We had the full power of SQL to process them. They were read-only and designed to get data from outside Oracle in. External tables in Oracle 10g Release 1 and above can go the other way: they can be used to get data out of the database in the Data Pump format to facilitate moving the data to another machine, another platform. To start this exercise, we'll need a DIRECTORY object, telling Oracle the location to unload to:

ops$tkyte@ORA10G> create or replace directory tmp as '/tmp'
  2  /
Directory created.

Next, we'll unload the data from the ALL_OBJECTS view. It could be from any arbitrary query, involving any set of tables or SQL constructs we want (we'll see an example of that in a moment):

ops$tkyte@ORA10G> create table all_objects_unload
  2  organization external
  3  ( type oracle_datapump
  4    default directory TMP
  5    location( 'allobjects.dat' )
  6  )
  7  as
  8  select * from all_objects
  9  /
Table created.

And that literally is all there is to it: we have a file in /tmp named allobjects.dat that contains the contents of the query select * from all_objects.
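Because the defining query can be any SQL at all, the same mechanism unloads a filtered subset or a join just as easily. Here is a minimal sketch, assuming the standard SCOTT.EMP and SCOTT.DEPT demonstration tables are available (the table name, file name, and column list are purely illustrative):

-- unload the result of a join into a second Data Pump file,
-- reusing the TMP directory object created above
create table emp_dept_unload
organization external
( type oracle_datapump
  default directory TMP
  location( 'emp_dept.dat' )
)
as
select e.ename, e.sal, d.dname
  from scott.emp e, scott.dept d
 where e.deptno = d.deptno;

The resulting emp_dept.dat file is in the same Data Pump format and would be read on the target machine exactly as shown below for allobjects.dat.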

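On the receiving side, we'll need a table with a matching definition pointing at the file. Rather than typing the column list by hand, as I do below, you could also capture the DDL from the source database first; a quick sketch using the standard DBMS_METADATA package (SET LONG simply lets SQL*Plus display the entire CLOB it returns):

set long 100000
-- print the complete CREATE TABLE ... ORGANIZATION EXTERNAL statement
-- for the unload table; adjust the directory name for the target as needed
select dbms_metadata.get_ddl( 'TABLE', 'ALL_OBJECTS_UNLOAD' )
  from dual;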
We can peek at the allobjects.dat file itself:

ops$tkyte@ORA10G> !head /tmp/allobjects.dat
..........Linuxi386/Linux-2.0.34-8.1.0WE8ISO8859P1..........
1 0 3 0 WE8ISO8859P1

That is just the head, or top, of the file; binary data is represented by the ....... (don't be surprised if your terminal “beeps” at you when you look at this data). Now, using a binary FTP (same caveat as for a DMP file!), I moved this allobjects.dat file to a Windows XP server and created a directory object to map to it:

tkyte@ORA10G> create or replace directory TMP as 'c:\temp\'
  2  /
Directory created.

Then I created a table that points to it:

tkyte@ORA10G> create table t
  2  ( OWNER VARCHAR2(30),
  3    OBJECT_NAME VARCHAR2(30),
  4    SUBOBJECT_NAME VARCHAR2(30),
  5    OBJECT_ID NUMBER,
  6    DATA_OBJECT_ID NUMBER,
  7    OBJECT_TYPE VARCHAR2(19),
  8    CREATED DATE,
  9    LAST_DDL_TIME DATE,
 10    TIMESTAMP VARCHAR2(19),
 11    STATUS VARCHAR2(7),
 12    TEMPORARY VARCHAR2(1),
 13    GENERATED VARCHAR2(1),
 14    SECONDARY VARCHAR2(1)
 15  )
 16  organization external
 17  ( type oracle_datapump
 18    default directory TMP
 19    location( 'allobjects.dat' )
 20  )
 21  /
Table created.

And now I'm able to query the data unloaded from the other database immediately:

tkyte@ORA10G> select count(*) from t;

  COUNT(*)
----------
     48018

That is the power of the Data Pump file format: immediate transfer of data from system to system over “sneaker net” if need be. Think about that the next time you'd like to take a subset of data home to work with over the weekend while testing.

One thing that wasn't obvious here was that the character sets were different between these two databases. If you notice in the preceding head output, the character set of my Linux database, WE8ISO8859P1, was encoded into the file. My Windows server was created with a different character set.
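The character set of any given database is recorded in the NLS_DATABASE_PARAMETERS view; a quick check along these lines (the value returned will, of course, depend on how each database was created) shows it:

-- run on each database; here the Linux source reports WE8ISO8859P1,
-- while the Windows target was created with a different character set
select value
  from nls_database_parameters
 where parameter = 'NLS_CHARACTERSET';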

