What is a consideration when specifying DATA CAPTURE CHANGES?
A. Can be specified for capturing changes to an XML object.
B. To minimize logging, specify NOT LOGGED when DATA CAPTURE CHANGES is specified.
C. REFRESH TABLE statement is not allowed with a table defined with DATA CAPTURE CHANGES.
D. You cannot turn on DATA CAPTURE CHANGES if the table space is in advisory REORG-pending status.
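For reference, the clause under discussion is coded on CREATE TABLE or ALTER TABLE; a minimal sketch, assuming a hypothetical table EMP_HIST:

  -- Request that DB2 write additional log information for changes to this table (hypothetical name)
  ALTER TABLE EMP_HIST DATA CAPTURE CHANGES;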
Given the following view definition: CREATE VIEW EMPD AS (SELECT D.DEPTNAME, E.LASTNAME FROM DEPT D INNER JOIN EMP E ON D.DEPTNO = E.WORKDEPT); Can an UPDATE statement be used to update the view EMPD on the joined tables DEPT and EMP?
A. No, an UPDATE statement against the EMPD view is not allowed at all.
B. Yes, the view EMPD can be updated directly via an UPDATE statement.
C. No, only a clone of the view EMPD can be updated via an UPDATE statement.
D. Yes, an UPDATE statement against the EMPD view is allowed if an INSTEAD OF trigger is defined on the view.
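As background for the INSTEAD OF option, such a trigger routes updates against a view to a base table; a rough sketch only, with an illustrative predicate (a real trigger would locate rows through a key column exposed in the view):

  -- Sketch of an INSTEAD OF UPDATE trigger on the EMPD view
  CREATE TRIGGER EMPD_UPD
    INSTEAD OF UPDATE ON EMPD
    REFERENCING OLD AS O NEW AS N
    FOR EACH ROW MODE DB2SQL
    UPDATE EMP SET LASTNAME = N.LASTNAME
      WHERE LASTNAME = O.LASTNAME;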
Which statement is true about table check constraints?
A. Only one constraint per column is allowed.
B. The LOAD utility cannot enforce the constraint.
C. A constraint placed on a table does not apply to a view defined on the table.
D. A row meets the requirement of the constraint if the condition evaluates to true or unknown.
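For context, a table check constraint is defined as part of the table definition; a minimal sketch with hypothetical table and column names:

  -- A named check constraint on a column (hypothetical names)
  CREATE TABLE EMP_PAY
    (EMPNO  CHAR(6) NOT NULL,
     SALARY DECIMAL(9,2),
     CONSTRAINT CHK_SALARY CHECK (SALARY >= 0));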
You have to design a numeric column, which is also the primary key. The column must hold large numbers (e.g., 123456789012345678). Which column data definition meets the requirements?
A. BIGINT
B. INTEGER
C. DECFLOAT(34)
D. DECIMAL(31,18)
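For reference, the data types above would appear in a definition like the following; the table and column names are hypothetical, and the type shown is only one of the candidates (an 8-byte integer column holds values up to 9223372036854775807):

  -- Hypothetical table with a large numeric primary key column
  CREATE TABLE ORDERS
    (ORDER_ID BIGINT NOT NULL,
     PRIMARY KEY (ORDER_ID));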
During a recovery of a table space in a data sharing environment, what value is used to coordinate the log records across the DB2 members?
A. RBA
B. LRSN
C. ROWID
D. SYSPITRT
What WLM action will establish performance objectives for DDF threads and their related address space?
A. Ensure that NUMTCB is set to a value greater than 10.
B. Specify DDF as the value of APPLENV, the name of the WLM application environment.
C. Execute the WLM_REFRESH stored procedure for the DDF application environment.
D. Create a WLM service definition that assigns service classes to the DDF threads.
What is the maximum number of backup levels that DB2 plan stability can support?
A. 1
B. 2
C. 3
D. 4
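Plan stability (plan management) is driven by the PLANMGMT option of REBIND; a minimal sketch, assuming a hypothetical collection and package:

  -- Rebind a package while retaining backup copies of the previous access paths
  REBIND PACKAGE(MYCOLL.MYPKG) PLANMGMT(EXTENDED)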
Your table space DB1.TS1 currently uses index-controlled partitioning. Which steps, at a minimum, must be performed to make it a partition-by-range table space?
A. Issue ALTER TABLESPACE with a valid SEGSIZE > 0.
B. Make the table space table-controlled partitioned, then issue ALTER TABLESPACE with a valid SEGSIZE > 0.
C. Drop the partitioning index, ALTER the table definition by adding partition ranges to the existing table definition, then issue ALTER TABLESPACE with a valid SEGSIZE > 0.
D. Drop the partitioning index, ALTER the table definition by adding partition ranges to the existing table definition, create a partitioned index, and issue ALTER TABLESPACE with a valid SEGSIZE > 0.
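The ALTER TABLESPACE step named in each option would look roughly like this; the SEGSIZE value is illustrative only (in DB2 10 this is a pending definition change that is materialized by a subsequent REORG):

  -- Assign a segment size to the partitioned table space DB1.TS1
  ALTER TABLESPACE DB1.TS1 SEGSIZE 32;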
Your current backup strategy is to run COPY SHRLEVEL CHANGE for all your table spaces each night. You create FULL image copies once a week and INCREMENTAL copies on the other days. After migrating to DB2 10, you plan to create the FULL image copies using FLASHCOPY YES and COPYDDN; that is, you create a FlashCopy image copy (FCIC) and, in addition, a sequential copy. What happens to the incremental copies that you continue to create on all other days?
A. If DSNZPARM FLASHCOPY_COPY is set to YES, the incremental copies will automatically be incremental FLASHCOPY copies.
B. Nothing special. The only difference is that the FULL copy, which is the basis for the subsequent incremental copies, has now been generated from the FCIC.
C. The COPY utilities with the FULL NO option will not run. If there are entries in SYSIBM.SYSCOPY that indicate that FCICs exist for a given page set, INCREMENTAL copies can no longer be taken.
D. The first image copy utility execution after the FULL FCIC and the sequential copy created from the FCIC will end up as a FULL copy, even though you specify FULL NO on the COPY utility control statement.
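The weekly full copy described above would be coded along these lines; the table space is the hypothetical DB1.TS1 and COPYDDN names a DD statement in the utility JCL:

  -- Weekly full copy: FlashCopy image copy plus a sequential copy
  COPY TABLESPACE DB1.TS1
       COPYDDN(SYSCOPY)
       FLASHCOPY YES
       FULL YES
       SHRLEVEL CHANGE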
DB2 workfiles are critical to application performance. Which of the following will NOT assist in achieving the best performance for workfile usage?
A. Create several 4K and 32K workfiles.
B. Assign workfiles to their own bufferpools.
C. Assign workfiles to their own DASD volumes.
D. Create one large workfile for the data sharing group.
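For context, additional workfile table spaces are created in the work file database (DSNDB07 in a non-data-sharing system); a rough sketch with hypothetical names:

  -- A 32K workfile table space assigned to its own buffer pool
  CREATE TABLESPACE WRK32K01 IN DSNDB07
    BUFFERPOOL BP32K1
    SEGSIZE 16;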