Duration: 2 days
- Use Connector stages to read from and write to database tables
- Handle SQL errors in Connector stages
- Use the Unstructured Data stage to extract data from Excel spreadsheets
- Use the Big Data stage to read from and write to Hadoop HDFS files
- Use the Data Masking stage to mask sensitive data processed within a DataStage job
- Use the XML stage to parse, compose, and transform XML data
- Use the Schema Library Manager to import and manage XML schemas
- Use the Data Rules stage to validate fields of data within a DataStage job
- Create custom data rules for validating data
- Design a job that processes a star schema data warehouse with Type 1 and Type 2 slowly changing dimensions
- Use the Surrogate Key Generator stage to generate surrogate keys
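The objectives above mention Type 1 and Type 2 slowly changing dimensions. As a rough sketch of the distinction only (a generic Python illustration, not DataStage-specific; the field names `cust_id`, `city`, `effective_from`, and `is_current` are assumptions for the example):

```python
from datetime import date

def scd_type1(dim_rows, key, updates):
    """Type 1: overwrite the matching row in place; no history is kept."""
    for row in dim_rows:
        if row[key] == updates[key]:
            row.update(updates)
    return dim_rows

def scd_type2(dim_rows, key, updates, today):
    """Type 2: expire the current row and append a new versioned row,
    preserving the full change history."""
    for row in dim_rows:
        if row[key] == updates[key] and row["is_current"]:
            row["is_current"] = False
    new_row = dict(updates)
    new_row.update(effective_from=today, is_current=True)
    dim_rows.append(new_row)
    return dim_rows

dim = [{"cust_id": 1, "city": "Boston",
        "effective_from": date(2020, 1, 1), "is_current": True}]

# Type 2 keeps both the old and the new city; only the new row is current.
dim = scd_type2(dim, "cust_id", {"cust_id": 1, "city": "Austin"},
                date(2021, 6, 1))
print(len(dim))                        # 2
print([r["is_current"] for r in dim])  # [False, True]
```

A Type 1 update on the same data would instead leave a single row with the new city and no record of the old value.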
- Unit 1: Accessing Databases
- Unit 2: Processing Unstructured Data
- Unit 3: Processing Big Data
- Unit 4: Data Masking
- Unit 5: Processing XML Data
- Unit 6: Using Data Rules
- Unit 7: Updating a Star Schema Database
- All units are accompanied by hands-on lab exercises.
This advanced course is for experienced DataStage developers who want training in advanced job design techniques and in working with complex types of data resources.
- Completion of the DataStage Essentials course, or equivalent knowledge
- At least one year of experience developing parallel jobs using DataStage
This course is also offered as web-based training. Please contact us for more information.