Here is another error I encountered today. It looked strange to me, as this was the first time I had run into it.
A DataStage job that contains a Lookup stage fails with an error similar to the following:

Lookup_Tcodes,0: Error writing table file "/dsData/Datasets/lookuptable.20100217.rtesd": File too large
To resolve this:
- Hash-partition the lookup data on the lookup key columns.
- Change the parallel configuration file to two or more nodes so the lookup data is broken into smaller pieces.
- Use hash partitioning only. Because the data is now split across nodes, any other partitioning method can send matching key values to different nodes and produce erroneous lookup results.
- If you are unsure about the impact on other jobs, override the configuration file at the job level rather than changing it for the whole project.
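As an illustration, a minimal two-node parallel configuration file might look like the sketch below. The fastname, disk, and scratchdisk paths are assumptions for this example; substitute the values for your own environment.

```
{
  node "node1"
  {
    fastname "etlserver"
    pools ""
    resource disk "/dsData/disk1" {pools ""}
    resource scratchdisk "/dsData/scratch1" {pools ""}
  }
  node "node2"
  {
    fastname "etlserver"
    pools ""
    resource disk "/dsData/disk2" {pools ""}
    resource scratchdisk "/dsData/scratch2" {pools ""}
  }
}
```

To limit the change to a single job, point that job at this file through the $APT_CONFIG_FILE parameter instead of editing the default configuration used by all jobs.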
Please feel free to leave comments or suggestions.