CDAP / CDAP-11106

Spark pipelines should be able to configure client resources

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 4.0.0
    • Component/s: Pipelines, UI
    • Labels: None

      Description

      Pipelines can configure driver and executor resources, but not client resources. For some pipelines this means YARN repeatedly kills the client container for exceeding its memory allocation.

      The workaround is to set the runtime argument 'task.client.system.resources.memory' to 1024 (or a higher value) before running the pipeline, as sketched below.
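      A minimal sketch of applying this workaround programmatically, assuming the standard CDAP program runtime-arguments REST endpoint and a batch pipeline whose underlying workflow is named 'DataPipelineWorkflow'; the host, namespace, and pipeline name below are placeholders, not values from this issue:

        import requests

        # Placeholder CDAP instance and pipeline; adjust for your environment.
        CDAP_HOST = "http://localhost:11015"
        NAMESPACE = "default"
        PIPELINE = "MyPipeline"
        WORKFLOW = "DataPipelineWorkflow"  # assumed workflow name for batch pipelines

        # Assumed runtime-arguments endpoint; the body is a JSON map of string keys to string values.
        url = (f"{CDAP_HOST}/v3/namespaces/{NAMESPACE}/apps/{PIPELINE}"
               f"/workflows/{WORKFLOW}/runtimeargs")

        # Raise the client container memory (in MB) so YARN does not kill the container.
        args = {"task.client.system.resources.memory": "1024"}

        response = requests.put(url, json=args)
        response.raise_for_status()

      The same key/value pair can also be entered in the pipeline's runtime arguments dialog in the UI before starting a run.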


              People

              • Assignee: Albert Shau (ashau)
              • Reporter: Albert Shau (ashau)
              • Votes: 0
              • Watchers: 1
