
Problem

With pure Python tasks, why does the server not honour ECF_TRIES?

Solution

When ecflow runs the default ECF_JOB_CMD, the job is spawned as a separate child process. This child process is then monitored for abnormal termination.

When abnormal termination is detected, ecflow will call abort and set a special flag that prevents ECF_TRIES from working.

It should be noted that the default error trapping (bash/Korn shells) calls exit(0), hence the default ECF_JOB_CMD will correctly handle ECF_TRIES.

Code Block
languagebash
titlehead.h
....
# Define an error handler
ERROR() {
  echo "ERROR called"
  set +e                                # Clear -e flag, so we don't fail
  wait                                  # wait for background process to stop
  trap 0 1 2 3 4 5 6 7 8 10 12 13 15    # reset traps for these signals, so the error handler is not called recursively
  ecflow_client --abort                 # Notify ecflow that something went wrong
  trap 0                                # Remove the trap
  exit 0                                # End the script. Notice that we call exit(0)
}

In our operations we have a specialised script (trimurti) that detaches the job from the spawned process, e.g. via nohup or via a special program called ecflow_standalone.

This bypasses the spawned-process termination issue. Also, the Korn shell error trap uses wait, i.e. to wait for background processes to stop.
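For illustration only, here is a minimal Python sketch of the same idea (the function name submit_detached and its arguments are assumptions for this example, not the actual trimurti or ecflow_standalone code): the generated job script is started in its own session, so the exit of the submitting process is never seen by the server as abnormal job termination.

Code Block
import os
import subprocess

def submit_detached(ecf_job, ecf_jobout):
    # Start the generated job script detached from the submitting process,
    # equivalent in spirit to "nohup bash job > jobout 2>&1 &".
    out = open(ecf_jobout, "wb")
    subprocess.Popen(["/bin/bash", ecf_job],
                     stdout=out,
                     stderr=subprocess.STDOUT,
                     preexec_fn=os.setsid)   # run in its own session, detached from the parent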

...

This process termination is captured by the ecflow server, causing an abort, whether the node state is aborted or submitted (due to ECF_TRIES).

This abnormal job termination prevents the aborted job from rerunning. When the second process starts running, the task in the server is already aborted, leading to zombies.


To fix your problem:

Use a dedicated script for job submission, and use a bash/Korn shell to invoke your Python scripts, using Korn shell trapping for robust error handling, like the head.h example above.

Alternatively, if you want to stick with pure Python tasks, you need to detach from the spawned process. Modify your ECF_JOB_CMD:

Code Block
edit ECF_EXTN .py     # so that the server can correctly locate your script
edit ECF_JOB_CMD "nohup python %ECF_JOB% > %ECF_JOBOUT% 2>&1 &"


Alternatively, always make sure your Python job exits cleanly after calling ecflow abort, by calling sys.exit(0):

Code Block
    def signal_handler(self, signum, frame):
        # Report the abort to the server, then exit cleanly so that ECF_TRIES still works
        print 'Aborting: Signal handler called with signal ', signum
        self.ci.child_abort("Signal handler called with signal " + str(signum))
        sys.exit(0)
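As a sketch only (the TaskClient class name and the chosen signals are assumptions, not part of the original suite), such a handler might be registered as follows, so that the usual termination signals report the abort and the job still exits with status 0:

Code Block
import signal
import sys
import ecflow

class TaskClient(object):
    def __init__(self):
        self.ci = ecflow.Client()
        # Route the usual termination signals through the handler, so that
        # the job reports an abort to the server and still exits cleanly.
        for sig in (signal.SIGINT, signal.SIGTERM, signal.SIGQUIT):
            signal.signal(sig, self.signal_handler)

    def signal_handler(self, signum, frame):
        self.ci.child_abort("Signal handler called with signal " + str(signum))
        sys.exit(0)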


Code Block
    def __exit__(self, ex_type, value, tb):
        print "Client:__exit__: ex_type:" + str(ex_type) + " value:" + str(value) + "\n" + str(tb)
        if ex_type is not None:
            # An exception occurred: report the abort, then exit cleanly
            self.ci.child_abort("Aborted with exception type " + str(ex_type) + ":" + str(value))
            sys.exit(0)
        self.ci.child_complete()
        return False
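For completeness, a hypothetical usage sketch (the Client wrapper class and run_task() are illustrative names, assuming the class also defines __enter__): used as a context manager, an unhandled exception is reported via child_abort, yet the job still exits with status 0, so the server can honour ECF_TRIES.

Code Block
with Client() as ci:
    run_task()   # hypothetical task body; any exception here is reported
                 # by __exit__ as an abort before the clean exit(0)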

