Hyperopt not working any more

When starting hyperopt with --multirun and n_jobs: 2 in the config, the following error occurs:

New best mean test/success_rate: 0.00000!
Saving new best model at /storage/gdrive/Coding/ideas_deep_rl2/data/ac47785/reach_target-state-v0/10-31-37/epeardononsuc=0&goaselstr=future2&hinsamdonifsuc=0&learat=0.001574313693769229&nsamgoa=8&setfutretzerifdon=0&subtesper=0.7&timsca=150&100/best_model
QMutex: destroying locked mutex
QMutex: destroying locked mutex
NameError: name 'logger' is not defined

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/storage/gdrive/Coding/ideas_deep_rl2/venv/lib/python3.6/site-packages/hydra/_internal/utils.py", line 379, in _run_hydra
    lambda: hydra.multirun(
  File "/storage/gdrive/Coding/ideas_deep_rl2/venv/lib/python3.6/site-packages/hydra/_internal/utils.py", line 215, in run_and_report
    raise ex
  File "/storage/gdrive/Coding/ideas_deep_rl2/venv/lib/python3.6/site-packages/hydra/_internal/utils.py", line 212, in run_and_report
    return func()
  File "/storage/gdrive/Coding/ideas_deep_rl2/venv/lib/python3.6/site-packages/hydra/_internal/utils.py", line 382, in <lambda>
    overrides=args.overrides,
  File "/storage/gdrive/Coding/ideas_deep_rl2/venv/lib/python3.6/site-packages/hydra/_internal/hydra.py", line 132, in multirun
    ret = sweeper.sweep(arguments=task_overrides)
  File "/storage/gdrive/Coding/ideas_deep_rl2/hydra_plugins/hydra_custom_optuna_sweeper/custom_optuna_sweeper.py", line 44, in sweep
    return self.sweeper.sweep(arguments)
  File "/storage/gdrive/Coding/ideas_deep_rl2/hydra_plugins/hydra_custom_optuna_sweeper/_impl.py", line 293, in sweep
    print(f"rv: {ret.return_value}")
  File "/storage/gdrive/Coding/ideas_deep_rl2/venv/lib/python3.6/site-packages/hydra/core/utils.py", line 209, in return_value
    ) from self._return_value
hydra.errors.HydraJobException: Error executing job with overrides: ['algorithm.learning_rates.0=0.001574313693769229', 'algorithm.n_sampled_goal=8', 'algorithm.subgoal_test_perc=0.7', 'algorithm.goal_selection_strategy=future2', 'algorithm.time_scales.0=150', 'algorithm.ep_early_done_on_succ=0', 'algorithm.hindsight_sampling_done_if_success=0', 'algorithm.set_fut_ret_zero_if_done=0', 'n_epochs=4']
python-BaseException

Process finished with exit code 1
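
For reference, the run was started roughly like this (a reproduction sketch; the overrides listed in the error message above were generated by the sweeper, not passed by hand):

python train.py --multirun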

I tested this with the following main.yaml in the test_environments branch:

defaults:
  # the name of the algorithm to be used ('td3', 'sac', 'dqn', 'ddpg', 'her2', 'hac')
  # here we use Hydra's config group defaults
  - algorithm: 'hac'
  - override hydra/job_logging: default
  - override hydra/sweeper: optuna
  - override hydra/sweeper/sampler: tpe
  - override hydra/launcher: custom_joblib # For multiprocessing; allows n_jobs > 1. Comment out this line to use the standard launcher, which spawns a single process at a time. The standard launcher is much better for debugging.

# The name of the OpenAI Gym environment that you want to train on.

#env: 'Blocks-o1-gripper_random-v1'
#env: 'AntReacher-v1'
#env: 'ButtonUnlock-o1-v1'
#env: 'FetchReach-v1'
#env: 'AntMaze-v0'
# Currently supported envs:
# 'FetchPush-v1',
# 'FetchSlide-v1',
# 'FetchPickAndPlace-v1',
# 'FetchReach-v1',

# 'HandManipulateBlock-v0',
# 'Hook-o1-v1',
# 'ButtonUnlock-o2-v1',
# 'ButtonUnlock-o1-v1',

# 'AntReacher-v1',
# 'Ant4Rooms-v1',
# 'AntMaze-v0',
# 'AntPush-v0',
# 'AntFall-v0',

# 'BlockStackMujocoEnv-gripper_random-o0-v1',
# 'BlockStackMujocoEnv-gripper_random-o2-v1',
# 'BlockStackMujocoEnv-gripper_above-o1-v1',
# 'BlockStackMujocoEnv-gripper_none-o1-v1',

env: 'reach_target-state-v0'
# 'close_drawer-state-v0'
# 'push_button-state-v0'
# 'slide_block_to_target-state-v0'
# 'turn_tap-state-v0'

seed: 0

# the path to where logs and policy pickles should go.
base_logdir: 'data'

# The pretrained policy file to start with to avoid learning from scratch again. Useful for interrupting and restoring training sessions.
restore_policy: null

# The number of training steps after which to evaluate the policy.
eval_after_n_steps: 2000

# The max. number of training epochs to run. One epoch consists of 'eval_after_n_steps' actions.
n_epochs: 30

# The number of testing rollouts.
n_test_rollouts: 10

# Max. number of tries for this training config.
max_try_idx: 399

# Index for first try.
try_start_idx: 100

# The last n epochs over which to average when determining the early stopping condition.
early_stop_last_n: 3

# The early stopping threshold.
early_stop_threshold: 0.05

# The data column on which early stopping is based.
early_stop_data_column: 'test/success_rate'

# A command-line comment that will be integrated into the folder where the results
# are stored. Useful for debugging and documenting temporary changes to the code.
info: ''

# The number of steps after which to save the model. Set to 0 to disable periodic saving, i.e., to only save the best and the last model.
save_model_freq: 0

# The render_args specify how and when to render during training (first sublist) and testing (second sublist).
# 'record' is for video, 'display' for direct visualization, 'none' for not rendering at all.
# The numbers determine the number of epochs after which we render the training/testing.
# Example: [['display',10],['record',1]] means that we display every 10th training and record every testing run.
render_args: [['none',10],['none',1]]

# TODO: Currently, having a subfolder conf/hydra/output is buggy
# override default dirname config
hydra:
  run:
    # add git commit hash
    dir: ${base_logdir}/${git_label:}/${env}/${now:%H-%M-%S}
  sweep:
    dir: ${base_logdir}/${git_label:}/${env}/${now:%H-%M-%S} # This way, all trials within one hyperopt run are stored in a subfolder determined by the current time.
#    dir: ${base_logdir}/${git_label:}/${env} # This way, all trials within multiple hyperopt runs are stored in the same parent folder, without using the time subfolder.
    subdir: ${hydra.job.num}

  # Note: using the sweeper below requires the --multirun commandline parameter when executing train.py
  sweeper:
    sampler:
      _target_: optuna.samplers.TPESampler
      seed: 123
      consider_prior: true
      prior_weight: 1.0
      consider_magic_clip: true
      consider_endpoints: false
      n_startup_trials: 10
      n_ei_candidates: 24
      multivariate: false
      warn_independent_sampling: true
    _target_: hydra_plugins.hydra_custom_optuna_sweeper.custom_optuna_sweeper.CustomOptunaSweeper
    direction: maximize
    study_name: hac_1_layer_rlb_reach_target
    storage: sqlite:///hac_1_layer_rlb_reach_target.db
    n_jobs: 2
    max_trials: 9999
    max_duration_minutes: 10000
    min_trials_per_param: 3
    max_trials_per_param: 9

    search_space:
      algorithm.learning_rates.0:
        type: float
        low: 6e-4
        high: 3e-2
        log: true
      #
      algorithm.n_sampled_goal:
        type: int
        low: 1
        high: 8
        step: 1
      #
      algorithm.subgoal_test_perc:
        type: float
        low: 0.0
        high: 0.7
        step: 0.1

      algorithm.goal_selection_strategy:
        type: categorical
        choices:
          - future
          - future2
          - rndend
          - rndend2

      algorithm.time_scales.0:
        type: int
        low: 30
        high: 150
        step: 10

      algorithm.ep_early_done_on_succ:
        type: int
        low: 0
        high: 2
        step: 1

      algorithm.hindsight_sampling_done_if_success:
        type: categorical
        choices:
          - 1
          - 0

      algorithm.set_fut_ret_zero_if_done:
        type: categorical
        choices:
          - 1
          - 0
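
For context, the search_space entries above are, as far as I understand the custom sweeper, turned into Optuna suggest calls roughly like the following. This is a minimal sketch and not the plugin's actual code; only the parameter names, ranges, and choices are copied from the config above, and the function name sample_params is made up.

import optuna

def sample_params(trial: optuna.Trial) -> dict:
    # type: float with log: true -> log-uniform float suggestion
    lr = trial.suggest_float("algorithm.learning_rates.0", 6e-4, 3e-2, log=True)
    # type: int with step: 1 -> integer suggestion
    n_sampled_goal = trial.suggest_int("algorithm.n_sampled_goal", 1, 8, step=1)
    # type: float with step: 0.1 -> discretized float suggestion
    subgoal_test_perc = trial.suggest_float("algorithm.subgoal_test_perc", 0.0, 0.7, step=0.1)
    # type: categorical -> choice among the listed values
    strategy = trial.suggest_categorical(
        "algorithm.goal_selection_strategy", ["future", "future2", "rndend", "rndend2"]
    )
    return {
        "algorithm.learning_rates.0": lr,
        "algorithm.n_sampled_goal": n_sampled_goal,
        "algorithm.subgoal_test_perc": subgoal_test_perc,
        "algorithm.goal_selection_strategy": strategy,
    }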