
Releases: MigOpsRepos/pg_dbms_job

Version 1.5


Mon Oct 02 2022 - Version 1.5.0

This is a maintenance release fixing one major issue.

  • Fix get_scheduled_jobs() so that a job which is still running is not
    executed again when it takes longer to complete than the interval at
    which it is scheduled. Thanks to Petru Ghita for the report.

Version 1.4


Tue Aug 22 2022 - Version 1.4.0

This is a maintenance release fixing two major issues.

  • Fix function dbms_job.interval() that was not using dynamic SQL for the UPDATE statement. Thanks to Eric-zch for the report.
  • Fix installation of the extension's update files, which were not copied to the right place.

Version 1.3


Tue Aug 16 2022 - Version 1.3.0

This is a maintenance release fixing issues reported by users since the
last release.

  • Fix extension file and regression test to remove the conditional creation of the extension's schema if it does not exist, per CVE-2022-2625.
  • Fix a case where pg_dbms_job no longer ran jobs after a PostgreSQL restart. Thanks to longforrich for the patch.

Version 1.2.0


Mon Apr 11 2022 - Version 1.2.0

This is a maintenance release fixing issues reported by users over the
past eight months since the last release. It also adds some
improvements.

  • Add configuration directive job_queue_processes to control the
    maximum number of jobs processed at the same time.
  • Keep entries in the jobs table so that the duration of a task can
    be monitored. Thanks to Jagrab3 for the patch.
  • Allow strftime() escapes in the log filename; for example, use %a
    in the file name to have one log file per weekday.
  • Add new configuration directive log_truncate_on_rotation to truncate the
    file on rotation. When activated, an existing log file with the same name
    as the new log file is truncated rather than appended to. Such truncation
    only occurs on time-driven rotation, not on restarts.
    Thanks to K RamaKrishna Sastry for the feature request.
  • Allow pg_dbms_job to run on a standby server without reporting an error.
    The daemon detects that it is running on a standby, disconnects
    immediately, and tries to reconnect 3 seconds later. Thanks to
    K RamaKrishna Sastry for the feature request.
  • Try to reconnect to PostgreSQL after 3 seconds when the connection fails.
    Thanks to K RamaKrishna Sastry for the report.
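
As a sketch, the directives above might appear together in the daemon's configuration file along these lines; the directive names come from these notes, but the exact file syntax, paths, and defaults are assumptions to be checked against the pg_dbms_job documentation:

```
# Sketch of a pg_dbms_job configuration (syntax assumed, verify with the docs)
job_queue_processes = 1000              # maximum jobs run at the same time
log_file = /var/log/pg_dbms_job-%a.log  # strftime escape %a: one file per weekday
log_truncate_on_rotation = on           # truncate rather than append on time-driven rotation
```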

Here is the complete list of other changes:

  • Fix an extra comma that made the update on failure fail.
  • Fix single execution mode following the latest change on connection-error
    handling, and update the regression tests configuration file.
  • Prevent removal of the pid file when pg_dbms_job can't find the
    configuration file and the -k option is used.
  • Do not die on error when pg_dbms_job reloads the configuration file.
  • Document the behavior of the update performed on table dbms_job.all_scheduled_jobs
    after a job completes, whether it fails or succeeds.

Version 1.1.0


Sat Aug 28 2021 - Version 1.1.0

This is a maintenance release to fix some possible wrong behaviors, give control over others, and improve the documentation.

  • Add configuration directive job_queue_processes to control the maximum number of jobs that can run at the same time.
  • Fix insertion failure in the job history table when the status returned by PQSTATUS contains a single quote.
  • Fix a possible case where asynchronous jobs could be executed twice if not removed from the queue fast enough.
  • Fix regression test to use latest SQL version of the extension.
  • Document limitations of pg_dbms_job use, especially the NOTIFY queue size limits. Thanks to Julien Rouhaud for the report.
  • Add missing information on how to stop the scheduler or reload its configuration.
  • Add information that, unlike a cron-like scheduler, when the scheduler starts it executes all active jobs with a next date in the past.
  • Add information that jobs are executed with the role that submitted the job and with the search_path that was in use at job creation time.

To upgrade an installed pg_dbms_job version 1.0.1, first install the new
version using make && sudo make install, then execute:

ALTER EXTENSION pg_dbms_job UPDATE;

Version 1.0.1

25 Aug 15:40

This PostgreSQL extension provides full compatibility with the Oracle DBMS_JOB module.

pg_dbms_job manages scheduled jobs from a job queue and can also execute jobs immediately and asynchronously. A job definition consists of the code to execute, the next date of execution, and how often the job is to be run. A job runs a SQL command, PL/pgSQL code, or an existing stored procedure.

If the submit stored procedure is called without the next_date (when) and interval (how often) attributes, the job is executed immediately in an asynchronous process. If interval is NULL and next_date is lower than or equal to the current timestamp, the job is also executed immediately as an asynchronous process. In all other cases the job is started when appropriate, but if interval is NULL the job is executed only once and then deleted.

If a scheduled job completes successfully, its new execution date is placed in next_date. The new date is calculated by evaluating the SQL expression defined as interval; this expression must evaluate to a time in the future.
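
For illustration, submitting a recurring job might look like the following. This sketch assumes an Oracle-style submit(what, next_date, interval) signature, and my_schema.refresh_stats is a hypothetical procedure; verify the exact call form against the extension's documentation:

```sql
-- Hypothetical example: run refresh_stats() every hour. The interval
-- expression is re-evaluated after each successful run to compute the
-- new next_date.
SELECT dbms_job.submit(
    'CALL my_schema.refresh_stats();',    -- what: code to execute
    now(),                                -- next_date: first execution
    $$now() + '1 hour'::interval$$        -- interval: SQL expression
);
```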

This extension consists of a SQL script that creates all the objects related to its operation, and a daemon that must run attached to the database where the jobs are defined. The daemon is responsible for executing the queued asynchronous jobs and the scheduled ones. It can run on the same host as the database where the jobs are defined, or on any other host. The schedule time is taken from the database host, not from the host where the daemon is running.

The number of jobs that can be executed at the same time is limited to 1000 by default. If this limit is reached, the daemon waits for a running job to end before starting a new one.
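
This cap on concurrent jobs can be modeled with a counting semaphore. The following Python sketch is a simplified model of that behavior, not the daemon's actual code, and run_with_limit is a name invented here:

```python
import threading

def run_with_limit(jobs, limit):
    """Run job callables in threads, at most `limit` at a time.

    Simplified model of the daemon's behavior: when the limit is
    reached, a new job waits for a running one to finish.
    """
    slots = threading.BoundedSemaphore(limit)

    def worker(job):
        with slots:  # blocks until a slot is free
            job()

    threads = [threading.Thread(target=worker, args=(j,)) for j in jobs]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```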

Using an external scheduler daemon instead of a background worker is a deliberate choice; forking thousands of sub-processes from a background worker is not a good idea.

Job execution is triggered by a NOTIFY event received by the scheduler when a new job is submitted or modified. Notifications are polled every 0.1 seconds. When there is no notification, the scheduler polls the tables where job definitions are stored every job_queue_interval seconds (5 seconds by default). This means that, at worst, a job is executed job_queue_interval seconds after its defined next execution date.
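
That worst-case delay can be modeled as follows. This is a simplified sketch (the function name and model are invented here for illustration): a job is assumed to start at the first table poll that is not earlier than its next execution date.

```python
import math

JOB_QUEUE_INTERVAL = 5.0  # default table-poll interval, in seconds

def first_poll_at_or_after(next_date, last_poll, interval=JOB_QUEUE_INTERVAL):
    """Time of the first table poll not earlier than next_date.

    Simplified model: the daemon polls the job tables every `interval`
    seconds starting from `last_poll`, so a job starts at most
    `interval` seconds after its scheduled next_date.
    """
    if next_date <= last_poll:
        return last_poll
    polls = math.ceil((next_date - last_poll) / interval)
    return last_poll + polls * interval
```

For example, with the default 5-second interval, a job due 7 seconds after the last poll starts at the 10-second poll, a 3-second delay.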