
Commit: v1.7.0 Merge
dirtycajunrice authored May 6, 2019
2 parents 32ec88a + 21ad430 commit a0c1cdf
Showing 21 changed files with 763 additions and 168 deletions.
3 changes: 2 additions & 1 deletion .gitignore
@@ -5,10 +5,11 @@
 .Trashes
 ehthumbs.db
 Thumbs.db
-__pycache__
 GeoLite2-City.mmdb
 GeoLite2-City.tar.gz
 data/varken.ini
 .idea/
 varken-venv/
 venv/
+logs/
+__pycache__
21 changes: 19 additions & 2 deletions CHANGELOG.md
@@ -1,7 +1,24 @@
 # Change Log
 
-## [v1.6.8](https://github.com/Boerderij/Varken/tree/v1.6.8) (2019-04-18)
-[Full Changelog](https://github.com/Boerderij/Varken/compare/1.6.7...v1.6.8)
+## [v1.7.0](https://github.com/Boerderij/Varken/tree/v1.7.0) (2019-05-05)
+[Full Changelog](https://github.com/Boerderij/Varken/compare/1.6.8...v1.7.0)
+
+**Implemented enhancements:**
+
+- \[ENHANCEMENT\] Add album and track totals to artist library from Tautulli [\#127](https://github.com/Boerderij/Varken/issues/127)
+- \[Feature Request\] No way to show music album / track count [\#125](https://github.com/Boerderij/Varken/issues/125)
+
+**Fixed bugs:**
+
+- \[BUG\] Invalid retention policy name causing retention policy creation failure [\#129](https://github.com/Boerderij/Varken/issues/129)
+- \[BUG\] Unifi errors on unnamed devices [\#126](https://github.com/Boerderij/Varken/issues/126)
+
+**Merged pull requests:**
+
+- v1.7.0 Merge [\#131](https://github.com/Boerderij/Varken/pull/131) ([DirtyCajunRice](https://github.com/DirtyCajunRice))
+
+## [1.6.8](https://github.com/Boerderij/Varken/tree/1.6.8) (2019-04-19)
+[Full Changelog](https://github.com/Boerderij/Varken/compare/1.6.7...1.6.8)
 
 **Implemented enhancements:**
 
4 changes: 3 additions & 1 deletion Dockerfile
@@ -2,7 +2,7 @@ FROM amd64/python:3.7.2-alpine
 
 LABEL maintainers="dirtycajunrice,samwiseg0"
 
-ENV DEBUG="False"
+ENV DEBUG="True"
 
 WORKDIR /app
 
@@ -12,6 +12,8 @@ COPY /varken /app/varken
 
 COPY /data /app/data
 
+COPY /utilities /app/data/utilities
+
 RUN apk add --no-cache tzdata && \
     python3 -m pip install -r /app/requirements.txt
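The `ENV DEBUG` default flipped above is consumed at startup by Varken's environment parsing. A simplified, stdlib-only sketch of that kind of env-driven boolean flag (the truthy-value list mirrors `enable_opts` in Varken.py, but the exact matching logic in the repo is more involved):

```python
from os import environ, getenv

# Values treated as "truthy", mirroring enable_opts in the Varken.py diff
ENABLE_OPTS = ['True', 'true', 'yes']

def debug_from_env(default=True):
    # An unset DEBUG keeps the default; any other value enables debug
    # only when it matches one of the accepted truthy strings.
    raw = getenv('DEBUG')
    if raw is None:
        return default
    return raw in ENABLE_OPTS

environ['DEBUG'] = 'True'   # what the Dockerfile now bakes in
print(debug_from_env())     # True

environ['DEBUG'] = 'no'
print(debug_from_env())     # False
```

Baking `DEBUG="True"` into the image means containers log verbosely unless the operator overrides the variable at run time.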
2 changes: 1 addition & 1 deletion Dockerfile.arm
@@ -2,7 +2,7 @@ FROM arm32v6/python:3.7.2-alpine
 
 LABEL maintainers="dirtycajunrice,samwiseg0"
 
-ENV DEBUG="False"
+ENV DEBUG="True"
 
 WORKDIR /app
2 changes: 1 addition & 1 deletion Dockerfile.arm64
@@ -2,7 +2,7 @@ FROM arm64v8/python:3.7.2-alpine
 
 LABEL maintainers="dirtycajunrice,samwiseg0"
 
-ENV DEBUG="False"
+ENV DEBUG="True"
 
 WORKDIR /app
20 changes: 10 additions & 10 deletions README.md
@@ -1,5 +1,5 @@
 <p align="center">
-  <img width="800" src="https://bin.cajun.pro/images/varken_full_banner.png">
+  <img width="800" src="https://bin.cajun.pro/images/varken_full_banner.png" alt="Logo Banner">
 </p>
 
 [![Build Status](https://jenkins.cajun.pro/buildStatus/icon?job=Varken/master)](https://jenkins.cajun.pro/job/Varken/job/master/)
@@ -11,19 +11,19 @@
 
 Dutch for PIG. PIG is an Acronym for Plex/InfluxDB/Grafana
 
-Varken is a standalone command-line utility to aggregate data
-from the Plex ecosystem into InfluxDB. Examples use Grafana for a
-frontend
+Varken is a standalone application to aggregate data from the Plex
+ecosystem into InfluxDB using Grafana for a frontend
 
 Requirements:
 * [Python 3.6.7+](https://www.python.org/downloads/release/python-367/)
 * [Python3-pip](https://pip.pypa.io/en/stable/installing/)
 * [InfluxDB](https://www.influxdata.com/)
 * [Grafana](https://grafana.com/)
 
 <p align="center">
 Example Dashboard
 
-  <img width="800" src="https://i.imgur.com/3hNZTkC.png">
+  <img width="800" src="https://i.imgur.com/3hNZTkC.png" alt="dashboard">
 </p>
 
 Supported Modules:
@@ -33,6 +33,7 @@ Supported Modules:
 * [Tautulli](https://tautulli.com/) - A Python based monitoring and tracking tool for Plex Media Server.
 * [Ombi](https://ombi.io/) - Want a Movie or TV Show on Plex or Emby? Use Ombi!
 * [Unifi](https://unifi-sdn.ubnt.com/) - The Global Leader in Managed Wi-Fi Systems
+* [Lidarr](https://lidarr.audio/) - Looks and smells like Sonarr but made for music.
 
 Key features:
 * Multiple server support for all modules
@@ -41,21 +42,20 @@ Key features:
 
 
 ## Installation Guides
-Varken Installation guides can be found in the [wiki](https://github.com/Boerderij/Varken/wiki/Installation).
+Varken Installation guides can be found in the [wiki](https://wiki.cajun.pro/books/varken/chapter/installation).
 
 ## Support
-Please read [Asking for Support](https://github.com/Boerderij/Varken/wiki/Asking-for-Support) before seeking support.
+Please read [Asking for Support](https://wiki.cajun.pro/books/varken/chapter/asking-for-support) before seeking support.
 
 [Click here for quick access to discord support](http://cyborg.decreator.dev/channels/518970285773422592/530424560504537105/). No app or account needed!
 
 ### InfluxDB
-[InfluxDB Installation Documentation](https://docs.influxdata.com/influxdb/v1.7/introduction/installation/)
+[InfluxDB Installation Documentation](https://wiki.cajun.pro/books/varken/page/influxdb-d1f)
 
 InfluxDB is required but not packaged as part of Varken. Varken will create
 its database on its own. If the InfluxDB user you give Varken does not have
 database-creation permissions, please ensure you create an InfluxDB database
 named `varken`
 
 ### Grafana
-[Grafana Installation Documentation](http://docs.grafana.org/installation/)
-Official dashboard installation instructions can be found in the [wiki](https://github.com/Boerderij/Varken/wiki/Installation#grafana)
+[Grafana Installation/Dashboard Documentation](https://wiki.cajun.pro/books/varken/page/grafana)
67 changes: 41 additions & 26 deletions Varken.py
@@ -17,6 +17,7 @@
 from varken import VERSION, BRANCH
 from varken.sonarr import SonarrAPI
 from varken.radarr import RadarrAPI
+from varken.lidarr import LidarrAPI
 from varken.iniparser import INIParser
 from varken.dbmanager import DBManager
 from varken.helpers import GeoIPHandler
Expand All @@ -28,13 +29,9 @@
PLATFORM_LINUX_DISTRO = ' '.join(x for x in linux_distribution() if x)


def thread():
while schedule.jobs:
job = QUEUE.get()
a = job()
if a is not None:
schedule.clear(a)
QUEUE.task_done()
def thread(job, **kwargs):
worker = Thread(target=job, kwargs=dict(**kwargs))
worker.start()


if __name__ == "__main__":
@@ -43,7 +40,8 @@ def thread():
                                      formatter_class=RawTextHelpFormatter)
 
     parser.add_argument("-d", "--data-folder", help='Define an alternate data folder location')
-    parser.add_argument("-D", "--debug", action='store_true', help='Use to enable DEBUG logging')
+    parser.add_argument("-D", "--debug", action='store_true', help='Use to enable DEBUG logging. (Deprecated)')
+    parser.add_argument("-ND", "--no_debug", action='store_true', help='Use to disable DEBUG logging')
 
     opts = parser.parse_args()
 
@@ -72,10 +70,15 @@ def thread():
     enable_opts = ['True', 'true', 'yes']
     debug_opts = ['debug', 'Debug', 'DEBUG']
 
+    if not opts.debug:
+        opts.debug = True
+
     if getenv('DEBUG') is not None:
         opts.debug = True if any([getenv(string, False) for true in enable_opts
                                   for string in debug_opts if getenv(string, False) == true]) else False
 
+    elif opts.no_debug:
+        opts.debug = False
+
     # Initiate the logger
     vl = VarkenLogger(data_folder=DATA_FOLDER, debug=opts.debug)
     vl.logger.info('Starting Varken...')
@@ -98,72 +101,84 @@ def thread():
             SONARR = SonarrAPI(server, DBMANAGER)
             if server.queue:
                 at_time = schedule.every(server.queue_run_seconds).seconds
-                at_time.do(QUEUE.put, SONARR.get_queue).tag("sonarr-{}-get_queue".format(server.id))
+                at_time.do(thread, SONARR.get_queue).tag("sonarr-{}-get_queue".format(server.id))
             if server.missing_days > 0:
                 at_time = schedule.every(server.missing_days_run_seconds).seconds
-                at_time.do(QUEUE.put, SONARR.get_missing).tag("sonarr-{}-get_missing".format(server.id))
+                at_time.do(thread, SONARR.get_calendar, query="Missing").tag("sonarr-{}-get_missing".format(server.id))
             if server.future_days > 0:
                 at_time = schedule.every(server.future_days_run_seconds).seconds
-                at_time.do(QUEUE.put, SONARR.get_future).tag("sonarr-{}-get_future".format(server.id))
+                at_time.do(thread, SONARR.get_calendar, query="Future").tag("sonarr-{}-get_future".format(server.id))
 
     if CONFIG.tautulli_enabled:
         GEOIPHANDLER = GeoIPHandler(DATA_FOLDER)
-        schedule.every(12).to(24).hours.do(QUEUE.put, GEOIPHANDLER.update)
+        schedule.every(12).to(24).hours.do(thread, GEOIPHANDLER.update)
         for server in CONFIG.tautulli_servers:
             TAUTULLI = TautulliAPI(server, DBMANAGER, GEOIPHANDLER)
             if server.get_activity:
                 at_time = schedule.every(server.get_activity_run_seconds).seconds
-                at_time.do(QUEUE.put, TAUTULLI.get_activity).tag("tautulli-{}-get_activity".format(server.id))
+                at_time.do(thread, TAUTULLI.get_activity).tag("tautulli-{}-get_activity".format(server.id))
             if server.get_stats:
                 at_time = schedule.every(server.get_stats_run_seconds).seconds
-                at_time.do(QUEUE.put, TAUTULLI.get_stats).tag("tautulli-{}-get_stats".format(server.id))
+                at_time.do(thread, TAUTULLI.get_stats).tag("tautulli-{}-get_stats".format(server.id))
 
     if CONFIG.radarr_enabled:
         for server in CONFIG.radarr_servers:
             RADARR = RadarrAPI(server, DBMANAGER)
             if server.get_missing:
                 at_time = schedule.every(server.get_missing_run_seconds).seconds
-                at_time.do(QUEUE.put, RADARR.get_missing).tag("radarr-{}-get_missing".format(server.id))
+                at_time.do(thread, RADARR.get_missing).tag("radarr-{}-get_missing".format(server.id))
             if server.queue:
                 at_time = schedule.every(server.queue_run_seconds).seconds
-                at_time.do(QUEUE.put, RADARR.get_queue).tag("radarr-{}-get_queue".format(server.id))
+                at_time.do(thread, RADARR.get_queue).tag("radarr-{}-get_queue".format(server.id))
 
+    if CONFIG.lidarr_enabled:
+        for server in CONFIG.lidarr_servers:
+            LIDARR = LidarrAPI(server, DBMANAGER)
+            if server.queue:
+                at_time = schedule.every(server.queue_run_seconds).seconds
+                at_time.do(thread, LIDARR.get_queue).tag("lidarr-{}-get_queue".format(server.id))
+            if server.missing_days > 0:
+                at_time = schedule.every(server.missing_days_run_seconds).seconds
+                at_time.do(thread, LIDARR.get_calendar, query="Missing").tag(
+                    "lidarr-{}-get_missing".format(server.id))
+            if server.future_days > 0:
+                at_time = schedule.every(server.future_days_run_seconds).seconds
+                at_time.do(thread, LIDARR.get_calendar, query="Future").tag("lidarr-{}-get_future".format(
+                    server.id))
+
     if CONFIG.ombi_enabled:
         for server in CONFIG.ombi_servers:
             OMBI = OmbiAPI(server, DBMANAGER)
             if server.request_type_counts:
                 at_time = schedule.every(server.request_type_run_seconds).seconds
-                at_time.do(QUEUE.put, OMBI.get_request_counts).tag("ombi-{}-get_request_counts".format(server.id))
+                at_time.do(thread, OMBI.get_request_counts).tag("ombi-{}-get_request_counts".format(server.id))
             if server.request_total_counts:
                 at_time = schedule.every(server.request_total_run_seconds).seconds
-                at_time.do(QUEUE.put, OMBI.get_all_requests).tag("ombi-{}-get_all_requests".format(server.id))
+                at_time.do(thread, OMBI.get_all_requests).tag("ombi-{}-get_all_requests".format(server.id))
             if server.issue_status_counts:
                 at_time = schedule.every(server.issue_status_run_seconds).seconds
-                at_time.do(QUEUE.put, OMBI.get_issue_counts).tag("ombi-{}-get_issue_counts".format(server.id))
+                at_time.do(thread, OMBI.get_issue_counts).tag("ombi-{}-get_issue_counts".format(server.id))
 
     if CONFIG.sickchill_enabled:
         for server in CONFIG.sickchill_servers:
             SICKCHILL = SickChillAPI(server, DBMANAGER)
             if server.get_missing:
                 at_time = schedule.every(server.get_missing_run_seconds).seconds
-                at_time.do(QUEUE.put, SICKCHILL.get_missing).tag("sickchill-{}-get_missing".format(server.id))
+                at_time.do(thread, SICKCHILL.get_missing).tag("sickchill-{}-get_missing".format(server.id))
 
     if CONFIG.unifi_enabled:
         for server in CONFIG.unifi_servers:
             UNIFI = UniFiAPI(server, DBMANAGER)
             at_time = schedule.every(server.get_usg_stats_run_seconds).seconds
-            at_time.do(QUEUE.put, UNIFI.get_usg_stats).tag("unifi-{}-get_usg_stats".format(server.id))
+            at_time.do(thread, UNIFI.get_usg_stats).tag("unifi-{}-get_usg_stats".format(server.id))
 
     # Run all on startup
     SERVICES_ENABLED = [CONFIG.ombi_enabled, CONFIG.radarr_enabled, CONFIG.tautulli_enabled, CONFIG.unifi_enabled,
-                        CONFIG.sonarr_enabled, CONFIG.sickchill_enabled]
+                        CONFIG.sonarr_enabled, CONFIG.sickchill_enabled, CONFIG.lidarr_enabled]
     if not [enabled for enabled in SERVICES_ENABLED if enabled]:
         vl.logger.error("All services disabled. Exiting")
         exit(1)
 
-    WORKER = Thread(target=thread)
-    WORKER.start()
-
     schedule.run_all()
 
     while schedule.jobs:
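The core change in Varken.py replaces the single queue-draining worker with a helper that spawns a short-lived thread per scheduled job, so a slow API call no longer blocks the other collectors. A stdlib-only sketch of that pattern (the `schedule` wiring is omitted; `fake_job` is a stand-in for the real API calls such as `SONARR.get_calendar`):

```python
from threading import Thread

def thread(job, **kwargs):
    # Spawn a short-lived worker per scheduled job, as in the new Varken.py helper
    worker = Thread(target=job, kwargs=dict(**kwargs))
    worker.start()
    return worker  # returned here so the caller can join(); Varken itself discards it

results = []

def fake_job(query=None):
    # Stand-in for e.g. SONARR.get_calendar(query="Missing")
    results.append(query)

worker = thread(fake_job, query="Missing")
worker.join()
print(results)  # ['Missing']
```

This also explains the merged `get_calendar(query=...)` signature: keyword arguments now flow from the scheduler through `thread()` to the job, which a bare `QUEUE.put(callable)` could not carry.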
13 changes: 13 additions & 0 deletions data/varken.example.ini
@@ -1,6 +1,7 @@
 [global]
 sonarr_server_ids = 1,2
 radarr_server_ids = 1,2
+lidarr_server_ids = false
 tautulli_server_ids = 1
 ombi_server_ids = 1
 sickchill_server_ids = false
@@ -69,6 +70,18 @@ queue_run_seconds = 300
 get_missing = true
 get_missing_run_seconds = 300
 
+[lidarr-1]
+url = lidarr1.domain.tld:8686
+apikey = xxxxxxxxxxxxxxxx
+ssl = false
+verify_ssl = false
+missing_days = 30
+missing_days_run_seconds = 300
+future_days = 30
+future_days_run_seconds = 300
+queue = true
+queue_run_seconds = 300
+
 [ombi-1]
 url = ombi.domain.tld
 apikey = xxxxxxxxxxxxxxxx
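The new `[lidarr-1]` block follows the same shape as the other server sections. A sketch of how such a section reads with the stdlib `configparser` (the fragment below is a trimmed copy of the example section above, not Varken's actual `INIParser`):

```python
import configparser

# Trimmed copy of the new [lidarr-1] example section
SAMPLE = """
[lidarr-1]
url = lidarr1.domain.tld:8686
apikey = xxxxxxxxxxxxxxxx
ssl = false
missing_days = 30
future_days = 30
queue = true
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)
lidarr = config["lidarr-1"]

# Typed accessors convert the raw strings from the INI file
print(lidarr.get("url"))              # lidarr1.domain.tld:8686
print(lidarr.getint("missing_days"))  # 30
print(lidarr.getboolean("queue"))     # True
```

Note that `lidarr_server_ids = false` in `[global]` keeps the module disabled until the operator replaces it with a comma-separated ID list matching the section suffixes.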
4 changes: 4 additions & 0 deletions docker-compose.yml
@@ -5,6 +5,7 @@ networks:
 services:
   influxdb:
     hostname: influxdb
+    container_name: influxdb
     image: influxdb
     networks:
       - internal
@@ -13,6 +14,7 @@ services:
     restart: unless-stopped
   varken:
     hostname: varken
+    container_name: varken
     image: boerderij/varken
     networks:
       - internal
@@ -27,6 +29,7 @@
     restart: unless-stopped
   grafana:
     hostname: grafana
+    container_name: grafana
     image: grafana/grafana
     networks:
       - internal
@@ -41,4 +44,5 @@
       - GF_INSTALL_PLUGINS=grafana-piechart-panel,grafana-worldmap-panel
     depends_on:
       - influxdb
+      - varken
     restart: unless-stopped
12 changes: 6 additions & 6 deletions requirements.txt
@@ -2,9 +2,9 @@
 # Potential requirements.
 # pip3 install -r requirements.txt
 #---------------------------------------------------------
-requests>=2.20.1
-geoip2>=2.9.0
-influxdb>=5.2.0
-schedule>=0.5.0
-distro>=1.3.0
-urllib3>=1.22
+requests==2.21
+geoip2==2.9.0
+influxdb==5.2.0
+schedule==0.6.0
+distro==1.4.0
+urllib3==1.24.2
