Scripts
Archiving
extract_gpx_data_from_dashcams.py
Purpose
I use this script to delete old dashcam footage and replace it with GPX data extracted from the footage itself. This works for my AUKEY DR02 D dashcam (with its separate GPS unit). It should work for other dashcams as well.
You can open the generated gpx files with programs like GPXSee or GPX-viewer.
Steps
put the dashcam footage in the appropriate directory
edit the gpx.fmt file if needed
Important
Do not skip step 2. Read the comments in the file.
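As a sketch of what the extraction step looks like, the snippet below assembles the kind of exiftool invocation that prints GPS metadata through the gpx.fmt format file. The helper name and the sample file path are hypothetical; redirect the command's output to a .gpx file to obtain the track.

```python
import shlex

def build_exiftool_gpx_command(video_path: str, fmt_path: str = "gpx.fmt") -> str:
    # exiftool's `-p FMTFILE` option prints tags through a format file;
    # `-ee` extracts embedded (per-frame) GPS data from the video stream.
    return shlex.join(["exiftool", "-ee", "-p", fmt_path, video_path])

print(build_exiftool_gpx_command("DCIM/FILE0001.MP4"))
```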
References
Programming languages
python
perl
Dependencies
Name | Binaries | Version
---|---|---
Python | | 3.8.5
fpyutils | | 1.2.0
exiftool | | 12.00
PyYAML | | 5.4.1
Licenses
GPLv3+
YAML data
<--YAML-->
extract_gpx_data_from_dashcams.py:
category: archiving
running user: myuser
configuration files:
paths:
- gpx.fmt
- extract_gpx_data_from_dashcams.myuser.yaml
systemd unit files:
paths:
service:
- extract-gpx-data-from-dashcams.myuser.service
timer:
- extract-gpx-data-from-dashcams.myuser.timer
<!--YAML-->
pdftoocr.sh
Purpose
I use this script to transform paper documents into OCR'd PDFs.
Examples
This script processes one file per directory. The output filename will be the SHA-1 sum of the directory name. For example, given documents/a/out.pdf, three files will result:
File name | Description
---|---
 | the compressed, archivable, grayscaled and OCR'd version of
 | a text file of the OCR'd text from
 | a checksum file containing the SHA-512 checksums of
In fact
$ echo -n 'a' | sha1sum
corresponds to 86f7e437faa5a7fce15d1ddcb9eaeaea377667b8.
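The naming rule can be sketched in a few lines of Python; output_basename is a hypothetical helper written for illustration, not part of the script:

```python
import hashlib
from pathlib import Path

def output_basename(document_path: str) -> str:
    # The output files are named after the SHA-1 sum of the
    # directory containing the scanned document.
    directory = Path(document_path).parent.name
    return hashlib.sha1(directory.encode()).hexdigest()

print(output_basename("documents/a/out.pdf"))
# 86f7e437faa5a7fce15d1ddcb9eaeaea377667b8
```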
Steps
scan documents with
$ simple-scan
save the output file as ${OUTPUT_FILE}
if you want to keep colors, run
$ touch "${COLOR_OVERRIDE_FILE}"
in the directory. This file will be automatically deleted once the script ends.
Important
Along with installing the listed dependencies you need to install the appropriate Tesseract language data files.
References
Programming languages
bash
Dependencies
Name | Binaries | Version
---|---|---
GNU Bash | | 5.0.007
Findutils | | 4.6.0
Gawk | | 4.2.1
GNU Coreutils | | 8.31
Ghostscript | | 9.27
OCRmyPDF | | 8.3.0
Document Scanner | | 3.36.0
Tesseract OCR | | 4.1.1
Configuration files
Important
It is very important to set the OCR_LANG variable.
Licenses
CC-BY-SA 3.0
YAML data
<--YAML-->
pdftoocr.sh:
category: archiving
running user: myuser
configuration files:
paths:
- pdftoocr_deploy.sh
- pdftoocr_deploy.myuser_documents.conf
- pdftoocr.myuser_documents.conf
systemd unit files:
paths:
service:
- pdftoocr.myuser_documents.service
timer:
- pdftoocr.myuser_documents.timer
<!--YAML-->
youtube_dl.py
Purpose
I use this script to download and archive videos from various platforms.
Steps
get a list of urls and divide them by subject
optionally run common command 1
References
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
youtube-dl | | 2020.06.16.1
Python | | 3.8.4
aria2 | | 1.35.0
fpyutils | | 1.2.0
PyYAML | | 5.4.1
Configuration files
Three files must exist for each subject:
the *.yaml file is a generic configuration file
the *.options file contains most of the options used by youtube-dl
the *.txt file contains a list of source URLs
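A quick way to check that all three files exist for a subject can be sketched like this; the helper names and the base directory are hypothetical:

```python
from pathlib import Path

def subject_config_names(subject: str) -> list:
    # Each subject needs a *.yaml, a *.options and a *.txt file.
    return [f"youtube_dl.{subject}.{ext}" for ext in ("yaml", "options", "txt")]

def missing_files(subject: str, base: str = ".") -> list:
    # Return the per-subject files that are not present yet.
    return [n for n in subject_config_names(subject) if not (Path(base) / n).exists()]

print(subject_config_names("some_subject"))
```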
Licenses
GPLv3+
YAML data
<--YAML-->
youtube_dl.py:
category: archiving
running user: myuser
configuration files:
paths:
- youtube_dl.some_subject.yaml
- youtube_dl.some_subject.options
- youtube_dl.some_subject.txt
systemd unit files:
paths:
service:
- youtube-dl.some_subject.service
timer:
- youtube-dl.some_subject.timer
<!--YAML-->
archive_invoice_files.py
Purpose
I use this script to archive and print invoice files.
Invoice files are downloaded from PEC accounts (certified mail) as attachments. An HTML file corresponding to the XML invoice file is archived and printed. Finally, a notification is sent to a Gotify instance. During this process, cryptographic signatures and integrity checks are performed.
Steps
install the CUPS development headers
Create a new virtual environment as explained in this post, and call it archive_invoice_files.
Once activated you can run these commands, tested for fattura-elettronica-reader version 2.1.0:
pip3 install wheel
pip3 install requests fpyutils==2.1.0 python-dateutil fattura-elettronica-reader==3.0.0 WeasyPrint==52.1 pycups lxml
optionally run common command 1
Important
To be able to install pycups and to use WeasyPrint, CUPS must be already installed.
Warning
If an error similar to this is raised:
UserWarning: FontConfig: No fonts configured. Expect ugly output.
install a font such as DejaVu.
References
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
Python | | 3.9.0
Requests | | 2.26.0
dateutil | | 2.8.1
lxml | | 4.6.2
pycups | | 2.0.1
WeasyPrint | | 52.1
fattura-elettronica-reader | | 2.1.0
fpyutils | | 2.1.0
PyYAML | | 6.0
Licenses
GPLv2+
GPLv3+
YAML data
<--YAML-->
archive_invoice_files.py:
category: archiving
running user: myuser
configuration files:
paths:
- archive_invoice_files.myuser.yaml
systemd unit files:
paths:
service:
- archive-invoice-files.myuser.service
timer:
- archive-invoice-files.myuser.timer
<!--YAML-->
archive_media_files.py
Purpose
I use this script to archive media files from removable drives such as SD cards.
Files are archived using this schema:
${device_uuid}/${year}/${month}
Udisks2 hung frequently, so I had to write this new script, which uses traditional mount commands. Parallelization of rsync and of the metadata extraction was also added.
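The archiving schema can be sketched as below; archive_path is a hypothetical helper that mirrors the ${device_uuid}/${year}/${month} layout, deriving the date from the file's modification time:

```python
import datetime
from pathlib import PurePosixPath

def archive_path(archive_root: str, device_uuid: str, mtime: float) -> PurePosixPath:
    # Files are archived under ${device_uuid}/${year}/${month};
    # the UTC modification time decides the year/month bucket.
    d = datetime.datetime.fromtimestamp(mtime, tz=datetime.timezone.utc)
    return PurePosixPath(archive_root) / device_uuid / str(d.year) / f"{d.month:02d}"

print(archive_path("/mnt/archive", "ABCD-1234", 0))
```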
Steps
get a device with media files
get the filesystem UUID with:
$ lsblk -o name,uuid
Follow the Automatic backup on a removable USB drive on plug in example in the borgmatic_hooks.py script
get the user id and group id of the user corresponding to the path where the files will be archived
References
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
Python | | 3.8.5
exiftool | | 11.16
fpyutils | | 1.2.2
rsync | | 3.1.3
PyYAML | | 5.4.1
Configuration files
Add the following to /etc/udev/rules.d/99-usb-automount.rules
and replace:
${UUID}: the UUID as explained above
ACTION=="add", SUBSYSTEMS=="usb", ENV{ID_FS_UUID}=="${UUID}", SUBSYSTEM=="block", RUN{program}+="/usr/bin/bash -c '(/usr/bin/systemctl start ${MOUNT}.mount && /usr/bin/systemctl start archive-media-files.mypurpose.service; /usr/bin/systemctl start udev-umount.home-myuser-media-auto-backup.service) &'"
Then run:
udevadm control --reload
udevadm trigger
Systemd unit files
You can use $$(systemd-escape ${mountpoint})
to escape the strings correctly.
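If systemd-escape is not at hand, the escaping rule can be approximated in Python. This is a simplified re-implementation for illustration only, not the script's code, and it does not cover every corner case of the real tool:

```python
def systemd_escape_path(path: str) -> str:
    # Approximation of `systemd-escape --path`: strip surrounding slashes,
    # map '/' to '-', hex-escape other characters unsafe in unit names.
    trimmed = path.strip("/") or "/"
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_:" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.extend(f"\\x{b:02x}" for b in ch.encode())
    return "".join(out)

print(systemd_escape_path("/home/myuser/media/auto/backup"))
```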
Licenses
GFDLv1.3+
YAML data
<--YAML-->
archive_media_files.py:
category: archiving
running user: root
configuration files:
paths:
- archive_media_files.mypurpose.yaml
systemd unit files:
paths:
service:
- archive-media-files.mypurpose.service
- udev-umount.home-myuser-media-auto-backup.service
<!--YAML-->
archive_emails.py
Purpose
I use this script to get a local copy of all my emails.
References
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
Python | | 3.8.5
fpyutils | | 1.2.0
OfflineIMAP | | 7.3.3
PyYAML | | 5.4.1
Licenses
GFDLv1.3+
YAML data
<--YAML-->
archive_emails.py:
category: archiving
running user: myuser
configuration files:
paths:
- archive_emails.myuser.yaml
- archive_emails.myuser.options
systemd unit files:
paths:
service:
- archive-emails.myuser.service
timer:
- archive-emails.myuser.timer
<!--YAML-->
archive_media_with_label.py
Purpose
I use this script to add a label to physical media such as tapes, CDs, etc…
Steps
run the program with the appropriate parameters
rename the file
print or write down the label and stick it on the media
once you have filled a box, print or write down all the labels as a single one and stick it on the box
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
python-tabulate | | 0.8.7
fpyutils | | 1.2.0
PyYAML | | 5.4.1
Licenses
GPLv3+
YAML data
<--YAML-->
archive_media_with_label.py:
category: archiving
running user: myuser
configuration files:
paths:
- archive_media_with_label.yaml
<!--YAML-->
Audio
set-turntable-loopback-sound.service
Purpose
I use this script to enable the loopback sound of a SONY PS-LX300USB turntable.
Steps
connect the turntable via USB 2.0 type B to the computer
Programming languages
bash
Dependencies
Name | Binaries | Version
---|---|---
GNU Bash | | 5.0.007
alsa-utils | | 1.1.9
Configuration files
To avoid aplay blocking the output, configure ALSA with dmix PCMs. Use aplay -l to find the device names.
In my case I also want to duplicate the analog and HDMI outputs; there is, however, a slight delay on the HDMI audio.
Licenses
CC-BY-SA 3.0
YAML data
<--YAML-->
set-turntable-loopback-sound.service:
category: audio
running user: mydesktopuser
configuration files:
paths:
- set-turntable-loopback-sound.asoundrc
systemd unit files:
paths:
service:
- set-turntable-loopback-sound.service
<!--YAML-->
Backups
borgmatic_hooks.py
Purpose
I use this script to send notifications during hard drive backups.
A script to mount the backed up archives is also included here.
Examples
Automatic backup on a removable USB drive on plug in
I use a variation of this script to archive important documents on USB flash drives just in case all the backups fail.
After creating a filesystem, add its entry in the /etc/fstab
file.
See also https://www.freedesktop.org/software/systemd/man/systemd.mount.html#fstab
Remove the ExecStartPre
instruction from the provided systemd service unit file.
To automatically mount the filesystem create a file called /etc/udev/rules.d/99-usb-automount.rule
and add a udev rule like this:
ACTION=="add", SUBSYSTEMS=="usb", ENV{ID_FS_UUID}=="${filesystem UUID}", SUBSYSTEM=="block", RUN{program}+="/usr/bin/bash -c '(/usr/bin/systemctl start backed_up_mountpoint.mount && systemctl start borgmatic.myhostname_backed_up_mountpoint.service && /usr/bin/systemctl start udev-umount.myhostname_backed_up_mountpoint.service) &'"
where ${filesystem UUID}
corresponds to # udevadm info --name=${partition} | grep "ID_FS_UUID="
Finally, use the provided udev-umount.backed-up-mountpoint.service
file.
Steps
create a new borg repository
Note
We want to avoid encryption because:
it works with older versions of borg
it is simpler
these are not offsite backups
Important
There are two different types of setups: local and remote repositories.
Note
We will assume that:
our source directory is a mountpoint at /backed/up/mountpoint. This makes sense if we want to back up /root or /home for example.
our borg directories will be under /mnt/backups
For example, if we want to back up /home and our hostname is mypc we would have:
/mnt/backups/mypc_home.borg
To create a local repository run:
$ borg init -e none /mnt/backups/myhostname_backed_up_mountpoint.borg
For remote repositories run common command 1 using borgmatic as parameter on the destination (backup) server. Create an SSH key pair so that you can connect to the destination server. On the source server run:
$ borg init -e none user@host:/mnt/backups/myhostname_backed_up_mountpoint.borg
edit the Borgmatic YAML configuration file
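The repository naming convention above can be sketched as a small helper (hypothetical, for illustration only):

```python
def borg_repo_path(hostname: str, mountpoint: str,
                   backups_root: str = "/mnt/backups") -> str:
    # Repositories are named ${hostname}_${mountpoint}.borg,
    # with the mountpoint's slashes mapped to underscores.
    suffix = mountpoint.strip("/").replace("/", "_") or "root"
    return f"{backups_root}/{hostname}_{suffix}.borg"

print(borg_repo_path("mypc", "/home"))
```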
References
https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/
https://torsion.org/borgmatic/docs/how-to/deal-with-very-large-backups/
https://borgbackup.readthedocs.io/en/stable/usage/init.html?highlight=encryption
https://borgbackup.readthedocs.io/en/stable/deployment/image-backup.html
https://projects.torsion.org/witten/borgmatic/raw/branch/master/sample/systemd/borgmatic.service
https://projects.torsion.org/witten/borgmatic/raw/branch/master/sample/systemd/borgmatic.timer
Programming languages
bash
Dependencies
Name | Binaries | Version
---|---|---
GNU Bash | | 5.1.004
Python | | 3.9.1
fpyutils | | 1.2.2
borgmatic | | 1.5.12
Python-LLFUSE | | 1.3.8
Configuration files
I use a set of configuration files per mountpoint to back up.
Systemd unit files
I use a set of systemd unit files per mountpoint to back up.
To mount all the archives of a borg backup, simply start the borgmatic-mount service. To unmount them, stop the service.
Tip
You can use this systemd service unit file to backup when the computer shuts down.
When my computer shuts down my home directory gets backed up on the server.
What I need are the configuration and normal files: I don't care about ~/.cache, the shell history, or the browser's history and cache. You should edit the configuration file to reflect that.
Although this service remains active all the time, the synchronization action runs when the system is halted, using an ExecStop directive. Since we don't know how much time the synchronization takes, a TimeoutStopSec=infinity directive is present.
#
# borgmatic.myhostname_backed_up_mountpoint.service
#
# Copyright (C) 2016-2020 Dan Helfman <https://projects.torsion.org/witten/borgmatic/raw/branch/master/sample/systemd/borgmatic.service>
# 2020 Franco Masotti (franco \D\o\T masotti {-A-T-} tutanota \D\o\T com)
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# See https://superuser.com/questions/1016827/how-do-i-run-a-script-before-everything-else-on-shutdown-with-systemd
#
# Copyright (C) 2015 le_me @ Stack Overflow (https://superuser.com/a/1016848)
# Copyright (C) 2017 Community @ Stack Overflow (https://superuser.com/a/1016848)
# Copyright (C) 2020 Franco Masotti (franco \D\o\T masotti {-A-T-} tutanota \D\o\T com)
#
# This script is licensed under a
# Creative Commons Attribution-ShareAlike 3.0 International License.
#
# You should have received a copy of the license along with this
# work. If not, see <http://creativecommons.org/licenses/by-sa/3.0/>.
[Unit]
Description=borgmatic myhostname_backed_up_mountpoint backup
Wants=network-online.target
After=network-online.target
ConditionACPower=true
Requires=backed-up-mountpoint.mount
Requires=mnt-backups-myhostname_backed_up_mountpoint.mount
After=backed-up-mountpoint.mount
After=mnt-backups-myhostname_backed_up_mountpoint.mount
[Service]
Type=oneshot
# Lower CPU and I/O priority.
Nice=19
CPUSchedulingPolicy=batch
IOSchedulingClass=best-effort
IOSchedulingPriority=7
IOWeight=100
# Do not retry.
Restart=no
# Prevent rate limiting of borgmatic log events. If you are using an older version of systemd that
# doesn't support this (pre-240 or so), you may have to remove this option.
LogRateLimitIntervalSec=0
ExecStart=/bin/true
RemainAfterExit=yes
TimeoutStopSec=infinity
ExecStop=/usr/bin/borgmatic --config /home/jobs/scripts/by-user/root/borgmatic.myhostname_backed_up_mountpoint.yaml --syslog-verbosity 1
User=root
Group=root
[Install]
WantedBy=multi-user.target
Licenses
GPLv3+
YAML data
<--YAML-->
borgmatic_hooks.py:
category: backups
running user: root
configuration files:
paths:
- borgmatic.myhostname_backed_up_mountpoint.yaml
- borgmatic_hooks.myhostname_backed_up_mountpoint.yaml
- borgmatic_mount.myhostname_backed_up_mountpoint.yaml
systemd unit files:
paths:
service:
- borgmatic.myhostname_backed_up_mountpoint.service
- borgmatic-mount.myhostname_backed_up_mountpoint.service
- udev-umount.backed-up-mountpoint.service
timer:
- borgmatic.myhostname_backed_up_mountpoint.timer
<!--YAML-->
Desktop
set_display_gamma.sh
Purpose
I use this to automatically set a better gamma for the output on a TV.
References
Programming languages
bash
Dependencies
Name | Binaries | Version
---|---|---
GNU Bash | | 5.0.007
Xorg | | 1.5.0
Configuration files
Make sure that the XORG_DISPLAY variable is set correctly.
To find out the current display variable run:
$ echo ${DISPLAY}
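As a sketch, the script's job boils down to an xrandr call like the one assembled below. The helper, the output name and the gamma value are illustrative, not the script's actual defaults:

```python
def xrandr_gamma_command(output: str, gamma: str, display: str = ":0") -> list:
    # xrandr takes the gamma as red:green:blue; DISPLAY selects the X server.
    return ["env", f"DISPLAY={display}", "xrandr",
            "--output", output, "--gamma", gamma]

print(" ".join(xrandr_gamma_command("HDMI1", "1.0:1.0:0.9")))
```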
Licenses
CC-BY-SA 3.0
YAML data
<--YAML-->
set_display_gamma.sh:
category: desktop
running user: mydesktopuser
configuration files:
paths:
- set_display_gamma.TV_HDMI1.conf
systemd unit files:
paths:
service:
- set-display-gamma.service
timer:
- set-display-gamma.timer
<!--YAML-->
Development
build_python_packages.py
Purpose
Build Python packages using git sources and push them to a self-hosted PyPI server.
Steps
create a virtual machine with Debian Bullseye (stable) and transform it into Sid (unstable). Using the unstable version will provide more up to date software for development
clone the python-packages-source repository in the repository.path. See the configuration file
install and run a PyPI server such as pypiserver. You can use a docker compose file like this one:
version: '3.8'
services:
  pypiserver-authenticated:
    image: pypiserver/pypiserver:v1.4.2
    volumes:
      # Authentication file.
      - type: bind
        source: /home/jobs/scripts/by-user/root/docker/pypiserver/auth
        target: /data/auth
      # Python files
      - type: bind
        source: /data/pypiserver/packages
        target: /data/packages
    ports:
      - "4000:8080"
    # I have purposefully removed the
    # --fallback-url https://pypi.org/simple/
    # option to have a fully isolated environment.
    command: --disable-fallback --passwords /data/auth/.htpasswd --authenticate update /data/packages
install these packages in the virtual machine:
apt-get install build-essential fakeroot devscripts git python3-dev python3-all-dev \
    games-python3-dev libgmp-dev libssl-dev libssl1.1=1.1.1k-1 libcurl4-openssl-dev \
    python3-pip python3-build twine libffi-dev graphviz libgraphviz-dev pkg-config \
    clang-tools libblas-dev astro-all libblas-dev libatlas-base-dev libopenblas-dev \
    libgsl-dev libblis-dev liblapack-dev liblapack3 libgslcblas0 libopenblas-base \
    libatlas3-base libblas3 clang-9 clang-13 clang-12 clang-11 sphinx-doc \
    libbliss-dev libblis-dev libbliss2 libblis64-serial-dev libblis64-pthread-dev \
    libblis64-openmp-dev libblis64-3-serial libblis64-dev libblis64-3-pthread \
    libblis64-3-openmp libblis64-3 libblis3-serial libblis3-pthread \
    libblis-serial-dev libblis-pthread-dev libargon2-dev libargon2-0 libargon2-1
Note
This is just a selection. Some Python packages need other dependencies not listed here.
you need to manually compile at least these packages and push them to your local PyPI to avoid a chicken-and-egg problem:
setuptools
setuptools_scm
wheel
compile a package like this:
cd python-packages-source
python3 -m build --sdist --wheel
Important
Some packages might need different dependencies. Have a look at the setup_requires variable in setup.py or in setup.cfg, or at requires in the pyproject.toml file. If you cannot compile some, download them directly from pypi.python.org.
upload it to your PyPI server:
twine upload --repository-url ${your_pypi_index_url} dist/*
change the PyPI index of your programs. See for example https://software.franco.net.eu.org/frnmst/python-packages-source#client-configuration
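For the client side, pip can be pointed at the self-hosted index with a configuration fragment like the following sketch; the host and port must match your pypiserver setup, and the path shown is pip's usual per-user location:

```
# Hypothetical ~/.config/pip/pip.conf
[global]
index-url = http://localhost:4000/simple/
```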
References
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
Python | | 3.9.8
fpyutils | | 2.0.1
PyYAML | | 5.4.1
appdirs | | 1.4.4
Twine | | 3.5.0
build | | 0.7.0
Git | | 2.33.1
Licenses
GPLv3+
CC-BY-SA 4.0
YAML data
<--YAML-->
build_python_packages.py:
category: development
running user: python-source-packages-updater
configuration files:
paths:
- build_python_packages.yaml
systemd unit files:
paths:
service:
- build-python-packages.service
timer:
- build-python-packages.timer
<!--YAML-->
Drives
smartd_test.py
Purpose
I use this to run periodical S.M.A.R.T. tests on the hard drives.
Steps
run
# hdparm -I ${drive}
and compare the results with
$ ls /dev/disk/by-id
to know which drive corresponds to the one you want to work on
optionally run common command 1
Important
To avoid tests being interrupted you must avoid putting the disks to sleep, therefore, programs like hd-idle must be stopped before running the tests.
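The test command the script wraps looks roughly like this; smartctl_test_command is a hypothetical helper and the disk id is a placeholder:

```python
def smartctl_test_command(disk_id: str, test: str = "long") -> list:
    # Only /dev/disk/by-id names are supported by the script;
    # `smartctl -t long` starts an extended self-test on the drive.
    return ["smartctl", "-t", test, f"/dev/disk/by-id/{disk_id}"]

print(" ".join(smartctl_test_command("ata-DISK1_SERIAL")))
```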
References
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
Python | | 3.7.4
Smartmontools | | 7.0
fpyutils | | 1.2.3
PyYAML | | 5.4.1
Configuration files
The script supports only /dev/disk/by-id names.
See also the udev rule file /lib/udev/rules.d/60-persistent-storage.rules.
Systemd unit files
I use one file per drive so I can control when a certain drive performs testing, instead of running them all at once.
Licenses
GPLv3+
YAML data
<--YAML-->
smartd_test.py:
category: drives
running user: root
configuration files:
paths:
- smartd_test.yaml
systemd unit files:
paths:
service:
- smartd-test.ata_disk1.service
timer:
- smartd-test.ata_disk1.timer
<!--YAML-->
mdamd_check.py
Purpose
I use this to run periodical RAID data scrubs on the hard drives.
Steps
run
$ lsblk
to know the names of the mdadm devices. See also:
$ cat /proc/mdstat
optionally run common command 1
References
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
Python | | 3.7.3
fpyutils | | 1.2.3
PyYAML | | 5.4.1
Licenses
GPLv2+
YAML data
<--YAML-->
mdamd_check.py:
category: drives
running user: root
configuration files:
paths:
- mdadm_check.yaml
systemd unit files:
paths:
service:
- mdamd-check.service
timer:
- mdamd-check.timer
<!--YAML-->
File sharing
rtorrent
Purpose
I use this to automatically start and manage the torrents.
Steps
run common command 0 using rtorrent as parameter
copy the provided configuration file into /home/rtorrent/.rtorrent.rc
References
Programming languages
bash
Dependencies
Name | Binaries | Version
---|---|---
RTorrent | | 0.9.8
GNU Screen | | 4.8.0
Configuration files
Warning
The provided configuration file is based on an old version of RTorrent. Some parameters might be deprecated.
Note
It is assumed that the downloaded files are placed under /data/incoming_torrents.
Licenses
GFDLv1.3+
YAML data
<--YAML-->
rtorrent:
category: file-sharing
running user: rtorrent
configuration files:
paths:
- rtorrent.rc
systemd unit files:
paths:
service:
- rtorrent.service
<!--YAML-->
kiwix_manage.py
Purpose
I use this to download and read Wikipedia as well as other websites offline.
Steps
run common command 2 using kiwix as parameter
References
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
Python | | 3.8.2
Requests | | 2.23.0
BeautifulSoup | | 4.8.0
PyYAML | | 4.8.2
aria2 | | 1.35.0
Kiwix tools | | 3.0.1
fpyutils | | 1.2.3
PyYAML | | 5.4.1
Configuration files
It is recommended to use aria2c instead of Requests as the downloader. In fact aria2c supports bandwidth throttling and continuation of interrupted downloads.
Systemd unit files
Important
After downloading a new file you must rerun kiwix-manage.serve.service.
Licenses
GPLv3+
CC-BY-SA 4.0
YAML data
<--YAML-->
kiwix_manage.py:
category: file-sharing
running user: kiwix
configuration files:
paths:
- kiwix-manage.yaml
systemd unit files:
paths:
service:
- kiwix-manage.download.service
- kiwix-manage.serve.service
timer:
- kiwix-manage.download.timer
<!--YAML-->
Misc
monitor_and_notify_git_repo_changes.sh
Purpose
My Gitea instance is configured to mirror some repositories. Every 30 minutes this script checks for new commits in those bare git repositories. If something new is committed, a notification is sent to my Gotify instance.
Note
This script also works for non-bare git repositories.
Steps
run common command 1
References
Programming languages
bash
Dependencies
Name | Binaries | Version
---|---|---
GNU Bash | | 5.0.007
curl | | 7.66.0
Git | | 2.23.0
Configuration files
To avoid missing or reading duplicate messages, the CHECK_TIMEOUT_INTERVAL_SECONDS variable should be set to the same value as the one in the systemd timer unit file (OnCalendar).
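The core of the check can be sketched as: read the current HEAD, compare it with the one seen on the previous run, notify on mismatch. The helpers below are illustrative, not the script's code:

```python
import subprocess

def head_commit(repo_path: str) -> str:
    # `git rev-parse HEAD` works for bare and non-bare repositories alike.
    return subprocess.run(
        ["git", "-C", repo_path, "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def needs_notification(current_head: str, last_seen: str) -> bool:
    # A notification is due only when HEAD moved since the previous check.
    return current_head != last_seen

print(needs_notification("abc123", "abc123"), needs_notification("def456", "abc123"))
```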
Licenses
GPLv3+
YAML data
<--YAML-->
monitor_and_notify_git_repo_changes.sh:
category: misc
running user: gitea
configuration files:
paths:
- monitor_and_notify_git_repo_changes.myrepos.conf
systemd unit files:
paths:
service:
- monitor-and-notify-git-repo-changes.myrepos.service
timer:
- monitor-and-notify-git-repo-changes.myrepos.timer
<!--YAML-->
yacy
Purpose
A personal search engine.
Steps
setup YaCy and run an instance
Note
To install YaCy you need the OpenJDK Java 13 headless runtime environment package.
run common command 2 using yacy as parameter
clone the YaCy search server repository into /home/yacy:
$ git clone "https://github.com/yacy/yacy_search_server.git"
References
Programming languages
bash
java
Dependencies
Name | Binaries | Version
---|---|---
YaCy | |
Licenses
LGPLv2+
YAML data
<--YAML-->
yacy:
category: misc
running user: yacy
systemd unit files:
paths:
service:
- yacy-search-server.service
<!--YAML-->
notify_camera_action.py
Purpose
Notify when a camera connected to a system running Motion is found or lost (disconnected).
Important
We will assume that a Motion instance is configured and running.
Steps
edit a camera’s configuration file with:
# Run camera actions.
on_camera_lost /home/jobs/scripts/by-user/motion/notify_camera_action.py /home/jobs/scripts/by-user/motion/notify_camera_action.yaml "%$ (id: %t)" "lost"
on_camera_found /home/jobs/scripts/by-user/motion/notify_camera_action.py /home/jobs/scripts/by-user/motion/notify_camera_action.yaml "%$ (id: %t)" "found"
optionally run common command 1
References
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
Python | | 3.8.5
fpyutils | | 1.2.0
PyYAML | | 5.4.1
Configuration files
A single file is used for all the cameras connected to a system.
Licenses
GPLv3+
YAML data
<--YAML-->
notify_camera_action.py:
category: misc
running user: motion
configuration files:
paths:
- notify_camera_action.yaml
<!--YAML-->
save_and_notify_file_diffs.py
Purpose
Track files on the web: when a file changes push it to a VCS repository and send notifications.
Examples
I use this script to track changes for assets of fattura-elettronica-reader. See also https://docs.franco.net.eu.org/fattura-elettronica-reader/assets.html
Steps
create a VCS repository, for example with Git, and clone it locally.
optionally run common command 1
References
None
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
GNU Bash | | 5.1.004
Python | | 3.9.1
fpyutils | | 1.2.2
PyYAML | | 5.4.1
Configuration files
The configuration file uses git but you can adapt it to work with other VCS.
Note
The running user is gitea because I use this script with a Gitea instance.
Important
This script has only been tested with Git.
Licenses
GPLv3+
YAML data
<--YAML-->
save_and_notify_file_diffs.py:
category: misc
running user: gitea
configuration files:
paths:
- save_and_notify_file_diffs.myrepo.yaml
systemd unit files:
paths:
service:
- save-and-notify-file-diffs.myrepo.service
timer:
- save-and-notify-file-diffs.myrepo.timer
<!--YAML-->
feed_proxy.py
Purpose
I use this to get some feeds which are unreadable from TT-RSS. In my case, the Apache webserver is used as a MITM agent.
Steps
run common command 2 using rss as parameter
add the rss user to the group of the user running the webserver (www-data for Apache on Debian GNU/Linux)
edit the configuration so that files.base path points to a readable directory of the webserver, for example /var/www
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
fpyutils | | 1.2.2
Requests | | 2.25.1
Python | | 3.7.3
PyYAML | | 5.4.1
Systemd unit files
By default torsocks is used to run the Python script. If you don't care about getting your feeds through Tor just remove /usr/bin/torsocks from ExecStart.
Licenses
GPLv3+
YAML data
<--YAML-->
feed_proxy.py:
category: misc
running user: rss
configuration files:
paths:
- feed_proxy.mypurpose.yaml
systemd unit files:
paths:
service:
- feed-proxy.mypurpose.service
timer:
- feed-proxy.mypurpose.timer
<!--YAML-->
monthly_attendance_paper.py
Purpose
Automatically print a 2-column monthly attendance paper to be used between an employer and an employee.
Steps
configure a default printer. See also https://blog.franco.net.eu.org/notes/cups-simple-shared-printer.html
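A sketch of the row generation follows; attendance_rows is hypothetical and the real layout comes from the YAML configuration file:

```python
import calendar

def attendance_rows(year: int, month: int):
    # One row per day of the month; the two columns are left
    # blank for the employer's and employee's entries.
    days = calendar.monthrange(year, month)[1]
    for day in range(1, days + 1):
        yield (f"{year}-{month:02d}-{day:02d}", "", "")

print(len(list(attendance_rows(2021, 2))))
```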
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
Python | | 3.7.3
Configuration files
Note
The YAML configuration file contains raw tab and newline characters.
Licenses
GPLv3+
YAML data
<--YAML-->
monthly_attendance_paper.py:
category: misc
running user: myuser
configuration files:
paths:
- monthly_attendance_paper.mypurpose.yaml
systemd unit files:
paths:
service:
- monthly-attendance-paper.mypurpose.service
timer:
- monthly-attendance-paper.mypurpose.timer
<!--YAML-->
cryptod.bitcoin.service
Purpose
Run the Bitcoin daemon (or any other Bitcoin-based daemon).
Steps
run common command 0 using cryptocurrencies as parameter
install Bitcoin Core: bitcoind and bitcoin-cli are the only two binaries needed
$ git clone "https://github.com/bitcoin/bitcoin.git"
$ git checkout v0.21.1
# apt-get install build-essential libtool autotools-dev automake pkg-config bsdmainutils python3 libboost-all-dev libevent-dev
$ ./autogen.sh
$ ./configure --with-incompatible-bdb --without-gui --without-miniupnpc
$ make
# make install
make sure that you have enough space for the blockchain in /home/cryptocurrencies/bitcoin. If that is not the case you can change the datadir option in cryptod.bitcoin.service.
run the daemon: the whole blockchain needs to be downloaded before doing transactions
create a file called ~/.bitcoin/bitcoin.conf with the user running bitcoin-cli, with the username and password corresponding to the ones of bitcoin.conf:
rpcuser=cryptocurrencies
rpcpassword=<put a random password here>
try running bitcoin-cli help or bitcoin-cli -getinfo to check the connection to the daemon
Note
If you want to run other Bitcoin-core based cryptocurrencies (for example Vertcoin, Dogecoin, etc…) it is sufficient to change the occurrences of bitcoin in the service file. You also have to change the paths of the files accordingly.
Note
Instead of having to save all the transactions (>400GB at the time of writing) you can set pruning. This will save a lot of space (about 100 times smaller).
Just set prune=550 in the configuration file. See also:
References
Programming languages
shell
Dependencies
Name | Binaries | Version
---|---|---
Bitcoin Core | | v0.21.1
GNU Bash | | 5.0.3
Configuration files
replace rpcpassword in bitcoin.conf and in ~/.bitcoin/bitcoin.conf with an appropriate one
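Putting the pieces together, a minimal pruned-node configuration might look like the following sketch; only prune and the RPC credentials come from the text above, and server=1 is a common setting that enables the RPC interface:

```
# Hypothetical minimal bitcoin.conf for a pruned node.
server=1
prune=550
rpcuser=cryptocurrencies
rpcpassword=<put a random password here>
```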
Licenses
MIT
YAML data
<--YAML-->
cryptod.bitcoin.service:
category: misc
running user: cryptocurrency
configuration files:
paths:
- bitcoin.conf
systemd unit files:
paths:
service:
- cryptod.bitcoin.service
<!--YAML-->
mine_coins.py
Purpose
Mine some cryptocurrencies.
Steps
run common command 0 using miner as parameter
install a crypto miner such as cpuminer-opt:
# apt-get install build-essential automake libssl-dev libcurl4-openssl-dev libjansson-dev libgmp-dev zlib1g-dev git
$ git clone "https://github.com/JayDDee/cpuminer-opt"
$ cd cpuminer-opt && ./build.sh
# make install
Note
We don’t mine Dogecoins directly but other cryptocoins based on the lyra2z330 hashing algorithm. These coins will be exchanged with Dogecoins by the mining pool.
References
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
cpuminer-opt | | 3.16.3
Python | | 3.7.3
fpyutils | | 2.0.0
Configuration files
This example works for Dogecoin but you can mine any coin depending on your hardware and on the mining pool.
Licenses
GPLv3+
YAML data
<--YAML-->
mine_coins.py:
category: misc
running user: miner
configuration files:
paths:
- mine_coins.yaml
systemd unit files:
paths:
service:
- mine-coins.dogecoin.service
<!--YAML-->
firefox_profile_runner.py
Purpose
Use a GUI menu to launch multiple Firefox profiles in Firejail sandboxes at the same time.
Steps
create a new Firefox profile for each of your purposes. Run:
firefox-esr --ProfileManager
run the script like this:
./firefox_profile_runner.py ./firefox_profile_runner.yaml
Note
To have even more isolation use private home directories for the sandbox.
This is achieved through the --private=
argument in the firejail
options.
If you run the script with that option enabled, Firefox will prompt you to create a new profile. The name you specify must correspond to the profile name in the configuration.
You can also import an existing profile:
cp -aR ~/.mozilla/firefox/${existing_firefox_profile} ${virtual_home}/.mozilla/firefox
cp -a ~/.mozilla/firefox/profiles.ini ${virtual_home}/.mozilla/firefox
Keep only the imported profile in ${virtual_home}/.mozilla/firefox/profiles.ini
and rename the INI key to Profile0
:
[Profile0]
Name=personal
IsRelative=1
Path=${existing_firefox_profile}
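The trimming of profiles.ini described above can be sketched with Python's configparser (the helper name and the section layout are hypothetical, not the script's actual API):

```python
import configparser

def keep_only_profile(ini_path: str, keep_name: str) -> None:
    """Keep only the profile named keep_name and rename its section to Profile0."""
    ini = configparser.ConfigParser()
    ini.optionxform = str  # Firefox's profiles.ini keys are case sensitive
    ini.read(ini_path)
    kept = None
    for section in ini.sections():  # sections() returns a copy, safe to mutate
        if section.startswith("Profile"):
            if ini[section].get("Name") == keep_name:
                kept = dict(ini[section])
            ini.remove_section(section)
    if kept is None:
        raise ValueError("no profile named " + repr(keep_name))
    ini["Profile0"] = kept
    with open(ini_path, "w") as handle:
        # Firefox expects Name=value without spaces around the delimiter
        ini.write(handle, space_around_delimiters=False)
```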
Programming languages
Python
Dependencies
Name | Binaries | Version
---|---|---
Python | | 3.9.2
fpyutils | | 2.1.0
Firejail | | 0.9.64.4
Firefox | | 78.14.0esr
python-yad | | 0.9.11
YAD | | 0.40.0
PyYAML | | 5.3.1
Licenses
GPLv3+
YAML data
<--YAML-->
firefox_profile_runner.py:
category: misc
running user: mydesktopuser
configuration files:
paths:
- firefox_profile_runner.yaml
<!--YAML-->
push_files.py
Purpose
Push selected files to all existing remotes of a repository.
Examples
The fattura-elettronica-reader-assets-checksums repository is handled by this script.
Steps
Note
We will use git as VCS in this example.
create a git repository
git init myrepo
add remotes using SSH URIs. You will need SSH keys to make this work
git remote add origin myremote@my.domain:/myuser/myrepo.git
git remote add secondary myremote@my.other.domain:/myuser/myrepo.git
create some initial commits and push them to the remotes
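The push step above can be sketched like this (a minimal sketch; the real script reads its file list and targets from the YAML configuration):

```python
import subprocess

def push_to_all_remotes(repo: str, branch: str) -> None:
    """Push one branch to every remote configured for the repository."""
    remotes = subprocess.run(
        ["git", "-C", repo, "remote"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for remote in remotes:
        subprocess.run(["git", "-C", repo, "push", remote, branch], check=True)
```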
Programming languages
Python
Dependencies
Name | Binaries | Version
---|---|---
Python | | 3.7.3
fpyutils | | 2.1.0
PyYAML | | 6.0
Licenses
GPLv3+
YAML data
<--YAML-->
push_files.py:
category: misc
running user: myuser
configuration files:
paths:
- push_files.mypurpose.yaml
<!--YAML-->
System
hblock_unbound.py
Purpose
I use this script to block malicious domains at a DNS level for the whole internal network.
Important
We will assume that Unbound is configured and running.
Steps
separate Unbound’s configuration into a header and footer file. Have a look at the provided configuration files.
clone the hblock repository:
$ git clone https://github.com/hectorm/hblock.git
configure your hblock lists in the
hblock_list.txt
file
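The assembly the script performs can be sketched like this (the domain names and footer contents are hypothetical; hblock-style Unbound blocking is commonly expressed with local-zone directives):

```python
def build_unbound_conf(header: str, blocklist: list, footer: str) -> str:
    """Join a fixed header, one blocking rule per domain and a fixed
    footer into a single Unbound configuration string."""
    rules = "\n".join(
        'local-zone: "{}." always_nxdomain'.format(domain) for domain in blocklist
    )
    return "\n".join([header, rules, footer])

# hypothetical contents: the real ones live in the header, list and footer files
conf = build_unbound_conf(
    "server:",
    ["ads.example.org", "tracker.example.net"],
    'include: "/etc/unbound/extra.conf"',
)
```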
References
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
Unbound | | 1.12.0
Git | | 2.20.1
hblock | | 3.2.3
GNU Make | | 4.2.1
fpyutils | | 1.2.3
Python | | 3.8.6
PyYAML | | 5.4.1
Configuration files
In case something goes wrong you can use this fallback command:
# cat hblock_unbound.header.conf hblock_unbound.footer.conf > /etc/unbound/unbound.conf
Note
The provided configuration files are designed to work along with dnscrypt-proxy 2.
Licenses
MIT
YAML data
<--YAML-->
hblock_unbound.py:
category: system
running user: root
configuration files:
paths:
- hblock_unbound.yaml
- hblock_unbound_list.txt
- hblock_unbound.footer.conf
- hblock_unbound.header.conf
- hblock_unbound.post_commands.conf
systemd unit files:
paths:
service:
- hblock-unbound.service
timer:
- hblock-unbound.timer
<!--YAML-->
clean_pacman.py
Purpose
I use this very simple script to clean the cache generated by Pacman.
References
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
GNU Bash | | 5.1.004
pacman-contrib | | 1.4.0
Python | | 3.9.1
fpyutils | | 1.2.2
PyYAML | | 5.4.1
Licenses
GFDLv1.3+
YAML data
<--YAML-->
clean_pacman.py:
category: system
running user: root
configuration files:
paths:
- clean_pacman.yaml
systemd unit files:
paths:
service:
- clean-pacman.service
timer:
- clean-pacman.timer
<!--YAML-->
iptables_geoport.py
Purpose
I use this script to block IP addresses by country for inbound ports on a server.
Examples
I use this script essentially to avoid brute force SSH attacks. However, since I use a remote scanner with SANE, some extra steps are required to make things work:
open tcp and udp ports 6566
# echo "options nf_conntrack nf_conntrack_helper=1" > /etc/modprobe.d/nf_conntrack.conf
# echo "nf_conntrack_sane" > /etc/modules-load.d/nf_conntrack_sane.conf
reboot
# cat /proc/sys/net/netfilter/nf_conntrack_helper
should return 1
Steps
run the script
make the rules persistent. For example, have a look at this Arch wiki page. Debian support is already active in the service file.
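Conceptually, the generated rules amount to something like the following iptables-save style sketch (the port, the network and the policy are placeholders; the real address ranges come from the downloaded per-country zone files):

```
*filter
# accept the service port only from allowed networks, drop everything else
-A INPUT -p tcp --dport 22 -s 192.0.2.0/24 -j ACCEPT
-A INPUT -p tcp --dport 22 -j DROP
COMMIT
```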
References
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
Python | | 3.8.1
Requests | | 2.23.0
PyYAML | | 5.3
iptables | | 1:1.8.4
fpyutils | | 1.2.0
PyYAML | | 5.4.1
Configuration files
Warning
The patch rules
directive contains a list of
shell commands that are executed directly!
It is your responsibility to avoid putting
malicious code there.
Licenses
GPLv2+
GFDLv1.3+
YAML data
<--YAML-->
iptables_geoport.py:
category: system
running user: root
configuration files:
paths:
- iptables_geoport.yaml
systemd unit files:
paths:
service:
- iptables-geoport.service
timer:
- iptables-geoport.timer
<!--YAML-->
roothints
Purpose
I use this service to update the list of servers authoritative for the root domain.
Important
We will assume that Unbound is configured and running.
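The downloaded root servers file is only used if Unbound is pointed at it, typically through the root-hints option (the path is an assumption):

```
server:
    root-hints: "/etc/unbound/root.hints"
```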
References
Programming languages
bash
Dependencies
Name | Binaries | Version
---|---|---
Unbound | | 1.10.0
Licenses
GFDLv1.3+
YAML data
<--YAML-->
roothints:
category: system
running user: root
systemd unit files:
paths:
service:
- roothints.service
timer:
- roothints.timer
<!--YAML-->
notify_unit_status.py
Purpose
I use this script to notify when a Systemd service fails.
Examples
My Gitea instance could not start after an update. Had I used this script I would have known about the problem immediately instead of several days later.
Steps
to monitor a service run
# systemctl edit ${unit_name}
copy and save the following in the text editor
[Unit]
OnFailure=notify-unit-status@%n.service
Important
It is assumed that you can send emails using Msmtp like this:
run common command 0 using email as parameter
make sure that the root user is able to connect to the email user using an SSH key
configure Msmtp as described in this section
configure email aliases in /etc/aliases
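A matching /etc/aliases entry might look like this (the address is a placeholder; Msmtp only honours the file if its configuration references it through the aliases option):

```
# /etc/aliases (sketch)
root: admin@example.com
```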
References
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
Python | | 3.8.4
fpyutils | | 1.2.3
PyYAML | | 5.4.1
Systemd unit files
The provided Systemd service unit file represents a template.
Licenses
CC-BY-SA 4.0
YAML data
<--YAML-->
notify_unit_status.py:
category: system
running user: root
configuration files:
paths:
- notify_unit_status.yaml
systemd unit files:
paths:
service:
- notify-unit-status@.service
<!--YAML-->
command_assert.py
Purpose
I use this script to check that the results of shell commands correspond to some expected output. The script also creates an RSS feed to complement the standard notifications.
Examples
You can use this if you need to check if some websites or services are reachable.
Steps
run common command 0 using command-assert as parameter
optionally run common command 1
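The core check can be sketched like this (assert_command is a hypothetical helper, not the script's actual API; the RSS feed generation is out of scope for the sketch):

```python
import subprocess

def assert_command(command: str, expected_stdout: str) -> bool:
    """Run a shell command and compare its standard output with the
    expected string (exact match, including the trailing newline)."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout == expected_stdout

# trivial example: check the output of a shell command
ok = assert_command("echo hello", "hello\n")
```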
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
Python | | 3.7.3
PyYAML | | 6.0
fpyutils | | 2.1.0
feedgenerator | | 2.0.0
Licenses
GPLv3+
CC-BY-SA 4.0
YAML data
<--YAML-->
command_assert.py:
category: system
running user: command-assert
configuration files:
paths:
- command_assert.py
- command_assert.mypurpose.yaml
systemd unit files:
paths:
service:
- command-assert.mypurpose.service
timer:
- command-assert.mypurpose.timer
<!--YAML-->
qvm.py
Purpose
I use this script to run virtual machines via QEMU and to connect to them through other clients.
Steps
create a new virtual hard disk:
qemu-img create -f qcow2 development.qcow2 64G
modify the configuration to point to
development.qcow2
, the virtual hdd.run the installation having the cdrom set to
true
, and corresponding to the ISO installation medium:./qvm ./qvm.py local development
create a backup virtual hard disk:
qemu-img create -f qcow2 -b development.qcow2 development.qcow2.mod
run the installed machine using
development.qcow2.mod
as the virtual hard disk
if needed, modify iptables to let data through the shared ports
on the guest system create a
powermanager
user and add
powermanager ALL=(ALL) NOPASSWD:/sbin/poweroff
using visudo.
on the host system create an SSH key so that the
qvm
host user can connect to the
powermanager
guest user. Have a look at the
ExecStop
command in the service unit file.
in the host machine, configure the SSH config file (~/.ssh/config) like this:
# See https://superuser.com/a/870918
# CC BY-SA 3.0
# (C) Kenster, 2015
Match host 127.0.0.1 user powermanager exec "test %p = 2222"
    IdentitiesOnly yes
    IdentityFile ~/.ssh/powermanager_test
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
GNU Bash | | 5.1.004
Python | | 3.9.1
fpyutils | | 1.2.2
QEMU | | 5.2.0
OpenSSH | | 8.4p1
TigerVNC | | 1.11.0
PyYAML | | 5.4.1
Configuration files
if you use a QCOW2 disk you can run the script with an unprivileged user.
you can run directly from physical partitions. Permissions depend on the filesystem. Use this snippet as reference.
Warning
Remember NOT to mount the partitions while running. Data loss may occur in that case.
# Mass memory. Use device name with options.
drives:
  - '/dev/sda,format=raw'
  - '/dev/sdb,format=raw'
if you need to share a directory you can use a 9p filesystem if the guest kernel supports it. Run
modprobe 9pnet_virtio
as root to check. 9p is not supported, for example, by Debian's linux-image-cloud-amd64 package, in which case you get:
modprobe: FATAL: Module 9pnet_virtio not found in directory /lib/modules/5.10.0-9-cloud-amd64
If your kernel has the
9pnet_virtio
module you can add a shared directory. This might be an extract of qvm.yaml:
mount:
  enabled: true
  mountpoints:
    - shared:
        path: '/home/qvm/shares/development/shared'
        mount tag: 'shared'
In this case an
/etc/fstab
entry might look like this:shared /home/vm/shared 9p auto,access=any,x-systemd.automount,msize=268435456,trans=virtio,version=9p2000.L 0 0
Licenses
GPLv3+
YAML data
<--YAML-->
qvm.py:
category: system
running user: qvm
configuration files:
paths:
- qvm.yaml
systemd unit files:
paths:
service:
- qvm.local_test.service
<!--YAML-->
update_action.py
Purpose
I use this script to update some software not supported by the package manager, for example Docker images.
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
GNU Bash | | 5.0.3
Python | | 3.7.3
fpyutils | | 1.2.2
PyYAML | | 5.4.1
Configuration files
Warning
No filtering is performed on the configuration file. You are responsible for its content.
Licenses
GPLv3+
YAML data
<--YAML-->
update_action.py:
category: system
running user: root
configuration files:
paths:
- update_action.mypurpose.yaml
systemd unit files:
paths:
service:
- update-action.mypurpose.service
timer:
- update-action.mypurpose.timer
<!--YAML-->
Video
record_motion.py
Purpose
I use this script to record video streams captured by webcams with Motion.
Important
We will assume that Motion is already configured and running.
Steps
make sure to have a big enough hard drive
References
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
GNU Bash | | 5.1.0
FFmpeg | | 2:4.3.1
fpyutils | | 1.2.2
Python | | 3.9.1
PyYAML | | 5.4.1
Configuration files
You can use hardware acceleration instead of software encoding. This should reduce the load on the processor:
“Hardware encoders typically generate output of significantly lower quality than good software encoders like x264, but are generally faster and do not use much CPU resource. (That is, they require a higher bitrate to make output with the same perceptual quality, or they make output with a lower perceptual quality at the same bitrate.)”
—HWAccelIntro page
Since we are dealing with video surveillance footage we don’t care about quality so much.
In the configuration file you will find an example for Intel VAAPI.
In this case you need to set use global quality
to true
and use the global quality
variable instead.
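A hypothetical extract of record_motion.camera1.yaml with these two variables set might be (the value 25 is a placeholder):

```
# both keys are named in the description above; the value is a placeholder
use global quality: true
global quality: 25
```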
You can adapt the script and/or the configuration to work for other types of hardware acceleration.
See also the Arch Wiki page.
Licenses
GPLv3+
YAML data
<--YAML-->
record_motion.py:
category: video
running user: surveillance
configuration files:
paths:
- record_motion.camera1.yaml
systemd unit files:
paths:
service:
- record-motion.camera1.service
<!--YAML-->
convert_videos.py
Purpose
I use this script to capture, encode and transcode videos from different hardware sources.
Steps
make sure to have a big enough hard drive: encoding requires a lot of space
if you are going to use multiple devices you must be able to identify them:
in case of v4l devices you can use
$ ls -l /dev/v4l/by-path/
in case of DVD devices, you can use
$ ls -l /dev/disk/by-path
in combination with $ eject
in case of ALSA devices you can follow this tutorial to get persistent naming
I strongly suggest installing something like Ananicy which automatically sets functional priority levels for processes like the ones run by FFmpeg, which is heavily used in this script.
have a look at
$ ./convert_videos.py --help
. You can add descriptions as embedded subtitles using the --description
option.
Examples
My purpose is to digitize VHS cassettes and DVDs.
For VHSs I use
this easycap device from CSL,
which uses the stk1160
kernel module, together with a proper VCR.
Have a look at
this LinuxTVWiki wiki page.
For DVDs I use a standard 5.25’’ SATA DVD drive.
When everything is set I start to encode a video. Transcoding is done on a different computer, a server, because its processor has a couple of extra cores and it is much more recent.
References
Programming languages
python
Dependencies
Name | Binaries | Version
---|---|---
GNU Bash | | 5.0.017
FFmpeg | | 1:4.2.3
PyYAML | | 5.3.1
HandBrake CLI | | 1.3.0
libdvdcss | | 1.4.2
libdvdnav | | 6.1.0
VLC media player | | 3.0.10
v4l-utils | | 1.18.1
gst-plugins-bad | | 1.16.2
gst-plugins-base | | 1.16.2
gst-plugins-good | | 1.16.2
gst-plugins-ugly | | 1.16.2
GStreamer | | 1.16.2
Python | | 3.8.3
fpyutils | | 1.2.0
PyYAML | | 5.4.1
Configuration files
The configuration file is designed so that you can reuse different parts of it for different sources and actions.
Important
the default transcoding options are set up to get the best quality
possible. The order of magnitude I get is 24 hours of transcoding
time for 1 hour of encoded video (at full system load).
If you feel that is too much you can change the preset
to slow
or medium
.
Warning
To simplify the development, shell commands are executed directly! It is your responsibility to avoid putting malicious code there.
Licenses
GPLv3+
YAML data
<--YAML-->
convert_videos.py:
category: video
running user: myuser
configuration files:
paths:
- convert_videos.yaml
systemd unit files:
paths:
service:
- convert-videos.samsung.service
<!--YAML-->