Package server
Mirroring
Debmirror
See also
Debmirror: creiamo un mirror Debian - Guide@Debianizzati.Org 1
Mirantis Documentation: Usage ~ DEBMIRROR 2
Apache Tips & Tricks: Hide a file type from directory indexes · MD/Blog 3
Set Up A Local Ubuntu / Debian Mirror with Apt-Mirror | Programster’s Blog 4
Debian – Mirror Size 5
Comment #4 : Bug #882941 : Bugs : debmirror package : Ubuntu 6
Debian – Setting up a Debian archive mirror 7
Better Default Directory Views with HTAccess | Perishable Press 8
Run as user | Instruction number
root | 1-3,5-14
debmirror | 4
Debmirror is one of the existing programs able to mirror Debian APT packages. In this example we will use the Apache HTTP server to serve the packages.
install
apt-get install debmirror bindfs
create a new user
useradd -m -s /bin/bash -U debmirror
passwd debmirror
usermod -aG jobs debmirror
mkdir /home/debmirror/data
chown debmirror:debmirror /home/debmirror/data
Note
In this example /home/debmirror/data is the base directory where all packages are served from.

Tip
I suggest using normal HDDs rather than SSDs: size is more important than speed in this case.
create the jobs directories
mkdir -p /home/jobs/{services,scripts}/by-user/debmirror
chown -R debmirror:debmirror /home/jobs/{services,scripts}/by-user/debmirror
chmod -R 700 /home/jobs/{services,scripts}/by-user/debmirror
load APT's keyring into a new keyring owned by the debmirror user
gpg --keyring /usr/share/keyrings/debian-archive-keyring.gpg --export \
    | gpg --no-default-keyring --keyring trustedkeys.gpg --import
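You can verify that the keys landed in the debmirror user's keyring. A quick check, run as the debmirror user (the exact key listing depends on the installed debian-archive-keyring version):
gpg --no-default-keyring --keyring trustedkeys.gpg --list-keys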
add this configuration for the standard Debian repository

/home/jobs/scripts/by-user/debmirror/debmirror.debian.conf

# The config file is a perl script so take care to follow perl syntax.
# Any setting in /etc/debmirror.conf overrides these defaults and
# ~/.debmirror.conf overrides those again. Take only what you need.
#
# The syntax is the same as on the command line and variable names
# loosely match option names. If you don't recognize something here
# then just stick to the command line.
#
# Options specified on the command line override settings in the config
# files.

# Location of the local mirror (use with care)
# $mirrordir="/path/to/mirrordir"

# Output options
$verbose=1;
$progress=1;
$debug=0;

$remoteroot="debian";
# Download options
$host="debian.netcologne.de";
$download_method="rsync";
@dists="stable,oldstable,buster-updates,bullseye-updates,buster-backports,bullseye-backports";
@sections="main";
@arches="amd64,all,any";
$omit_suite_symlinks=0;
$skippackages=0;
# @rsync_extra="none";
$i18n=1;
$getcontents=1;
$do_source=1;
$max_batch=0;

# Includes other translations as well.
# See the exclude option in
# https://help.ubuntu.com/community/Debmirror
@includes="Translation-(en|it).*";

# @di_dists="dists";
# @di_archs="arches";

# Save mirror state between runs; value sets validity of cache in days
$state_cache_days=0;

# Security/Sanity options
$ignore_release_gpg=0;
$ignore_release=0;
$check_md5sums=0;
$ignore_small_errors=1;

# Cleanup
$cleanup=0;
$post_cleanup=1;

# Locking options
$timeout=300;

# Rsync options
$rsync_batch=200;
$rsync_options="-aIL --partial --bwlimit=10240";

# FTP/HTTP options
$passive=0;
# $proxy="http://proxy:port/";

# Dry run
$dry_run=0;

# Don't keep diff files but use them
$diff_mode="use";

# The config file must return true or perl complains.
# Always copy this.
1;
change permissions
chown -R debmirror:debmirror /home/jobs/scripts/by-user/debmirror/debmirror.debian.conf
chmod -R 700 /home/jobs/scripts/by-user/debmirror/debmirror.debian.conf
create the Systemd service unit file

/home/jobs/services/by-user/debmirror/debmirror.debian.service

[Unit]
Description=Debmirror debian
Requires=network-online.target
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/debmirror --config-file=/home/jobs/scripts/by-user/debmirror/debmirror.debian.conf /home/debmirror/data/debian
User=debmirror
Group=debmirror
create the Systemd timer unit file

/home/jobs/services/by-user/debmirror/debmirror.debian.timer

[Unit]
Description=Once a day debmirror debian

[Timer]
OnCalendar=*-*-* 1:30:00
Persistent=true

[Install]
WantedBy=timers.target
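The deploy script referenced at the end of these steps normally installs and enables the units. If you prefer to do it by hand, a minimal sketch, assuming the unit files are symlinked into /etc/systemd/system:
ln -s /home/jobs/services/by-user/debmirror/debmirror.debian.service /etc/systemd/system/
ln -s /home/jobs/services/by-user/debmirror/debmirror.debian.timer /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now debmirror.debian.timer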
create a directory readable by Apache
mkdir -p /srv/http/debian
chown www-data:www-data /srv/http/debian
chmod 700 /srv/http/debian
add this to the fstab file

/etc/fstab

/home/debmirror/data    /srv/http/debian    fuse.bindfs    auto,force-user=www-data,force-group=www-data,ro    0 0
Note
This bindfs mount exposes the directory to the webserver read-only, in this case /srv/http/debian.
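You can activate and check the bind mount without rebooting. A short sketch, assuming the fstab entry above is in place:
mount /srv/http/debian
findmnt /srv/http/debian
sudo -u www-data ls /srv/http/debian | head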
serve the files via HTTP by creating a new Apache virtual host. Replace FQDN with the appropriate domain and include this file from the Apache configuration

/etc/apache2/debian_mirror.apache.conf

<IfModule mod_ssl.c>
<VirtualHost *:443>

    UseCanonicalName on

    Keepalive On
    RewriteEngine on

    ServerName ${FQDN}

    # Set the icons also to avoid 404 errors.
    Alias /icons/ "/usr/share/apache2/icons/"

    DocumentRoot "/srv/http/debian"
    <Directory "/srv/http/debian">
        Options -ExecCGI -Includes
        Options +Indexes +SymlinksIfOwnerMatch
        IndexOptions NameWidth=* +SuppressDescription FancyIndexing Charset=UTF-8 VersionSort FoldersFirst

        ReadmeName footer.html
        IndexIgnore header.html footer.html
        #
        # AllowOverride controls what directives may be placed in .htaccess files.
        # It can be "All", "None", or any combination of the keywords:
        #   AllowOverride FileInfo AuthConfig Limit
        #
        AllowOverride All

        #
        # Controls who can get stuff from this server.
        #
        Require all granted
    </Directory>

    SSLCompression off

    Include /etc/letsencrypt/options-ssl-apache.conf
    SSLCertificateFile /etc/letsencrypt/live/${FQDN}/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/${FQDN}/privkey.pem
</VirtualHost>
</IfModule>
create a new text file that will serve as basic instructions for configuring the APT sources file. Replace FQDN with the appropriate domain

/home/debmirror/data/footer.html

<h1>Examples</h1>

Change <code>/etc/apt/sources.list</code> to one of these:

<h2>Bullseye distribution</h2>

<pre>
deb https://${FQDN}/debian bullseye main
deb https://${FQDN}/debian bullseye-updates main
deb-src https://${FQDN}/debian-security bullseye-security main
deb https://${FQDN}/debian bullseye-backports main

deb [arch=amd64] https://${FQDN}/docker bullseye stable
deb [arch=amd64] https://${FQDN}/gitea gitea main
deb [arch=amd64] https://${FQDN}/postgresql bullseye-pgdg main
</pre>

<h2>Buster distribution</h2>

<pre>
deb https://${FQDN}/debian buster main
deb https://${FQDN}/debian buster-updates main
deb-src https://${FQDN}/debian-security buster/updates main
deb https://${FQDN}/debian buster-backports main

deb [arch=amd64] https://${FQDN}/docker buster stable
deb [arch=amd64] https://${FQDN}/gitea gitea main
deb [arch=amd64] https://${FQDN}/postgresql buster-pgdg main
</pre>

<h1>Repositories</h1>

<code>stable</code> and <code>oldstable</code> distributions are available if applicable.

<h2>debian</h2>
<p>Supported architectures:</p>
<ul>
<li><code>amd64</code></li>
<li><code>all</code></li>
<li><code>any</code></li>
</ul>

<h2>debian-security</h2>
<p>Supported architectures:</p>
<ul>
<li><code>amd64</code></li>
<li><code>all</code></li>
<li><code>any</code></li>
</ul>

<h2>docker</h2>
<p>Supported architectures:</p>
<ul>
<li><code>amd64</code></li>
</ul>

<h2>gitea</h2>
<p>Supported architectures:</p>
<ul>
<li><code>amd64</code></li>
</ul>

<h2>postgresql</h2>
<p>Supported architectures:</p>
<ul>
<li><code>amd64</code></li>
</ul>
Note
This example includes some unofficial repositories.
restart the Apache webserver
systemctl restart apache2
run the deploy script
Unofficial Debian sources
Run as user | Instruction number
debmirror | *
If you want to mirror unofficial Debian sources the same instructions apply. You just need to change the key import step
gpg \
--no-default-keyring \
--keyring trustedkeys.gpg \
--import ${package_signing_key}
Note
package_signing_key is provided by the repository maintainers.
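For example, for the Docker repository the full flow looks like this (a sketch, run as the debmirror user; the key URL comes from Docker's own documentation):
curl -fsSL https://download.docker.com/linux/debian/gpg \
    | gpg --no-default-keyring --keyring trustedkeys.gpg --import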
PyPI server
Build Python packages using git sources and push them to a self-hosted PyPI server.
Server
See also
Minimal PyPI server for uploading & downloading packages with pip/easy_install Resources 9
Run as user | Instruction number
root | *
follow the Docker instructions
create the jobs directories
mkdir -p /home/jobs/scripts/by-user/root/docker/pypiserver
chmod 700 /home/jobs/scripts/by-user/root/docker/pypiserver
install and run pypiserver. Use this Docker compose file

/home/jobs/scripts/by-user/root/docker/pypiserver/docker-compose.yml

version: '3.7'

services:
  pypiserver-authenticated:
    image: pypiserver/pypiserver:latest
    volumes:
      # Authentication file.
      - type: bind
        source: /home/jobs/scripts/by-user/root/docker/pypiserver/auth
        target: /data/auth

      # Python files.
      - type: bind
        source: /data/pypiserver/packages
        target: /data/packages
    ports:
      - "127.0.0.1:4000:8080"

    # I have purposefully removed the
    # --fallback-url https://pypi.org/simple/
    # option to have a fully isolated environment.
    command: --disable-fallback --passwords /data/auth/.htpasswd --authenticate update /data/packages
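pypiserver reads an Apache-style htpasswd file for upload authentication. A minimal sketch to create it, assuming the apache2-utils package and a user name of your choice:
apt-get install apache2-utils
mkdir -p /home/jobs/scripts/by-user/root/docker/pypiserver/auth
htpasswd -s -c /home/jobs/scripts/by-user/root/docker/pypiserver/auth/.htpasswd myuser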
create a Systemd unit file. See also the Docker compose services section

/home/jobs/services/by-user/root/docker-compose.pypiserver.service

[Unit]
Requires=docker.service
Requires=network-online.target
After=docker.service
After=network-online.target

[Service]
Type=simple
WorkingDirectory=/home/jobs/scripts/by-user/root/docker/pypiserver

ExecStart=/usr/bin/docker-compose up --remove-orphans
ExecStop=/usr/bin/docker-compose down --remove-orphans

Restart=always

[Install]
WantedBy=multi-user.target
run the deploy script
set the reverse proxy port in your webserver configuration to 4000
Apache configuration
See also
Serving thousands of packages · pypiserver 10
Caching Guide - Apache HTTP Server Version 2.4 11
mod_cache - Apache HTTP Server Version 2.4 12
Run as user | Instruction number
root | *
If you use Apache as webserver you should enable caching. The upstream documentation shows how to configure pypiserver for Nginx but not for Apache.
create a new Apache virtual host. Replace FQDN with the appropriate domain

/etc/apache2/pypi_server.apache.conf

###############
#  pypiserver #
###############
<IfModule mod_ssl.c>
<VirtualHost *:443>
    UseCanonicalName on

    Keepalive On
    RewriteEngine on

    ServerName ${FQDN}

    SSLCompression off

    RewriteRule ^/simple$ /simple/ [R]
    ProxyPass / http://127.0.0.1:4000/ Keepalive=On max=50 timeout=300 connectiontimeout=10
    ProxyPassReverse / http://127.0.0.1:4000/
    RequestHeader set X-Forwarded-Proto "https"
    RequestHeader set X-Forwarded-Port "443"
    RequestHeader set X-Forwarded-Host "${FQDN}"

    Header set Service "pypi"

    CacheRoot "/var/cache/apache"
    CacheEnable disk /
    CacheDirLevels 4
    CacheDirLength 1

    CacheDefaultExpire 3600
    CacheIgnoreNoLastMod On
    CacheIgnoreCacheControl On
    CacheMaxFileSize 640000
    CacheReadSize 1024
    CacheIgnoreQueryString On
    CacheIgnoreHeaders X-Forwarded-Proto X-Forwarded-For X-Forwarded-Host

    # Debug. Turn these two variables off after testing.
    CacheHeader on
    CacheDetailHeader On

    Include /etc/letsencrypt/options-ssl-apache.conf
    SSLCertificateFile /etc/letsencrypt/live/${FQDN}/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/${FQDN}/privkey.pem
</VirtualHost>
</IfModule>
Warning
The included Cache* options are very aggressive!

create the cache directory
mkdir /var/cache/apache
chown www-data:www-data /var/cache/apache
chmod 775 /var/cache/apache
enable the Apache modules
a2enmod cache cache_disk
systemctl start apache-htcacheclean.service
systemctl restart apache2
check for a cache hit. Replace FQDN with the appropriate domain
curl -s https://${FQDN} 1>/dev/null 2>/dev/null
curl -s -D - https://${FQDN} | head -n 20
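With CacheHeader and CacheDetailHeader enabled, cached responses carry X-Cache and X-Cache-Detail headers; the first request is usually a MISS and the second a HIT. A quick check (the exact header wording varies by Apache version):
curl -s -D - -o /dev/null https://${FQDN} | grep -i '^x-cache'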
Note
The /packages/ page does not get cached.

set CacheHeader and CacheDetailHeader to Off
restart Apache
systemctl restart apache2
Virtual machine compiling the packages
See also
A collection of scripts I have written and/or adapted that I currently use on my systems as automated tasks 13
Git repository pointers and configurations to build Python packages from source 14
python - pushd through os.system - Stack Overflow 15
Pass options to `build_ext` · Issue #328 · pypa/build · GitHub 16
Run as user | Instruction number
root | 1-2
root | 3-13
python-source-packages-updater | 14
create a virtual machine with Debian Bullseye (stable) and transform it into Sid (unstable). Using the unstable version provides more up-to-date software for development.
See the QEMU server section. You might need to assign a lot of disk space.
connect to the virtual machine. See the QEMU client section
create a new user
useradd --system -s /bin/bash -U python-source-packages-updater
passwd python-source-packages-updater
usermod -aG jobs python-source-packages-updater
create the jobs directories. See reference
mkdir -p /home/jobs/{scripts,services}/by-user/python-source-packages-updater
chown -R python-source-packages-updater:python-source-packages-updater /home/jobs/{scripts,services}/by-user/python-source-packages-updater
chmod -R 700 /home/jobs/{scripts,services}/by-user/python-source-packages-updater
install these packages in the virtual machine:
apt-get install build-essential fakeroot devscripts git python3-dev python3-all-dev \
    games-python3-dev libgmp-dev libssl-dev libssl1.1=1.1.1k-1 libcurl4-openssl-dev \
    python3-pip python3-build twine libffi-dev graphviz libgraphviz-dev pkg-config \
    clang-tools libblas-dev astro-all libblas-dev libatlas-base-dev libopenblas-dev \
    libgsl-dev libblis-dev liblapack-dev liblapack3 libgslcblas0 libopenblas-base \
    libatlas3-base libblas3 clang-9 clang-13 clang-12 clang-11 sphinx-doc \
    libbliss-dev libblis-dev libbliss2 libblis64-serial-dev libblis64-pthread-dev \
    libblis64-openmp-dev libblis64-3-serial libblis64-dev libblis64-3-pthread \
    libblis64-3-openmp libblis64-3 libblis3-serial libblis3-pthread \
    libblis-serial-dev libblis-pthread-dev libargon2-dev libargon2-0 libargon2-1
Note
This is just a selection. Some Python packages need other dependencies not listed here.
install the dependencies of the script
apt-get install python3-yaml python3-appdirs
install fpyutils. See reference
add the script

/home/jobs/scripts/by-user/python-source-packages-updater/build_python_packages.py

#!/usr/bin/env python3
#
# build_python_packages.py
#
# Copyright (C) 2021-2022 Franco Masotti (franco \D\o\T masotti {-A-T-} tutanota \D\o\T com)
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
r"""build_python_packages.py."""

import contextlib
import copy
import multiprocessing
import os
import pathlib
import shlex
import shutil
import signal
import subprocess
import sys

import fpyutils
import yaml
from appdirs import AppDirs


class InvalidCache(Exception):
    pass


class InvalidConfiguration(Exception):
    pass


def check_keys_type(keys: list, key_type) -> bool:
    """Check that all elements of a list correspond to a specific python type."""
    ok = True

    i = 0
    while ok and i < len(keys):
        if isinstance(keys[i], key_type):
            ok = ok & True
        else:
            ok = ok & False
        i += 1

    return ok


def check_values_type(keys: list, values: dict, level_0_value_type, level_1_value_type, has_level_1_value: bool) -> bool:
    """Check that all elements of a list correspond to a specific python type."""
    ok = True

    i = 0
    while ok and i < len(values):
        if isinstance(values[keys[i]], level_0_value_type):
            j = 0
            while has_level_1_value and ok and j < len(values[keys[i]]):
                if isinstance(values[keys[i]][j], level_1_value_type):
                    ok = ok & True
                else:
                    ok = ok & False
                j += 1
        else:
            ok = ok & False

        i += 1

    return ok


###################################
# Check configuration structure   #
###################################
def check_configuration_structure_ignore(configuration: dict) -> bool:
    ok = True
    if ('submodules' in configuration
            and isinstance(configuration['submodules'], dict)
            and 'ignore' in configuration['submodules']
            and isinstance(configuration['submodules']['ignore'], dict)):

        ignore_keys = list(configuration['submodules']['ignore'].keys())

        ok = ok & check_keys_type(ignore_keys, str)
        ok = ok & check_values_type(ignore_keys, configuration['submodules']['ignore'], list, str, True)

    return ok


def check_configuration_structure_base_directory_override(configuration: dict) -> bool:
    ok = True
    if ('submodules' in configuration
            and isinstance(configuration['submodules'], dict)
            and 'base_directory_override' in configuration['submodules']
            and isinstance(configuration['submodules']['base_directory_override'], dict)):

        base_directory_override_keys = list(configuration['submodules']['base_directory_override'].keys())
        ok = ok & check_keys_type(base_directory_override_keys, str)
        ok = ok & check_values_type(base_directory_override_keys, configuration['submodules']['base_directory_override'], str, None, False)
    else:
        ok = False

    return ok


def check_configuration_structure_checkout(configuration: dict) -> bool:
    ok = True
    if ('submodules' in configuration
            and 'checkout' in configuration['submodules']
            and isinstance(configuration['submodules'], dict)
            and isinstance(configuration['submodules']['checkout'], dict)):

        checkout_keys = list(configuration['submodules']['checkout'].keys())
        ok = ok & check_keys_type(checkout_keys, str)
        ok = ok & check_values_type(checkout_keys, configuration['submodules']['checkout'], list, str, True)
    else:
        ok = False

    return ok


def check_configuration_structure_build(configuration: dict) -> bool:
    ok = True
    if ('submodules' in configuration
            and 'build' in configuration['submodules']
            and isinstance(configuration['submodules'], dict)
            and isinstance(configuration['submodules']['build'], dict)
            and 'pre_commands' in configuration['submodules']['build']
            and 'post_commands' in configuration['submodules']['build']):
        pre = configuration['submodules']['build']['pre_commands']
        post = configuration['submodules']['build']['post_commands']
        pre_commands_block_keys = list(pre.keys())
        ok = ok & check_keys_type(pre_commands_block_keys, str)
        post_commands_block_keys = list(post.keys())
        ok = ok & check_keys_type(post_commands_block_keys, str)

        i = 0
        while ok and i < len(pre):
            pre_section = pre[list(pre.keys())[i]]
            ok = ok & check_values_type(list(pre_section.keys()), pre_section, list, str, True)
            i += 1
        i = 0
        while ok and i < len(post):
            # Fixed: this read from 'pre' in the original, which looked like a copy-paste bug.
            post_section = post[list(post.keys())[i]]
            ok = ok & check_values_type(list(post_section.keys()), post_section, list, str, True)
            i += 1
    else:
        ok = False

    return ok


def check_configuration_structure(configuration: dict) -> bool:
    ok = True
    if ('repository' in configuration
            and 'submodules' in configuration
            and 'other' in configuration
            and 'path' in configuration['repository']
            and 'remote' in configuration['repository']
            and 'default branch' in configuration['repository']
            and isinstance(configuration['submodules'], dict)
            and isinstance(configuration['repository']['path'], str)
            and isinstance(configuration['repository']['remote'], str)
            and isinstance(configuration['repository']['default branch'], str)
            and 'concurrent_workers' in configuration['other']
            and 'unit_workers_per_block' in configuration['other']
            and isinstance(configuration['other']['concurrent_workers'], int)
            and isinstance(configuration['other']['unit_workers_per_block'], int)
            and 'ignored_are_successuful' in configuration['submodules']
            and 'mark_failed_as_successful' in configuration['submodules']
            and isinstance(configuration['submodules']['ignored_are_successuful'], bool)
            and isinstance(configuration['submodules']['mark_failed_as_successful'], bool)):
        ok = True
    else:
        ok = False

    ok = ok & check_configuration_structure_ignore(configuration)

    return ok


def elements_are_unique(struct: list) -> bool:
    unique = False
    if len(struct) == len(set(struct)):
        unique = True

    return unique


#########################
# Check cache structure #
#########################
def check_cache_structure(cache: dict) -> bool:
    ok = True
    if not isinstance(cache, dict):
        ok = False

    elements = list(cache.keys())

    ok = ok & check_keys_type(elements, str)

    # Check that tags are unique within the same git repository.
    i = 0
    while ok and i < len(cache):
        if not elements_are_unique(cache[elements[i]]):
            ok = ok & False
        i += 1

    ok = ok & check_values_type(elements, cache, list, str, True)

    return ok


###########
# Generic #
###########
def send_notification(message: str, notify: dict):
    m = notify['gotify']['message'] + '\n' + message
    if notify['gotify']['enabled']:
        fpyutils.notify.send_gotify_message(
            notify['gotify']['url'],
            notify['gotify']['token'], m,
            notify['gotify']['title'],
            notify['gotify']['priority'])
    if notify['email']['enabled']:
        fpyutils.notify.send_email(message,
                                   notify['email']['smtp server'],
                                   notify['email']['port'],
                                   notify['email']['sender'],
                                   notify['email']['user'],
                                   notify['email']['password'],
                                   notify['email']['receiver'],
                                   notify['email']['subject'])


# See
# https://stackoverflow.com/a/13847807
# CC BY-SA 4.0
# spiralman, bryant1410
@contextlib.contextmanager
def pushd(new_dir):
    previous_dir = os.getcwd()
    os.chdir(new_dir)
    try:
        yield
    finally:
        os.chdir(previous_dir)


def build_message(total_tags: int, total_successful_tags: int) -> str:
    message = '\ncurrent successful package git tags: ' + str(total_successful_tags)
    message += '\ntotal package git tags: ' + str(total_tags)
    if total_tags != 0:
        message += '\ncurrent success rate percent: ' + str((total_successful_tags / total_tags) * 100)
    else:
        message += '\ncurrent success rate percent: 0'

    return message


def print_information(repository_name: str, git_ref: str, message: str = 'processing'):
    print('\n==========================')
    print('note: ' + message + ': ' + repository_name + '; git ref: ' + git_ref)
    print('==========================\n')


#########
# Files #
#########
def read_yaml_file(file: str) -> dict:
    data = dict()
    if pathlib.Path(file).is_file():
        data = yaml.load(open(file, 'r'), Loader=yaml.SafeLoader)

    return data


def read_cache_file(file: str) -> dict:
    cache = read_yaml_file(file)
    if not check_cache_structure(cache):
        raise InvalidCache

    return cache


def update_cache(cache: dict, new_cache: dict):
    for c in cache:
        if c in new_cache:
            cache[c] = list(set(cache[c]).union(set(new_cache[c])))

    for c in new_cache:
        if c not in cache:
            cache[c] = new_cache[c]


def write_cache(cache: dict, new_cache: dict, cache_file: str):
    print('note: writing cache')

    update_cache(cache, new_cache)

    with open(cache_file, 'w') as f:
        f.write(yaml.dump(cache))


#######
# Git #
#######
def git_get_updates(git_executable: str, repository_path: str, repository_remote: str, repository_default_branch: str):
    with pushd(repository_path):
        fpyutils.shell.execute_command_live_output(
            shlex.quote(git_executable)
            + ' pull '
            + repository_remote
            + ' '
            + repository_default_branch
        )
        fpyutils.shell.execute_command_live_output(shlex.quote(git_executable) + ' submodule sync')

        # We might need to add the '--recursive' option for 'git submodule update'
        # to build certain packages. This means that we still depend from external
        # services at build time if we use that option.
        fpyutils.shell.execute_command_live_output(shlex.quote(git_executable) + ' submodule update --init --remote')

        fpyutils.shell.execute_command_live_output(shlex.quote(git_executable) + ' submodule foreach git fetch --tags --force')


def git_remove_untracked_files(git_executable: str):
    fpyutils.shell.execute_command_live_output(shlex.quote(git_executable) + ' checkout --')
    fpyutils.shell.execute_command_live_output(shlex.quote(git_executable) + ' clean -d --force')


def git_get_tags(git_executable: str) -> list:
    r"""Get the list of tags of a git repository.

    :returns: s, a list of tags. In case the git repository has no tags, s is an empty list.
    """
    s = subprocess.run([shlex.quote(git_executable), 'tag'], check=True, capture_output=True)
    s = s.stdout.decode('UTF-8').rstrip().split('\n')

    # Avoid an empty element when there are no tags.
    if s == ['']:
        s = list()

    return s


def git_filter_processed_tags(tags: list, cache: dict, software_name: str) -> tuple:
    r"""Given a list of tags filter out the ones already present in cache."""
    successful_tags: int = 0

    if software_name in cache:
        total_tags_original = len(tags)
        # Filter tags not already processed.
        tags = list(set(tags) - set(cache[software_name]))

        # Tags already in cache must be successful.
        # In case len(tags) < len(cache[software_name]) because configuration changed,
        # pick the minimum between the two.
        successful_tags = min(len(cache[software_name]), total_tags_original)

        # Git repositories without tags: remove cache reference if present.
        if tags == cache[software_name] and tags == ['']:
            tags = []

    return tags, successful_tags


def git_filter_ignore_tags(tags: list, ignore_objects: dict, skip_tags: bool, software_name: str) -> list:
    r"""Given a list of tags filter out the ones in an ignore list."""
    if skip_tags:
        tags = list(set(tags) - set(ignore_objects[software_name]))

    return tags


def git_get_repository_timestamp(git_executable: str) -> str:
    r"""Return the timestamp of the last git ref."""
    return subprocess.run(
        [
            shlex.quote(git_executable),
            'log',
            '-1',
            '--pretty=%ct'
        ], check=True, capture_output=True).stdout.decode('UTF-8').strip()


##########
# Python #
##########
def build_dist(python_executable: str, git_executable: str):
    r"""Build the Python package in a reproducible way.

    Remove all dev, pre-releases, etc information from the package name.
    Use a static timestamp.
    See
    https://github.com/pypa/build/issues/328#issuecomment-877028239
    """
    subprocess.run(
        [
            shlex.quote(python_executable),
            '-m',
            'build',
            '--sdist',
            '--wheel',
            '-C--global-option=egg_info',
            '-C--global-option=--no-date',
            '-C--global-option=--tag-build=',
            '.'
        ], check=True, env=dict(os.environ, SOURCE_DATE_EPOCH=git_get_repository_timestamp(git_executable)))


def upload_dist(twine_executable: str, pypi_url: str, pypi_username: str, pypi_password: str):
    r"""Push the compiled package to a remote PyPI server."""
    subprocess.run(
        [
            shlex.quote(twine_executable),
            'upload',
            '--repository-url',
            pypi_url,
            '--non-interactive',
            '--skip-existing',
            'dist/*'
        ], check=True, env=dict(os.environ, TWINE_PASSWORD=pypi_password, TWINE_USERNAME=pypi_username))


def skip_objects(ignore_objects: dict, software_name: str) -> tuple:
    r"""Determine whether to skip repositories and/or tags."""
    skip_repository = False
    skip_tags = False
    if software_name in ignore_objects:
        if len(ignore_objects[software_name]) == 0:
            skip_repository = True
            skip_tags = False
        else:
            skip_tags = True

    return skip_repository, skip_tags


def set_base_directory_override(base_directory_override: dict, directory: str, software_name: str) -> pathlib.Path:
    r"""Change to the appropriate directory.

    :returns: directory, the path of the directory where the setup files
        are present.

    ..note: Values are reported in the remote configuration
    """
    old_directory = directory
    if software_name in base_directory_override:
        directory = pathlib.Path(directory, base_directory_override[software_name])
        # Check if inner_directory exists.
        # inner_directory usually is equal to absolute_directory
        if not directory.is_dir():
            # Fallback.
            directory = pathlib.Path(old_directory)
    else:
        directory = pathlib.Path(directory)

    return directory


def build_package_pre_post_commands(command_block: dict):
    for block in command_block:
        cmd = list()
        for c in command_block[block]:
            cmd.append(shlex.quote(c))
        try:
            subprocess.run(cmd, check=True)
        except subprocess.CalledProcessError as e:
            print(e)


def build_package(
        git_executable,
        python_executable,
        rm_executable: str,
        twine_executable,
        pypi_url,
        pypi_user,
        pypi_password,
        cache: dict,
        directory: str,
        software_name: str,
        tag: str,
        base_directory_override: dict,
        submodule_mark_failed_as_successful: bool,
        command_block_pre: dict,
        command_block_post: dict,
) -> int:
    r"""Checkout, compile and push.

    This function processes repositories without tags.
    In this case they are marked as successful but are not added to the cache.
    """
    successful_tag: int = 0
    directory_relative_path = directory.stem
    # Cleanup.
    git_remove_untracked_files(git_executable)

    print_information(directory_relative_path, tag, 'processing')

    # Checkout repository with tags: avoid checking out tagless repositories.
    if tag != str():
        fpyutils.shell.execute_command_live_output(
            shlex.quote(git_executable)
            + ' checkout '
            + tag
        )

    # Decide whether to change directory based on remote configuration.
    inner_directory = set_base_directory_override(base_directory_override, directory, software_name)
    subdirectory_absolute_path = str(inner_directory)
    with pushd(subdirectory_absolute_path):
        fpyutils.shell.execute_command_live_output(shlex.quote(rm_executable) + ' -rf build dist')
        try:
            build_package_pre_post_commands(command_block_pre)
            build_dist(python_executable, git_executable)
            build_package_pre_post_commands(command_block_post)

            upload_dist(twine_executable, pypi_url, pypi_user, pypi_password)

            # Register success in cache yaml file.
            successful_tag = 1
            if tag != str():
                cache[software_name].append(tag)

        except subprocess.CalledProcessError:
            print_information(directory.stem, tag, 'error')

            if submodule_mark_failed_as_successful:
                # Do not add an empty git tag to the cache.
                successful_tag = 1
                if tag != str():
                    cache[software_name].append(tag)

    git_remove_untracked_files(git_executable)

    return successful_tag


def read_remote_configuration(repository_path: str) -> dict:
    """Retrieve the configuration of the remote repository."""
    remote_configuration_file = pathlib.Path(repository_path, 'configuration.yaml')
    remote_config = dict()
    if remote_configuration_file.is_file():
        remote_config = yaml.load(open(remote_configuration_file, 'r'), Loader=yaml.SafeLoader)
        if (not check_configuration_structure_base_directory_override(remote_config)
                or not check_configuration_structure_checkout(remote_config)
                or not check_configuration_structure_build(remote_config)):
            raise InvalidConfiguration

    return remote_config


def worker(args: list) -> tuple:
    directory: pathlib.Path = next(args)
    ignore_objects: dict = next(args)
    cache: dict = next(args)
    submodules_ignored_are_successuful: bool = next(args)
    git_executable: str = next(args)
    python_executable: str = next(args)
    rm_executable: str = next(args)
    twine_executable: str = next(args)
    pypi_url: str = next(args)
    pypi_user: str = next(args)
    pypi_password: str = next(args)
    submodule_mark_failed_as_successful: bool = next(args)
    remote_config: dict = next(args)
    repository_path: str = next(args)

    total_tags_including_processed: int = 0
    total_tags_to_process: int = 0
    total_successful_tags_including_processed: int = 0
    successful_tags: int = 0

    # The software name is used in the cache as key.
    software_name = pathlib.Path(directory).stem
    skip_repository, skip_tags = skip_objects(ignore_objects, software_name)
    submodule_absolute_path = str(pathlib.Path(repository_path, directory))

    # Create a new cache slot.
    if software_name not in cache:
        cache[software_name] = list()

    # Mark ignored repositories as successful if the setting is enabled.
    # Save in cache.
    if (skip_repository
            and submodules_ignored_are_successuful):
        with pushd(submodule_absolute_path):
            tags = git_get_tags(git_executable)

        # Bulk append: all tags are successful.
        cache[software_name] = tags

        total_tags: int = len(tags)
        successful_tags = total_tags
        total_tags_including_processed = total_tags
        total_tags_to_process = total_tags

    # Process the repository normally.
    elif not skip_repository:
        with pushd(submodule_absolute_path):
            # Cleanup previous runs.
            git_remove_untracked_files(git_executable)

            # Get all git tags and iterate.
            tags = git_get_tags(git_executable)
            # Useful to detect tagless repositories.
            total_tags_original = len(tags)

            if total_tags_original > 0:
                # Filter out tags that are in the ignore list.
                tags = git_filter_ignore_tags(tags, ignore_objects, skip_tags, software_name)

                # The 'checkout' section in the remote configuration rewrites all tags.
                if software_name in remote_config['submodules']['checkout']:
                    tags = remote_config['submodules']['checkout'][software_name]

                total_tags_including_processed = len(tags)

                # Filter tags not already processed (i.e: not already in cache marked as successful).
                tags, successful_tags = git_filter_processed_tags(tags, cache, software_name)
                total_tags_to_process = len(tags)
            else:
                # Tagless repositories have 1 dummy tag.
                total_tags_to_process = 1
                total_tags_including_processed = 1

            # Get pre-post build commands.
            if software_name in remote_config['submodules']['build']['pre_commands']:
                pre_commands = remote_config['submodules']['build']['pre_commands'][software_name]
            else:
                pre_commands = dict()
            if software_name in remote_config['submodules']['build']['post_commands']:
                post_commands = remote_config['submodules']['build']['post_commands'][software_name]
            else:
                post_commands = dict()

            # Build the Python package.
            if total_tags_original == 0:
                print('note: tag 1 of 1')
                # Tagless repository. Pass an empty string as tag id.
                successful_tags += build_package(
                    git_executable,
                    python_executable,
                    rm_executable,
                    twine_executable,
                    pypi_url,
                    pypi_user,
                    pypi_password,
                    cache,
                    directory,
                    software_name,
                    str(),
                    remote_config['submodules']['base_directory_override'],
                    submodule_mark_failed_as_successful,
                    pre_commands,
                    post_commands,
                )
            else:
                i = 1
                for t in tags:
                    print('note: tag ' + str(i) + ' of ' + str(len(tags)))
                    successful_tags += build_package(
                        git_executable,
                        python_executable,
                        rm_executable,
                        twine_executable,
                        pypi_url,
                        pypi_user,
                        pypi_password,
                        cache,
                        directory,
                        software_name,
                        t,
                        remote_config['submodules']['base_directory_override'],
                        submodule_mark_failed_as_successful,
                        pre_commands,
                        post_commands,
                    )
                    i += 1

    total_successful_tags_including_processed = successful_tags

    return cache, total_tags_including_processed, total_tags_to_process, total_successful_tags_including_processed


def build_worker_arg(
        directory: pathlib.Path,
        ignore_objects: dict,
        cache: dict,
        submodules_ignored_are_successuful: bool,
        git_executable,
        python_executable: str,
        rm_executable: str,
        twine_executable: str,
        pypi_url: str,
        pypi_user: str,
        pypi_password: str,
        submodule_mark_failed_as_successful: str,
        remote_config: dict,
        repository_path: str
) -> iter:

    return iter([directory, ignore_objects, cache, submodules_ignored_are_successuful,
                 git_executable, python_executable, rm_executable, twine_executable, pypi_url,
                 pypi_user, pypi_password, submodule_mark_failed_as_successful, remote_config, repository_path])


def process(
        cache: dict,
        cache_file: str,
        ignore_objects: dict,
        submodules_ignored_are_successuful: bool,
        notify: dict,
        git_executable: str,
        python_executable: str,
        rm_executable: str,
        twine_executable: str,
        pypi_url: str,
        pypi_user: str,
        pypi_password: str,
        repository_path: str,
        repository_remote: str,
        repository_default_branch: str,
        submodule_mark_failed_as_successful: bool,
        concurrent_workers: int = multiprocessing.cpu_count(),
        unit_workers_per_block: int = 1,
):
    quit: bool = False

    # Define and register the signals.
    def signal_handler(*args):
        # Fixed: this was 'global quit' in the original, but 'quit' is a
        # local variable of process(), so the handler never stopped the loop.
        nonlocal quit
        quit = True
        print('\n==========================')
        print('note: signal received. finished queued workers and writing ' + str(len(cache)) + ' repository elements to cache before exit')
        print('==========================\n')
    signal.signal(signal.SIGINT, signal_handler)
    signal.signal(signal.SIGTERM, signal_handler)

    # Autodetect.
    if concurrent_workers <= 0:
        concurrent_workers = multiprocessing.cpu_count()
    if unit_workers_per_block <= 0:
        unit_workers_per_block = 1

    git_get_updates(git_executable, repository_path, repository_remote, repository_default_branch)

    remote_config = read_remote_configuration(repository_path)

    # Go to the submodules subdirectory.
    repository_path = pathlib.Path(repository_path, 'submodules')

    dirs: list = [x for x in pathlib.Path(repository_path).iterdir()]
    len_dirs: int = len(dirs)
    n: int = 0
    new_cache: dict = dict()
    new_total_tags_including_processed: int = 0
    new_total_tags_to_process: int = 0
    new_total_successful_tags: int = 0
    tmp_cache: dict = copy.deepcopy(cache)
    # Initialized here so the cleanup below works even if the loop never runs.
    pool = None

    while n < len_dirs and not quit:
        signal.signal(signal.SIGINT, signal.SIG_IGN)
        signal.signal(signal.SIGTERM, signal.SIG_IGN)
        signal.signal(signal.SIGABRT, signal.SIG_IGN)
        signal.signal(signal.SIGALRM, signal.SIG_IGN)

        remaining_dirs: int = len_dirs - n
        args: list = list()
        step = min(concurrent_workers * unit_workers_per_block, remaining_dirs)

        print('note: remaining ' + str(remaining_dirs) + ' repositories')

        for i in range(0, step):
            # Skip non-directories and the ./.git directory.
            if dirs[n + i].is_dir() and dirs[n + i].stem != '.git':
                args.append(build_worker_arg(dirs[n + i], ignore_objects, tmp_cache,
                                             submodules_ignored_are_successuful, git_executable,
                                             python_executable, rm_executable, twine_executable, pypi_url,
                                             pypi_user, pypi_password, submodule_mark_failed_as_successful,
                                             remote_config, repository_path))

        pool = multiprocessing.Pool(processes=concurrent_workers)
        try:
            signal.signal(signal.SIGINT, signal_handler)
            signal.signal(signal.SIGTERM, signal_handler)
            signal.signal(signal.SIGABRT, signal_handler)
            signal.signal(signal.SIGALRM, signal_handler)
            result = pool.map_async(worker, args)
            rr = result.get(timeout=3600)
            for r in rr:
                update_cache(new_cache, r[0])
                new_total_tags_including_processed += r[1]
                new_total_tags_to_process += r[2]
                new_total_successful_tags += r[3]
        except (KeyboardInterrupt, InterruptedError):
            pool.terminate()
            pool.join()
            quit = True
        else:
            pool.close()
            pool.join()

        n += step

    if pool:
        # Cleanup.
        pool.close()
        pool.join()

    write_cache(cache, new_cache, cache_file)

    message = build_message(new_total_tags_including_processed, new_total_successful_tags)
    print(message)
    send_notification(message, notify)


if __name__ == '__main__':
    def main():
        configuration_file = shlex.quote(sys.argv[1])
        config = yaml.load(open(configuration_file, 'r'), Loader=yaml.SafeLoader)
        if not check_configuration_structure(config):
            raise InvalidConfiguration

        dirs = AppDirs('build_pypi_packages')
        # Read the cache file.
        if config['files']['cache']['clear']:
            shutil.rmtree(dirs.user_cache_dir, ignore_errors=True)
        pathlib.Path(dirs.user_cache_dir).mkdir(mode=0o700, exist_ok=True, parents=True)
        cache_file = str(pathlib.Path(dirs.user_cache_dir, config['files']['cache']['file']))
        cache = read_cache_file(cache_file)

        process(
            cache,
            cache_file,
            config['submodules']['ignore'],
            config['submodules']['ignored_are_successuful'],
            config['notify'],
            config['files']['executables']['git'],
            config['files']['executables']['python'],
            config['files']['executables']['rm'],
            config['files']['executables']['twine'],
            config['pypi']['url'],
            config['pypi']['username'],
            config['pypi']['password'],
            config['repository']['path'],
            config['repository']['remote'],
            config['repository']['default branch'],
            config['submodules']['mark_failed_as_successful'],
            config['other']['concurrent_workers'],
            config['other']['unit_workers_per_block'],
        )

    main()
add the configuration. The key names below match what build_python_packages.py actually reads (the submodules and files sections used a slightly different layout in the original listing)

/home/jobs/scripts/by-user/python-source-packages-updater/build_python_packages.yaml

#
# build_python_packages.yaml
#
# Copyright (C) 2021-2022 Franco Masotti (franco \D\o\T masotti {-A-T-} tutanota \D\o\T com)
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

notify:
  email:
    enabled: false
    smtp server: 'smtp.gmail.com'
    port: 465
    sender: 'myusername@gmail.com'
    user: 'myusername'
    password: 'my awesome password'
    receiver: 'myusername@gmail.com'
    subject: 'update action'
  gotify:
    enabled: false
    url: '<gotify url>'
    token: '<app token>'
    title: 'update action'
    message: 'update action'
    priority: 5


repository:
  path: '/home/jobs/scripts/by-user/python-source-packages-updater/python-packages-source'

  remote: 'origin'

  # Default branch of the main repository.
  default branch: 'dev'

submodules:
  ignored_are_successuful: true
  mark_failed_as_successful: true

  ignore: {}
  # Directory name.
  # Empty list == all otherwise a list of git tags.
  # Example:
  #
  # ignore:
  #   alabaster: []
  #   argon2-cffi:
  #     # Tags names obtained from git tag
  #     - 0.1
  #     - 0.2

files:
  cache:
    # Relative path (stem) contextually to the user's cache directory.
    file: 'cache'
    clear: false

  executables:
    git: 'git'
    python: 'python3'
    rm: 'rm'
    twine: 'twine'

pypi:
  url: '<PyPI URL>'
  username: '<PyPI username>'
  password: '<PyPI password>'

other:
  # Put 0 to set the worker number automatically based on the CPU count.
  concurrent_workers: 0

  # If we set other.concurrent_workers = 3 and other.unit_workers_per_block = 4
  # each block will process other.concurrent_workers * other.unit_workers_per_block repositories, i.e: 12.
  # This number works better if it is a multiple of other.concurrent_workers.
  # The bigger this number the more probability you have to lose
  # updates in the cache structure (~/.cache/build_pypi_packages/cache) due to the way signals work.
  unit_workers_per_block: 1
add the helper script. update_and_build_python_packages.sh clones and updates the python-packages-source 14 repository and compiles all the packages

/home/jobs/scripts/by-user/python-source-packages-updater/update_and_build_python_packages.sh

#!/usr/bin/env bash

REPOSITORY='https://software.franco.net.eu.org/frnmst/python-packages-source.git'

pushd /home/jobs/scripts/by-user/python-source-packages-updater
export PATH=$PATH:/home/python-source-packages-updater/.local/bin/

# Clean the existing repository.
rm -rf python-packages-source
rm -rf /home/python-source-packages-updater/.cache/pre-commit
rm -rf /home/python-source-packages-updater/.local/share/virtualenvs

git clone "${REPOSITORY}"

pushd python-packages-source

git checkout dev
git pull

# Always commit and push to dev only.
[ "$(git branch --show-current)" = 'dev' ] || exit 1

# Update all submodules and the stats.
pipenv install --dev
make submodules-update
make submodules-add-gitea
make stats
git add -A
git commit -m "Submodule updates."
git push

popd

# Compile the packages.
./build_python_packages.py ./build_python_packages.yaml

# Cleanup.
rm -rf python-packages-source

popd
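Before relying on the timer you can trigger a first run by hand to verify the setup, for example:
sudo -u python-source-packages-updater /home/jobs/scripts/by-user/python-source-packages-updater/update_and_build_python_packages.sh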
add the Systemd service file

/home/jobs/services/by-user/python-source-packages-updater/build-python-packages.service

[Unit]
Description=Build python packages

[Service]
Type=simple
ExecStart=/home/jobs/scripts/by-user/python-source-packages-updater/update_and_build_python_packages.sh
User=python-source-packages-updater
Group=python-source-packages-updater
StandardOutput=null
StandardError=null
add the Systemd timer unit file

/home/jobs/services/by-user/python-source-packages-updater/build-python-packages.timer

[Unit]
Description=Once every week build python packages

[Timer]
OnCalendar=Weekly
Persistent=true

[Install]
WantedBy=timers.target
run the deploy script
To be able to compile most packages you need to manually compile at least these basic ones and push them to your local PyPI server.
setuptools
setuptools_scm
wheel
You can clone the python-packages-source 14 repository, then compile and upload these basic packages.
git clone https://software.franco.net.eu.org/frnmst/python-packages-source.git
cd python-packages-source/setuptools
python3 -m build --sdist --wheel
twine upload --repository-url ${your_pypi_index_url} dist/*
Important
Some packages might need different dependencies. Have a look at the setup_requires variable in setup.py or in setup.cfg, or requires in the pyproject.toml file. If you cannot compile some, download them directly from pypi.python.org.
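A sketch of that fallback, assuming pip is available: fetch an sdist from the official index and re-upload it to your server:
pip download --no-deps --no-binary :all: setuptools -d dist/
twine upload --repository-url ${your_pypi_index_url} dist/*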
Updating the package graph
Run as user | Instruction number
root | 1
root | 2
gitea | 3
When you run the helper script you can update the stats graph automatically by using a Git commit hook. The following script generates the graph and copies it to a webserver directory.
connect via SSH to the Git remote machine
install the dependencies
apt-get install git python3 make pipenv
configure your remote: add this to the post-receive hook
${remote_git_repository}/hooks/post-receive

#!/usr/bin/bash -l

IMAGE=""$(echo -n 'frnmst/python-packages-source' | sha1sum | awk '{print $1 }')"_graph0.png"
DOMAIN='assets.franco.net.eu.org'

TMP_GIT_CLONE=""${HOME}"/tmp/python-packages-source"
PUBLIC_WWW="/var/www/${DOMAIN}/image/${IMAGE}"

git clone "${GIT_DIR}" "${TMP_GIT_CLONE}"

pushd "${TMP_GIT_CLONE}"
make install
make plot OUTPUT="${PUBLIC_WWW}"
chmod 770 "${PUBLIC_WWW}"
popd

rm --recursive --force "${TMP_GIT_CLONE}"
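Make sure the hook is executable, otherwise git silently skips it:
chmod +x ${remote_git_repository}/hooks/post-receive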
Using the PyPI server
change the PyPI index of your programs. See for example https://software.franco.net.eu.org/frnmst/python-packages-source#client-configuration
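For example, with pip you can point the default index at your server globally (a sketch; replace FQDN with your domain, and note that pypiserver serves the index under /simple/):
pip config set global.index-url https://${FQDN}/simple/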
Footnotes
- 1
https://guide.debianizzati.org/index.php/Debmirror:_creiamo_un_mirror_Debian CC BY-SA 4.0, Copyright (c) guide.debianizzati contributors
- 2
https://docs.mirantis.com/mcp/q4-18/mcp-dev-resources/debmirror/debmirror_readme.html unknown license
- 3
https://www.ducea.com/2006/06/19/apache-tips-tricks-hide-a-file-type-from-directory-indexes/ unknown license
- 4
https://blog.programster.org/set-up-a-local-ubuntu-mirror-with-apt-mirror unknown license
- 5
https://www.debian.org/mirror/size MIT License, Copyright © 1997-2021 Software in the Public Interest, Inc. and others
- 6
https://bugs.launchpad.net/ubuntu/+source/debmirror/+bug/882941/comments/4 unknown license
- 7
https://people.debian.org/~koster/www/mirror/ftpmirror Open Publication License, Draft v1.0 or later, Copyright © 1997-2011 Software in the Public Interest, Inc.
- 8
https://perishablepress.com/better-default-directory-views-with-htaccess/ CC BY 4.0, © 2004–2022 Perishable Press
- 9
https://github.com/pypiserver/pypiserver zlib/libpng + MIT license, Copyright (c) 2011-2014 Ralf Schmitt
- 10
https://github.com/pypiserver/pypiserver#serving-thousands-of-packages zlib/libpng + MIT license, Copyright (c) 2011-2014 Ralf Schmitt
- 11
https://httpd.apache.org/docs/2.4/caching.html Apache License, Version 2.0. Copyright (c) Apache HTTPD contributors
- 12
https://httpd.apache.org/docs/2.4/mod/mod_cache.html Apache License, Version 2.0. Copyright (c) Apache HTTPD contributors
- 13
https://software.franco.net.eu.org/frnmst/automated-tasks GNU GPLv3+, copyright (c) 2019-2022, Franco Masotti
- 14
https://software.franco.net.eu.org/frnmst/python-packages-source GPLv3+, Copyright (C) 2021-2022, Franco Masotti
- 15
https://stackoverflow.com/a/13847807 CC BY-SA 4.0, Copyright (c) 2012, 2020, spiralman, bryant1410 (at stackoverflow.com)
- 16
https://github.com/pypa/build/issues/328#issuecomment-877028239 unknown license