Compare commits: add-new-ad ... main (171 commits)
14 .gitignore vendored
@@ -1,7 +1,13 @@
build/
*.log
*.pyc
*egg-info*
*~
venv/
testing/
.etherpump
/p/
/publish/
build/
dist/
index.html
padinfo.json
.etherdump
testing/
venv/
14 Makefile Normal file
@@ -0,0 +1,14 @@
default: style

format:
	@poetry run black etherpump

sort:
	@poetry run isort etherpump

lint:
	@poetry run flake8 etherpump

style: format sort lint

.PHONY: style format sort lint
318 README.md
@@ -1,81 +1,291 @@
etherdump
=========

# etherpump

Tool to publish [etherpad](http://etherpad.org/) pages to files.

[![PyPI version](https://badge.fury.io/py/etherpump.svg)](https://badge.fury.io/py/etherpump)
[![GPL license](https://img.shields.io/badge/license-GPL-brightgreen.svg)](https://git.vvvvvvaria.org/varia/etherpump/src/branch/master/LICENSE.txt)

_Pumping text from etherpads into publications_

Requirements
-------------
* python3
* html5lib
* requests (settext)
* python-dateutil, jinja2 (index subcommand)

A command-line utility that extends the multi-writing and publishing functionalities of [etherpad](http://etherpad.org/) by exporting the pads in multiple formats.

Installation
-------------

## Many pads, many networks

    pip install python-dateutil jinja2 html5lib
    python setup.py install

_Etherpump_ is a friendly fork of [_etherdump_](https://gitlab.constantvzw.org/aa/etherdump), a command-line tool written by [Michael Murtaugh](http://automatist.org/) that converts etherpad pages to files. This fork grew out of curiosity about the tool, a wish to study it, and shared sparks of enthusiasm to use it in different situations within Varia.

Example
---------------

    mkdir mydump
    cd mydump
    etherdump init

Etherpump is a stretched version of etherdump. It is a playground in which we would like to add features to the initial tool that diffuse actions of _dumping_ into _pumping_. So most of all, etherpump is a work in progress, exploring potential uses of etherpads to edit, structure and publish various types of content.

Added features are:

- opt-in publishing with the `__PUBLISH__` magic word
- the `publication` command, which listens to custom magic words such as `__RELEARN__`

See the [Change log / notes](#change-log--notes) section for further changes.
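For instance, using the library API documented below, an opt-in pull (only pads that contain the `__PUBLISH__` magic word get exported) can be sketched as:

```python
from etherpump.api import pull

# A sketch: pull plain text and metadata, but only for opted-in pads.
pull(["--text", "--meta", "--publish-opt-in"])
```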
Etherpump is a tool that is used from the command line. It pumps all pads of one etherpad installation to a folder, saving them as different text files, such as plain text and HTML. It also creates an index file that allows one to easily navigate through the list of pads. Etherpump follows a document-driven idea of publishing, which means that it converts pads as database entries into pads as files. This seems to be a redundant act of copying, but is actually an important in-between step that allows for many different publishing projects and experiments.

We started to get to know etherpump through various editions of Relearn and/or the worksessions organized by Constant. Collaborative writing on an etherpad has been an important ingredient in these situations. The habit of using pads branched into the day-to-day practice of Varia, where we use etherpads for all sorts of things, ranging from organising remote meetings with 10+ people to writing and designing PDF documents collaboratively.

After installing etherpump on the Varia server, we collectively decided not to publish pads by default. Discussions in the group around the use of etherpads, privacy and ideas of what publishing means led to a need to have etherpump only start the indexing work after it recognizes a `__PUBLISH__` marker on a pad. We decided to work on a `__PUBLISH__ vs. __NOPUBLISH__` branch of etherdump, which we now fork into **etherpump**.

# Change log / notes

**December 2020**

Added the `--magicwords` flag. Parsing and indexing of magic words is now
supported. See [etherpump.vvvvvvaria.org](https://etherpump.vvvvvvaria.org) for
more. This is still a work in progress.

Change `--connection` default setting to 50 to avoid overpowering modestly
powered servers.

**November 2020**

Releasing Etherpump 0.0.18!

Handled a bug that saved the same HTML content in multiple files. Disclaimer: resolved in a hacky way.

---

**October 2020**

Use the more friendly packaging tool [Poetry](https://python-poetry.org/) for publishing.

Further performance tweaks, informative logging and miscellaneous bug fixing.

Decolonize our Git praxis and use the `main` branch.

---

**January 2020**

Added experimental [trio](https://trio.readthedocs.io) and
[asks](https://asks.readthedocs.io/en/latest/index.html) support for the `pull`
command, which enables pads to be processed concurrently. The default
`--connection` option is set to 20, which may overpower the target server. If in
doubt, set this to a lower number (like 5). This functionality is experimental;
be cautious and please report bugs!
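As a sketch, the same pull can be run with a gentler concurrency setting via the library API documented below:

```python
from etherpump.api import pull

# --connection caps the number of concurrent requests; 5 is a cautious value.
pull(["--text", "--meta", "--connection", "5"])
```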
Removed fancy progress bars for pulling because concurrent processing makes
that hard to track. For now, we simply output whichever padid we're finished
with.

---

**October 2019**

Improve `etherpump --help` handling to make it easier for new users.

Added the `python-dateutil` and `pypandoc` dependencies.

Added a fancy progress bar with `tqdm` for long-running `etherpump pull --all` calls.

Started with the [experimental library API](#library-api-example).

---

**September 2019**

Forking _etherdump_ into _etherpump_.

<https://git.vvvvvvaria.org/varia/etherpump>

Migrating the source code to Python 3.

Integrate PyPi publishing with setuptools.

---

**May - September 2019**

etherpump is used to produce the _Ruminating Relearn_ section of the Network Of One's Own 2 (NOOO2) publication.

A new command is added to make a web publication, based on the custom magic word `__RELEARN__`.

---

**June 2019**

Multiple conversations around etherpump emerged during Relearn Curved in Varia, Rotterdam,
including the idea of executable pads (_etherhooks_), custom magic words, a federated snippet protocol (_etherstekje_) and more.

<https://varia.zone/relearn-2019.html>

---

**April 2019**

Installation of etherpump on the Varia server.

<https://etherpump.vvvvvvaria.org/>

---

**March 2019**

The `__PUBLISH__ vs. __NOPUBLISH__` functionality was added to the etherpump repository by _decentral1se_.

<https://gitlab.constantvzw.org/aa/etherpump/issues/3>

---

Originally designed for use at: [Constant](http://etherdump.constantvzw.org/).

More notes can be found in the [git repository of etherdump](https://gitlab.constantvzw.org/aa/etherdump).

# Install etherpump

`$ pip install etherpump`

Etherpump only supports Python >= 3.6.

## Command-line example

```
$ mkdir mydump
$ cd mydump
$ etherpump init
```

The program then interactively asks some questions:

    Please type the URL of the etherpad:
    http://automatist.local:9001/
    The APIKEY is the contents of the file APIKEY.txt in the etherpad folder
    Please paste the APIKEY:
    xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

> Please type the URL of the etherpad (e.g. https://pad.vvvvvvaria.org):
>
> https://pad.vvvvvvaria.org/

The settings are placed in a file called .etherdump/settings.json and are used (by default) by future commands.

The APIKEY is the contents of the file APIKEY.txt in the etherpad folder.

> Please paste the APIKEY:
>
> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

The settings are placed in a file called `.etherpump/settings.json` and are used (by default) by future commands.
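As a sketch, the commands load that settings file along these lines (the `apiurl` and `apikey` keys are the ones the commands actually read; the values here are placeholders):

```python
import json

# Read the settings written by `etherpump init`.
with open(".etherpump/settings.json") as f:
    info = json.load(f)

apiurl = info["apiurl"]  # e.g. "https://pad.vvvvvvaria.org/api/1.2.9/"
apikey = info["apikey"]  # the contents of the etherpad's APIKEY.txt
```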
## Common Workflows

### Text+Meta performance wrangling

If you have a lot of pads, you might want to try the following to speed things
up. This example is something we do at Varia. Firstly, you download all the
pads' text + metadata as the only formats. This is likely what you want when
you're trying to work directly with the text. You can do that like so:

```bash
$ etherpump pull --text --meta --publish-opt-in
```

The key here is to pass `--meta` so that etherpump can quickly read a pad's
metadata and skip it on the following run if there are no new revisions. So,
in practice, you get a slower first run and faster following runs, as more
pads are skipped from actually doing a file system write to save contents
which we already have.
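A sketch of that skip logic (adapted from the `pull` command shown elsewhere in this diff): a pad is skipped when the revision count stored in its `.meta.json` matches the live count reported by the API.

```python
import json
import os


def should_skip(metapath, live_revisions, force=False):
    """Return True when PADID.meta.json shows no new revisions."""
    if not os.path.exists(metapath):
        return False  # never pulled before, so do the work
    with open(metapath) as f:
        meta = json.load(f)
    # Same revision count and no --force: skip the file system write.
    return meta["revisions"] == live_revisions and not force
```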
## Library API Example

Etherpump can be used as a library.

All commands can be imported and run programmatically.

```python
>>> from etherpump.api import pull
>>> pull(['--text', '--meta', '--publish-opt-in'])
```

There is also a Magic Word interface. It supports the following API:

> magic_word(word, fresh)

- **word**: The magic word to match pad text against (e.g. `__PUB_CLUB__`)
- **fresh** (default: `True`): Whether or not to run an `etherpump pull` each time

Here is an example:

```python
from etherpump.api import magic_word


@magic_word("__PUB_CLUB__", fresh=False)
def pub_club_texts(pads):
    for name in pads:
        print(pads[name]["txt"])


pub_club_texts()
```

subcommands
----------

`pads` is a dictionary which maps pad names to their data; the plain-text contents of each pad are available as `pads[name]["txt"]`.
Normally, `fresh=False` is useful when you're hacking away and want to read
pad contents from the local file system and not over the network each time.
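As another hedged sketch (the `__WORKSHOP__` word and the output layout are made up), matched pads can be exported to files:

```python
from etherpump.api import magic_word


@magic_word("__WORKSHOP__", fresh=True)  # fresh=True runs a pull first
def export_texts(pads):
    for name, data in pads.items():
        # Write each matched pad's plain text to a local file.
        with open("{}.txt".format(name), "w") as f:
            f.write(data["txt"])


export_texts()
```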
* init
* pull
* list
* listauthors
* gettext
* settext
* gethtml
* creatediffhtml
* revisionscount
* index
* deletepad

## Subcommands

To get help on a subcommand:

To see all available subcommands, run:

    etherdump revisionscount --help

`$ etherpump --help`

For help on each individual subcommand, run:

Change log / notes
=======================

`$ etherpump revisionscount --help`

Originally designed for use at: [constant](http://etherdump.constantvzw.org/).

## Publishing

Please use ["semver"](https://semver.org/) conventions for versions.

17 Oct 2016
-----------------------------------------------
Preparations for [Machine Research](https://machineresearch.wordpress.com/) [2](http://constantvzw.org/site/Machine-Research,2646.html)

Here are the steps to follow (e.g. for a `0.1.3` release); a consistency-check sketch follows the list:

- Change the version number in the `etherpump/__init__.py` `__VERSION__` to `0.1.3`
- Change the version number in the `pyproject.toml` `version` field to `0.1.3`
- `git add . && git commit -m "Publish new 0.1.3 version" && git tag 0.1.3 && git push --tags`
- Run `poetry publish --build`
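A hedged sketch of that consistency check (filenames as described in the steps above; the regexes assume simple one-line declarations):

```python
import re

# Verify the two version declarations agree before tagging a release.
with open("etherpump/__init__.py") as f:
    init_version = re.search(r'__VERSION__ = "([^"]+)"', f.read()).group(1)
with open("pyproject.toml") as f:
    toml_version = re.search(r'^version = "([^"]+)"', f.read(), re.M).group(1)

assert init_version == toml_version, (init_version, toml_version)
print("Ready to tag and publish {}".format(init_version))
```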
6 Oct 2017
----------------------
Feature request from PW: When deleting a previously public document, generate a page / pages with an explanation (along the lines of "This document was previously public but has been marked .... maybe give links to search").

You should have a [PyPi](https://pypi.org/) account and be added as an owner/maintainer on the [etherpump package](https://pypi.org/project/etherpump/).

3 Nov 2017
---------------
machineresearch seems to be __NOPUBLISH__ but still exists (also in recentchanges)

## Testing

Jan 2018
-------------
Updated files to work with python3 (probably this has broken python2).

It can be quite handy to run a very temporary local Etherpad instance to test against. This is possible with [Docker](https://docs.docker.com/get-docker/):

```bash
$ docker run -d --name etherpad -p 9001:9001 etherpad/etherpad
$ docker exec -ti etherpad cat APIKEY.txt;echo
```

Then you can `etherpump init` against that local Etherpad for experimentation and testing, using `http://localhost:9001` as the pad URL.
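A hedged sketch of exercising the library API against that throwaway instance (the pad id and text are made up; `settext` and `gettext` are the programmatic equivalents of the subcommands):

```python
from etherpump.api import gettext, settext

# Fill a (hypothetical) pad and read it back over the local API.
settext(["test-pad", "--create", "--text", "Hello from etherpump!"])
gettext(["test-pad"])
```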
Later on, you can remove the Etherpad with:

```bash
$ docker rm -f --volumes etherpad
```

## Maintenance utilities

Tools to help things stay tidy over time:

```bash
$ make
```

Please see the following links for further reading:

- [flake8](http://flake8.pycqa.org)
- [isort](https://isort.readthedocs.io)
- [black](https://black.readthedocs.io)

### Server Systers Situation

```
$ sudo -su systers
$ cd /var/www/etherpump
$ sh cron.sh
```

Served from `/etc/nginx/sites-enabled/etherpump.vvvvvvaria.conf`.

## Keeping track of Etherpad-lite

- [Etherpad-lite API documentation](https://etherpad.org/doc/v1.7.5/)
- [Etherpad-lite releases](https://github.com/ether/etherpad-lite/releases)

# License

GNU AFFERO GENERAL PUBLIC LICENSE, Version 3.

See [LICENSE](./LICENSE).
@@ -1,44 +0,0 @@
#!/usr/bin/env python3

from __future__ import print_function
import sys

usage = """Usage:
  etherdump CMD

where CMD could be:
  pull
  index
  dumpcsv
  gettext
  gethtml
  creatediffhtml
  list
  listauthors
  revisionscount
  showmeta
  html5tidy

For more information on each command try:
  etherdump CMD --help

"""

try:
    cmd = sys.argv[1]
    if cmd.startswith("-"):
        cmd = "sync"
        args = sys.argv
    else:
        args = sys.argv[2:]
except IndexError:
    print (usage)
    sys.exit(0)

try:
    # http://stackoverflow.com/questions/301134/dynamic-module-import-in-python
    cmdmod = __import__("etherdump.commands.%s" % cmd, fromlist=["etherdump.commands"])
    cmdmod.main(args)
except ImportError as e:
    print ("Error performing command '{0}'\n(python said: {1})\n".format(cmd, e))
    print (usage)
24 cron.sh Executable file
@@ -0,0 +1,24 @@
echo "Pulling pads..."

/usr/local/bin/poetry run etherpump pull \
  --meta \
  --html \
  --text \
  --magicwords \
  --publish-opt-in \
  --pub p \
  --css ../stylesheet.css \
  --fix-names \
  --connection 5 \
  --force

echo "Building the etherpump index..."

/usr/local/bin/poetry run etherpump index \
  input \
  p/*.meta.json \
  --templatepath templates \
  --title "Notes, __MAGICWORDS__, readers & more ..." \
  --output index.html

echo "Done!"
@@ -1,3 +0,0 @@
import os

DATAPATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), "data")
@@ -1,38 +0,0 @@
from __future__ import print_function
from argparse import ArgumentParser
import json
from urllib import urlencode
from urllib2 import urlopen, HTTPError, URLError


def main(args):
    p = ArgumentParser("calls the createDiffHTML API function for the given padid")
    p.add_argument("padid", help="the padid")
    p.add_argument("--padinfo", default=".etherdump/settings.json", help="settings, default: .etherdump/settings.json")
    p.add_argument("--showurl", default=False, action="store_true")
    p.add_argument("--format", default="text", help="output format, can be: text, json; default: text")
    p.add_argument("--rev", type=int, default=None, help="revision, default: latest")
    args = p.parse_args(args)

    with open(args.padinfo) as f:
        info = json.load(f)
    apiurl = info.get("apiurl")
    # apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
    data = {}
    data['apikey'] = info['apikey']
    data['padID'] = args.padid
    data['startRev'] = "0"
    if args.rev != None:
        data['rev'] = args.rev
    requesturl = apiurl+'createDiffHTML?'+urlencode(data)
    if args.showurl:
        print (requesturl)
    else:
        try:
            results = json.load(urlopen(requesturl))['data']
            if args.format == "json":
                print (json.dumps(results))
            else:
                print (results['html'].encode("utf-8"))
        except HTTPError as e:
            pass
@@ -1,32 +0,0 @@
from __future__ import print_function
from argparse import ArgumentParser
import json
from urllib import urlencode
from urllib2 import urlopen, HTTPError, URLError


def main(args):
    p = ArgumentParser("calls the getText API function for the given padid")
    p.add_argument("padid", help="the padid")
    p.add_argument("--padinfo", default=".etherdump/settings.json", help="settings, default: .etherdump/settings.json")
    p.add_argument("--showurl", default=False, action="store_true")
    p.add_argument("--format", default="text", help="output format, can be: text, json; default: text")
    args = p.parse_args(args)

    with open(args.padinfo) as f:
        info = json.load(f)
    apiurl = info.get("apiurl")
    # apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
    data = {}
    data['apikey'] = info['apikey']
    data['padID'] = args.padid # is utf-8 encoded
    requesturl = apiurl+'deletePad?'+urlencode(data)
    if args.showurl:
        print (requesturl)
    else:
        results = json.load(urlopen(requesturl))
        if args.format == "json":
            print (json.dumps(results))
        else:
            if results['data']:
                print (results['data']['text'].encode("utf-8"))
@@ -1,83 +0,0 @@
from __future__ import print_function
from argparse import ArgumentParser
import sys, json, re
from datetime import datetime
from urllib import urlencode
from urllib2 import urlopen, HTTPError, URLError
from csv import writer
from math import ceil, floor

"""
Dumps a CSV of all pads with columns
  padid, groupid, revisions, lastedited, author_ids

  padids have their group name trimmed
  groupid is without (g. $)
  revisions is an integral number of edits
  lastedited is ISO8601 formatted
  author_ids is a space delimited list of internal author IDs
"""

groupnamepat = re.compile(r"^g\.(\w+)\$")

out = writer(sys.stdout)

def jsonload (url):
    f = urlopen(url)
    data = f.read()
    f.close()
    return json.loads(data)

def main (args):
    p = ArgumentParser("outputs a CSV of information all all pads")
    p.add_argument("--padinfo", default=".etherdump/settings.json", help="settings, default: .etherdump/settings.json")
    p.add_argument("--zerorevs", default=False, action="store_true", help="include pads with zero revisions, default: False")
    args = p.parse_args(args)

    with open(args.padinfo) as f:
        info = json.load(f)
    apiurl = info.get("apiurl")
    data = {}
    data['apikey'] = info['apikey']
    requesturl = apiurl+'listAllPads?'+urlencode(data)

    padids = jsonload(requesturl)['data']['padIDs']
    padids.sort()
    numpads = len(padids)
    maxmsglen = 0
    count = 0
    out.writerow(("padid", "groupid", "lastedited", "revisions", "author_ids"))
    for i, padid in enumerate(padids):
        p = (float(i) / numpads)
        percentage = int(floor(p*100))
        bars = int(ceil(p*20))
        bar = ("*"*bars) + ("-"*(20-bars))
        msg = u"\r{0} {1}/{2} {3}... ".format(bar, (i+1), numpads, padid)
        if len(msg) > maxmsglen:
            maxmsglen = len(msg)
        sys.stderr.write("\r{0}".format(" "*maxmsglen))
        sys.stderr.write(msg.encode("utf-8"))
        sys.stderr.flush()
        m = groupnamepat.match(padid)
        if m:
            groupname = m.group(1)
            padidnogroup = padid[m.end():]
        else:
            groupname = u""
            padidnogroup = padid

        data['padID'] = padid.encode("utf-8")
        revisions = jsonload(apiurl+'getRevisionsCount?'+urlencode(data))['data']['revisions']
        if (revisions == 0) and not args.zerorevs:
            continue

        lastedited_raw = jsonload(apiurl+'getLastEdited?'+urlencode(data))['data']['lastEdited']
        lastedited_iso = datetime.fromtimestamp(int(lastedited_raw)/1000).isoformat()
        author_ids = jsonload(apiurl+'listAuthorsOfPad?'+urlencode(data))['data']['authorIDs']
        author_ids = u" ".join(author_ids).encode("utf-8")
        out.writerow((padidnogroup.encode("utf-8"), groupname.encode("utf-8"), revisions, lastedited_iso, author_ids))
        count += 1

    print("\nWrote {0} rows...".format(count), file=sys.stderr)
@@ -1,34 +0,0 @@
from __future__ import print_function
from argparse import ArgumentParser
import json
from urllib import urlencode
from urllib2 import urlopen, HTTPError, URLError


def main(args):
    p = ArgumentParser("calls the getHTML API function for the given padid")
    p.add_argument("padid", help="the padid")
    p.add_argument("--padinfo", default=".etherdump/settings.json", help="settings, default: .etherdump/settings.json")
    p.add_argument("--showurl", default=False, action="store_true")
    p.add_argument("--format", default="text", help="output format, can be: text, json; default: text")
    p.add_argument("--rev", type=int, default=None, help="revision, default: latest")
    args = p.parse_args(args)

    with open(args.padinfo) as f:
        info = json.load(f)
    apiurl = info.get("apiurl")
    # apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
    data = {}
    data['apikey'] = info['apikey']
    data['padID'] = args.padid
    if args.rev != None:
        data['rev'] = args.rev
    requesturl = apiurl+'getHTML?'+urlencode(data)
    if args.showurl:
        print (requesturl)
    else:
        results = json.load(urlopen(requesturl))['data']
        if args.format == "json":
            print (json.dumps(results))
        else:
            print (results['html'].encode("utf-8"))
@@ -1,43 +0,0 @@
from __future__ import print_function
from argparse import ArgumentParser
import json, sys
try:
    # python2
    from urllib2 import urlopen, URLError, HTTPError
    from urllib import urlencode
except ImportError:
    # python3
    from urllib.parse import urlencode
    from urllib.request import urlopen, URLError, HTTPError


def main(args):
    p = ArgumentParser("calls the getText API function for the given padid")
    p.add_argument("padid", help="the padid")
    p.add_argument("--padinfo", default=".etherdump/settings.json", help="settings, default: .etherdump/settings.json")
    p.add_argument("--showurl", default=False, action="store_true")
    p.add_argument("--format", default="text", help="output format, can be: text, json; default: text")
    p.add_argument("--rev", type=int, default=None, help="revision, default: latest")
    args = p.parse_args(args)

    with open(args.padinfo) as f:
        info = json.load(f)
    apiurl = info.get("apiurl")
    # apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
    data = {}
    data['apikey'] = info['apikey']
    data['padID'] = args.padid # is utf-8 encoded
    if args.rev != None:
        data['rev'] = args.rev
    requesturl = apiurl+'getText?'+urlencode(data)
    if args.showurl:
        print (requesturl)
    else:
        resp = urlopen(requesturl).read()
        resp = resp.decode("utf-8")
        results = json.loads(resp)
        if args.format == "json":
            print (json.dumps(results))
        else:
            if results['data']:
                sys.stdout.write(results['data']['text'])
@@ -1,40 +0,0 @@
from __future__ import print_function
from argparse import ArgumentParser
import json
import sys
from etherdump.commands.common import getjson
try:
    # python2
    from urlparse import urlparse, urlunparse
    from urllib2 import urlopen, URLError, HTTPError
    from urllib import urlencode
    input = raw_input
except ImportError:
    # python3
    from urllib.parse import urlparse, urlunparse, urlencode
    from urllib.request import urlopen, URLError, HTTPError

def main (args):
    p = ArgumentParser("call listAllPads and print the results")
    p.add_argument("--padinfo", default=".etherdump/settings.json", help="settings, default: .etherdump/settings.json")
    p.add_argument("--showurl", default=False, action="store_true")
    p.add_argument("--format", default="lines", help="output format: lines, json; default lines")
    args = p.parse_args(args)

    with open(args.padinfo) as f:
        info = json.load(f)
    apiurl = info.get("apiurl")
    # apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
    data = {}
    data['apikey'] = info['apikey']
    requesturl = apiurl+'listAllPads?'+urlencode(data)
    if args.showurl:
        print (requesturl)
    else:
        results = getjson(requesturl)['data']['padIDs']
        if args.format == "json":
            print (json.dumps(results))
        else:
            for r in results:
                print (r)
@@ -1,31 +0,0 @@
from __future__ import print_function
from argparse import ArgumentParser
import json
from urllib import urlencode
from urllib2 import urlopen, HTTPError, URLError


def main(args):
    p = ArgumentParser("call listAuthorsOfPad for the padid")
    p.add_argument("padid", help="the padid")
    p.add_argument("--padinfo", default=".etherdump/settings.json", help="settings, default: .etherdump/settings.json")
    p.add_argument("--showurl", default=False, action="store_true")
    p.add_argument("--format", default="lines", help="output format, can be: lines, json; default: lines")
    args = p.parse_args(args)

    with open(args.padinfo) as f:
        info = json.load(f)
    apiurl = info.get("apiurl")
    data = {}
    data['apikey'] = info['apikey']
    data['padID'] = args.padid.encode("utf-8")
    requesturl = apiurl+'listAuthorsOfPad?'+urlencode(data)
    if args.showurl:
        print (requesturl)
    else:
        results = json.load(urlopen(requesturl))['data']['authorIDs']
        if args.format == "json":
            print (json.dumps(results))
        else:
            for r in results:
                print (r.encode("utf-8"))
@@ -1,262 +0,0 @@
from __future__ import print_function
from argparse import ArgumentParser
import sys, json, re, os
from datetime import datetime

try:
    # python2
    from urllib2 import urlopen, URLError, HTTPError
    from urllib import urlencode
except ImportError:
    # python3
    from urllib.parse import urlencode, quote
    from urllib.request import urlopen, URLError, HTTPError

from etherdump.commands.common import *
from time import sleep
from etherdump.commands.html5tidy import html5tidy
import html5lib
from xml.etree import ElementTree as ET
from fnmatch import fnmatch

# debugging
# import ElementTree as ET

"""
pull(meta):
    Update meta data files for those that have changed.
    Check for changed pads by looking at revisions & comparing to existing

todo...
use/prefer public interfaces ? (export functions)

"""

def try_deleting (files):
    for f in files:
        try:
            os.remove(f)
        except OSError as e:
            pass

def main (args):
    p = ArgumentParser("Check for pads that have changed since last sync (according to .meta.json)")

    p.add_argument("padid", nargs="*", default=[])
    p.add_argument("--glob", default=False, help="download pads matching a glob pattern")

    p.add_argument("--padinfo", default=".etherdump/settings.json", help="settings, default: .etherdump/settings.json")
    p.add_argument("--zerorevs", default=False, action="store_true", help="include pads with zero revisions, default: False (i.e. pads with no revisions are skipped)")
    p.add_argument("--pub", default="p", help="folder to store files for public pads, default: p")
    p.add_argument("--group", default="g", help="folder to store files for group pads, default: g")
    p.add_argument("--skip", default=None, type=int, help="skip this many items, default: None")
    p.add_argument("--meta", default=False, action="store_true", help="download meta to PADID.meta.json, default: False")
    p.add_argument("--text", default=False, action="store_true", help="download text to PADID.txt, default: False")
    p.add_argument("--html", default=False, action="store_true", help="download html to PADID.html, default: False")
    p.add_argument("--dhtml", default=False, action="store_true", help="download dhtml to PADID.diff.html, default: False")
    p.add_argument("--all", default=False, action="store_true", help="download all files (meta, text, html, dhtml), default: False")
    p.add_argument("--folder", default=False, action="store_true", help="dump files in a folder named PADID (meta, text, html, dhtml), default: False")
    p.add_argument("--output", default=False, action="store_true", help="output changed padids on stdout")
    p.add_argument("--force", default=False, action="store_true", help="reload, even if revisions count matches previous")
    p.add_argument("--no-raw-ext", default=False, action="store_true", help="save plain text as padname with no (additional) extension")
    p.add_argument("--fix-names", default=False, action="store_true", help="normalize padid's (no spaces, special control chars) for use in file names")

    p.add_argument("--filter-ext", default=None, help="filter pads by extension")

    p.add_argument("--css", default="/styles.css", help="add css url to output pages, default: /styles.css")
    p.add_argument("--script", default="/versions.js", help="add script url to output pages, default: /versions.js")

    p.add_argument("--nopublish", default="__NOPUBLISH__", help="no publish magic word, default: __NOPUBLISH__")

    args = p.parse_args(args)

    raw_ext = ".raw.txt"
    if args.no_raw_ext:
        raw_ext = ""

    info = loadpadinfo(args.padinfo)
    data = {}
    data['apikey'] = info['apikey']

    if args.padid:
        padids = args.padid
    elif args.glob:
        padids = getjson(info['localapiurl']+'listAllPads?'+urlencode(data))['data']['padIDs']
        padids = [x for x in padids if fnmatch(x, args.glob)]
    else:
        padids = getjson(info['localapiurl']+'listAllPads?'+urlencode(data))['data']['padIDs']
    padids.sort()
    numpads = len(padids)
    # maxmsglen = 0
    count = 0
    for i, padid in enumerate(padids):
        if args.skip != None and i<args.skip:
            continue
        progressbar(i, numpads, padid)

        data['padID'] = padid.encode("utf-8")
        p = padpath(padid, args.pub, args.group, args.fix_names)
        if args.folder:
            p = os.path.join(p, padid.encode("utf-8"))

        metapath = p + ".meta.json"
        revisions = None
        tries = 1
        skip = False
        padurlbase = re.sub(r"api/1.2.9/$", "p/", info["apiurl"])
        meta = {}
        # if type(padurlbase) == unicode:
        #     padurlbase = padurlbase.encode("utf-8")
        while True:
            try:
                if os.path.exists(metapath):
                    with open(metapath) as f:
                        meta.update(json.load(f))
                    revisions = getjson(info['localapiurl']+'getRevisionsCount?'+urlencode(data))['data']['revisions']
                    if meta['revisions'] == revisions and not args.force:
                        skip=True
                        break

                meta['padid'] = padid # .encode("utf-8")
                versions = meta["versions"] = []
                versions.append({
                    "url": padurlbase + quote(padid),
                    "type": "pad",
                    "code": 200
                })

                if revisions == None:
                    meta['revisions'] = getjson(info['localapiurl']+'getRevisionsCount?'+urlencode(data))['data']['revisions']
                else:
                    meta['revisions' ] = revisions

                if (meta['revisions'] == 0) and (not args.zerorevs):
                    # print("Skipping zero revs", file=sys.stderr)
                    skip=True
                    break

                # todo: load more metadata!
                meta['group'], meta['pad'] = splitpadname(padid)
                meta['pathbase'] = p
                meta['lastedited_raw'] = int(getjson(info['localapiurl']+'getLastEdited?'+urlencode(data))['data']['lastEdited'])
                meta['lastedited_iso'] = datetime.fromtimestamp(int(meta['lastedited_raw'])/1000).isoformat()
                meta['author_ids'] = getjson(info['localapiurl']+'listAuthorsOfPad?'+urlencode(data))['data']['authorIDs']
                break
            except HTTPError as e:
                tries += 1
                if tries > 3:
                    print ("Too many failures ({0}), skipping".format(padid), file=sys.stderr)
                    skip=True
                    break
                else:
                    sleep(3)
            except TypeError as e:
                print ("Type Error loading pad {0} (phantom pad?), skipping".format(padid), file=sys.stderr)
                skip=True
                break

        if skip:
            continue

        count += 1

        if args.output:
            print (padid)

        if args.all or (args.meta or args.text or args.html or args.dhtml):
            try:
                os.makedirs(os.path.split(metapath)[0])
            except OSError:
                pass

        if args.all or args.text:
            text = getjson(info['localapiurl']+'getText?'+urlencode(data))
            ver = {"type": "text"}
            versions.append(ver)
            ver["code"] = text["_code"]
            if text["_code"] == 200:
                text = text['data']['text']

                ##########################################
                ## ENFORCE __NOPUBLISH__ MAGIC WORD
                ##########################################
                if args.nopublish and args.nopublish in text:
                    # NEED TO PURGE ANY EXISTING DOCS
                    try_deleting((p+raw_ext,p+".raw.html",p+".diff.html",p+".meta.json"))
                    continue

                ver["path"] = p+raw_ext
                ver["url"] = quote(ver["path"])
                with open(ver["path"], "w") as f:
                    f.write(text)
                # once the content is settled, compute a hash
                # and link it in the metadata!

        links = []
        if args.css:
            links.append({"href":args.css, "rel":"stylesheet"})
        # todo, make this process reflect which files actually were made
        versionbaseurl = quote(padid)
        links.append({"href":versions[0]["url"], "rel":"alternate", "type":"text/html", "title":"Etherpad"})
        if args.all or args.text:
            links.append({"href":versionbaseurl+raw_ext, "rel":"alternate", "type":"text/plain", "title":"Plain text"})
        if args.all or args.html:
            links.append({"href":versionbaseurl+".raw.html", "rel":"alternate", "type":"text/html", "title":"HTML"})
        if args.all or args.dhtml:
            links.append({"href":versionbaseurl+".diff.html", "rel":"alternate", "type":"text/html", "title":"HTML with author colors"})
        if args.all or args.meta:
            links.append({"href":versionbaseurl+".meta.json", "rel":"alternate", "type":"application/json", "title":"Meta data"})

        # links.append({"href":"/", "rel":"search", "type":"text/html", "title":"Index"})

        if args.all or args.dhtml:
            data['startRev'] = "0"
            html = getjson(info['localapiurl']+'createDiffHTML?'+urlencode(data))
            ver = {"type": "diffhtml"}
            versions.append(ver)
            ver["code"] = html["_code"]
            if html["_code"] == 200:
                try:
                    html = html['data']['html']
                    ver["path"] = p+".diff.html"
                    ver["url"] = quote(ver["path"])
                    # doc = html5lib.parse(html, treebuilder="etree", override_encoding="utf-8", namespaceHTMLElements=False)
                    doc = html5lib.parse(html, treebuilder="etree", namespaceHTMLElements=False)
                    html5tidy(doc, indent=True, title=padid, scripts=args.script, links=links)
                    with open(ver["path"], "w") as f:
                        # f.write(html.encode("utf-8"))
                        print(ET.tostring(doc, method="html", encoding="unicode"), file=f)
                except TypeError:
                    # Malformed / incomplete response, record the message (such as "internal error") in the metadata and write NO file!
                    ver["message"] = html["message"]
                    # with open(ver["path"], "w") as f:
                    #     print ("""<pre>{0}</pre>""".format(json.dumps(html, indent=2)), file=f)

        # Process text, html, dhtml, all options
        if args.all or args.html:
            html = getjson(info['localapiurl']+'getHTML?'+urlencode(data))
            ver = {"type": "html"}
            versions.append(ver)
            ver["code"] = html["_code"]
            if html["_code"] == 200:
                html = html['data']['html']
                ver["path"] = p+".raw.html"
                ver["url"] = quote(ver["path"])
                doc = html5lib.parse(html, treebuilder="etree", namespaceHTMLElements=False)
                html5tidy(doc, indent=True, title=padid, scripts=args.script, links=links)
                with open(ver["path"], "w") as f:
                    # f.write(html.encode("utf-8"))
                    print (ET.tostring(doc, method="html", encoding="unicode"), file=f)

        # output meta
        if args.all or args.meta:
            ver = {"type": "meta"}
            versions.append(ver)
            ver["path"] = metapath
            ver["url"] = quote(metapath)
            with open(metapath, "w") as f:
                json.dump(meta, f, indent=2)

    print("\n{0} pad(s) loaded".format(count), file=sys.stderr)
@@ -1,25 +0,0 @@
from __future__ import print_function
from argparse import ArgumentParser
import json
from urllib import urlencode
from urllib2 import urlopen, HTTPError, URLError

def main(args):
    p = ArgumentParser("call getRevisionsCount for the given padid")
    p.add_argument("padid", help="the padid")
    p.add_argument("--padinfo", default=".etherdump/settings.json", help="settings, default: .etherdump/settings.json")
    p.add_argument("--showurl", default=False, action="store_true")
    args = p.parse_args(args)

    with open(args.padinfo) as f:
        info = json.load(f)
    apiurl = info.get("apiurl")
    data = {}
    data['apikey'] = info['apikey']
    data['padID'] = args.padid.encode("utf-8")
    requesturl = apiurl+'getRevisionsCount?'+urlencode(data)
    if args.showurl:
        print (requesturl)
    else:
        results = json.load(urlopen(requesturl))['data']['revisions']
        print (results)
@@ -1,66 +0,0 @@
from __future__ import print_function
from argparse import ArgumentParser
import json, sys
from urllib import urlencode
from urllib2 import urlopen, HTTPError, URLError
import requests


LIMIT_BYTES = 100*1000

def main(args):
    p = ArgumentParser("calls the setHTML API function for the given padid")
    p.add_argument("padid", help="the padid")
    p.add_argument("--html", default=None, help="html, default: read from stdin")
    p.add_argument("--padinfo", default=".etherdump/settings.json", help="settings, default: .etherdump/settings.json")
    p.add_argument("--showurl", default=False, action="store_true")
    # p.add_argument("--format", default="text", help="output format, can be: text, json; default: text")
    p.add_argument("--create", default=False, action="store_true", help="flag to create pad if necessary")
    p.add_argument("--limit", default=False, action="store_true", help="limit text to 100k (etherpad limit)")
    args = p.parse_args(args)

    with open(args.padinfo) as f:
        info = json.load(f)
    apiurl = info.get("apiurl")
    # apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
    # data = {}
    # data['apikey'] = info['apikey']
    # data['padID'] = args.padid # is utf-8 encoded

    createPad = False
    if args.create:
        # check if it's in fact necessary
        requesturl = apiurl+'getRevisionsCount?'+urlencode({'apikey': info['apikey'], 'padID': args.padid})
        results = json.load(urlopen(requesturl))
        print (json.dumps(results, indent=2), file=sys.stderr)
        if results['code'] != 0:
            createPad = True

    if args.html:
        html = args.html
    else:
        html = sys.stdin.read()

    params = {}
    params['apikey'] = info['apikey']
    params['padID'] = args.padid

    if createPad:
        requesturl = apiurl+'createPad'
        if args.showurl:
            print (requesturl)
        results = requests.post(requesturl, params=params, data={'text': ''}) # json.load(urlopen(requesturl))
        results = json.loads(results.text)
        print (json.dumps(results, indent=2))

    if len(html) > LIMIT_BYTES and args.limit:
        print ("limiting", len(text), LIMIT_BYTES, file=sys.stderr)
        html = html[:LIMIT_BYTES]

    requesturl = apiurl+'setHTML'
    if args.showurl:
        print (requesturl)
    # params['html'] = html
    results = requests.post(requesturl, params={'apikey': info['apikey']}, data={'apikey': info['apikey'], 'padID': args.padid, 'html': html}) # json.load(urlopen(requesturl))
    results = json.loads(results.text)
    print (json.dumps(results, indent=2))
@@ -1,68 +0,0 @@
from __future__ import print_function
from argparse import ArgumentParser
import json, sys

try:
    # python2
    from urllib2 import urlopen, URLError, HTTPError
    from urllib import urlencode
except ImportError:
    # python3
    from urllib.parse import urlencode, quote
    from urllib.request import urlopen, URLError, HTTPError

import requests


LIMIT_BYTES = 100*1000

def main(args):
    p = ArgumentParser("calls the getText API function for the given padid")
    p.add_argument("padid", help="the padid")
    p.add_argument("--text", default=None, help="text, default: read from stdin")
    p.add_argument("--padinfo", default=".etherdump/settings.json", help="settings, default: .etherdump/settings.json")
    p.add_argument("--showurl", default=False, action="store_true")
    # p.add_argument("--format", default="text", help="output format, can be: text, json; default: text")
    p.add_argument("--create", default=False, action="store_true", help="flag to create pad if necessary")
    p.add_argument("--limit", default=False, action="store_true", help="limit text to 100k (etherpad limit)")
    args = p.parse_args(args)

    with open(args.padinfo) as f:
        info = json.load(f)
    apiurl = info.get("apiurl")
    # apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
    data = {}
    data['apikey'] = info['apikey']
    data['padID'] = args.padid # is utf-8 encoded

    createPad = False
    if args.create:
        requesturl = apiurl+'getRevisionsCount?'+urlencode(data)
        results = json.load(urlopen(requesturl))
        # print (json.dumps(results, indent=2))
        if results['code'] != 0:
            createPad = True

    if args.text:
        text = args.text
    else:
        text = sys.stdin.read()

    if len(text) > LIMIT_BYTES and args.limit:
        print ("limiting", len(text), LIMIT_BYTES)
        text = text[:LIMIT_BYTES]

    data['text'] = text

    if createPad:
        requesturl = apiurl+'createPad'
    else:
        requesturl = apiurl+'setText'

    if args.showurl:
        print (requesturl)
    results = requests.post(requesturl, params=data) # json.load(urlopen(requesturl))
    results = json.loads(results.text)
    if results['code'] != 0:
        print ("setText: ERROR ({0}) on pad {1}: {2}".format(results['code'], args.padid, results['message']))
    # json.dumps(results, indent=2)
@@ -1,111 +0,0 @@
from __future__ import print_function
from argparse import ArgumentParser
import sys, json, re, os
from datetime import datetime
from urllib import urlencode
from urllib2 import urlopen, HTTPError, URLError
from math import ceil, floor
from common import *

"""
status (meta):
    Update meta data files for those that have changed.
    Check for changed pads by looking at revisions & comparing to existing

design decisions...
ok based on the fact that only the txt file is pushable (via setText)
it makes sense to give this file "primacy" ... ie to put the other forms
(html, diff.html) in a special place (if created at all). Otherwise this
complicates the "syncing" idea....

"""

class PadItemException (Exception):
    pass

class PadItem ():
    def __init__ (self, padid=None, path=None, padexists=False):
        self.padexists = padexists
        if padid and path:
            raise PadItemException("only give padid or path")
        if not (padid or path):
            raise PadItemException("either padid or path must be specified")
        if padid:
            self.padid = padid
            self.path = padpath(padid, group_path="g")
        else:
            self.path = path
            self.padid = padpath2id(path)

    @property
    def status (self):
        if self.fileexists:
            if self.padexists:
                return "S"
            else:
                return "F"
        elif self.padexists:
            return "P"
        else:
            return "?"

    @property
    def fileexists (self):
        return os.path.exists(self.path)

def ignore_p (path, settings=None):
    if path.startswith("."):
        return True

def main (args):
    p = ArgumentParser("Check for pads that have changed since last sync (according to .meta.json)")
    # p.add_argument("padid", nargs="*", default=[])
    p.add_argument("--padinfo", default=".etherdump/settings.json", help="settings, default: .etherdump/settings.json")
    p.add_argument("--zerorevs", default=False, action="store_true", help="include pads with zero revisions, default: False (i.e. pads with no revisions are skipped)")
    p.add_argument("--pub", default=".", help="folder to store files for public pads, default: pub")
    p.add_argument("--group", default="g", help="folder to store files for group pads, default: g")
    p.add_argument("--skip", default=None, type=int, help="skip this many items, default: None")
    p.add_argument("--meta", default=False, action="store_true", help="download meta to PADID.meta.json, default: False")
    p.add_argument("--text", default=False, action="store_true", help="download text to PADID.txt, default: False")
    p.add_argument("--html", default=False, action="store_true", help="download html to PADID.html, default: False")
    p.add_argument("--dhtml", default=False, action="store_true", help="download dhtml to PADID.dhtml, default: False")
    p.add_argument("--all", default=False, action="store_true", help="download all files (meta, text, html, dhtml), default: False")
    args = p.parse_args(args)

    info = loadpadinfo(args.padinfo)
    data = {}
    data['apikey'] = info['apikey']

    padsbypath = {}

    # listAllPads
    padids = getjson(info['apiurl']+'listAllPads?'+urlencode(data))['data']['padIDs']
    padids.sort()
    for padid in padids:
        pad = PadItem(padid=padid, padexists=True)
        padsbypath[pad.path] = pad

    files = os.listdir(args.pub)
    files = [x for x in files if not ignore_p(x)]
    files.sort()
    for p in files:
        pad = padsbypath.get(p)
        if not pad:
            pad = PadItem(path=p)
            padsbypath[pad.path] = pad

    pads = padsbypath.values()
    pads.sort(key=lambda x: (x.status, x.padid))

    curstat = None
    for p in pads:
        if p.status != curstat:
            curstat = p.status
            if curstat == "F":
                print ("New/changed files")
            elif curstat == "P":
                print ("New/changed pads")
            elif curstat == ".":
                print ("Up to date")
        print (" ", p.status, p.padid)
100 etherpump/__init__.py Normal file
@@ -0,0 +1,100 @@
#!/usr/bin/env python3
import os
import sys

DATAPATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), "data")

__VERSION__ = "0.0.20"


def subcommands():
    """List all sub-commands for the `--help` output."""
    output = []

    subcommands = [
        "creatediffhtml",
        "deletepad",
        "dumpcsv",
        "gethtml",
        "gettext",
        "index",
        "init",
        "list",
        "listauthors",
        "publication",
        "pull",
        "revisionscount",
        "sethtml",
        "settext",
        "showmeta",
    ]

    for subcommand in subcommands:
        try:
            # http://stackoverflow.com/questions/301134/dynamic-module-import-in-python
            doc = __import__(
                "etherpump.commands.%s" % subcommand,
                fromlist=["etherdump.commands"],
            ).__doc__
        except ModuleNotFoundError:
            doc = ""
        output.append(f"  {subcommand}: {doc}")

    output.sort()

    return "\n".join(output)


usage = """
             _
            | |
  _ _|_ | | _  ,_    _           _  _
|/  |  |/ \ |/ /  |  |/ \_|   | / |/ |/ |  |/ \_
|__/|_/|   |_/|__/   |_/|__/  \_/|_/  |  |_/|__/
                   /|                      /|
                   \|                      \|

Usage:
    etherpump CMD

where CMD could be:
{}

For more information on each command try:
    etherpump CMD --help""".format(
    subcommands()
)


def main():
    try:
        cmd = sys.argv[1]
        if cmd.startswith("-"):
            args = sys.argv
        else:
            args = sys.argv[2:]

        if len(sys.argv) < 3:
            if any(arg in args for arg in ["--help", "-h"]):
                print(usage)
                sys.exit(0)
            elif any(arg in args for arg in ["--version", "-v"]):
                print("etherpump {}".format(__VERSION__))
                sys.exit(0)

    except IndexError:
        print(usage)
        sys.exit(0)

    try:
        # http://stackoverflow.com/questions/301134/dynamic-module-import-in-python
        cmdmod = __import__(
            "etherpump.commands.%s" % cmd, fromlist=["etherdump.commands"]
        )
        cmdmod.main(args)
    except ImportError as e:
        print(
            "Error performing command '{0}'\n(python said: {1})\n".format(
                cmd, e
            )
        )
        print(usage)
67 etherpump/api/__init__.py Normal file
@ -0,0 +1,67 @@
|
||||
from functools import wraps
from os.path import exists
from pathlib import Path
from urllib.parse import urlencode

from etherpump.commands.common import getjson, loadpadinfo
from etherpump.commands.creatediffhtml import main as creatediffhtml  # noqa
from etherpump.commands.deletepad import main as deletepad  # noqa
from etherpump.commands.dumpcsv import main as dumpcsv  # noqa
from etherpump.commands.gethtml import main as gethtml  # noqa
from etherpump.commands.gettext import main as gettext  # noqa
from etherpump.commands.index import main as index  # noqa
from etherpump.commands.init import main  # noqa
from etherpump.commands.init import main as init
from etherpump.commands.list import main as list  # noqa
from etherpump.commands.listauthors import main as listauthors  # noqa
from etherpump.commands.publication import main as publication  # noqa
from etherpump.commands.pull import main as pull
from etherpump.commands.revisionscount import main as revisionscount  # noqa
from etherpump.commands.sethtml import main as sethtml  # noqa
from etherpump.commands.settext import main as settext  # noqa
from etherpump.commands.showmeta import main as showmeta  # noqa


def ensure_init():
    path = Path(".etherpump/settings.json").absolute()
    if not exists(path):
        try:
            main([])
        except SystemExit:
            pass


def get_pad_ids():
    info = loadpadinfo(Path(".etherpump/settings.json"))
    data = {"apikey": info["apikey"]}
    url = info["localapiurl"] + "listAllPads?" + urlencode(data)
    return getjson(url)["data"]["padIDs"]


def magic_word(word, fresh=True):
    ensure_init()

    if fresh:
        pull(["--text", "--meta", "--publish-opt-in", "--publish", word])

    pads = {}
    pad_ids = get_pad_ids()
    for pad_id in pad_ids:
        path = Path("./p/{}.raw.txt".format(pad_id)).absolute()
        try:
            with open(path, "r") as handle:
                text = handle.read()
                if word in text:
                    pads[pad_id] = {}
                    pads[pad_id]["txt"] = text
        except FileNotFoundError:
            continue

    def _magic_word(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            return func(pads)

        return wrapper

    return _magic_word
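`magic_word` is a decorator factory: it optionally pulls fresh pads, collects every pad whose raw text contains `word`, and hands that mapping to the wrapped function. A hedged usage sketch (the marker word and the function are made up):

from etherpump.api import magic_word

@magic_word("__PUBLISH__", fresh=False)  # fresh=False skips the network pull
def report(pads):
    # pads maps pad_id -> {"txt": raw text} for pads containing the word
    for pad_id, meta in pads.items():
        print(pad_id, len(meta["txt"]))

report()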
@@ -1,8 +1,8 @@
#!/usr/bin/env python
from __future__ import print_function
import json
from argparse import ArgumentParser
import json, os


def main(args):
    p = ArgumentParser("")
@@ -18,6 +18,6 @@ def main(args):
        ret.append(meta)

    if args.indent:
        print (json.dumps(ret, indent=args.indent))
        print(json.dumps(ret, indent=args.indent))
    else:
        print (json.dumps(ret))
        print(json.dumps(ret))
@@ -1,40 +1,31 @@
from __future__ import print_function
import re, os, json, sys
from math import ceil, floor
import json
import os
import re
import sys
from html.entities import name2codepoint
from time import sleep
from urllib.parse import quote_plus, unquote_plus
from urllib.request import HTTPError, urlopen

try:
    # python2
    from urlparse import urlparse, urlunparse
    from urllib2 import urlopen, URLError, HTTPError
    from urllib import urlencode
    from urllib import quote_plus, unquote_plus
    from htmlentitydefs import name2codepoint

    input = raw_input
except ImportError:
    # python3
    from urllib.parse import urlparse, urlunparse, urlencode, quote_plus, unquote_plus
    from urllib.request import urlopen, URLError, HTTPError
    from html.entities import name2codepoint
import trio

groupnamepat = re.compile(r"^g\.(\w+)\$")
def splitpadname (padid):


def splitpadname(padid):
    m = groupnamepat.match(padid)
    if m:
        return(m.group(1), padid[m.end():])
        return (m.group(1), padid[m.end() :])
    else:
        return (u"", padid)
        return ("", padid)

def padurl (padid, ):


def padurl(padid,):
    return padid

def padpath (padid, pub_path=u"", group_path=u"", normalize=False):


def padpath(padid, pub_path="", group_path="", normalize=False):
    g, p = splitpadname(padid)
    # if type(g) == unicode:
    #     g = g.encode("utf-8")
    # if type(p) == unicode:
    #     p = p.encode("utf-8")
    p = quote_plus(p)
    if normalize:
        p = p.replace(" ", "_")
@@ -47,9 +38,8 @@ def padpath (padid, pub_path=u"", group_path=u"", normalize=False):
    else:
        return os.path.join(pub_path, p)

def padpath2id (path):
    if type(path) == unicode:
        path = path.encode("utf-8")


def padpath2id(path):
    dd, p = os.path.split(path)
    gname = dd.split("/")[-1]
    p = unquote_plus(p)
@@ -58,7 +48,8 @@ def padpath2id (path):
    else:
        return p.decode("utf-8")

def getjson (url, max_retry=3, retry_sleep_time=3):


def getjson(url, max_retry=3, retry_sleep_time=3):
    ret = {}
    ret["_retries"] = 0
    while ret["_retries"] <= max_retry:
@@ -76,32 +67,47 @@ def getjson (url, max_retry=3, retry_sleep_time=3):
        except ValueError as e:
            url = "http://localhost" + url
        except HTTPError as e:
            print ("HTTPError {0}".format(e), file=sys.stderr)
            print("HTTPError {0}".format(e), file=sys.stderr)
            ret["_code"] = e.code
            ret["_retries"]+=1
            ret["_retries"] += 1
            if retry_sleep_time:
                sleep(retry_sleep_time)
    return ret


async def agetjson(session, url):
    """The asynchronous version of getjson."""
    RETRY = 20
    TIMEOUT = 10

    ret = {}
    ret["_retries"] = 0

    try:
        response = await session.get(url, timeout=TIMEOUT, retries=RETRY)
        rurl = response.url
        ret.update(response.json())
        ret["_code"] = response.status_code
        if rurl != url:
            ret["_url"] = rurl
        return ret
    except Exception as e:
        print("Failed to download {}, saw {}".format(url, str(e)))
        return


def loadpadinfo(p):
    with open(p) as f:
        info = json.load(f)
    if 'localapiurl' not in info:
        info['localapiurl'] = info.get('apiurl')
    if "localapiurl" not in info:
        info["localapiurl"] = info.get("apiurl")
    return info

def progressbar (i, num, label="", file=sys.stderr):
    p = float(i) / num
    percentage = int(floor(p*100))
    bars = int(ceil(p*20))
    bar = ("*"*bars) + ("-"*(20-bars))
    msg = u"\r{0} {1}/{2} {3}... ".format(bar, (i+1), num, label)
    sys.stderr.write(msg)
    sys.stderr.flush()

# Python developer Fredrik Lundh (author of elementtree, among other things)
# has such a function on his website, which works with decimal, hex and named
# entities:


# Python developer Fredrik Lundh (author of elementtree, among other things) has such a function on his website, which works with decimal, hex and named entities:
##
# Removes HTML or XML character references and entities from a text string.
#
@@ -114,17 +120,26 @@ def unescape(text):
        # character reference
        try:
            if text[:3] == "&#x":
                return unichr(int(text[3:-1], 16))
                return chr(int(text[3:-1], 16))
            else:
                return unichr(int(text[2:-1]))
                return chr(int(text[2:-1]))
        except ValueError:
            pass
    else:
        # named entity
        try:
            text = unichr(name2codepoint[text[1:-1]])
            text = chr(name2codepoint[text[1:-1]])
        except KeyError:
            pass
    return text # leave as is
    return text  # leave as is

    return re.sub("&#?\w+;", fixup, text)


def istty():
    return sys.stdout.isatty() and os.environ.get("TERM") != "dumb"


def chunks(lst, n):
    for i in range(0, len(lst), n):
        yield lst[i : i + n]
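`getjson` folds bookkeeping keys (`_retries`, `_code`, `_url`) into the decoded API response, and `chunks` is a plain batching helper. A quick illustration of `chunks` with arbitrary values:

list(chunks([1, 2, 3, 4, 5], 2))  # -> [[1, 2], [3, 4], [5]]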
52 etherpump/commands/creatediffhtml.py Normal file
@@ -0,0 +1,52 @@
"""Calls the createDiffHTML API function for the given padid"""
|
||||
|
||||
import json
|
||||
from argparse import ArgumentParser
|
||||
from urllib.error import HTTPError, URLError
|
||||
from urllib.parse import urlencode
|
||||
from urllib.request import urlopen
|
||||
|
||||
|
||||
def main(args):
|
||||
p = ArgumentParser(
|
||||
"calls the createDiffHTML API function for the given padid"
|
||||
)
|
||||
p.add_argument("padid", help="the padid")
|
||||
p.add_argument(
|
||||
"--padinfo",
|
||||
default=".etherpump/settings.json",
|
||||
help="settings, default: .etherdump/settings.json",
|
||||
)
|
||||
p.add_argument("--showurl", default=False, action="store_true")
|
||||
p.add_argument(
|
||||
"--format",
|
||||
default="text",
|
||||
help="output format, can be: text, json; default: text",
|
||||
)
|
||||
p.add_argument(
|
||||
"--rev", type=int, default=None, help="revision, default: latest"
|
||||
)
|
||||
args = p.parse_args(args)
|
||||
|
||||
with open(args.padinfo) as f:
|
||||
info = json.load(f)
|
||||
apiurl = info.get("apiurl")
|
||||
# apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
|
||||
data = {}
|
||||
data["apikey"] = info["apikey"]
|
||||
data["padID"] = args.padid
|
||||
data["startRev"] = "0"
|
||||
if args.rev != None:
|
||||
data["rev"] = args.rev
|
||||
requesturl = apiurl + "createDiffHTML?" + urlencode(data)
|
||||
if args.showurl:
|
||||
print(requesturl)
|
||||
else:
|
||||
try:
|
||||
results = json.load(urlopen(requesturl))["data"]
|
||||
if args.format == "json":
|
||||
print(json.dumps(results))
|
||||
else:
|
||||
print(results["html"])
|
||||
except HTTPError as e:
|
||||
pass
|
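Because `etherpump.api` re-exports each command's `main` (see the api module above), the same command can be driven from Python as well as the CLI; a sketch, assuming an initialized folder and an existing pad id "mypad":

from etherpump.api import creatediffhtml

creatediffhtml(["mypad", "--format", "json"])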
41 etherpump/commands/deletepad.py Normal file
@@ -0,0 +1,41 @@
"""Calls the getText API function for the given padid"""
|
||||
|
||||
import json
|
||||
from argparse import ArgumentParser
|
||||
from urllib.error import HTTPError, URLError
|
||||
from urllib.parse import urlencode
|
||||
from urllib.request import urlopen
|
||||
|
||||
|
||||
def main(args):
|
||||
p = ArgumentParser("calls the getText API function for the given padid")
|
||||
p.add_argument("padid", help="the padid")
|
||||
p.add_argument(
|
||||
"--padinfo",
|
||||
default=".etherpump/settings.json",
|
||||
help="settings, default: .etherdump/settings.json",
|
||||
)
|
||||
p.add_argument("--showurl", default=False, action="store_true")
|
||||
p.add_argument(
|
||||
"--format",
|
||||
default="text",
|
||||
help="output format, can be: text, json; default: text",
|
||||
)
|
||||
args = p.parse_args(args)
|
||||
|
||||
with open(args.padinfo) as f:
|
||||
info = json.load(f)
|
||||
apiurl = info.get("apiurl")
|
||||
data = {}
|
||||
data["apikey"] = info["apikey"]
|
||||
data["padID"] = args.padid
|
||||
requesturl = apiurl + "deletePad?" + urlencode(data)
|
||||
if args.showurl:
|
||||
print(requesturl)
|
||||
else:
|
||||
results = json.load(urlopen(requesturl))
|
||||
if args.format == "json":
|
||||
print(json.dumps(results))
|
||||
else:
|
||||
if results["data"]:
|
||||
print(results["data"]["text"])
|
106 etherpump/commands/dumpcsv.py Normal file
@@ -0,0 +1,106 @@
"""Dumps a CSV of all pads"""
|
||||
|
||||
import json
|
||||
import re
|
||||
import sys
|
||||
from argparse import ArgumentParser
|
||||
from csv import writer
|
||||
from datetime import datetime
|
||||
from math import ceil, floor
|
||||
from urllib.error import HTTPError, URLError
|
||||
from urllib.parse import urlencode
|
||||
from urllib.request import urlopen
|
||||
|
||||
"""
|
||||
Dumps a CSV of all pads with columns
|
||||
padid, groupid, revisions, lastedited, author_ids
|
||||
|
||||
padids have their group name trimmed
|
||||
groupid is without (g. $)
|
||||
revisions is an integral number of edits
|
||||
lastedited is ISO8601 formatted
|
||||
author_ids is a space delimited list of internal author IDs
|
||||
"""
|
||||
|
||||
groupnamepat = re.compile(r"^g\.(\w+)\$")
|
||||
|
||||
out = writer(sys.stdout)
|
||||
|
||||
|
||||
def jsonload(url):
|
||||
f = urlopen(url)
|
||||
data = f.read()
|
||||
f.close()
|
||||
return json.loads(data)
|
||||
|
||||
|
||||
def main(args):
|
||||
p = ArgumentParser("outputs a CSV of information all all pads")
|
||||
p.add_argument(
|
||||
"--padinfo",
|
||||
default=".etherpump/settings.json",
|
||||
help="settings, default: .etherdump/settings.json",
|
||||
)
|
||||
p.add_argument(
|
||||
"--zerorevs",
|
||||
default=False,
|
||||
action="store_true",
|
||||
help="include pads with zero revisions, default: False",
|
||||
)
|
||||
args = p.parse_args(args)
|
||||
|
||||
with open(args.padinfo) as f:
|
||||
info = json.load(f)
|
||||
apiurl = info.get("apiurl")
|
||||
data = {}
|
||||
data["apikey"] = info["apikey"]
|
||||
requesturl = apiurl + "listAllPads?" + urlencode(data)
|
||||
|
||||
padids = jsonload(requesturl)["data"]["padIDs"]
|
||||
padids.sort()
|
||||
numpads = len(padids)
|
||||
maxmsglen = 0
|
||||
count = 0
|
||||
out.writerow(("padid", "groupid", "lastedited", "revisions", "author_ids"))
|
||||
for i, padid in enumerate(padids):
|
||||
p = float(i) / numpads
|
||||
percentage = int(floor(p * 100))
|
||||
bars = int(ceil(p * 20))
|
||||
bar = ("*" * bars) + ("-" * (20 - bars))
|
||||
msg = "\r{0} {1}/{2} {3}... ".format(bar, (i + 1), numpads, padid)
|
||||
if len(msg) > maxmsglen:
|
||||
maxmsglen = len(msg)
|
||||
sys.stderr.write("\r{0}".format(" " * maxmsglen))
|
||||
sys.stderr.write(msg)
|
||||
sys.stderr.flush()
|
||||
m = groupnamepat.match(padid)
|
||||
if m:
|
||||
groupname = m.group(1)
|
||||
padidnogroup = padid[m.end() :]
|
||||
else:
|
||||
groupname = ""
|
||||
padidnogroup = padid
|
||||
|
||||
data["padID"] = padid
|
||||
revisions = jsonload(apiurl + "getRevisionsCount?" + urlencode(data))[
|
||||
"data"
|
||||
]["revisions"]
|
||||
if (revisions == 0) and not args.zerorevs:
|
||||
continue
|
||||
|
||||
lastedited_raw = jsonload(apiurl + "getLastEdited?" + urlencode(data))[
|
||||
"data"
|
||||
]["lastEdited"]
|
||||
lastedited_iso = datetime.fromtimestamp(
|
||||
int(lastedited_raw) / 1000
|
||||
).isoformat()
|
||||
author_ids = jsonload(apiurl + "listAuthorsOfPad?" + urlencode(data))[
|
||||
"data"
|
||||
]["authorIDs"]
|
||||
author_ids = " ".join(author_ids)
|
||||
out.writerow(
|
||||
(padidnogroup, groupname, revisions, lastedited_iso, author_ids)
|
||||
)
|
||||
count += 1
|
||||
|
||||
print("\nWrote {0} rows...".format(count), file=sys.stderr)
|
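The inline progress bar maps the loop position onto a 20-character gauge. For example, at i = 5 of numpads = 20:

from math import ceil
bars = ceil((5 / 20) * 20)                # 5
bar = ("*" * bars) + ("-" * (20 - bars))  # '*****---------------'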
46 etherpump/commands/gethtml.py Normal file
@@ -0,0 +1,46 @@
"""Calls the getHTML API function for the given padid"""
|
||||
|
||||
import json
|
||||
from argparse import ArgumentParser
|
||||
from urllib.error import HTTPError, URLError
|
||||
from urllib.parse import urlencode
|
||||
from urllib.request import urlopen
|
||||
|
||||
|
||||
def main(args):
|
||||
p = ArgumentParser("calls the getHTML API function for the given padid")
|
||||
p.add_argument("padid", help="the padid")
|
||||
p.add_argument(
|
||||
"--padinfo",
|
||||
default=".etherpump/settings.json",
|
||||
help="settings, default: .etherdump/settings.json",
|
||||
)
|
||||
p.add_argument("--showurl", default=False, action="store_true")
|
||||
p.add_argument(
|
||||
"--format",
|
||||
default="text",
|
||||
help="output format, can be: text, json; default: text",
|
||||
)
|
||||
p.add_argument(
|
||||
"--rev", type=int, default=None, help="revision, default: latest"
|
||||
)
|
||||
args = p.parse_args(args)
|
||||
|
||||
with open(args.padinfo) as f:
|
||||
info = json.load(f)
|
||||
apiurl = info.get("apiurl")
|
||||
# apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
|
||||
data = {}
|
||||
data["apikey"] = info["apikey"]
|
||||
data["padID"] = args.padid
|
||||
if args.rev != None:
|
||||
data["rev"] = args.rev
|
||||
requesturl = apiurl + "getHTML?" + urlencode(data)
|
||||
if args.showurl:
|
||||
print(requesturl)
|
||||
else:
|
||||
results = json.load(urlopen(requesturl))["data"]
|
||||
if args.format == "json":
|
||||
print(json.dumps(results))
|
||||
else:
|
||||
print(results["html"])
|
49 etherpump/commands/gettext.py Normal file
@@ -0,0 +1,49 @@
"""Calls the getText API function for the given padid"""
|
||||
|
||||
import json
|
||||
import sys
|
||||
from argparse import ArgumentParser
|
||||
from urllib.parse import urlencode
|
||||
from urllib.request import HTTPError, URLError, urlopen
|
||||
|
||||
|
||||
def main(args):
|
||||
p = ArgumentParser("calls the getText API function for the given padid")
|
||||
p.add_argument("padid", help="the padid")
|
||||
p.add_argument(
|
||||
"--padinfo",
|
||||
default=".etherpump/settings.json",
|
||||
help="settings, default: .etherdump/settings.json",
|
||||
)
|
||||
p.add_argument("--showurl", default=False, action="store_true")
|
||||
p.add_argument(
|
||||
"--format",
|
||||
default="text",
|
||||
help="output format, can be: text, json; default: text",
|
||||
)
|
||||
p.add_argument(
|
||||
"--rev", type=int, default=None, help="revision, default: latest"
|
||||
)
|
||||
args = p.parse_args(args)
|
||||
|
||||
with open(args.padinfo) as f:
|
||||
info = json.load(f)
|
||||
apiurl = info.get("apiurl")
|
||||
# apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
|
||||
data = {}
|
||||
data["apikey"] = info["apikey"]
|
||||
data["padID"] = args.padid # is utf-8 encoded
|
||||
if args.rev != None:
|
||||
data["rev"] = args.rev
|
||||
requesturl = apiurl + "getText?" + urlencode(data)
|
||||
if args.showurl:
|
||||
print(requesturl)
|
||||
else:
|
||||
resp = urlopen(requesturl).read()
|
||||
resp = resp.decode("utf-8")
|
||||
results = json.loads(resp)
|
||||
if args.format == "json":
|
||||
print(json.dumps(results))
|
||||
else:
|
||||
if results["data"]:
|
||||
sys.stdout.write(results["data"]["text"])
|
@@ -1,28 +1,31 @@
#!/usr/bin/env python3

from __future__ import print_function
from html5lib import parse
import os, sys

import os
import sys
from argparse import ArgumentParser
from xml.etree import ElementTree as ET

from html5lib import parse


def etree_indent(elem, level=0):
    i = "\n" + level*"  "
    i = "\n" + level * "  "
    if len(elem):
        if not elem.text or not elem.text.strip():
            elem.text = i + "  "
        if not elem.tail or not elem.tail.strip():
            elem.tail = i
        for elem in elem:
            etree_indent(elem, level+1)
            etree_indent(elem, level + 1)
        if not elem.tail or not elem.tail.strip():
            elem.tail = i
    else:
        if level and (not elem.tail or not elem.tail.strip()):
            elem.tail = i

def get_link_type (url):


def get_link_type(url):
    lurl = url.lower()
    if lurl.endswith(".html") or lurl.endswith(".htm"):
        return "text/html"
@@ -37,13 +40,17 @@ def get_link_type (url):
    elif lurl.endswith(".js") or lurl.endswith(".jsonp"):
        return "text/javascript"

def pluralize (x):


def pluralize(x):
    if type(x) == list or type(x) == tuple:
        return x
    else:
        return (x,)

def html5tidy (doc, charset="utf-8", title=None, scripts=None, links=None, indent=False):


def html5tidy(
    doc, charset="utf-8", title=None, scripts=None, links=None, indent=False
):
    if scripts:
        script_srcs = [x.attrib.get("src") for x in doc.findall(".//script")]
        for src in pluralize(scripts):
@@ -56,21 +63,30 @@ def html5tidy (doc, charset="utf-8", title=None, scripts=None, links=None, inden
        for elt in doc.findall(".//link"):
            href = elt.attrib.get("href")
            if href:
                existinglinks[href] = elt
        for link in links:
            linktype = link.get("type") or get_link_type(link["href"])
            if link["href"] in existinglinks:
                elt = existinglinks[link["href"]]
                elt.attrib["rel"] = link["rel"]
            else:
                elt = ET.SubElement(doc.find(".//head"), "link", href=link["href"], rel=link["rel"])
                elt = ET.SubElement(
                    doc.find(".//head"),
                    "link",
                    href=link["href"],
                    rel=link["rel"],
                )
            if linktype:
                elt.attrib["type"] = linktype
            if "title" in link:
                elt.attrib["title"] = link["title"]

    if charset:
        meta_charsets = [x.attrib.get("charset") for x in doc.findall(".//meta") if x.attrib.get("charset") != None]
        meta_charsets = [
            x.attrib.get("charset")
            for x in doc.findall(".//meta")
            if x.attrib.get("charset") != None
        ]
        if not meta_charsets:
            meta = ET.SubElement(doc.find(".//head"), "meta", charset=charset)

@@ -79,33 +95,89 @@ def html5tidy (doc, charset="utf-8", title=None, scripts=None, links=None, inden
        if not titleelt:
            titleelt = ET.SubElement(doc.find(".//head"), "title")
        titleelt.text = title

    if indent:
        etree_indent(doc)
    return doc

def main (args):

def main(args):
    p = ArgumentParser("")
    p.add_argument("input", nargs="?", default=None)
    p.add_argument("--indent", default=False, action="store_true")
    p.add_argument("--mogrify", default=False, action="store_true", help="modify file in place")
    p.add_argument("--method", default="html", help="method, default: html, values: html, xml, text")
    p.add_argument(
        "--mogrify",
        default=False,
        action="store_true",
        help="modify file in place",
    )
    p.add_argument(
        "--method",
        default="html",
        help="method, default: html, values: html, xml, text",
    )
    p.add_argument("--output", default=None, help="")
    p.add_argument("--title", default=None, help="ensure/add title tag in head")
    p.add_argument("--charset", default="utf-8", help="ensure/add meta tag with charset")
    p.add_argument("--script", action="append", default=[], help="ensure/add script tag")
    p.add_argument(
        "--charset", default="utf-8", help="ensure/add meta tag with charset"
    )
    p.add_argument(
        "--script", action="append", default=[], help="ensure/add script tag"
    )
    # <link>s, see https://www.w3.org/TR/html5/links.html#links
    p.add_argument("--stylesheet", action="append", default=[], help="ensure/add style link")
    p.add_argument("--alternate", action="append", default=[], nargs="+", help="ensure/add alternate links (optionally followed by a title and type)")
    p.add_argument("--next", action="append", default=[], nargs="+", help="ensure/add alternate link")
    p.add_argument("--prev", action="append", default=[], nargs="+", help="ensure/add alternate link")
    p.add_argument("--search", action="append", default=[], nargs="+", help="ensure/add search link")
    p.add_argument("--rss", action="append", default=[], nargs="+", help="ensure/add alternate link of type application/rss+xml")
    p.add_argument("--atom", action="append", default=[], nargs="+", help="ensure/add alternate link of type application/atom+xml")
    p.add_argument(
        "--stylesheet",
        action="append",
        default=[],
        help="ensure/add style link",
    )
    p.add_argument(
        "--alternate",
        action="append",
        default=[],
        nargs="+",
        help="ensure/add alternate links (optionally followed by a title and type)",
    )
    p.add_argument(
        "--next",
        action="append",
        default=[],
        nargs="+",
        help="ensure/add alternate link",
    )
    p.add_argument(
        "--prev",
        action="append",
        default=[],
        nargs="+",
        help="ensure/add alternate link",
    )
    p.add_argument(
        "--search",
        action="append",
        default=[],
        nargs="+",
        help="ensure/add search link",
    )
    p.add_argument(
        "--rss",
        action="append",
        default=[],
        nargs="+",
        help="ensure/add alternate link of type application/rss+xml",
    )
    p.add_argument(
        "--atom",
        action="append",
        default=[],
        nargs="+",
        help="ensure/add alternate link of type application/atom+xml",
    )

    args = p.parse_args(args)
    links = []
    def add_links (links, items, rel, _type=None):

    def add_links(links, items, rel, _type=None):
        for href in items:
            d = {}
            d["rel"] = rel
@@ -128,6 +200,7 @@ def main (args):
            d["href"] = href

            links.append(d)

    for rel in ("stylesheet", "alternate", "next", "prev", "search"):
        add_links(links, getattr(args, rel), rel)
    for item in args.rss:
@@ -144,27 +217,33 @@ def main (args):
    doc = parse(fin, treebuilder="etree", namespaceHTMLElements=False)
    if fin != sys.stdin:
        fin.close()
    html5tidy(doc, scripts=args.script, links=links, title=args.title, indent=args.indent)
    html5tidy(
        doc,
        scripts=args.script,
        links=links,
        title=args.title,
        indent=args.indent,
    )

    # OUTPUT
    tmppath = None
    if args.output:
        fout = open(args.output, "w")
    elif args.mogrify:
        tmppath = args.input+".tmp"
        tmppath = args.input + ".tmp"
        fout = open(tmppath, "w")
    else:
        fout = sys.stdout

    print (ET.tostring(doc, method=args.method, encoding="unicode"), file=fout)
    print(ET.tostring(doc, method=args.method, encoding="unicode"), file=fout)

    if fout != sys.stdout:
        fout.close()

    if tmppath:
        os.rename(args.input, args.input+"~")
        os.rename(args.input, args.input + "~")
        os.rename(tmppath, args.input)


if __name__ == "__main__":
    main(sys.argv)
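html5tidy mutates a parsed html5lib tree in place and returns it, so it can also be used as a library. A hedged sketch (the HTML snippet and stylesheet name are made up; the link's type attribute is inferred from the extension by get_link_type):

from html5lib import parse
from xml.etree import ElementTree as ET

doc = parse("<p>hello</p>", treebuilder="etree", namespaceHTMLElements=False)
html5tidy(doc, title="Example", links=[{"href": "style.css", "rel": "stylesheet"}])
print(ET.tostring(doc, method="html", encoding="unicode"))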
@@ -1,33 +1,28 @@
from __future__ import print_function
"""Generate pages from etherpumps using a template"""
import json
import os
import re
import sys
import time
from argparse import ArgumentParser
import sys, json, re, os, time
from datetime import datetime
import dateutil.parser
from urllib.parse import urlparse, urlunparse

try:
    # python2
    from urllib2 import urlopen, URLError, HTTPError
    from urllib import urlencode
    from urlparse import urlparse, urlunparse
except ImportError:
    # python3
    from urllib.parse import urlparse, urlunparse, urlencode, quote
    from urllib.request import urlopen, URLError, HTTPError

from jinja2 import FileSystemLoader, Environment
from etherdump.commands.common import *
from time import sleep
import dateutil.parser
from jinja2 import Environment, FileSystemLoader

from etherpump.commands.common import *  # noqa

"""
index:
    Generate pages from etherdumps using a template.
    Generate pages from etherpumps using a template.

    Built-in templates: rss.xml, index.html

"""

def group (items, key=lambda x: x):

def group(items, key=lambda x: x):
    """ returns a list of lists, of items grouped by a key function """
    ret = []
    keys = {}
@@ -41,31 +36,33 @@ def group (items, key=lambda x: x):
        ret.append(keys[k])
    return ret

# def base (x):
#     return re.sub(r"(\.raw\.html)|(\.diff\.html)|(\.meta\.json)|(\.raw\.txt)$", "", x)

def splitextlong (x):
def splitextlong(x):
    """ split "long" extensions, i.e. foo.bar.baz => ('foo', '.bar.baz') """
    m = re.search(r"^(.*?)(\..*)$", x)
    if m:
        return m.groups()
    else:
        return x, ''
        return x, ""

def base (x):

def base(x):
    return splitextlong(x)[0]

def excerpt (t, chars=25):

def excerpt(t, chars=25):
    if len(t) > chars:
        t = t[:chars] + "..."
    return t

def absurl (url, base=None):

def absurl(url, base=None):
    if not url.startswith("http"):
        return base + url
    return url

def url_base (url):

def url_base(url):
    (scheme, netloc, path, params, query, fragment) = urlparse(url)
    path, _ = os.path.split(path.lstrip("/"))
    ret = urlunparse((scheme, netloc, path, None, None, None))
@@ -73,50 +70,136 @@ def url_base (url):
        ret += "/"
    return ret

def datetimeformat (t, format='%Y-%m-%d %H:%M:%S'):

def datetimeformat(t, format="%Y-%m-%d %H:%M:%S"):
    if type(t) == str:
        dt = dateutil.parser.parse(t)
        return dt.strftime(format)
    else:
        return time.strftime(format, time.localtime(t))

def main (args):

def main(args):
    p = ArgumentParser("Convert dumped files to a document via a template.")

    p.add_argument("input", nargs="+", help="Files to list (.meta.json files)")

    p.add_argument("--templatepath", default=None, help="path to find templates, default: built-in")
    p.add_argument("--template", default="index.html", help="template name, built-ins include index.html, rss.xml; default: index.html")
    p.add_argument("--padinfo", default=".etherdump/settings.json", help="settings, default: ./.etherdump/settings.json")
    p.add_argument(
        "--templatepath",
        default=None,
        help="path to find templates, default: built-in",
    )
    p.add_argument(
        "--template",
        default="index.html",
        help="template name, built-ins include index.html, rss.xml; default: index.html",
    )
    p.add_argument(
        "--padinfo",
        default=".etherpump/settings.json",
        help="settings, default: ./.etherdump/settings.json",
    )
    # p.add_argument("--zerorevs", default=False, action="store_true", help="include pads with zero revisions, default: False (i.e. pads with no revisions are skipped)")

    p.add_argument("--order", default="padid", help="order, possible values: padid, pad (no group name), lastedited, (number of) authors, revisions, default: padid")
    p.add_argument("--reverse", default=False, action="store_true", help="reverse order, default: False (reverse chrono)")
    p.add_argument("--limit", type=int, default=0, help="limit to number of items, default: 0 (no limit)")
    p.add_argument("--skip", default=None, type=int, help="skip this many items, default: None")
    p.add_argument(
        "--order",
        default="padid",
        help="order, possible values: padid, pad (no group name), lastedited, (number of) authors, revisions, default: padid",
    )
    p.add_argument(
        "--reverse",
        default=False,
        action="store_true",
        help="reverse order, default: False (reverse chrono)",
    )
    p.add_argument(
        "--limit",
        type=int,
        default=0,
        help="limit to number of items, default: 0 (no limit)",
    )
    p.add_argument(
        "--skip",
        default=None,
        type=int,
        help="skip this many items, default: None",
    )

    p.add_argument("--content", default=False, action="store_true", help="rss: include (full) content tag, default: False")
    p.add_argument("--link", default="diffhtml,html,text", help="link variable will be to this version, can be comma-delim list, use first avail, default: diffhtml,html,text")
    p.add_argument("--linkbase", default=None, help="base url to use for links, default: try to use the feedurl")
    p.add_argument(
        "--content",
        default=False,
        action="store_true",
        help="rss: include (full) content tag, default: False",
    )
    p.add_argument(
        "--link",
        default="diffhtml,html,text",
        help="link variable will be to this version, can be comma-delim list, use first avail, default: diffhtml,html,text",
    )
    p.add_argument(
        "--linkbase",
        default=None,
        help="base url to use for links, default: try to use the feedurl",
    )
    p.add_argument("--output", default=None, help="output, default: stdout")

    p.add_argument("--files", default=False, action="store_true", help="include files (experimental)")
    p.add_argument(
        "--files",
        default=False,
        action="store_true",
        help="include files (experimental)",
    )

    pg = p.add_argument_group('template variables')
    pg.add_argument("--feedurl", default="feed.xml", help="rss: to use as feeds own (self) link, default: feed.xml")
    pg.add_argument("--siteurl", default=None, help="rss: to use as channel's site link, default: the etherpad url")
    pg.add_argument("--title", default="etherdump", help="title for document or rss feed channel title, default: etherdump")
    pg.add_argument("--description", default="", help="rss: channel description, default: empty")
    pg.add_argument("--language", default="en-US", help="rss: feed language, default: en-US")
    pg.add_argument("--updatePeriod", default="daily", help="rss: updatePeriod, possible values: hourly, daily, weekly, monthly, yearly; default: daily")
    pg.add_argument("--updateFrequency", default=1, type=int, help="rss: update frequency within the update period (where 2 would mean twice per period); default: 1")
    pg.add_argument("--generator", default="https://gitlab.com/activearchives/etherdump", help="generator, default: https://gitlab.com/activearchives/etherdump")
    pg.add_argument("--timestamp", default=None, help="timestamp, default: now (e.g. 2015-12-01 12:30:00)")
    pg = p.add_argument_group("template variables")
    pg.add_argument(
        "--feedurl",
        default="feed.xml",
        help="rss: to use as feeds own (self) link, default: feed.xml",
    )
    pg.add_argument(
        "--siteurl",
        default=None,
        help="rss: to use as channel's site link, default: the etherpad url",
    )
    pg.add_argument(
        "--title",
        default="etherpump",
        help="title for document or rss feed channel title, default: etherdump",
    )
    pg.add_argument(
        "--description",
        default="",
        help="rss: channel description, default: empty",
    )
    pg.add_argument(
        "--language", default="en-US", help="rss: feed language, default: en-US"
    )
    pg.add_argument(
        "--updatePeriod",
        default="daily",
        help="rss: updatePeriod, possible values: hourly, daily, weekly, monthly, yearly; default: daily",
    )
    pg.add_argument(
        "--updateFrequency",
        default=1,
        type=int,
        help="rss: update frequency within the update period (where 2 would mean twice per period); default: 1",
    )
    pg.add_argument(
        "--generator",
        default="https://gitlab.com/activearchives/etherpump",
        help="generator, default: https://gitlab.com/activearchives/etherdump",
    )
    pg.add_argument(
        "--timestamp",
        default=None,
        help="timestamp, default: now (e.g. 2015-12-01 12:30:00)",
    )
    pg.add_argument("--next", default=None, help="next link, default: None)")
    pg.add_argument("--prev", default=None, help="prev link, default: None")

    args = p.parse_args(args)

    tmpath = args.templatepath
    # Default path for template is the built-in data/templates
    if tmpath == None:
@@ -136,28 +219,25 @@ def main (args):
    # Use "base" to strip (longest) extensions
    # inputs = group(inputs, base)

    def wrappath (p):
    def wrappath(p):
        path = "./{0}".format(p)
        ext = os.path.splitext(p)[1][1:]
        return {
            "url": path,
            "path": path,
            "code": 200,
            "type": ext
        }
        return {"url": path, "path": path, "code": 200, "type": ext}

    def metaforpaths (paths):
    def metaforpaths(paths):
        ret = {}
        pid = base(paths[0])
        ret['pad'] = ret['padid'] = pid
        ret['versions'] = [wrappath(x) for x in paths]
        ret["pad"] = ret["padid"] = pid
        ret["versions"] = [wrappath(x) for x in paths]
        lastedited = None
        for p in paths:
            mtime = os.stat(p).st_mtime
            if lastedited == None or mtime > lastedited:
                lastedited = mtime
        ret["lastedited_iso"] = datetime.fromtimestamp(lastedited).strftime("%Y-%m-%dT%H:%M:%S")
        ret["lastedited_raw"] = mtime
        ret["lastedited_iso"] = datetime.fromtimestamp(lastedited).strftime(
            "%Y-%m-%dT%H:%M:%S"
        )
        ret["lastedited_raw"] = mtime
        return ret

    def loadmeta(p):
@@ -176,28 +256,32 @@ def main (args):
    #     else:
    #         return metaforpaths(paths)

    def fixdates (padmeta):
    def fixdates(padmeta):
        d = dateutil.parser.parse(padmeta["lastedited_iso"])
        padmeta["lastedited"] = d
        padmeta["lastedited_822"] = d.strftime("%a, %d %b %Y %H:%M:%S +0000")
        return padmeta

    pads = map(loadmeta, inputs)
    pads = list(map(loadmeta, inputs))
    pads = [x for x in pads if x != None]
    pads = map(fixdates, pads)
    pads = list(map(fixdates, pads))
    args.pads = list(pads)

    def could_have_base (x, y):
        return x == y or (x.startswith(y) and x[len(y):].startswith("."))
    def could_have_base(x, y):
        return x == y or (x.startswith(y) and x[len(y) :].startswith("."))

    def get_best_pad (x):
    def get_best_pad(x):
        for pb in padbases:
            p = pads_by_base[pb]
            if could_have_base(x, pb):
                return p

    def has_version (padinfo, path):
        return [x for x in padinfo['versions'] if 'path' in x and x['path'] == "./"+path]
    def has_version(padinfo, path):
        return [
            x
            for x in padinfo["versions"]
            if "path" in x and x["path"] == "./" + path
        ]

    if args.files:
        inputs = args.input
@@ -207,7 +291,7 @@ def main (args):
        pads_by_base = {}
        for p in args.pads:
            # print ("Trying padid", p['padid'], file=sys.stderr)
            padbase = os.path.splitext(p['padid'])[0]
            padbase = os.path.splitext(p["padid"])[0]
            pads_by_base[padbase] = p
        padbases = list(pads_by_base.keys())
        # SORT THEM LONGEST FIRST TO ensure that LONGEST MATCHES MATCH
@@ -215,25 +299,33 @@ def main (args):
        # print ("PADBASES", file=sys.stderr)
        # for pb in padbases:
        #     print ("  ", pb, file=sys.stderr)
        print ("pairing input files with pads", file=sys.stderr)
        print("pairing input files with pads", file=sys.stderr)
        for x in inputs:
            # pair input with a pad if possible
            xbasename = os.path.basename(x)
            p = get_best_pad(xbasename)
            if p:
                if not has_version(p, x):
                    print ("Grouping file {0} with pad {1}".format(x, p['padid']), file=sys.stderr)
                    p['versions'].append(wrappath(x))
                    print(
                        "Grouping file {0} with pad {1}".format(x, p["padid"]),
                        file=sys.stderr,
                    )
                    p["versions"].append(wrappath(x))
                else:
                    print ("Skipping existing version {0} ({1})...".format(x, p['padid']), file=sys.stderr)
                    print(
                        "Skipping existing version {0} ({1})...".format(
                            x, p["padid"]
                        ),
                        file=sys.stderr,
                    )
                removelist.append(x)
        # Removed Matches files
        for x in removelist:
            inputs.remove(x)
        print ("Remaining files:", file=sys.stderr)
        print("Remaining files:", file=sys.stderr)
        for x in inputs:
            print (x, file=sys.stderr)
        print (file=sys.stderr)
            print(x, file=sys.stderr)
        print(file=sys.stderr)
        # Add "fake" pads for remaining files
        for x in inputs:
            args.pads.append(metaforpaths([x]))
@@ -242,14 +334,14 @@ def main (args):
        args.timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")

    padurlbase = re.sub(r"api/1.2.9/$", "p/", info["apiurl"])
    # if type(padurlbase) == unicode:
    #     padurlbase = padurlbase.encode("utf-8")
    args.siteurl = args.siteurl or padurlbase
    args.utcnow = datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S +0000")

    # order items & apply limit
    if args.order == "lastedited":
        args.pads.sort(key=lambda x: x.get("lastedited_iso"), reverse=args.reverse)
        args.pads.sort(
            key=lambda x: x.get("lastedited_iso"), reverse=args.reverse
        )
    elif args.order == "pad":
        args.pads.sort(key=lambda x: x.get("pad"), reverse=args.reverse)
    elif args.order == "padid":
@@ -257,12 +349,14 @@ def main (args):
    elif args.order == "revisions":
        args.pads.sort(key=lambda x: x.get("revisions"), reverse=args.reverse)
    elif args.order == "authors":
        args.pads.sort(key=lambda x: len(x.get("authors")), reverse=args.reverse)
        args.pads.sort(
            key=lambda x: len(x.get("authors")), reverse=args.reverse
        )
    else:
        raise Exception("That ordering is not implemented!")

    if args.limit:
        args.pads = args.pads[:args.limit]
        args.pads = args.pads[: args.limit]

    # add versions_by_type, add in full text
    # add link (based on args.link)
@@ -279,10 +373,10 @@ def main (args):

        if "text" in versions_by_type:
            try:
                with open (versions_by_type["text"]["path"]) as f:
                with open(versions_by_type["text"]["path"]) as f:
                    p["text"] = f.read()
            except FileNotFoundError:
                p['text'] = ''
                p["text"] = ""
        # ADD IN LINK TO PAD AS "link"
        for v in linkversions:
            if v in versions_by_type:
@@ -296,6 +390,6 @@ def main (args):

    if args.output:
        with open(args.output, "w") as f:
            print (template.render(vars(args)), file=f)
            print(template.render(vars(args)), file=f)
    else:
        print (template.render(vars(args)))
        print(template.render(vars(args)))
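The two Jinja filters registered above behave as follows; a quick sketch with made-up values:

excerpt("a very long pad text that keeps going", chars=10)
# -> 'a very lon...'
datetimeformat("2015-12-01T12:30:00")
# -> '2015-12-01 12:30:00'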
@@ -1,27 +1,20 @@
from __future__ import print_function
"""Initialize an etherpump folder"""

import json
import os
import sys
from argparse import ArgumentParser
from urllib.parse import urlencode, urlparse, urlunparse
from urllib.request import HTTPError, URLError, urlopen

try:
    # python2
    from urlparse import urlparse, urlunparse
    from urllib2 import urlopen, URLError, HTTPError
    from urllib import urlencode
    input = raw_input
except ImportError:
    # python3
    from urllib.parse import urlparse, urlunparse, urlencode
    from urllib.request import urlopen, URLError, HTTPError

import json, os, sys

def get_api(url, cmd=None, data=None, verbose=False):
    try:
        useurl = url+cmd
        useurl = url + cmd
        if data:
            useurl += "?"+urlencode(data)
            # data['apikey'] = "7c8faa070c97f83d8f705c935a32d5141f89cbaa2158042fa92e8ddad5dbc5e1"
            useurl += "?" + urlencode(data)
        if verbose:
            print ("trying", useurl, file=sys.stderr)
            print("trying", useurl, file=sys.stderr)
        resp = urlopen(useurl).read()
        resp = resp.decode("utf-8")
        resp = json.loads(resp)
@@ -29,20 +22,17 @@ def get_api(url, cmd=None, data=None, verbose=False):
        return resp
    except ValueError as e:
        if verbose:
            print (" ValueError", e, file=sys.stderr)
            print(" ValueError", e, file=sys.stderr)
        return
    except HTTPError as e:
        if verbose:
            print (" HTTPError", e, file=sys.stderr)
            print(" HTTPError", e, file=sys.stderr)
        if e.code == 401:
            # Unauthorized is how the API responds to an incorrect API key
            return {"code": 401, "message": e}
        # resp = json.load(e)
        # if "code" in resp and "message" in resp:
        #     # print ("returning", resp, file=sys.stderr)
        #     return resp

def tryapiurl (url, verbose=False):

def tryapiurl(url, verbose=False):
    """
    Try to use url as api, correcting if possible.
    Returns corrected / normalized URL, or None if not possible
@@ -51,26 +41,32 @@ def tryapiurl (url, verbose=False):
        scheme, netloc, path, params, query, fragment = urlparse(url)
        if scheme == "":
            url = "http://" + url
            scheme, netloc, path, params, query, fragment = urlparse(url)
        params, query, fragment = ("", "", "")
        path = path.strip("/")
        # 1. try directly...
        apiurl = urlunparse((scheme, netloc, path, params, query, fragment))+"/"
        apiurl = (
            urlunparse((scheme, netloc, path, params, query, fragment)) + "/"
        )
        if get_api(apiurl, "listAllPads", verbose=verbose):
            return apiurl
        # 2. try with += api/1.2.9
        path = os.path.join(path, "api", "1.2.9")+"/"
        path = os.path.join(path, "api", "1.2.9") + "/"
        apiurl = urlunparse((scheme, netloc, path, params, query, fragment))
        if get_api(apiurl, "listAllPads", verbose=verbose):
            return apiurl
    # except ValueError as e:
    #     print ("ValueError", e, file=sys.stderr)
    except URLError as e:
        print ("URLError", e, file=sys.stderr)
        print("URLError", e, file=sys.stderr)


def main(args):
    p = ArgumentParser("initialize an etherdump folder")
    p.add_argument("arg", nargs="*", default=[], help="optional positional args: path etherpadurl")
    p = ArgumentParser("initialize an etherpump folder")
    p.add_argument(
        "arg",
        nargs="*",
        default=[],
        help="optional positional args: path etherpadurl",
    )
    p.add_argument("--path", default=None, help="path to initialize")
    p.add_argument("--padurl", default=None, help="")
    p.add_argument("--apikey", default=None, help="")
@@ -78,14 +74,13 @@ def main(args):
    p.add_argument("--reinit", default=False, action="store_true", help="")
    args = p.parse_args(args)

    path = args.path
    if path == None and len(args.arg):
        path = args.arg[0]
    if not path:
        path = "."

    edpath = os.path.join(path, ".etherdump")
    edpath = os.path.join(path, ".etherpump")
    try:
        os.makedirs(edpath)
    except OSError:
@@ -97,7 +92,7 @@ def main(args):
        with open(padinfopath) as f:
            padinfo = json.load(f)
        if not args.reinit:
            print ("Folder is already initialized. Use --reinit to reset settings.")
            print("Folder already initialized. Use --reinit to reset settings")
            sys.exit(0)
    except IOError:
        pass
@@ -108,22 +103,29 @@ def main(args):
    apiurl = args.padurl
    while True:
        if apiurl:
            apiurl = tryapiurl(apiurl,verbose=args.verbose)
            apiurl = tryapiurl(apiurl, verbose=args.verbose)
            if apiurl:
                # print ("Got APIURL: {0}".format(apiurl))
                break
        apiurl = input("Please type the URL of the etherpad: ").strip()
        apiurl = input(
            "Please type the URL of the etherpad (e.g. https://pad.vvvvvvaria.org): "
        ).strip()
    padinfo["apiurl"] = apiurl
    apikey = args.apikey
    while True:
        if apikey:
            resp = get_api(apiurl, "listAllPads", {"apikey": apikey}, verbose=args.verbose)
            resp = get_api(
                apiurl, "listAllPads", {"apikey": apikey}, verbose=args.verbose
            )
            if resp and resp["code"] == 0:
                # print ("GOOD")
                break
            else:
                print ("bad")
                print ("The APIKEY is the contents of the file APIKEY.txt in the etherpad folder", file=sys.stderr)
                print("bad")
                print(
                    "The APIKEY is the contents of the file APIKEY.txt in the etherpad folder",
                    file=sys.stderr,
                )
        apikey = input("Please paste the APIKEY: ").strip()
    padinfo["apikey"] = apikey
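tryapiurl probes a candidate URL as given and then again with api/1.2.9 appended, returning the first variant whose listAllPads call answers. A sketch (the pad URL is an example):

apiurl = tryapiurl("https://pad.example.org")
# probes https://pad.example.org/ and then https://pad.example.org/api/1.2.9/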
@@ -1,10 +1,13 @@
from __future__ import print_function
import json
import os
import re
from argparse import ArgumentParser
import json, os, re
from urllib import urlencode
from urllib2 import urlopen, HTTPError, URLError
from urllib.error import HTTPError, URLError
from urllib.parse import urlencode
from urllib.request import urlopen

def group (items, key=lambda x: x):

def group(items, key=lambda x: x):
    ret = []
    keys = {}
    for item in items:
@@ -17,6 +20,7 @@ def group (items, key=lambda x: x):
        ret.append(keys[k])
    return ret


def main(args):
    p = ArgumentParser("")
    p.add_argument("input", nargs="+", help="filenames")
@@ -27,10 +31,11 @@ def main(args):

    inputs = [x for x in inputs if not os.path.isdir(x)]

    def base (x):
    def base(x):
        return re.sub(r"(\.html)|(\.diff\.html)|(\.meta\.json)|(\.txt)$", "", x)
    #from pprint import pprint
    #pprint()

    # from pprint import pprint
    # pprint()
    gg = group(inputs, base)
    for items in gg:
        itembase = base(items[0])
@@ -40,5 +45,5 @@ def main(args):
            pass
        for i in items:
            newloc = os.path.join(itembase, i)
            print ("'{0}' => '{1}'".format(i, newloc))
            print("'{0}' => '{1}'".format(i, newloc))
            os.rename(i, newloc)
42 etherpump/commands/list.py Normal file
@@ -0,0 +1,42 @@
"""Call listAllPads and print the results"""
|
||||
|
||||
import json
|
||||
import sys
|
||||
from argparse import ArgumentParser
|
||||
from urllib.parse import urlencode, urlparse, urlunparse
|
||||
from urllib.request import HTTPError, URLError, urlopen
|
||||
|
||||
from etherpump.commands.common import getjson
|
||||
|
||||
|
||||
def main(args):
|
||||
p = ArgumentParser("call listAllPads and print the results")
|
||||
p.add_argument(
|
||||
"--padinfo",
|
||||
default=".etherpump/settings.json",
|
||||
help="settings, default: .etherdump/settings.json",
|
||||
)
|
||||
p.add_argument("--showurl", default=False, action="store_true")
|
||||
p.add_argument(
|
||||
"--format",
|
||||
default="lines",
|
||||
help="output format: lines, json; default lines",
|
||||
)
|
||||
args = p.parse_args(args)
|
||||
|
||||
with open(args.padinfo) as f:
|
||||
info = json.load(f)
|
||||
apiurl = info.get("apiurl")
|
||||
# apiurl = {0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
|
||||
data = {}
|
||||
data["apikey"] = info["apikey"]
|
||||
requesturl = apiurl + "listAllPads?" + urlencode(data)
|
||||
if args.showurl:
|
||||
print(requesturl)
|
||||
else:
|
||||
results = getjson(requesturl)["data"]["padIDs"]
|
||||
if args.format == "json":
|
||||
print(json.dumps(results))
|
||||
else:
|
||||
for r in results:
|
||||
print(r)
|
40 etherpump/commands/listauthors.py Normal file
@@ -0,0 +1,40 @@
"""Call listAuthorsOfPad for the padid"""
|
||||
|
||||
import json
|
||||
from argparse import ArgumentParser
|
||||
from urllib.parse import urlencode
|
||||
from urllib.request import urlopen
|
||||
|
||||
|
||||
def main(args):
|
||||
p = ArgumentParser("call listAuthorsOfPad for the padid")
|
||||
p.add_argument("padid", help="the padid")
|
||||
p.add_argument(
|
||||
"--padinfo",
|
||||
default=".etherpump/settings.json",
|
||||
help="settings, default: .etherdump/settings.json",
|
||||
)
|
||||
p.add_argument("--showurl", default=False, action="store_true")
|
||||
p.add_argument(
|
||||
"--format",
|
||||
default="lines",
|
||||
help="output format, can be: lines, json; default: lines",
|
||||
)
|
||||
args = p.parse_args(args)
|
||||
|
||||
with open(args.padinfo) as f:
|
||||
info = json.load(f)
|
||||
apiurl = info.get("apiurl")
|
||||
data = {}
|
||||
data["apikey"] = info["apikey"]
|
||||
data["padID"] = args.padid
|
||||
requesturl = apiurl + "listAuthorsOfPad?" + urlencode(data)
|
||||
if args.showurl:
|
||||
print(requesturl)
|
||||
else:
|
||||
results = json.load(urlopen(requesturl))["data"]["authorIDs"]
|
||||
if args.format == "json":
|
||||
print(json.dumps(results))
|
||||
else:
|
||||
for r in results:
|
||||
print(r)
|
424 etherpump/commands/publication.py Normal file
@@ -0,0 +1,424 @@
"""Generate a single document from etherpumps using a template"""
|
||||
import json
|
||||
import os
|
||||
import re
|
||||
import sys
|
||||
import time
|
||||
from argparse import ArgumentParser
|
||||
from datetime import datetime
|
||||
from urllib.parse import urlparse, urlunparse
|
||||
|
||||
import dateutil.parser
|
||||
import pypandoc
|
||||
from jinja2 import Environment, FileSystemLoader
|
||||
|
||||
from etherpump.commands.common import * # noqa
|
||||
|
||||
"""
|
||||
publication:
|
||||
Generate a single document from etherpumps using a template.
|
||||
|
||||
Built-in templates: publication.html
|
||||
|
||||
"""
|
||||
|
||||
|
||||
def group(items, key=lambda x: x):
|
||||
""" returns a list of lists, of items grouped by a key function """
|
||||
ret = []
|
||||
keys = {}
|
||||
for item in items:
|
||||
k = key(item)
|
||||
if k not in keys:
|
||||
keys[k] = []
|
||||
keys[k].append(item)
|
||||
for k in sorted(keys):
|
||||
keys[k].sort()
|
||||
ret.append(keys[k])
|
||||
return ret
|
||||
|
||||
|
||||
# def base (x):
|
||||
# return re.sub(r"(\.raw\.html)|(\.diff\.html)|(\.meta\.json)|(\.raw\.txt)$", "", x)
|
||||
|
||||
|
||||
def splitextlong(x):
|
||||
""" split "long" extensions, i.e. foo.bar.baz => ('foo', '.bar.baz') """
|
||||
m = re.search(r"^(.*?)(\..*)$", x)
|
||||
if m:
|
||||
return m.groups()
|
||||
else:
|
||||
return x, ""
|
||||
|
||||
|
||||
def base(x):
|
||||
return splitextlong(x)[0]
|
||||
|
||||
|
||||
def excerpt(t, chars=25):
|
||||
if len(t) > chars:
|
||||
t = t[:chars] + "..."
|
||||
return t
|
||||
|
||||
|
||||
def absurl(url, base=None):
|
||||
if not url.startswith("http"):
|
||||
return base + url
|
||||
return url
|
||||
|
||||
|
||||
def url_base(url):
|
||||
(scheme, netloc, path, params, query, fragment) = urlparse(url)
|
||||
path, _ = os.path.split(path.lstrip("/"))
|
||||
ret = urlunparse((scheme, netloc, path, None, None, None))
|
||||
if ret:
|
||||
ret += "/"
|
||||
return ret
|
||||
|
||||
|
||||
def datetimeformat(t, format="%Y-%m-%d %H:%M:%S"):
|
||||
if type(t) == str:
|
||||
dt = dateutil.parser.parse(t)
|
||||
return dt.strftime(format)
|
||||
else:
|
||||
return time.strftime(format, time.localtime(t))


def main(args):
    p = ArgumentParser("Convert dumped files to a document via a template.")

    p.add_argument("input", nargs="+", help="Files to list (.meta.json files)")

    p.add_argument(
        "--templatepath",
        default=None,
        help="path to find templates, default: built-in",
    )
    p.add_argument(
        "--template",
        default="publication.html",
        help="template name, built-ins include publication.html; default: publication.html",
    )
    p.add_argument(
        "--padinfo",
        default=".etherpump/settings.json",
        help="settings, default: .etherpump/settings.json",
    )
    # p.add_argument("--zerorevs", default=False, action="store_true", help="include pads with zero revisions, default: False (i.e. pads with no revisions are skipped)")

    p.add_argument(
        "--order",
        default="padid",
        help="order, possible values: padid, pad (no group name), lastedited, (number of) authors, revisions; default: padid",
    )
    p.add_argument(
        "--reverse",
        default=False,
        action="store_true",
        help="reverse order, default: False (reverse chrono)",
    )
    p.add_argument(
        "--limit",
        type=int,
        default=0,
        help="limit to number of items, default: 0 (no limit)",
    )
    p.add_argument(
        "--skip",
        default=None,
        type=int,
        help="skip this many items, default: None",
    )

    p.add_argument(
        "--content",
        default=False,
        action="store_true",
        help="rss: include (full) content tag, default: False",
    )
    p.add_argument(
        "--link",
        default="diffhtml,html,text",
        help="link variable will be to this version, can be comma-delim list, use first avail, default: diffhtml,html,text",
    )
    p.add_argument(
        "--linkbase",
        default=None,
        help="base url to use for links, default: try to use the feedurl",
    )
    p.add_argument("--output", default=None, help="output, default: stdout")

    p.add_argument(
        "--files",
        default=False,
        action="store_true",
        help="include files (experimental)",
    )

    pg = p.add_argument_group("template variables")
    pg.add_argument(
        "--feedurl",
        default="feed.xml",
        help="rss: to use as feed's own (self) link, default: feed.xml",
    )
    pg.add_argument(
        "--siteurl",
        default=None,
        help="rss: to use as channel's site link, default: the etherpad url",
    )
    pg.add_argument(
        "--title",
        default="etherpump",
        help="title for document or rss feed channel title, default: etherpump",
    )
    pg.add_argument(
        "--description",
        default="",
        help="rss: channel description, default: empty",
    )
    pg.add_argument(
        "--language", default="en-US", help="rss: feed language, default: en-US"
    )
    pg.add_argument(
        "--updatePeriod",
        default="daily",
        help="rss: updatePeriod, possible values: hourly, daily, weekly, monthly, yearly; default: daily",
    )
    pg.add_argument(
        "--updateFrequency",
        default=1,
        type=int,
        help="rss: update frequency within the update period (where 2 would mean twice per period); default: 1",
    )
    pg.add_argument(
        "--generator",
        default="https://git.vvvvvvaria.org/varia/etherpump",
        help="generator, default: https://git.vvvvvvaria.org/varia/etherpump",
    )
    pg.add_argument(
        "--timestamp",
        default=None,
        help="timestamp, default: now (e.g. 2015-12-01 12:30:00)",
    )
    pg.add_argument("--next", default=None, help="next link, default: None")
    pg.add_argument("--prev", default=None, help="prev link, default: None")

    args = p.parse_args(args)

    tmpath = args.templatepath
    # Default path for templates is the built-in data/templates
    if tmpath is None:
        tmpath = os.path.split(os.path.abspath(__file__))[0]
        tmpath = os.path.split(tmpath)[0]
        tmpath = os.path.join(tmpath, "data", "templates")

    env = Environment(loader=FileSystemLoader(tmpath))
    env.filters["excerpt"] = excerpt
    env.filters["datetimeformat"] = datetimeformat
    template = env.get_template(args.template)

    info = loadpadinfo(args.padinfo)

    inputs = args.input
    inputs.sort()

    # Use "base" to strip (longest) extensions
    # inputs = group(inputs, base)

    def wrappath(p):
        path = "./{0}".format(p)
        ext = os.path.splitext(p)[1][1:]
        return {"url": path, "path": path, "code": 200, "type": ext}

    def metaforpaths(paths):
        ret = {}
        pid = base(paths[0])
        ret["pad"] = ret["padid"] = pid
        ret["versions"] = [wrappath(x) for x in paths]
        lastedited = None
        for p in paths:
            mtime = os.stat(p).st_mtime
            if lastedited is None or mtime > lastedited:
                lastedited = mtime
        ret["lastedited_iso"] = datetime.fromtimestamp(lastedited).strftime(
            "%Y-%m-%dT%H:%M:%S"
        )
        ret["lastedited_raw"] = mtime
        return ret

    def loadmeta(p):
        # Consider a set of grouped files
        # Otherwise, create a "dummy" one that wraps all the files as versions
        if p.endswith(".meta.json"):
            with open(p) as f:
                return json.load(f)
        # if there is a .meta.json, load it & MERGE with other files
        # if ret:
        #     # TODO: merge with other files
        #     for p in paths:
        #         if "./"+p not in ret['versions']:
        #             ret['versions'].append(wrappath(p))
        #     return ret
        # else:
        #     return metaforpaths(paths)

    def fixdates(padmeta):
        d = dateutil.parser.parse(padmeta["lastedited_iso"])
        padmeta["lastedited"] = d
        padmeta["lastedited_822"] = d.strftime("%a, %d %b %Y %H:%M:%S +0000")
        return padmeta

    pads = list(map(loadmeta, inputs))
    pads = [x for x in pads if x is not None]
    pads = list(map(fixdates, pads))
    args.pads = list(pads)

    def could_have_base(x, y):
        return x == y or (x.startswith(y) and x[len(y) :].startswith("."))

    def get_best_pad(x):
        for pb in padbases:
            p = pads_by_base[pb]
            if could_have_base(x, pb):
                return p

    def has_version(padinfo, path):
        return [
            x
            for x in padinfo["versions"]
            if "path" in x and x["path"] == "./" + path
        ]

    if args.files:
        inputs = args.input
        inputs.sort()
        removelist = []

        pads_by_base = {}
        for p in args.pads:
            # print ("Trying padid", p['padid'], file=sys.stderr)
            padbase = os.path.splitext(p["padid"])[0]
            pads_by_base[padbase] = p
        padbases = list(pads_by_base.keys())
        # Sort them longest first so that the longest base matches first
        padbases.sort(key=lambda x: len(x), reverse=True)
        # print ("PADBASES", file=sys.stderr)
        # for pb in padbases:
        #     print ("  ", pb, file=sys.stderr)
        print("pairing input files with pads", file=sys.stderr)
        for x in inputs:
            # pair input with a pad if possible
            xbasename = os.path.basename(x)
            p = get_best_pad(xbasename)
            if p:
                if not has_version(p, x):
                    print(
                        "Grouping file {0} with pad {1}".format(x, p["padid"]),
                        file=sys.stderr,
                    )
                    p["versions"].append(wrappath(x))
                else:
                    print(
                        "Skipping existing version {0} ({1})...".format(
                            x, p["padid"]
                        ),
                        file=sys.stderr,
                    )
                removelist.append(x)
        # Remove matched files from the remaining inputs
        for x in removelist:
            inputs.remove(x)
        print("Remaining files:", file=sys.stderr)
        for x in inputs:
            print(x, file=sys.stderr)
        print(file=sys.stderr)
        # Add "fake" pads for remaining files
        for x in inputs:
            args.pads.append(metaforpaths([x]))

    if args.timestamp is None:
        args.timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")

    padurlbase = re.sub(r"api/1.2.9/$", "p/", info["apiurl"])
    args.siteurl = args.siteurl or padurlbase
    args.utcnow = datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S +0000")

    # order items & apply limit
    if args.order == "lastedited":
        args.pads.sort(
            key=lambda x: x.get("lastedited_iso"), reverse=args.reverse
        )
    elif args.order == "pad":
        args.pads.sort(key=lambda x: x.get("pad"), reverse=args.reverse)
    elif args.order == "padid":
        args.pads.sort(key=lambda x: x.get("padid"), reverse=args.reverse)
    elif args.order == "revisions":
        args.pads.sort(key=lambda x: x.get("revisions"), reverse=args.reverse)
    elif args.order == "authors":
        args.pads.sort(
            key=lambda x: len(x.get("authors")), reverse=args.reverse
        )
    elif args.order == "custom":

        # TODO: make this list non-static, but a variable that can be given from the CLI
        customorder = [
            "nooo.relearn.preamble",
            "nooo.relearn.activating.the.archive",
            "nooo.relearn.call.for.proposals",
            "nooo.relearn.call.for.proposals-proposal-footnote",
            "nooo.relearn.colophon",
        ]
        order = []
        for x in customorder:
            for pad in args.pads:
                if pad["padid"] == x:
                    order.append(pad)
        args.pads = order
    else:
        raise Exception("That ordering is not implemented!")

    if args.limit:
        args.pads = args.pads[: args.limit]

    # add versions_by_type, add in full text
    # add link (based on args.link)
    linkversions = args.link.split(",")
    linkbase = args.linkbase or url_base(args.feedurl)
    # print ("linkbase", linkbase, args.linkbase, args.feedurl)

    for p in args.pads:
        versions_by_type = {}
        p["versions_by_type"] = versions_by_type
        for v in p["versions"]:
            t = v["type"]
            versions_by_type[t] = v

        if "text" in versions_by_type:
            # try:
            with open(versions_by_type["text"]["path"]) as f:
                content = f.read()
            # print('content:', content)
            # [Relearn] Add pandoc command here?
            html = pypandoc.convert_text(content, "html", format="md")
            # print('html:', html)
            p["text"] = html
            # except FileNotFoundError:
            #     p['text'] = 'ERROR'

        # add in a link to the pad as "link"
        for v in linkversions:
            if v in versions_by_type:
                vdata = versions_by_type[v]
                try:
                    if v == "pad" or os.path.exists(vdata["path"]):
                        p["link"] = absurl(vdata["url"], linkbase)
                        break
                except KeyError:
                    pass

    if args.output:
        with open(args.output, "w") as f:
            print(template.render(vars(args)), file=f)
    else:
        print(template.render(vars(args)))
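
Since `main` is driven by a plain list of argument strings (note the final `p.parse_args(args)`), the command can also be exercised from Python. A minimal sketch (editor's addition: the input and output file names are hypothetical, and it assumes this module lives at etherpump/commands/publication.py alongside the other commands, with a populated .etherpump/settings.json):

from etherpump.commands import publication

publication.main(
    [
        "p/mypad.meta.json",  # hypothetical dumped metadata file
        "--order", "lastedited",
        "--output", "index.html",
    ]
)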
578  etherpump/commands/pull.py  (new file)
@@ -0,0 +1,578 @@
"""Check for pads that have changed since last sync (according to .meta.json)"""
import json
import os
import re
import sys
import time
from argparse import ArgumentParser
from datetime import datetime
from fnmatch import fnmatch
from urllib.parse import quote, urlencode
from urllib.request import HTTPError
from xml.etree import ElementTree as ET

import asks
import html5lib
import trio

from etherpump.commands.common import *  # noqa
from etherpump.commands.html5tidy import html5tidy

"""
pull(meta):
Update meta data files for those that have changed.
Check for changed pads by looking at revisions & comparing to existing
todo...
use/prefer public interfaces ? (export functions)
"""

# Note(decentral1se): simple globals counting
skipped, saved = 0, 0


async def try_deleting(files):
    for f in files:
        try:
            path = trio.Path(f)
            if os.path.exists(path):
                # unlink, not rmdir: these are files, not directories
                await path.unlink()
        except Exception as exception:
            print("PANIC: {}".format(exception))


def build_argument_parser(args):
    parser = ArgumentParser(
        "Check for pads that have changed since last sync (according to .meta.json)"
    )
    parser.add_argument("padid", nargs="*", default=[])
    parser.add_argument(
        "--glob", default=False, help="download pads matching a glob pattern"
    )
    parser.add_argument(
        "--padinfo",
        default=".etherpump/settings.json",
        help="settings, default: .etherpump/settings.json",
    )
    parser.add_argument(
        "--zerorevs",
        default=False,
        action="store_true",
        help="include pads with zero revisions, default: False (i.e. pads with no revisions are skipped)",
    )
    parser.add_argument(
        "--pub",
        default="p",
        help="folder to store files for public pads, default: p",
    )
    parser.add_argument(
        "--group",
        default="g",
        help="folder to store files for group pads, default: g",
    )
    parser.add_argument(
        "--skip",
        default=None,
        type=int,
        help="skip this many items, default: None",
    )
    parser.add_argument(
        "--connection",
        default=50,
        type=int,
        help="number of connections to run concurrently",
    )
    parser.add_argument(
        "--meta",
        default=False,
        action="store_true",
        help="download meta to PADID.meta.json, default: False",
    )
    parser.add_argument(
        "--text",
        default=False,
        action="store_true",
        help="download text to PADID.txt, default: False",
    )
    parser.add_argument(
        "--html",
        default=False,
        action="store_true",
        help="download html to PADID.html, default: False",
    )
    parser.add_argument(
        "--dhtml",
        default=False,
        action="store_true",
        help="download dhtml to PADID.diff.html, default: False",
    )
    parser.add_argument(
        "--all",
        default=False,
        action="store_true",
        help="download all files (meta, text, html, dhtml), default: False",
    )
    parser.add_argument(
        "--folder",
        default=False,
        action="store_true",
        help="dump files in a folder named PADID (meta, text, html, dhtml), default: False",
    )
    parser.add_argument(
        "--output",
        default=False,
        action="store_true",
        help="output changed padids on stdout",
    )
    parser.add_argument(
        "--force",
        default=False,
        action="store_true",
        help="reload, even if revisions count matches previous",
    )
    parser.add_argument(
        "--no-raw-ext",
        default=False,
        action="store_true",
        help="save plain text as padname with no (additional) extension",
    )
    parser.add_argument(
        "--fix-names",
        default=False,
        action="store_true",
        help="normalize padid's (no spaces, special control chars) for use in file names",
    )
    parser.add_argument(
        "--filter-ext", default=None, help="filter pads by extension"
    )
    parser.add_argument(
        "--css",
        default="/styles.css",
        help="add css url to output pages, default: /styles.css",
    )
    parser.add_argument(
        "--script",
        default="/versions.js",
        help="add script url to output pages, default: /versions.js",
    )
    parser.add_argument(
        "--nopublish",
        default="__NOPUBLISH__",
        help="no publish magic word, default: __NOPUBLISH__",
    )
    parser.add_argument(
        "--publish",
        default="__PUBLISH__",
        help="the publish magic word, default: __PUBLISH__",
    )
    parser.add_argument(
        "--publish-opt-in",
        default=False,
        action="store_true",
        help="ensure `--publish` is honoured instead of `--nopublish`",
    )
    parser.add_argument(
        "--magicwords",
        default=False,
        action="store_true",
        help="download html to PADID.magicwords.html",
    )
    return parser


async def get_padids(args, info, data, session):
    if args.padid:
        padids = args.padid
    elif args.glob:
        url = info["localapiurl"] + "listAllPads?" + urlencode(data)
        padids = await agetjson(session, url)
        padids = padids["data"]["padIDs"]
        padids = [x for x in padids if fnmatch(x, args.glob)]
    else:
        url = info["localapiurl"] + "listAllPads?" + urlencode(data)
        padids = await agetjson(session, url)
        padids = padids["data"]["padIDs"]

    padids.sort()
    return padids


async def handle_pad(args, padid, data, info, session):
    global skipped, saved

    raw_ext = ".raw.txt"
    if args.no_raw_ext:
        raw_ext = ""

    data["padID"] = padid
    p = padpath(padid, args.pub, args.group, args.fix_names)
    if args.folder:
        p = os.path.join(p, padid)

    metapath = p + ".meta.json"
    revisions = None
    tries = 1
    skip = False
    padurlbase = re.sub(r"api/1.2.9/$", "p/", info["apiurl"])
    meta = {}

    while True:
        try:
            if os.path.exists(metapath):
                async with await trio.open_file(metapath) as f:
                    contents = await f.read()
                    meta.update(json.loads(contents))
                url = (
                    info["localapiurl"] + "getRevisionsCount?" + urlencode(data)
                )
                response = await agetjson(session, url)
                revisions = response["data"]["revisions"]
                if meta["revisions"] == revisions and not args.force:
                    skip = True
                    reason = "No new revisions, we already have the latest local copy"
                    break

            meta["padid"] = padid
            versions = meta["versions"] = []
            versions.append(
                {"url": padurlbase + quote(padid), "type": "pad", "code": 200}
            )

            if revisions is None:
                url = (
                    info["localapiurl"] + "getRevisionsCount?" + urlencode(data)
                )
                response = await agetjson(session, url)
                meta["revisions"] = response["data"]["revisions"]
            else:
                meta["revisions"] = revisions

            if (meta["revisions"] == 0) and (not args.zerorevs):
                skip = True
                reason = "0 revisions, this pad was never edited"
                break

            # todo: load more metadata!
            meta["group"], meta["pad"] = splitpadname(padid)
            meta["pathbase"] = p

            url = info["localapiurl"] + "getLastEdited?" + urlencode(data)
            response = await agetjson(session, url)
            meta["lastedited_raw"] = int(response["data"]["lastEdited"])

            meta["lastedited_iso"] = datetime.fromtimestamp(
                int(meta["lastedited_raw"]) / 1000
            ).isoformat()

            url = info["localapiurl"] + "listAuthorsOfPad?" + urlencode(data)
            response = await agetjson(session, url)
            meta["author_ids"] = response["data"]["authorIDs"]

            break
        except HTTPError:
            tries += 1
            if tries > 3:
                print(
                    "Too many failures ({0}), skipping".format(padid),
                    file=sys.stderr,
                )
                skip = True
                reason = "PANIC, couldn't download the pad contents"
                break
            else:
                await trio.sleep(1)
        except TypeError:
            print(
                "Type Error loading pad {0} (phantom pad?), skipping".format(
                    padid
                ),
                file=sys.stderr,
            )
            skip = True
            reason = "PANIC, couldn't download the pad contents"
            break

    if skip:
        print("[ ] {} (skipped, reason: {})".format(padid, reason))
        skipped += 1
        return

    if args.output:
        print(padid)

    if args.all or (args.meta or args.text or args.html or args.dhtml):
        try:
            path = trio.Path(os.path.split(metapath)[0])
            if not os.path.exists(path):
                await path.mkdir()
        except OSError:
            # Note(decentral1se): the path already exists
            pass

    if args.all or args.text:
        url = info["localapiurl"] + "getText?" + urlencode(data)
        text = await agetjson(session, url)
        ver = {"type": "text"}
        versions.append(ver)
        ver["code"] = text["_code"]

        if text["_code"] == 200:
            text = text["data"]["text"]

            ##########################################
            ## ENFORCE __NOPUBLISH__ MAGIC WORD
            ##########################################
            if args.nopublish in text:
                await try_deleting(
                    (
                        p + raw_ext,
                        p + ".raw.html",
                        p + ".diff.html",
                        p + ".meta.json",
                    )
                )
                print(
                    "[ ] {} (deleted, reason: explicit __NOPUBLISH__)".format(
                        padid
                    )
                )
                skipped += 1
                return False

            ##########################################
            ## ENFORCE __PUBLISH__ MAGIC WORD
            ##########################################
            if args.publish_opt_in and args.publish not in text:
                await try_deleting(
                    (
                        p + raw_ext,
                        p + ".raw.html",
                        p + ".diff.html",
                        p + ".meta.json",
                    )
                )
                print("[ ] {} (deleted, reason: publish opt-out)".format(padid))
                skipped += 1
                return False

            ver["path"] = p + raw_ext
            ver["url"] = quote(ver["path"])
            async with await trio.open_file(ver["path"], "w") as f:
                try:
                    # Note(decentral1se): unicode handling...
                    safe_text = text.encode("utf-8", "replace").decode()
                    await f.write(safe_text)
                except Exception as exception:
                    print("PANIC: {}".format(exception))

            # once the content is settled, compute a hash
            # and link it in the metadata!

            ##########################################
            # INCLUDE __XXX__ MAGIC WORDS
            ##########################################
            if args.all or args.magicwords:
                pattern = r"__[a-zA-Z0-9]+?__"
                all_matches = re.findall(pattern, text)
                magic_words = list(set(all_matches))
                if magic_words:
                    meta["magicwords"] = magic_words

    links = []
    if args.css:
        links.append({"href": args.css, "rel": "stylesheet"})
    # todo, make this process reflect which files actually were made
    versionbaseurl = quote(padid)
    links.append(
        {
            "href": versions[0]["url"],
            "rel": "alternate",
            "type": "text/html",
            "title": "Etherpad",
        }
    )
    if args.all or args.text:
        links.append(
            {
                "href": versionbaseurl + raw_ext,
                "rel": "alternate",
                "type": "text/plain",
                "title": "Plain text",
            }
        )
    if args.all or args.html:
        links.append(
            {
                "href": versionbaseurl + ".raw.html",
                "rel": "alternate",
                "type": "text/html",
                "title": "HTML",
            }
        )
    if args.all or args.dhtml:
        links.append(
            {
                "href": versionbaseurl + ".diff.html",
                "rel": "alternate",
                "type": "text/html",
                "title": "HTML with author colors",
            }
        )
    if args.all or args.meta:
        links.append(
            {
                "href": versionbaseurl + ".meta.json",
                "rel": "alternate",
                "type": "application/json",
                "title": "Meta data",
            }
        )

    if args.all or args.dhtml:
        data["startRev"] = "0"
        url = info["localapiurl"] + "createDiffHTML?" + urlencode(data)
        dhtml = await agetjson(session, url)
        ver = {"type": "diffhtml"}
        versions.append(ver)
        ver["code"] = dhtml["_code"]
        if dhtml["_code"] == 200:
            try:
                dhtml_body = dhtml["data"]["html"]
                ver["path"] = p + ".diff.html"
                ver["url"] = quote(ver["path"])
                doc = html5lib.parse(
                    dhtml_body, treebuilder="etree", namespaceHTMLElements=False
                )
                html5tidy(
                    doc,
                    indent=True,
                    title=padid,
                    scripts=args.script,
                    links=links,
                )
                async with await trio.open_file(ver["path"], "w") as f:
                    output = ET.tostring(doc, method="html", encoding="unicode")
                    await f.write(output)
            except TypeError:
                ver["message"] = dhtml["message"]

    # Process text, html, dhtml, magicwords and all options
    downloaded_html = False
    if args.all or args.html:
        url = info["localapiurl"] + "getHTML?" + urlencode(data)
        html = await agetjson(session, url)
        ver = {"type": "html"}
        versions.append(ver)
        ver["code"] = html["_code"]
        downloaded_html = True

        if html["_code"] == 200:
            try:
                html_body = html["data"]["html"]
                ver["path"] = p + ".raw.html"
                ver["url"] = quote(ver["path"])
                doc = html5lib.parse(
                    html_body, treebuilder="etree", namespaceHTMLElements=False
                )
                html5tidy(
                    doc,
                    indent=True,
                    title=padid,
                    scripts=args.script,
                    links=links,
                )
                async with await trio.open_file(ver["path"], "w") as f:
                    output = ET.tostring(doc, method="html", encoding="unicode")
                    await f.write(output)
            except TypeError:
                ver["message"] = html["message"]

    if args.all or args.magicwords:
        if not downloaded_html:
            # (editorial assumption: fetch the HTML export here, since the
            # --html branch above did not run and `url` would otherwise be stale)
            url = info["localapiurl"] + "getHTML?" + urlencode(data)
            html = await agetjson(session, url)
        ver = {"type": "magicwords"}
        versions.append(ver)
        ver["code"] = html["_code"]

        if html["_code"] == 200:
            try:
                html_body = html["data"]["html"]
                ver["path"] = p + ".magicwords.html"
                ver["url"] = quote(ver["path"])
                for magic_word in magic_words:
                    replace_word = (
                        "<span class='highlight'>" + magic_word + "</span>"
                    )
                    if magic_word in html_body:
                        html_body = html_body.replace(magic_word, replace_word)
                doc = html5lib.parse(
                    html_body, treebuilder="etree", namespaceHTMLElements=False
                )
                html5tidy(
                    doc,
                    indent=True,
                    title=padid,
                    scripts=args.script,
                    links=links,
                )
                async with await trio.open_file(ver["path"], "w") as f:
                    output = ET.tostring(doc, method="html", encoding="unicode")
                    await f.write(output)
            except TypeError:
                ver["message"] = html["message"]

    # output meta
    if args.all or args.meta:
        ver = {"type": "meta"}
        versions.append(ver)
        ver["path"] = metapath
        ver["url"] = quote(metapath)
        async with await trio.open_file(metapath, "w") as f:
            await f.write(json.dumps(meta))

    try:
        mwords_msg = ", magic words: {}".format(", ".join(magic_words))
    except UnboundLocalError:
        # Note(decentral1se): for when magic_words are not counted
        mwords_msg = ""

    print("[x] {} (saved{})".format(padid, mwords_msg))
    saved += 1
    return


async def handle_pads(args):
    global skipped, saved

    session = asks.Session(connections=args.connection)
    info = loadpadinfo(args.padinfo)
    data = {"apikey": info["apikey"]}

    padids = await get_padids(args, info, data, session)
    if args.skip:
        padids = padids[args.skip : len(padids)]

    print("=" * 79)
    print("Etherpump is warming up the engines ...")
    print("=" * 79)

    start = time.time()
    async with trio.open_nursery() as nursery:
        for padid in padids:
            nursery.start_soon(
                handle_pad, args, padid, data.copy(), info, session
            )
    end = time.time()
    timeit = round(end - start, 2)

    print("=" * 79)
    print(
        "Processed {} :: Skipped {} :: Saved {} :: Time {}s".format(
            len(padids), skipped, saved, timeit
        )
    )
    print("=" * 79)


def main(args):
    p = build_argument_parser(args)
    args = p.parse_args(args)
    trio.run(handle_pads, args)
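
`main` follows the same convention here, so a pull can be scripted with the argument list the CLI would otherwise receive. A small sketch (editor's addition; the glob pattern is illustrative and a populated .etherpump/settings.json is assumed):

from etherpump.commands import pull

# Fetch text, html and metadata for every pad whose id starts with "relearn".
pull.main(["--glob", "relearn*", "--text", "--html", "--meta"])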
32  etherpump/commands/revisionscount.py  (new file)
@@ -0,0 +1,32 @@
"""Call getRevisionsCount for the given padid"""

import json
from argparse import ArgumentParser
from urllib.error import HTTPError, URLError
from urllib.parse import urlencode
from urllib.request import urlopen


def main(args):
    p = ArgumentParser("call getRevisionsCount for the given padid")
    p.add_argument("padid", help="the padid")
    p.add_argument(
        "--padinfo",
        default=".etherpump/settings.json",
        help="settings, default: .etherpump/settings.json",
    )
    p.add_argument("--showurl", default=False, action="store_true")
    args = p.parse_args(args)

    with open(args.padinfo) as f:
        info = json.load(f)
    apiurl = info.get("apiurl")
    data = {}
    data["apikey"] = info["apikey"]
    data["padID"] = args.padid
    requesturl = apiurl + "getRevisionsCount?" + urlencode(data)
    if args.showurl:
        print(requesturl)
    else:
        results = json.load(urlopen(requesturl))["data"]["revisions"]
        print(results)
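
As elsewhere, `main` takes the raw argument list, so the command can be driven programmatically (editor's sketch; the pad name is hypothetical):

from etherpump.commands import revisionscount

revisionscount.main(["mypad"])               # print the revision count
revisionscount.main(["mypad", "--showurl"])  # print the API URL instead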
95  etherpump/commands/sethtml.py  (new file)
@@ -0,0 +1,95 @@
"""Calls the setHTML API function for the given padid"""

import json
import sys
from argparse import ArgumentParser
from urllib.parse import urlencode
from urllib.request import urlopen

import requests

LIMIT_BYTES = 100 * 1000
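# (Editor's note: 100 * 1000 bytes is the "100k (etherpad limit)" that the
# --limit flag below refers to; oversized payloads are truncated to this size.)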


def main(args):
    p = ArgumentParser("calls the setHTML API function for the given padid")
    p.add_argument("padid", help="the padid")
    p.add_argument(
        "--html", default=None, help="html, default: read from stdin"
    )
    p.add_argument(
        "--padinfo",
        default=".etherpump/settings.json",
        help="settings, default: .etherpump/settings.json",
    )
    p.add_argument("--showurl", default=False, action="store_true")
    # p.add_argument("--format", default="text", help="output format, can be: text, json; default: text")
    p.add_argument(
        "--create",
        default=False,
        action="store_true",
        help="flag to create pad if necessary",
    )
    p.add_argument(
        "--limit",
        default=False,
        action="store_true",
        help="limit text to 100k (etherpad limit)",
    )
    args = p.parse_args(args)

    with open(args.padinfo) as f:
        info = json.load(f)
    apiurl = info.get("apiurl")
    # apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
    # data = {}
    # data['apikey'] = info['apikey']
    # data['padID'] = args.padid  # is utf-8 encoded

    createPad = False
    if args.create:
        # check if it's in fact necessary
        requesturl = (
            apiurl
            + "getRevisionsCount?"
            + urlencode({"apikey": info["apikey"], "padID": args.padid})
        )
        results = json.load(urlopen(requesturl))
        print(json.dumps(results, indent=2), file=sys.stderr)
        if results["code"] != 0:
            createPad = True

    if args.html:
        html = args.html
    else:
        html = sys.stdin.read()

    params = {}
    params["apikey"] = info["apikey"]
    params["padID"] = args.padid

    if createPad:
        requesturl = apiurl + "createPad"
        if args.showurl:
            print(requesturl)
        results = requests.post(
            requesturl, params=params, data={"text": ""}
        )  # json.load(urlopen(requesturl))
        results = json.loads(results.text)
        print(json.dumps(results, indent=2))

    if len(html) > LIMIT_BYTES and args.limit:
        print("limiting", len(html), LIMIT_BYTES, file=sys.stderr)
        html = html[:LIMIT_BYTES]

    requesturl = apiurl + "setHTML"
    if args.showurl:
        print(requesturl)
    # params['html'] = html
    results = requests.post(
        requesturl,
        params={"apikey": info["apikey"]},
        data={"apikey": info["apikey"], "padID": args.padid, "html": html},
    )  # json.load(urlopen(requesturl))
    results = json.loads(results.text)
    print(json.dumps(results, indent=2))
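
A sketch of calling this command from Python (editor's addition; the pad name and input file are hypothetical):

from etherpump.commands import sethtml

with open("page.html") as f:
    sethtml.main(["mypad", "--create", "--html", f.read()])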
85  etherpump/commands/settext.py  (new file)
@@ -0,0 +1,85 @@
"""Calls the setText API function for the given padid"""

import json
import sys
from argparse import ArgumentParser
from urllib.parse import urlencode
from urllib.request import urlopen

import requests

LIMIT_BYTES = 100 * 1000


def main(args):
    p = ArgumentParser("calls the setText API function for the given padid")
    p.add_argument("padid", help="the padid")
    p.add_argument(
        "--text", default=None, help="text, default: read from stdin"
    )
    p.add_argument(
        "--padinfo",
        default=".etherpump/settings.json",
        help="settings, default: .etherpump/settings.json",
    )
    p.add_argument("--showurl", default=False, action="store_true")
    # p.add_argument("--format", default="text", help="output format, can be: text, json; default: text")
    p.add_argument(
        "--create",
        default=False,
        action="store_true",
        help="flag to create pad if necessary",
    )
    p.add_argument(
        "--limit",
        default=False,
        action="store_true",
        help="limit text to 100k (etherpad limit)",
    )
    args = p.parse_args(args)

    with open(args.padinfo) as f:
        info = json.load(f)
    apiurl = info.get("apiurl")
    # apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
    data = {}
    data["apikey"] = info["apikey"]
    data["padID"] = args.padid  # is utf-8 encoded

    createPad = False
    if args.create:
        requesturl = apiurl + "getRevisionsCount?" + urlencode(data)
        results = json.load(urlopen(requesturl))
        # print (json.dumps(results, indent=2))
        if results["code"] != 0:
            createPad = True

    if args.text:
        text = args.text
    else:
        text = sys.stdin.read()

    if len(text) > LIMIT_BYTES and args.limit:
        print("limiting", len(text), LIMIT_BYTES)
        text = text[:LIMIT_BYTES]

    data["text"] = text

    if createPad:
        requesturl = apiurl + "createPad"
    else:
        requesturl = apiurl + "setText"

    if args.showurl:
        print(requesturl)
    results = requests.post(
        requesturl, params=data
    )  # json.load(urlopen(requesturl))
    results = json.loads(results.text)
    if results["code"] != 0:
        print(
            "setText: ERROR ({0}) on pad {1}: {2}".format(
                results["code"], args.padid, results["message"]
            )
        )
    # json.dumps(results, indent=2)
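
The same pattern applies here (editor's sketch; pad name and text are illustrative):

from etherpump.commands import settext

settext.main(["mypad", "--create", "--text", "Hello from etherpump"])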
@@ -1,17 +1,25 @@
-from __future__ import print_function
+"""Extract and output selected fields of metadata"""
+import json
+import re
+import sys
 from argparse import ArgumentParser
-import json, sys, re
-from common import *
 
-"""
-Extract and output selected fields of metadata
-"""
+from .common import *  # noqa
 
-def main (args):
-    p = ArgumentParser("extract & display meta data from a specific .meta.json file, or for a given padid (nb: it still looks for a .meta.json file)")
+
+def main(args):
+    p = ArgumentParser(
+        "extract & display meta data from a specific .meta.json file, or for a given padid (nb: it still looks for a .meta.json file)"
+    )
     p.add_argument("--path", default=None, help="read from a meta.json file")
     p.add_argument("--padid", default=None, help="read meta for this padid")
-    p.add_argument("--format", default="{padid}", help="format str, default: {padid}")
+    p.add_argument(
+        "--format", default="{padid}", help="format str, default: {padid}"
+    )
     args = p.parse_args(args)
 
     path = args.path
@@ -19,7 +27,7 @@ def main (args):
         path = padpath(args.padid) + ".meta.json"
 
     if not path:
-        print ("Must specify either --path or --padid")
+        print("Must specify either --path or --padid")
        sys.exit(-1)
 
    with open(path) as f:
@@ -27,5 +35,4 @@ def main (args):
 
-    formatstr = args.format.decode("utf-8")
     formatstr = re.sub(r"{(\w+)}", r"{0[\1]}", formatstr)
-    print (formatstr.format(meta).encode("utf-8"))
+    print(formatstr.format(meta))
164  etherpump/commands/status.py  (new file)
@@ -0,0 +1,164 @@
"""Update meta data files for those that have changed"""
import os
from argparse import ArgumentParser
from urllib.parse import urlencode

from .common import *  # noqa

"""
status (meta):
Update meta data files for those that have changed.
Check for changed pads by looking at revisions & comparing to existing


design decisions...
ok based on the fact that only the txt file is pushable (via setText)
it makes sense to give this file "primacy" ... ie to put the other forms
(html, diff.html) in a special place (if created at all). Otherwise this
complicates the "syncing" idea....

"""


class PadItemException(Exception):
    pass


class PadItem:
    def __init__(self, padid=None, path=None, padexists=False):
        self.padexists = padexists
        if padid and path:
            raise PadItemException("only give padid or path")
        if not (padid or path):
            raise PadItemException("either padid or path must be specified")
        if padid:
            self.padid = padid
            self.path = padpath(padid, group_path="g")
        else:
            self.path = path
            self.padid = padpath2id(path)

    @property
    def status(self):
        if self.fileexists:
            if self.padexists:
                return "S"
            else:
                return "F"
        elif self.padexists:
            return "P"
        else:
            return "?"

    @property
    def fileexists(self):
        return os.path.exists(self.path)
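
    # Status legend (editor's note): "S" = file and pad both exist (in sync),
    # "F" = file only, "P" = pad only, "?" = neither.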


def ignore_p(path, settings=None):
    if path.startswith("."):
        return True


def main(args):
    p = ArgumentParser(
        "Check for pads that have changed since last sync (according to .meta.json)"
    )
    # p.add_argument("padid", nargs="*", default=[])
    p.add_argument(
        "--padinfo",
        default=".etherpump/settings.json",
        help="settings, default: .etherpump/settings.json",
    )
    p.add_argument(
        "--zerorevs",
        default=False,
        action="store_true",
        help="include pads with zero revisions, default: False (i.e. pads with no revisions are skipped)",
    )
    p.add_argument(
        "--pub",
        default=".",
        help="folder to store files for public pads, default: .",
    )
    p.add_argument(
        "--group",
        default="g",
        help="folder to store files for group pads, default: g",
    )
    p.add_argument(
        "--skip",
        default=None,
        type=int,
        help="skip this many items, default: None",
    )
    p.add_argument(
        "--meta",
        default=False,
        action="store_true",
        help="download meta to PADID.meta.json, default: False",
    )
    p.add_argument(
        "--text",
        default=False,
        action="store_true",
        help="download text to PADID.txt, default: False",
    )
    p.add_argument(
        "--html",
        default=False,
        action="store_true",
        help="download html to PADID.html, default: False",
    )
    p.add_argument(
        "--dhtml",
        default=False,
        action="store_true",
        help="download dhtml to PADID.dhtml, default: False",
    )
    p.add_argument(
        "--all",
        default=False,
        action="store_true",
        help="download all files (meta, text, html, dhtml), default: False",
    )
    args = p.parse_args(args)

    info = loadpadinfo(args.padinfo)
    data = {}
    data["apikey"] = info["apikey"]

    padsbypath = {}

    # listAllPads
    padids = getjson(info["apiurl"] + "listAllPads?" + urlencode(data))["data"][
        "padIDs"
    ]
    padids.sort()
    for padid in padids:
        pad = PadItem(padid=padid, padexists=True)
        padsbypath[pad.path] = pad

    files = os.listdir(args.pub)
    files = [x for x in files if not ignore_p(x)]
    files.sort()
    for p in files:
        pad = padsbypath.get(p)
        if not pad:
            pad = PadItem(path=p)
            padsbypath[pad.path] = pad

    pads = list(padsbypath.values())
    pads.sort(key=lambda x: (x.status, x.padid))

    curstat = None
    for p in pads:
        if p.status != curstat:
            curstat = p.status
            if curstat == "F":
                print("New/changed files")
            elif curstat == "P":
                print("New/changed pads")
            elif curstat == "S":
                print("Up to date")
        print(" ", p.status, p.padid)
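
Run as a whole, the command prints one grouped report. A sketch of invoking it from Python (editor's addition; assumes a populated .etherpump/settings.json in the working directory):

from etherpump.commands import status

status.main(["--pub", "."])  # lists new/changed files, new/changed pads, up to date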
@@ -10,7 +10,7 @@
 <body>
 {{ html }}
 
-<div class="etherdump_version_links">
+<div class="etherpump_version_links">
   Pad last edited {{lastedited}}; other versions: <a href="{{raw_url}}">text-only</a> <a href="{{meta_url}}">metadata</a>
 </div>
 </body>
42  etherpump/data/templates/publication.html  (new file)
@@ -0,0 +1,42 @@
<!DOCTYPE html>
<html lang="{{language}}">
<!-- __RELEARN__ -->
<head>
  <meta charset="utf-8" />
  <title>{{title}}</title>
  <link rel="stylesheet" type="text/css" href="{%block css %}publication.assets/publication.css{%endblock%}">
  <link rel="stylesheet" type="text/css" href="publication.assets/normalise.css">
  <link rel="alternate" type="application/rss+xml" href="recentchanges.rss">
</head>
<body>
  <h1>{{ title }}</h1>

  <div id="toc">
    <p>Table of Contents</p>
    <ol>
      {% for pad in pads %}
      <li class="name">
        <a href="#{{ pad.padid }}">{{ pad.padid }}</a>
      </li>
      {% endfor %}
    </ol>
  </div>

  <img id="coverimg" src="publication.assets/rumination.svg">

  {% for pad in pads %}
  <hr>
  <div id="{{ pad.padid }}" data-padid="{{ pad.padid }}" class="pad">
    <small class="lastedited">Last edited: {{ pad.lastedited_iso|datetimeformat }}</small><br>
    <small class="revisions">Revisions: {{ pad.revisions }}</small><br>
    <small class="padname"><a href="{{ pad.link }}">{{ pad.pathbase }}</a></small><br>
    <!-- <small class="authors">Authors: {{ pad.author_ids|length }}</small><br> -->
    <div class="padcontent">{{ pad.text }}</div>
  </div>
  {% endfor %}

  {% block info %}<hr><small class="info">Last update {{timestamp}}.</small>{% endblock %}
  <div id="footer"></div>
</body>
</html>
@@ -1,12 +0,0 @@
-{
-    "protocol": "http",
-    "port": 9001,
-    "hostname": "localhost",
-    "apiversion": "1.2.9",
-    "apiurl": "/api/",
-    "apikey": "8f55f9ede1b3f5d88b3c54eb638225a7bb71c64867786b608abacfdb7d418be1",
-    "groups": {
-        "71FpVh4MZBvl8VZ6": {"name": "Transmediale", "id": 43},
-        "HyYfoX3Q6S5utxs5": {"name": "test", "id": 42 }
-    }
-}
809  poetry.lock  (generated, new file)
@@ -0,0 +1,809 @@
|
||||
[[package]]
|
||||
name = "anyio"
|
||||
version = "1.4.0"
|
||||
description = "High level compatibility layer for multiple asynchronous event loop implementations"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=3.5.3"
|
||||
|
||||
[package.dependencies]
|
||||
async-generator = "*"
|
||||
idna = ">=2.8"
|
||||
sniffio = ">=1.1"
|
||||
|
||||
[package.extras]
|
||||
curio = ["curio (==0.9)", "curio (>=0.9)"]
|
||||
doc = ["sphinx-rtd-theme", "sphinx-autodoc-typehints (>=1.2.0)"]
|
||||
test = ["coverage (>=4.5)", "hypothesis (>=4.0)", "pytest (>=3.7.2)", "uvloop"]
|
||||
trio = ["trio (>=0.12)"]
|
||||
|
||||
[[package]]
|
||||
name = "appdirs"
|
||||
version = "1.4.4"
|
||||
description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
|
||||
[[package]]
|
||||
name = "asks"
|
||||
version = "2.4.10"
|
||||
description = "asks - async http"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
|
||||
[package.dependencies]
|
||||
anyio = "<2"
|
||||
async_generator = "*"
|
||||
h11 = "*"
|
||||
|
||||
[[package]]
|
||||
name = "async-generator"
|
||||
version = "1.10"
|
||||
description = "Async generators and context managers for Python 3.5+"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=3.5"
|
||||
|
||||
[[package]]
|
||||
name = "attrs"
|
||||
version = "20.3.0"
|
||||
description = "Classes Without Boilerplate"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
|
||||
|
||||
[package.extras]
|
||||
dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "zope.interface", "furo", "sphinx", "pre-commit"]
|
||||
docs = ["furo", "sphinx", "zope.interface"]
|
||||
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "zope.interface"]
|
||||
tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six"]
|
||||
|
||||
[[package]]
|
||||
name = "black"
|
||||
version = "19.10b0"
|
||||
description = "The uncompromising code formatter."
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=3.6"
|
||||
|
||||
[package.dependencies]
|
||||
appdirs = "*"
|
||||
attrs = ">=18.1.0"
|
||||
click = ">=6.5"
|
||||
pathspec = ">=0.6,<1"
|
||||
regex = "*"
|
||||
toml = ">=0.9.4"
|
||||
typed-ast = ">=1.4.0"
|
||||
|
||||
[package.extras]
|
||||
d = ["aiohttp (>=3.3.2)", "aiohttp-cors"]
|
||||
|
||||
[[package]]
|
||||
name = "certifi"
|
||||
version = "2020.12.5"
|
||||
description = "Python package for providing Mozilla's CA Bundle."
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
|
||||
[[package]]
|
||||
name = "cffi"
|
||||
version = "1.14.5"
|
||||
description = "Foreign Function Interface for Python calling C code."
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
|
||||
[package.dependencies]
|
||||
pycparser = "*"
|
||||
|
||||
[[package]]
|
||||
name = "chardet"
|
||||
version = "4.0.0"
|
||||
description = "Universal encoding detector for Python 2 and 3"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
|
||||
|
||||
[[package]]
|
||||
name = "click"
|
||||
version = "7.1.2"
|
||||
description = "Composable command line interface toolkit"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
|
||||
|
||||
[[package]]
|
||||
name = "contextvars"
|
||||
version = "2.4"
|
||||
description = "PEP 567 Backport"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
|
||||
[package.dependencies]
|
||||
immutables = ">=0.9"
|
||||
|
||||
[[package]]
|
||||
name = "flake8"
|
||||
version = "3.9.0"
|
||||
description = "the modular source code checker: pep8 pyflakes and co"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
|
||||
|
||||
[package.dependencies]
|
||||
importlib-metadata = {version = "*", markers = "python_version < \"3.8\""}
|
||||
mccabe = ">=0.6.0,<0.7.0"
|
||||
pycodestyle = ">=2.7.0,<2.8.0"
|
||||
pyflakes = ">=2.3.0,<2.4.0"
|
||||
|
||||
[[package]]
|
||||
name = "h11"
|
||||
version = "0.12.0"
|
||||
description = "A pure-Python, bring-your-own-I/O implementation of HTTP/1.1"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=3.6"
|
||||
|
||||
[[package]]
|
||||
name = "html5lib"
|
||||
version = "1.1"
|
||||
description = "HTML parser based on the WHATWG HTML specification"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
|
||||
|
||||
[package.dependencies]
|
||||
six = ">=1.9"
|
||||
webencodings = "*"
|
||||
|
||||
[package.extras]
|
||||
all = ["genshi", "chardet (>=2.2)", "lxml"]
|
||||
chardet = ["chardet (>=2.2)"]
|
||||
genshi = ["genshi"]
|
||||
lxml = ["lxml"]
|
||||
|
||||
[[package]]
|
||||
name = "idna"
|
||||
version = "2.10"
|
||||
description = "Internationalized Domain Names in Applications (IDNA)"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
|
||||
|
||||
[[package]]
|
||||
name = "immutables"
|
||||
version = "0.15"
|
||||
description = "Immutable Collections"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=3.5"
|
||||
|
||||
[package.extras]
|
||||
test = ["flake8 (>=3.8.4,<3.9.0)", "pycodestyle (>=2.6.0,<2.7.0)"]
|
||||
|
||||
[[package]]
|
||||
name = "importlib-metadata"
|
||||
version = "3.7.3"
|
||||
description = "Read metadata from Python packages"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=3.6"
|
||||
|
||||
[package.dependencies]
|
||||
typing-extensions = {version = ">=3.6.4", markers = "python_version < \"3.8\""}
|
||||
zipp = ">=0.5"
|
||||
|
||||
[package.extras]
|
||||
docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"]
|
||||
testing = ["pytest (>=3.5,!=3.7.3)", "pytest-checkdocs (>=1.2.3)", "pytest-flake8", "pytest-cov", "pytest-enabler", "packaging", "pep517", "pyfakefs", "flufl.flake8", "pytest-black (>=0.3.7)", "pytest-mypy", "importlib-resources (>=1.3)"]
|
||||
|
||||
[[package]]
|
||||
name = "isort"
|
||||
version = "5.7.0"
|
||||
description = "A Python utility / library to sort Python imports."
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=3.6,<4.0"
|
||||
|
||||
[package.extras]
|
||||
pipfile_deprecated_finder = ["pipreqs", "requirementslib"]
|
||||
requirements_deprecated_finder = ["pipreqs", "pip-api"]
|
||||
colors = ["colorama (>=0.4.3,<0.5.0)"]
|
||||
|
||||
[[package]]
|
||||
name = "jinja2"
|
||||
version = "2.11.3"
|
||||
description = "A very fast and expressive template engine."
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
|
||||
|
||||
[package.dependencies]
|
||||
MarkupSafe = ">=0.23"
|
||||
|
||||
[package.extras]
|
||||
i18n = ["Babel (>=0.8)"]
|
||||
|
||||
[[package]]
|
||||
name = "markupsafe"
|
||||
version = "1.1.1"
|
||||
description = "Safely add untrusted strings to HTML/XML markup."
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*"
|
||||
|
||||
[[package]]
|
||||
name = "mccabe"
|
||||
version = "0.6.1"
|
||||
description = "McCabe checker, plugin for flake8"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
|
||||
[[package]]
|
||||
name = "mypy"
|
||||
version = "0.782"
|
||||
description = "Optional static typing for Python"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=3.5"
|
||||
|
||||
[package.dependencies]
|
||||
mypy-extensions = ">=0.4.3,<0.5.0"
|
||||
typed-ast = ">=1.4.0,<1.5.0"
|
||||
typing-extensions = ">=3.7.4"
|
||||
|
||||
[package.extras]
|
||||
dmypy = ["psutil (>=4.0)"]
|
||||
|
||||
[[package]]
|
||||
name = "mypy-extensions"
|
||||
version = "0.4.3"
|
||||
description = "Experimental type system extensions for programs checked with the mypy typechecker."
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
|
||||
[[package]]
|
||||
name = "outcome"
|
||||
version = "1.1.0"
|
||||
description = "Capture the outcome of Python function calls."
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=3.6"
|
||||
|
||||
[package.dependencies]
|
||||
attrs = ">=19.2.0"
|
||||
|
||||
[[package]]
|
||||
name = "pathspec"
|
||||
version = "0.8.1"
|
||||
description = "Utility library for gitignore style pattern matching of file paths."
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
|
||||
|
||||
[[package]]
|
||||
name = "pycodestyle"
|
||||
version = "2.7.0"
|
||||
description = "Python style guide checker"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
|
||||
|
||||
[[package]]
|
||||
name = "pycparser"
|
||||
version = "2.20"
|
||||
description = "C parser in Python"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
|
||||
|
||||
[[package]]
|
||||
name = "pyflakes"
|
||||
version = "2.3.0"
|
||||
description = "passive checker of Python programs"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
|
||||
|
||||
[[package]]
|
||||
name = "pypandoc"
|
||||
version = "1.5"
|
||||
description = "Thin wrapper for pandoc."
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
|
||||
[[package]]
|
||||
name = "python-dateutil"
|
||||
version = "2.8.1"
|
||||
description = "Extensions to the standard Python datetime module"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
|
||||
|
||||
[package.dependencies]
|
||||
six = ">=1.5"
|
||||
|
||||
[[package]]
|
||||
name = "regex"
|
||||
version = "2021.3.17"
|
||||
description = "Alternative regular expression module, to replace re."
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
|
||||
[[package]]
|
||||
name = "requests"
|
||||
version = "2.25.1"
|
||||
description = "Python HTTP for Humans."
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
|
||||
|
||||
[package.dependencies]
|
||||
certifi = ">=2017.4.17"
|
||||
chardet = ">=3.0.2,<5"
|
||||
idna = ">=2.5,<3"
|
||||
urllib3 = ">=1.21.1,<1.27"
|
||||
|
||||
[package.extras]
|
||||
security = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)"]
|
||||
socks = ["PySocks (>=1.5.6,!=1.5.7)", "win-inet-pton"]
|
||||
|
||||
[[package]]
|
||||
name = "six"
|
||||
version = "1.15.0"
|
||||
description = "Python 2 and 3 compatibility utilities"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
|
||||
|
||||
[[package]]
|
||||
name = "sniffio"
|
||||
version = "1.2.0"
|
||||
description = "Sniff out which async library your code is running under"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=3.5"
|
||||
|
||||
[package.dependencies]
|
||||
contextvars = {version = ">=2.1", markers = "python_version < \"3.7\""}
|
||||
|
||||
[[package]]
|
||||
name = "sortedcontainers"
|
||||
version = "2.3.0"
|
||||
description = "Sorted Containers -- Sorted List, Sorted Dict, Sorted Set"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
|
||||
[[package]]
|
||||
name = "toml"
|
||||
version = "0.10.2"
|
||||
description = "Python Library for Tom's Obvious, Minimal Language"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
|
||||
|
||||
[[package]]
|
||||
name = "trio"
|
||||
version = "0.17.0"
|
||||
description = "A friendly Python library for async concurrency and I/O"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=3.6"
|
||||
|
||||
[package.dependencies]
|
||||
async-generator = ">=1.9"
|
||||
attrs = ">=19.2.0"
|
||||
cffi = {version = ">=1.14", markers = "os_name == \"nt\" and implementation_name != \"pypy\""}
|
||||
contextvars = {version = ">=2.1", markers = "python_version < \"3.7\""}
|
||||
idna = "*"
|
||||
outcome = "*"
|
||||
sniffio = "*"
|
||||
sortedcontainers = "*"
|
||||
|
||||
[[package]]
|
||||
name = "typed-ast"
|
||||
version = "1.4.2"
|
||||
description = "a fork of Python 2 and 3 ast modules with type comment support"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
|
||||
[[package]]
|
||||
name = "typing-extensions"
|
||||
version = "3.7.4.3"
|
||||
description = "Backported and Experimental Type Hints for Python 3.5+"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
|
||||
[[package]]
|
||||
name = "urllib3"
|
||||
version = "1.26.4"
|
||||
description = "HTTP library with thread-safe connection pooling, file post, and more."
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4"
|
||||
|
||||
[package.extras]
|
||||
secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"]
|
||||
socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
|
||||
brotli = ["brotlipy (>=0.6.0)"]
|
||||
|
||||
[[package]]
|
||||
name = "webencodings"
|
||||
version = "0.5.1"
|
||||
description = "Character encoding aliases for legacy web content"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
|
||||
[[package]]
|
||||
name = "zipp"
|
||||
version = "3.4.1"
|
||||
description = "Backport of pathlib-compatible object wrapper for zip files"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=3.6"
|
||||
|
||||
[package.extras]
|
||||
docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"]
|
||||
testing = ["pytest (>=4.6)", "pytest-checkdocs (>=1.2.3)", "pytest-flake8", "pytest-cov", "pytest-enabler", "jaraco.itertools", "func-timeout", "pytest-black (>=0.3.7)", "pytest-mypy"]
|
||||
|
||||
[metadata]
|
||||
lock-version = "1.1"
|
||||
python-versions = "^3.6"
|
||||
content-hash = "f526837d3cce386db46118b1044839c60e52deafb740bf410c3cf75f0648987e"
|
||||
|
||||
[metadata.files]
anyio = [
    {file = "anyio-1.4.0-py3-none-any.whl", hash = "sha256:9ee67e8131853f42957e214d4531cee6f2b66dda164a298d9686a768b7161a4f"},
    {file = "anyio-1.4.0.tar.gz", hash = "sha256:95f60964fc4583f3f226f8dc275dfb02aefe7b39b85a999c6d14f4ec5323c1d8"},
]
appdirs = [
    {file = "appdirs-1.4.4-py2.py3-none-any.whl", hash = "sha256:a841dacd6b99318a741b166adb07e19ee71a274450e68237b4650ca1055ab128"},
    {file = "appdirs-1.4.4.tar.gz", hash = "sha256:7d5d0167b2b1ba821647616af46a749d1c653740dd0d2415100fe26e27afdf41"},
]
asks = [
    {file = "asks-2.4.10.tar.gz", hash = "sha256:c9db16bdf9fed8cae76db3b4365216ea2f1563b8ab9fe9a5e8e554177de61192"},
]
async-generator = [
    {file = "async_generator-1.10-py3-none-any.whl", hash = "sha256:01c7bf666359b4967d2cda0000cc2e4af16a0ae098cbffcb8472fb9e8ad6585b"},
    {file = "async_generator-1.10.tar.gz", hash = "sha256:6ebb3d106c12920aaae42ccb6f787ef5eefdcdd166ea3d628fa8476abe712144"},
]
attrs = [
    {file = "attrs-20.3.0-py2.py3-none-any.whl", hash = "sha256:31b2eced602aa8423c2aea9c76a724617ed67cf9513173fd3a4f03e3a929c7e6"},
    {file = "attrs-20.3.0.tar.gz", hash = "sha256:832aa3cde19744e49938b91fea06d69ecb9e649c93ba974535d08ad92164f700"},
]
black = [
    {file = "black-19.10b0-py36-none-any.whl", hash = "sha256:1b30e59be925fafc1ee4565e5e08abef6b03fe455102883820fe5ee2e4734e0b"},
    {file = "black-19.10b0.tar.gz", hash = "sha256:c2edb73a08e9e0e6f65a0e6af18b059b8b1cdd5bef997d7a0b181df93dc81539"},
]
certifi = [
    {file = "certifi-2020.12.5-py2.py3-none-any.whl", hash = "sha256:719a74fb9e33b9bd44cc7f3a8d94bc35e4049deebe19ba7d8e108280cfd59830"},
    {file = "certifi-2020.12.5.tar.gz", hash = "sha256:1a4995114262bffbc2413b159f2a1a480c969de6e6eb13ee966d470af86af59c"},
]
cffi = [
    {file = "cffi-1.14.5-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:bb89f306e5da99f4d922728ddcd6f7fcebb3241fc40edebcb7284d7514741991"},
    {file = "cffi-1.14.5-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:34eff4b97f3d982fb93e2831e6750127d1355a923ebaeeb565407b3d2f8d41a1"},
    {file = "cffi-1.14.5-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:99cd03ae7988a93dd00bcd9d0b75e1f6c426063d6f03d2f90b89e29b25b82dfa"},
    {file = "cffi-1.14.5-cp27-cp27m-win32.whl", hash = "sha256:65fa59693c62cf06e45ddbb822165394a288edce9e276647f0046e1ec26920f3"},
    {file = "cffi-1.14.5-cp27-cp27m-win_amd64.whl", hash = "sha256:51182f8927c5af975fece87b1b369f722c570fe169f9880764b1ee3bca8347b5"},
    {file = "cffi-1.14.5-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:43e0b9d9e2c9e5d152946b9c5fe062c151614b262fda2e7b201204de0b99e482"},
    {file = "cffi-1.14.5-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:cbde590d4faaa07c72bf979734738f328d239913ba3e043b1e98fe9a39f8b2b6"},
    {file = "cffi-1.14.5-cp35-cp35m-macosx_10_9_x86_64.whl", hash = "sha256:5de7970188bb46b7bf9858eb6890aad302577a5f6f75091fd7cdd3ef13ef3045"},
    {file = "cffi-1.14.5-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:a465da611f6fa124963b91bf432d960a555563efe4ed1cc403ba5077b15370aa"},
    {file = "cffi-1.14.5-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:d42b11d692e11b6634f7613ad8df5d6d5f8875f5d48939520d351007b3c13406"},
    {file = "cffi-1.14.5-cp35-cp35m-win32.whl", hash = "sha256:72d8d3ef52c208ee1c7b2e341f7d71c6fd3157138abf1a95166e6165dd5d4369"},
    {file = "cffi-1.14.5-cp35-cp35m-win_amd64.whl", hash = "sha256:29314480e958fd8aab22e4a58b355b629c59bf5f2ac2492b61e3dc06d8c7a315"},
    {file = "cffi-1.14.5-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:3d3dd4c9e559eb172ecf00a2a7517e97d1e96de2a5e610bd9b68cea3925b4892"},
    {file = "cffi-1.14.5-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:48e1c69bbacfc3d932221851b39d49e81567a4d4aac3b21258d9c24578280058"},
    {file = "cffi-1.14.5-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:69e395c24fc60aad6bb4fa7e583698ea6cc684648e1ffb7fe85e3c1ca131a7d5"},
    {file = "cffi-1.14.5-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:9e93e79c2551ff263400e1e4be085a1210e12073a31c2011dbbda14bda0c6132"},
    {file = "cffi-1.14.5-cp36-cp36m-win32.whl", hash = "sha256:58e3f59d583d413809d60779492342801d6e82fefb89c86a38e040c16883be53"},
    {file = "cffi-1.14.5-cp36-cp36m-win_amd64.whl", hash = "sha256:005a36f41773e148deac64b08f233873a4d0c18b053d37da83f6af4d9087b813"},
    {file = "cffi-1.14.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:2894f2df484ff56d717bead0a5c2abb6b9d2bf26d6960c4604d5c48bbc30ee73"},
    {file = "cffi-1.14.5-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:0857f0ae312d855239a55c81ef453ee8fd24136eaba8e87a2eceba644c0d4c06"},
    {file = "cffi-1.14.5-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:cd2868886d547469123fadc46eac7ea5253ea7fcb139f12e1dfc2bbd406427d1"},
    {file = "cffi-1.14.5-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:35f27e6eb43380fa080dccf676dece30bef72e4a67617ffda586641cd4508d49"},
    {file = "cffi-1.14.5-cp37-cp37m-win32.whl", hash = "sha256:9ff227395193126d82e60319a673a037d5de84633f11279e336f9c0f189ecc62"},
    {file = "cffi-1.14.5-cp37-cp37m-win_amd64.whl", hash = "sha256:9cf8022fb8d07a97c178b02327b284521c7708d7c71a9c9c355c178ac4bbd3d4"},
    {file = "cffi-1.14.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8b198cec6c72df5289c05b05b8b0969819783f9418e0409865dac47288d2a053"},
    {file = "cffi-1.14.5-cp38-cp38-manylinux1_i686.whl", hash = "sha256:ad17025d226ee5beec591b52800c11680fca3df50b8b29fe51d882576e039ee0"},
    {file = "cffi-1.14.5-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:6c97d7350133666fbb5cf4abdc1178c812cb205dc6f41d174a7b0f18fb93337e"},
    {file = "cffi-1.14.5-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:8ae6299f6c68de06f136f1f9e69458eae58f1dacf10af5c17353eae03aa0d827"},
    {file = "cffi-1.14.5-cp38-cp38-win32.whl", hash = "sha256:b85eb46a81787c50650f2392b9b4ef23e1f126313b9e0e9013b35c15e4288e2e"},
    {file = "cffi-1.14.5-cp38-cp38-win_amd64.whl", hash = "sha256:1f436816fc868b098b0d63b8920de7d208c90a67212546d02f84fe78a9c26396"},
    {file = "cffi-1.14.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:1071534bbbf8cbb31b498d5d9db0f274f2f7a865adca4ae429e147ba40f73dea"},
    {file = "cffi-1.14.5-cp39-cp39-manylinux1_i686.whl", hash = "sha256:9de2e279153a443c656f2defd67769e6d1e4163952b3c622dcea5b08a6405322"},
    {file = "cffi-1.14.5-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:6e4714cc64f474e4d6e37cfff31a814b509a35cb17de4fb1999907575684479c"},
    {file = "cffi-1.14.5-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:158d0d15119b4b7ff6b926536763dc0714313aa59e320ddf787502c70c4d4bee"},
    {file = "cffi-1.14.5-cp39-cp39-win32.whl", hash = "sha256:afb29c1ba2e5a3736f1c301d9d0abe3ec8b86957d04ddfa9d7a6a42b9367e396"},
    {file = "cffi-1.14.5-cp39-cp39-win_amd64.whl", hash = "sha256:f2d45f97ab6bb54753eab54fffe75aaf3de4ff2341c9daee1987ee1837636f1d"},
    {file = "cffi-1.14.5.tar.gz", hash = "sha256:fd78e5fee591709f32ef6edb9a015b4aa1a5022598e36227500c8f4e02328d9c"},
]
chardet = [
    {file = "chardet-4.0.0-py2.py3-none-any.whl", hash = "sha256:f864054d66fd9118f2e67044ac8981a54775ec5b67aed0441892edb553d21da5"},
    {file = "chardet-4.0.0.tar.gz", hash = "sha256:0d6f53a15db4120f2b08c94f11e7d93d2c911ee118b6b30a04ec3ee8310179fa"},
]
click = [
    {file = "click-7.1.2-py2.py3-none-any.whl", hash = "sha256:dacca89f4bfadd5de3d7489b7c8a566eee0d3676333fbb50030263894c38c0dc"},
    {file = "click-7.1.2.tar.gz", hash = "sha256:d2b5255c7c6349bc1bd1e59e08cd12acbbd63ce649f2588755783aa94dfb6b1a"},
]
contextvars = [
    {file = "contextvars-2.4.tar.gz", hash = "sha256:f38c908aaa59c14335eeea12abea5f443646216c4e29380d7bf34d2018e2c39e"},
]
flake8 = [
    {file = "flake8-3.9.0-py2.py3-none-any.whl", hash = "sha256:12d05ab02614b6aee8df7c36b97d1a3b2372761222b19b58621355e82acddcff"},
    {file = "flake8-3.9.0.tar.gz", hash = "sha256:78873e372b12b093da7b5e5ed302e8ad9e988b38b063b61ad937f26ca58fc5f0"},
]
h11 = [
    {file = "h11-0.12.0-py3-none-any.whl", hash = "sha256:36a3cb8c0a032f56e2da7084577878a035d3b61d104230d4bd49c0c6b555a9c6"},
    {file = "h11-0.12.0.tar.gz", hash = "sha256:47222cb6067e4a307d535814917cd98fd0a57b6788ce715755fa2b6c28b56042"},
]
html5lib = [
    {file = "html5lib-1.1-py2.py3-none-any.whl", hash = "sha256:0d78f8fde1c230e99fe37986a60526d7049ed4bf8a9fadbad5f00e22e58e041d"},
    {file = "html5lib-1.1.tar.gz", hash = "sha256:b2e5b40261e20f354d198eae92afc10d750afb487ed5e50f9c4eaf07c184146f"},
]
idna = [
    {file = "idna-2.10-py2.py3-none-any.whl", hash = "sha256:b97d804b1e9b523befed77c48dacec60e6dcb0b5391d57af6a65a312a90648c0"},
    {file = "idna-2.10.tar.gz", hash = "sha256:b307872f855b18632ce0c21c5e45be78c0ea7ae4c15c828c20788b26921eb3f6"},
]
immutables = [
    {file = "immutables-0.15-cp35-cp35m-macosx_10_14_x86_64.whl", hash = "sha256:6728f4392e3e8e64b593a5a0cd910a1278f07f879795517e09f308daed138631"},
    {file = "immutables-0.15-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:f0836cd3bdc37c8a77b192bbe5f41dbcc3ce654db048ebbba89bdfe6db7a1c7a"},
    {file = "immutables-0.15-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:8703d8abfd8687932f2a05f38e7de270c3a6ca3bd1c1efb3c938656b3f2f985a"},
    {file = "immutables-0.15-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:b8ad986f9b532c026f19585289384b0769188fcb68b37c7f0bd0df9092a6ca54"},
    {file = "immutables-0.15-cp36-cp36m-win_amd64.whl", hash = "sha256:6f117d9206165b9dab8fd81c5129db757d1a044953f438654236ed9a7a4224ae"},
    {file = "immutables-0.15-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:b75ade826920c4e490b1bb14cf967ac14e61eb7c5562161c5d7337d61962c226"},
    {file = "immutables-0.15-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:b7e13c061785e34f73c4f659861f1b3e4a5fd918e4395c84b21c4e3d449ebe27"},
    {file = "immutables-0.15-cp37-cp37m-win_amd64.whl", hash = "sha256:3035849accee4f4e510ed7c94366a40e0f5fef9069fbe04a35f4787b13610a4a"},
    {file = "immutables-0.15-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:b04fa69174e0c8f815f9c55f2a43fc9e5a68452fab459a08e904a74e8471639f"},
    {file = "immutables-0.15-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:141c2e9ea515a3a815007a429f0b47a578ebeb42c831edaec882a245a35fffca"},
    {file = "immutables-0.15-cp38-cp38-win_amd64.whl", hash = "sha256:cbe8c64640637faa5535d539421b293327f119c31507c33ca880bd4f16035eb6"},
    {file = "immutables-0.15-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:a0a4e4417d5ef4812d7f99470cd39347b58cb927365dd2b8da9161040d260db0"},
    {file = "immutables-0.15-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:3b15c08c71c59e5b7c2470ef949d49ff9f4263bb77f488422eaa157da84d6999"},
    {file = "immutables-0.15-cp39-cp39-win_amd64.whl", hash = "sha256:2283a93c151566e6830aee0e5bee55fc273455503b43aa004356b50f9182092b"},
    {file = "immutables-0.15.tar.gz", hash = "sha256:3713ab1ebbb6946b7ce1387bb9d1d7f5e09c45add58c2a2ee65f963c171e746b"},
]
importlib-metadata = [
    {file = "importlib_metadata-3.7.3-py3-none-any.whl", hash = "sha256:b74159469b464a99cb8cc3e21973e4d96e05d3024d337313fedb618a6e86e6f4"},
    {file = "importlib_metadata-3.7.3.tar.gz", hash = "sha256:742add720a20d0467df2f444ae41704000f50e1234f46174b51f9c6031a1bd71"},
]
isort = [
    {file = "isort-5.7.0-py3-none-any.whl", hash = "sha256:fff4f0c04e1825522ce6949973e83110a6e907750cd92d128b0d14aaaadbffdc"},
    {file = "isort-5.7.0.tar.gz", hash = "sha256:c729845434366216d320e936b8ad6f9d681aab72dc7cbc2d51bedc3582f3ad1e"},
]
jinja2 = [
    {file = "Jinja2-2.11.3-py2.py3-none-any.whl", hash = "sha256:03e47ad063331dd6a3f04a43eddca8a966a26ba0c5b7207a9a9e4e08f1b29419"},
    {file = "Jinja2-2.11.3.tar.gz", hash = "sha256:a6d58433de0ae800347cab1fa3043cebbabe8baa9d29e668f1c768cb87a333c6"},
]
markupsafe = [
    {file = "MarkupSafe-1.1.1-cp27-cp27m-macosx_10_6_intel.whl", hash = "sha256:09027a7803a62ca78792ad89403b1b7a73a01c8cb65909cd876f7fcebd79b161"},
    {file = "MarkupSafe-1.1.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:e249096428b3ae81b08327a63a485ad0878de3fb939049038579ac0ef61e17e7"},
    {file = "MarkupSafe-1.1.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:500d4957e52ddc3351cabf489e79c91c17f6e0899158447047588650b5e69183"},
    {file = "MarkupSafe-1.1.1-cp27-cp27m-win32.whl", hash = "sha256:b2051432115498d3562c084a49bba65d97cf251f5a331c64a12ee7e04dacc51b"},
    {file = "MarkupSafe-1.1.1-cp27-cp27m-win_amd64.whl", hash = "sha256:98c7086708b163d425c67c7a91bad6e466bb99d797aa64f965e9d25c12111a5e"},
    {file = "MarkupSafe-1.1.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:cd5df75523866410809ca100dc9681e301e3c27567cf498077e8551b6d20e42f"},
    {file = "MarkupSafe-1.1.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:43a55c2930bbc139570ac2452adf3d70cdbb3cfe5912c71cdce1c2c6bbd9c5d1"},
    {file = "MarkupSafe-1.1.1-cp34-cp34m-macosx_10_6_intel.whl", hash = "sha256:1027c282dad077d0bae18be6794e6b6b8c91d58ed8a8d89a89d59693b9131db5"},
    {file = "MarkupSafe-1.1.1-cp34-cp34m-manylinux1_i686.whl", hash = "sha256:62fe6c95e3ec8a7fad637b7f3d372c15ec1caa01ab47926cfdf7a75b40e0eac1"},
    {file = "MarkupSafe-1.1.1-cp34-cp34m-manylinux1_x86_64.whl", hash = "sha256:88e5fcfb52ee7b911e8bb6d6aa2fd21fbecc674eadd44118a9cc3863f938e735"},
    {file = "MarkupSafe-1.1.1-cp34-cp34m-win32.whl", hash = "sha256:ade5e387d2ad0d7ebf59146cc00c8044acbd863725f887353a10df825fc8ae21"},
    {file = "MarkupSafe-1.1.1-cp34-cp34m-win_amd64.whl", hash = "sha256:09c4b7f37d6c648cb13f9230d847adf22f8171b1ccc4d5682398e77f40309235"},
    {file = "MarkupSafe-1.1.1-cp35-cp35m-macosx_10_6_intel.whl", hash = "sha256:79855e1c5b8da654cf486b830bd42c06e8780cea587384cf6545b7d9ac013a0b"},
    {file = "MarkupSafe-1.1.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:c8716a48d94b06bb3b2524c2b77e055fb313aeb4ea620c8dd03a105574ba704f"},
    {file = "MarkupSafe-1.1.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:7c1699dfe0cf8ff607dbdcc1e9b9af1755371f92a68f706051cc8c37d447c905"},
    {file = "MarkupSafe-1.1.1-cp35-cp35m-win32.whl", hash = "sha256:6dd73240d2af64df90aa7c4e7481e23825ea70af4b4922f8ede5b9e35f78a3b1"},
    {file = "MarkupSafe-1.1.1-cp35-cp35m-win_amd64.whl", hash = "sha256:9add70b36c5666a2ed02b43b335fe19002ee5235efd4b8a89bfcf9005bebac0d"},
    {file = "MarkupSafe-1.1.1-cp36-cp36m-macosx_10_6_intel.whl", hash = "sha256:24982cc2533820871eba85ba648cd53d8623687ff11cbb805be4ff7b4c971aff"},
    {file = "MarkupSafe-1.1.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:d53bc011414228441014aa71dbec320c66468c1030aae3a6e29778a3382d96e5"},
    {file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:00bc623926325b26bb9605ae9eae8a215691f33cae5df11ca5424f06f2d1f473"},
    {file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:717ba8fe3ae9cc0006d7c451f0bb265ee07739daf76355d06366154ee68d221e"},
    {file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:3b8a6499709d29c2e2399569d96719a1b21dcd94410a586a18526b143ec8470f"},
    {file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:84dee80c15f1b560d55bcfe6d47b27d070b4681c699c572af2e3c7cc90a3b8e0"},
    {file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:b1dba4527182c95a0db8b6060cc98ac49b9e2f5e64320e2b56e47cb2831978c7"},
    {file = "MarkupSafe-1.1.1-cp36-cp36m-win32.whl", hash = "sha256:535f6fc4d397c1563d08b88e485c3496cf5784e927af890fb3c3aac7f933ec66"},
    {file = "MarkupSafe-1.1.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b1282f8c00509d99fef04d8ba936b156d419be841854fe901d8ae224c59f0be5"},
    {file = "MarkupSafe-1.1.1-cp37-cp37m-macosx_10_6_intel.whl", hash = "sha256:8defac2f2ccd6805ebf65f5eeb132adcf2ab57aa11fdf4c0dd5169a004710e7d"},
    {file = "MarkupSafe-1.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:bf5aa3cbcfdf57fa2ee9cd1822c862ef23037f5c832ad09cfea57fa846dec193"},
    {file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:46c99d2de99945ec5cb54f23c8cd5689f6d7177305ebff350a58ce5f8de1669e"},
    {file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:ba59edeaa2fc6114428f1637ffff42da1e311e29382d81b339c1817d37ec93c6"},
    {file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:6fffc775d90dcc9aed1b89219549b329a9250d918fd0b8fa8d93d154918422e1"},
    {file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:a6a744282b7718a2a62d2ed9d993cad6f5f585605ad352c11de459f4108df0a1"},
    {file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:195d7d2c4fbb0ee8139a6cf67194f3973a6b3042d742ebe0a9ed36d8b6f0c07f"},
    {file = "MarkupSafe-1.1.1-cp37-cp37m-win32.whl", hash = "sha256:b00c1de48212e4cc9603895652c5c410df699856a2853135b3967591e4beebc2"},
    {file = "MarkupSafe-1.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9bf40443012702a1d2070043cb6291650a0841ece432556f784f004937f0f32c"},
    {file = "MarkupSafe-1.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6788b695d50a51edb699cb55e35487e430fa21f1ed838122d722e0ff0ac5ba15"},
    {file = "MarkupSafe-1.1.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:cdb132fc825c38e1aeec2c8aa9338310d29d337bebbd7baa06889d09a60a1fa2"},
    {file = "MarkupSafe-1.1.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:13d3144e1e340870b25e7b10b98d779608c02016d5184cfb9927a9f10c689f42"},
    {file = "MarkupSafe-1.1.1-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:acf08ac40292838b3cbbb06cfe9b2cb9ec78fce8baca31ddb87aaac2e2dc3bc2"},
    {file = "MarkupSafe-1.1.1-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:d9be0ba6c527163cbed5e0857c451fcd092ce83947944d6c14bc95441203f032"},
    {file = "MarkupSafe-1.1.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:caabedc8323f1e93231b52fc32bdcde6db817623d33e100708d9a68e1f53b26b"},
    {file = "MarkupSafe-1.1.1-cp38-cp38-win32.whl", hash = "sha256:596510de112c685489095da617b5bcbbac7dd6384aeebeda4df6025d0256a81b"},
    {file = "MarkupSafe-1.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:e8313f01ba26fbbe36c7be1966a7b7424942f670f38e666995b88d012765b9be"},
    {file = "MarkupSafe-1.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d73a845f227b0bfe8a7455ee623525ee656a9e2e749e4742706d80a6065d5e2c"},
    {file = "MarkupSafe-1.1.1-cp39-cp39-manylinux1_i686.whl", hash = "sha256:98bae9582248d6cf62321dcb52aaf5d9adf0bad3b40582925ef7c7f0ed85fceb"},
    {file = "MarkupSafe-1.1.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:2beec1e0de6924ea551859edb9e7679da6e4870d32cb766240ce17e0a0ba2014"},
    {file = "MarkupSafe-1.1.1-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:7fed13866cf14bba33e7176717346713881f56d9d2bcebab207f7a036f41b850"},
    {file = "MarkupSafe-1.1.1-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:6f1e273a344928347c1290119b493a1f0303c52f5a5eae5f16d74f48c15d4a85"},
    {file = "MarkupSafe-1.1.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:feb7b34d6325451ef96bc0e36e1a6c0c1c64bc1fbec4b854f4529e51887b1621"},
    {file = "MarkupSafe-1.1.1-cp39-cp39-win32.whl", hash = "sha256:22c178a091fc6630d0d045bdb5992d2dfe14e3259760e713c490da5323866c39"},
    {file = "MarkupSafe-1.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:b7d644ddb4dbd407d31ffb699f1d140bc35478da613b441c582aeb7c43838dd8"},
    {file = "MarkupSafe-1.1.1.tar.gz", hash = "sha256:29872e92839765e546828bb7754a68c418d927cd064fd4708fab9fe9c8bb116b"},
]
mccabe = [
    {file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},
    {file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"},
]
mypy = [
    {file = "mypy-0.782-cp35-cp35m-macosx_10_6_x86_64.whl", hash = "sha256:2c6cde8aa3426c1682d35190b59b71f661237d74b053822ea3d748e2c9578a7c"},
    {file = "mypy-0.782-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:9c7a9a7ceb2871ba4bac1cf7217a7dd9ccd44c27c2950edbc6dc08530f32ad4e"},
    {file = "mypy-0.782-cp35-cp35m-win_amd64.whl", hash = "sha256:c05b9e4fb1d8a41d41dec8786c94f3b95d3c5f528298d769eb8e73d293abc48d"},
    {file = "mypy-0.782-cp36-cp36m-macosx_10_6_x86_64.whl", hash = "sha256:6731603dfe0ce4352c555c6284c6db0dc935b685e9ce2e4cf220abe1e14386fd"},
    {file = "mypy-0.782-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:f05644db6779387ccdb468cc47a44b4356fc2ffa9287135d05b70a98dc83b89a"},
    {file = "mypy-0.782-cp36-cp36m-win_amd64.whl", hash = "sha256:b7fbfabdbcc78c4f6fc4712544b9b0d6bf171069c6e0e3cb82440dd10ced3406"},
    {file = "mypy-0.782-cp37-cp37m-macosx_10_6_x86_64.whl", hash = "sha256:3fdda71c067d3ddfb21da4b80e2686b71e9e5c72cca65fa216d207a358827f86"},
    {file = "mypy-0.782-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:d7df6eddb6054d21ca4d3c6249cae5578cb4602951fd2b6ee2f5510ffb098707"},
    {file = "mypy-0.782-cp37-cp37m-win_amd64.whl", hash = "sha256:a4a2cbcfc4cbf45cd126f531dedda8485671545b43107ded25ce952aac6fb308"},
    {file = "mypy-0.782-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6bb93479caa6619d21d6e7160c552c1193f6952f0668cdda2f851156e85186fc"},
    {file = "mypy-0.782-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:81c7908b94239c4010e16642c9102bfc958ab14e36048fa77d0be3289dda76ea"},
    {file = "mypy-0.782-cp38-cp38-win_amd64.whl", hash = "sha256:5dd13ff1f2a97f94540fd37a49e5d255950ebcdf446fb597463a40d0df3fac8b"},
    {file = "mypy-0.782-py3-none-any.whl", hash = "sha256:e0b61738ab504e656d1fe4ff0c0601387a5489ca122d55390ade31f9ca0e252d"},
    {file = "mypy-0.782.tar.gz", hash = "sha256:eff7d4a85e9eea55afa34888dfeaccde99e7520b51f867ac28a48492c0b1130c"},
]
mypy-extensions = [
    {file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
    {file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
]
outcome = [
    {file = "outcome-1.1.0-py2.py3-none-any.whl", hash = "sha256:c7dd9375cfd3c12db9801d080a3b63d4b0a261aa996c4c13152380587288d958"},
    {file = "outcome-1.1.0.tar.gz", hash = "sha256:e862f01d4e626e63e8f92c38d1f8d5546d3f9cce989263c521b2e7990d186967"},
]
pathspec = [
    {file = "pathspec-0.8.1-py2.py3-none-any.whl", hash = "sha256:aa0cb481c4041bf52ffa7b0d8fa6cd3e88a2ca4879c533c9153882ee2556790d"},
    {file = "pathspec-0.8.1.tar.gz", hash = "sha256:86379d6b86d75816baba717e64b1a3a3469deb93bb76d613c9ce79edc5cb68fd"},
]
pycodestyle = [
    {file = "pycodestyle-2.7.0-py2.py3-none-any.whl", hash = "sha256:514f76d918fcc0b55c6680472f0a37970994e07bbb80725808c17089be302068"},
    {file = "pycodestyle-2.7.0.tar.gz", hash = "sha256:c389c1d06bf7904078ca03399a4816f974a1d590090fecea0c63ec26ebaf1cef"},
]
pycparser = [
    {file = "pycparser-2.20-py2.py3-none-any.whl", hash = "sha256:7582ad22678f0fcd81102833f60ef8d0e57288b6b5fb00323d101be910e35705"},
    {file = "pycparser-2.20.tar.gz", hash = "sha256:2d475327684562c3a96cc71adf7dc8c4f0565175cf86b6d7a404ff4c771f15f0"},
]
pyflakes = [
    {file = "pyflakes-2.3.0-py2.py3-none-any.whl", hash = "sha256:910208209dcea632721cb58363d0f72913d9e8cf64dc6f8ae2e02a3609aba40d"},
    {file = "pyflakes-2.3.0.tar.gz", hash = "sha256:e59fd8e750e588358f1b8885e5a4751203a0516e0ee6d34811089ac294c8806f"},
]
pypandoc = [
    {file = "pypandoc-1.5.tar.gz", hash = "sha256:14a49977ab1fbc9b14ef3087dcb101f336851837fca55ca79cf33846cc4976ff"},
]
python-dateutil = [
    {file = "python-dateutil-2.8.1.tar.gz", hash = "sha256:73ebfe9dbf22e832286dafa60473e4cd239f8592f699aa5adaf10050e6e1823c"},
    {file = "python_dateutil-2.8.1-py2.py3-none-any.whl", hash = "sha256:75bb3f31ea686f1197762692a9ee6a7550b59fc6ca3a1f4b5d7e32fb98e2da2a"},
]
regex = [
    {file = "regex-2021.3.17-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:b97ec5d299c10d96617cc851b2e0f81ba5d9d6248413cd374ef7f3a8871ee4a6"},
    {file = "regex-2021.3.17-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:cb4ee827857a5ad9b8ae34d3c8cc51151cb4a3fe082c12ec20ec73e63cc7c6f0"},
    {file = "regex-2021.3.17-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:633497504e2a485a70a3268d4fc403fe3063a50a50eed1039083e9471ad0101c"},
    {file = "regex-2021.3.17-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:a59a2ee329b3de764b21495d78c92ab00b4ea79acef0f7ae8c1067f773570afa"},
    {file = "regex-2021.3.17-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:f85d6f41e34f6a2d1607e312820971872944f1661a73d33e1e82d35ea3305e14"},
    {file = "regex-2021.3.17-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:4651f839dbde0816798e698626af6a2469eee6d9964824bb5386091255a1694f"},
    {file = "regex-2021.3.17-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:39c44532d0e4f1639a89e52355b949573e1e2c5116106a395642cbbae0ff9bcd"},
    {file = "regex-2021.3.17-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:3d9a7e215e02bd7646a91fb8bcba30bc55fd42a719d6b35cf80e5bae31d9134e"},
    {file = "regex-2021.3.17-cp36-cp36m-win32.whl", hash = "sha256:159fac1a4731409c830d32913f13f68346d6b8e39650ed5d704a9ce2f9ef9cb3"},
    {file = "regex-2021.3.17-cp36-cp36m-win_amd64.whl", hash = "sha256:13f50969028e81765ed2a1c5fcfdc246c245cf8d47986d5172e82ab1a0c42ee5"},
    {file = "regex-2021.3.17-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b9d8d286c53fe0cbc6d20bf3d583cabcd1499d89034524e3b94c93a5ab85ca90"},
    {file = "regex-2021.3.17-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:201e2619a77b21a7780580ab7b5ce43835e242d3e20fef50f66a8df0542e437f"},
    {file = "regex-2021.3.17-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:d47d359545b0ccad29d572ecd52c9da945de7cd6cf9c0cfcb0269f76d3555689"},
    {file = "regex-2021.3.17-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:ea2f41445852c660ba7c3ebf7d70b3779b20d9ca8ba54485a17740db49f46932"},
    {file = "regex-2021.3.17-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:486a5f8e11e1f5bbfcad87f7c7745eb14796642323e7e1829a331f87a713daaa"},
    {file = "regex-2021.3.17-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:18e25e0afe1cf0f62781a150c1454b2113785401ba285c745acf10c8ca8917df"},
    {file = "regex-2021.3.17-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:a2ee026f4156789df8644d23ef423e6194fad0bc53575534101bb1de5d67e8ce"},
    {file = "regex-2021.3.17-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:4c0788010a93ace8a174d73e7c6c9d3e6e3b7ad99a453c8ee8c975ddd9965643"},
    {file = "regex-2021.3.17-cp37-cp37m-win32.whl", hash = "sha256:575a832e09d237ae5fedb825a7a5bc6a116090dd57d6417d4f3b75121c73e3be"},
    {file = "regex-2021.3.17-cp37-cp37m-win_amd64.whl", hash = "sha256:8e65e3e4c6feadf6770e2ad89ad3deb524bcb03d8dc679f381d0568c024e0deb"},
    {file = "regex-2021.3.17-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a0df9a0ad2aad49ea3c7f65edd2ffb3d5c59589b85992a6006354f6fb109bb18"},
    {file = "regex-2021.3.17-cp38-cp38-manylinux1_i686.whl", hash = "sha256:b98bc9db003f1079caf07b610377ed1ac2e2c11acc2bea4892e28cc5b509d8d5"},
    {file = "regex-2021.3.17-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:808404898e9a765e4058bf3d7607d0629000e0a14a6782ccbb089296b76fa8fe"},
    {file = "regex-2021.3.17-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:5770a51180d85ea468234bc7987f5597803a4c3d7463e7323322fe4a1b181578"},
    {file = "regex-2021.3.17-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:976a54d44fd043d958a69b18705a910a8376196c6b6ee5f2596ffc11bff4420d"},
    {file = "regex-2021.3.17-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:63f3ca8451e5ff7133ffbec9eda641aeab2001be1a01878990f6c87e3c44b9d5"},
    {file = "regex-2021.3.17-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:bcd945175c29a672f13fce13a11893556cd440e37c1b643d6eeab1988c8b209c"},
    {file = "regex-2021.3.17-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:3d9356add82cff75413bec360c1eca3e58db4a9f5dafa1f19650958a81e3249d"},
    {file = "regex-2021.3.17-cp38-cp38-win32.whl", hash = "sha256:f5d0c921c99297354cecc5a416ee4280bd3f20fd81b9fb671ca6be71499c3fdf"},
    {file = "regex-2021.3.17-cp38-cp38-win_amd64.whl", hash = "sha256:14de88eda0976020528efc92d0a1f8830e2fb0de2ae6005a6fc4e062553031fa"},
    {file = "regex-2021.3.17-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:4c2e364491406b7888c2ad4428245fc56c327e34a5dfe58fd40df272b3c3dab3"},
    {file = "regex-2021.3.17-cp39-cp39-manylinux1_i686.whl", hash = "sha256:8bd4f91f3fb1c9b1380d6894bd5b4a519409135bec14c0c80151e58394a4e88a"},
    {file = "regex-2021.3.17-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:882f53afe31ef0425b405a3f601c0009b44206ea7f55ee1c606aad3cc213a52c"},
    {file = "regex-2021.3.17-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:07ef35301b4484bce843831e7039a84e19d8d33b3f8b2f9aab86c376813d0139"},
    {file = "regex-2021.3.17-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:360a01b5fa2ad35b3113ae0c07fb544ad180603fa3b1f074f52d98c1096fa15e"},
    {file = "regex-2021.3.17-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:709f65bb2fa9825f09892617d01246002097f8f9b6dde8d1bb4083cf554701ba"},
    {file = "regex-2021.3.17-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:c66221e947d7207457f8b6f42b12f613b09efa9669f65a587a2a71f6a0e4d106"},
    {file = "regex-2021.3.17-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:c782da0e45aff131f0bed6e66fbcfa589ff2862fc719b83a88640daa01a5aff7"},
    {file = "regex-2021.3.17-cp39-cp39-win32.whl", hash = "sha256:dc9963aacb7da5177e40874585d7407c0f93fb9d7518ec58b86e562f633f36cd"},
    {file = "regex-2021.3.17-cp39-cp39-win_amd64.whl", hash = "sha256:a0d04128e005142260de3733591ddf476e4902c0c23c1af237d9acf3c96e1b38"},
    {file = "regex-2021.3.17.tar.gz", hash = "sha256:4b8a1fb724904139149a43e172850f35aa6ea97fb0545244dc0b805e0154ed68"},
]
requests = [
    {file = "requests-2.25.1-py2.py3-none-any.whl", hash = "sha256:c210084e36a42ae6b9219e00e48287def368a26d03a048ddad7bfee44f75871e"},
    {file = "requests-2.25.1.tar.gz", hash = "sha256:27973dd4a904a4f13b263a19c866c13b92a39ed1c964655f025f3f8d3d75b804"},
]
six = [
    {file = "six-1.15.0-py2.py3-none-any.whl", hash = "sha256:8b74bedcbbbaca38ff6d7491d76f2b06b3592611af620f8426e82dddb04a5ced"},
    {file = "six-1.15.0.tar.gz", hash = "sha256:30639c035cdb23534cd4aa2dd52c3bf48f06e5f4a941509c8bafd8ce11080259"},
]
sniffio = [
    {file = "sniffio-1.2.0-py3-none-any.whl", hash = "sha256:471b71698eac1c2112a40ce2752bb2f4a4814c22a54a3eed3676bc0f5ca9f663"},
    {file = "sniffio-1.2.0.tar.gz", hash = "sha256:c4666eecec1d3f50960c6bdf61ab7bc350648da6c126e3cf6898d8cd4ddcd3de"},
]
sortedcontainers = [
    {file = "sortedcontainers-2.3.0-py2.py3-none-any.whl", hash = "sha256:37257a32add0a3ee490bb170b599e93095eed89a55da91fa9f48753ea12fd73f"},
    {file = "sortedcontainers-2.3.0.tar.gz", hash = "sha256:59cc937650cf60d677c16775597c89a960658a09cf7c1a668f86e1e4464b10a1"},
]
toml = [
    {file = "toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b"},
    {file = "toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"},
]
trio = [
    {file = "trio-0.17.0-py3-none-any.whl", hash = "sha256:fc70c74e8736d1105b3c05cc2e49b30c58755733740f9c51ae6d88a4d6d0a291"},
    {file = "trio-0.17.0.tar.gz", hash = "sha256:e85cf9858e445465dfbb0e3fdf36efe92082d2df87bfe9d62585eedd6e8e9d7d"},
]
typed-ast = [
    {file = "typed_ast-1.4.2-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:7703620125e4fb79b64aa52427ec192822e9f45d37d4b6625ab37ef403e1df70"},
    {file = "typed_ast-1.4.2-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:c9aadc4924d4b5799112837b226160428524a9a45f830e0d0f184b19e4090487"},
    {file = "typed_ast-1.4.2-cp35-cp35m-manylinux2014_aarch64.whl", hash = "sha256:9ec45db0c766f196ae629e509f059ff05fc3148f9ffd28f3cfe75d4afb485412"},
    {file = "typed_ast-1.4.2-cp35-cp35m-win32.whl", hash = "sha256:85f95aa97a35bdb2f2f7d10ec5bbdac0aeb9dafdaf88e17492da0504de2e6400"},
    {file = "typed_ast-1.4.2-cp35-cp35m-win_amd64.whl", hash = "sha256:9044ef2df88d7f33692ae3f18d3be63dec69c4fb1b5a4a9ac950f9b4ba571606"},
    {file = "typed_ast-1.4.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:c1c876fd795b36126f773db9cbb393f19808edd2637e00fd6caba0e25f2c7b64"},
    {file = "typed_ast-1.4.2-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:5dcfc2e264bd8a1db8b11a892bd1647154ce03eeba94b461effe68790d8b8e07"},
    {file = "typed_ast-1.4.2-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:8db0e856712f79c45956da0c9a40ca4246abc3485ae0d7ecc86a20f5e4c09abc"},
    {file = "typed_ast-1.4.2-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:d003156bb6a59cda9050e983441b7fa2487f7800d76bdc065566b7d728b4581a"},
    {file = "typed_ast-1.4.2-cp36-cp36m-win32.whl", hash = "sha256:4c790331247081ea7c632a76d5b2a265e6d325ecd3179d06e9cf8d46d90dd151"},
    {file = "typed_ast-1.4.2-cp36-cp36m-win_amd64.whl", hash = "sha256:d175297e9533d8d37437abc14e8a83cbc68af93cc9c1c59c2c292ec59a0697a3"},
    {file = "typed_ast-1.4.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:cf54cfa843f297991b7388c281cb3855d911137223c6b6d2dd82a47ae5125a41"},
    {file = "typed_ast-1.4.2-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:b4fcdcfa302538f70929eb7b392f536a237cbe2ed9cba88e3bf5027b39f5f77f"},
    {file = "typed_ast-1.4.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:987f15737aba2ab5f3928c617ccf1ce412e2e321c77ab16ca5a293e7bbffd581"},
    {file = "typed_ast-1.4.2-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:37f48d46d733d57cc70fd5f30572d11ab8ed92da6e6b28e024e4a3edfb456e37"},
    {file = "typed_ast-1.4.2-cp37-cp37m-win32.whl", hash = "sha256:36d829b31ab67d6fcb30e185ec996e1f72b892255a745d3a82138c97d21ed1cd"},
    {file = "typed_ast-1.4.2-cp37-cp37m-win_amd64.whl", hash = "sha256:8368f83e93c7156ccd40e49a783a6a6850ca25b556c0fa0240ed0f659d2fe496"},
    {file = "typed_ast-1.4.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:963c80b583b0661918718b095e02303d8078950b26cc00b5e5ea9ababe0de1fc"},
    {file = "typed_ast-1.4.2-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e683e409e5c45d5c9082dc1daf13f6374300806240719f95dc783d1fc942af10"},
    {file = "typed_ast-1.4.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:84aa6223d71012c68d577c83f4e7db50d11d6b1399a9c779046d75e24bed74ea"},
    {file = "typed_ast-1.4.2-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:a38878a223bdd37c9709d07cd357bb79f4c760b29210e14ad0fb395294583787"},
    {file = "typed_ast-1.4.2-cp38-cp38-win32.whl", hash = "sha256:a2c927c49f2029291fbabd673d51a2180038f8cd5a5b2f290f78c4516be48be2"},
    {file = "typed_ast-1.4.2-cp38-cp38-win_amd64.whl", hash = "sha256:c0c74e5579af4b977c8b932f40a5464764b2f86681327410aa028a22d2f54937"},
    {file = "typed_ast-1.4.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:07d49388d5bf7e863f7fa2f124b1b1d89d8aa0e2f7812faff0a5658c01c59aa1"},
    {file = "typed_ast-1.4.2-cp39-cp39-manylinux1_i686.whl", hash = "sha256:240296b27397e4e37874abb1df2a608a92df85cf3e2a04d0d4d61055c8305ba6"},
    {file = "typed_ast-1.4.2-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:d746a437cdbca200622385305aedd9aef68e8a645e385cc483bdc5e488f07166"},
    {file = "typed_ast-1.4.2-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:14bf1522cdee369e8f5581238edac09150c765ec1cb33615855889cf33dcb92d"},
    {file = "typed_ast-1.4.2-cp39-cp39-win32.whl", hash = "sha256:cc7b98bf58167b7f2db91a4327da24fb93368838eb84a44c472283778fc2446b"},
    {file = "typed_ast-1.4.2-cp39-cp39-win_amd64.whl", hash = "sha256:7147e2a76c75f0f64c4319886e7639e490fee87c9d25cb1d4faef1d8cf83a440"},
    {file = "typed_ast-1.4.2.tar.gz", hash = "sha256:9fc0b3cb5d1720e7141d103cf4819aea239f7d136acf9ee4a69b047b7986175a"},
]
typing-extensions = [
    {file = "typing_extensions-3.7.4.3-py2-none-any.whl", hash = "sha256:dafc7639cde7f1b6e1acc0f457842a83e722ccca8eef5270af2d74792619a89f"},
    {file = "typing_extensions-3.7.4.3-py3-none-any.whl", hash = "sha256:7cb407020f00f7bfc3cb3e7881628838e69d8f3fcab2f64742a5e76b2f841918"},
    {file = "typing_extensions-3.7.4.3.tar.gz", hash = "sha256:99d4073b617d30288f569d3f13d2bd7548c3a7e4c8de87db09a9d29bb3a4a60c"},
]
urllib3 = [
    {file = "urllib3-1.26.4-py2.py3-none-any.whl", hash = "sha256:2f4da4594db7e1e110a944bb1b551fdf4e6c136ad42e4234131391e21eb5b0df"},
    {file = "urllib3-1.26.4.tar.gz", hash = "sha256:e7b021f7241115872f92f43c6508082facffbd1c048e3c6e2bb9c2a157e28937"},
]
webencodings = [
    {file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
    {file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
]
zipp = [
    {file = "zipp-3.4.1-py3-none-any.whl", hash = "sha256:51cb66cc54621609dd593d1787f286ee42a5c0adbb4b29abea5a63edc3e03098"},
    {file = "zipp-3.4.1.tar.gz", hash = "sha256:3607921face881ba3e026887d8150cca609d517579abe052ac81fc5aeffdbd76"},
]
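Each entry under [metadata.files] pins a release artifact to a sha256 digest, which is how an install can refuse a tampered or substituted download. As a rough sketch of the check (Poetry performs this internally; the local file path below is an assumption, the digest is copied from the zipp entry above):

# Sketch only: recompute a downloaded artifact's sha256 and compare it
# to the digest pinned in the lock file.
import hashlib

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

locked = "51cb66cc54621609dd593d1787f286ee42a5c0adbb4b29abea5a63edc3e03098"
if sha256_of("zipp-3.4.1-py3-none-any.whl") != locked:  # hypothetical local download
    raise SystemExit("hash mismatch: refusing to install")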
46
pyproject.toml
Normal file
@@ -0,0 +1,46 @@
[build-system]
requires = ["poetry>=1.0.9,<2.0"]
build-backend = "poetry.masonry.api"

[tool.poetry]
name = "etherpump"
version = "0.0.20"
description = "Pumping text from etherpads into publications"
authors = ["Varia, Center for Everyday Technology"]
maintainers = ["Varia, Center for Everyday Technology <info@varia.zone>"]
license = "GPLv3"
readme = "README.md"
repository = "https://git.vvvvvvaria.org/varia/etherpump"
keywords = ["etherpad", "etherdump", "etherpump"]

[tool.poetry.dependencies]
python = "^3.6"
asks = "^2.4.10"
html5lib = "^1.1"
jinja2 = "^2.11.2"
pypandoc = "^1.5"
python-dateutil = "^2.8.1"
requests = "^2.24.0"
trio = "^0.17.0"

[tool.poetry.dev-dependencies]
black = "^19.10b0"
flake8 = "^3.8.3"
isort = "^5.0.2"
mypy = "^0.782"

[tool.poetry.scripts]
etherpump = "etherpump:main"

[tool.black]
line-length = 80
target-version = ["py38"]
include = '\.pyi?$'

[tool.isort]
include_trailing_comma = true
known_first_party = "abra"
known_third_party = "pytest"
line_length = 80
multi_line_output = 3
skip = ".tox"
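The [tool.poetry.scripts] table above wires an `etherpump` console command to a callable named `main` in the top-level `etherpump` package ("etherpump:main"). A minimal sketch of the shape such an entry point takes; the dispatch body here is an assumption for illustration, not the project's actual code:

# etherpump/__init__.py -- illustrative shape of the "etherpump:main"
# entry point only; the real dispatch to etherpump.commands.* is not
# shown in this diff.
import sys

def main():
    args = sys.argv[1:]
    if not args:
        print("usage: etherpump CMD [options]")  # hypothetical usage text
        sys.exit(1)
    print(f"would dispatch to subcommand: {args[0]}")  # stand-in for real dispatch

With this in place, `poetry install` generates an `etherpump` script on PATH that imports the package and calls `main()`.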
36
setup.py
@@ -1,36 +0,0 @@
#!/usr/bin/env python3
import distutils.command.install_lib
from distutils.core import setup
import os

def find (p, d):
    ret = []
    for b, dd, ff in os.walk(os.path.join(p, d)):

        for f in ff:
            if not f.startswith("."):
                fp = os.path.join(b, f)
                ret.append(os.path.relpath(fp, p))
    ret.sort()
    # for x in ret[:10]:
    #     print "**", x
    return ret

setup(
    name='etherdump',
    version='0.3.0',
    author='Active Archives Contributors',
    author_email='mm@automatist.org',
    packages=['etherdump', 'etherdump.commands'],
    package_dir={'etherdump': 'etherdump'},
    #package_data={'activearchives': find("activearchives", "templates/") + find("activearchives", "data/")},
    package_data={'etherdump': find("etherdump", "data/")},
    scripts=['bin/etherdump'],
    url='http://activearchives.org/wiki/Etherdump',
    license='LICENSE.txt',
    description='Etherdump an etherpad publishing & archiving system',
    # long_description=open('README.md').read(),
    install_requires=[
        "html5lib", "jinja2"
    ]
)
62
stylesheet.css
Normal file
@@ -0,0 +1,62 @@
html {
    border: 10px inset magenta;
    min-height: calc(100vh - 20px);
    min-width: 1000px;
}
body {
    margin: 1em;
    font-family: monospace;
    font-size: 16px;
    line-height: 1.3;
    background-color: #ffff00a3;
    color: green;
}
#welcome {
    max-width: 600px;
    margin: 1em 0;
}
table {
    min-width: 600px;
}
th,
td {
    text-align: left;
    padding: 0 1em 0 0;
    vertical-align: top;
}
td.name {
    width: 323px;
}
td.versions {
    width: 290px;
}
td.magicwords a {
    color: magenta;
}
hr {
    border: 0;
    border-bottom: 1px solid;
    margin: 2em 0 1em;
}
#footer {
    max-width: 600px;
}
.info {
    font-size: smaller;
}
.highlight {
    padding: 0.5em;
    background-color: rgb(255, 192, 203, 0.8);
}
.magic {
    margin-top: 2em;
}
.magicwords {
    padding-right: 5px;
}
.magicwords-publish {
    padding-right: 5px;
    display: inline;
    color: magenta;
    opacity: 0.4;
}
132
templates/index.html
Normal file
@@ -0,0 +1,132 @@
<!DOCTYPE html>
<html lang="{{ language }}">
<head>
    <meta charset="utf-8" />
    <title>{{ title }}</title>
    <link rel="stylesheet" type="text/css" href="{% block css %}stylesheet.css{% endblock %}">
    <!--<link rel="alternate" type="application/rss+xml" href="recentchanges.rss">-->
    {% block scripts %}
    {% endblock scripts %}
</head>
<body>
    {% set padoftheday = pads | random %}

    <h1>{{ title }}</h1>

    <div id="welcome">
        Welcome! The pages below have been deliberately published by their authors in
        order to share their thoughts, research and process in an early form. This
        page represents one of Varia's low-effort publishing tools. The pages are all
        produced through Varia's <a href="https://pad.vvvvvvaria.org/">Etherpad instance</a>.
        <br>
        <br>
        Etherpad is used as a collaborative writing tool to take notes, create readers,
        coordinate projects and document gatherings that happen in and around Varia.
        For example <a href="{{ padoftheday.link }}">{{ padoftheday.padid }}</a>.
        <br>
        <br>
        This index is updated every 60 minutes.
    </div>

    <table>
        <thead>
            <tr>
                <th>name</th>
                <th>versions</th>
                <!--<th>last edited</th>-->
                <!--<th>revisions</th>-->
                <!--<th>authors</th>-->
            </tr>
        </thead>
        <tbody>

            {% set allmagicwords = [] %}

            {% for pad in pads %}
            <tr>
                <td class="name">
                    <a href="{{ pad.link }}">{{ pad.padid }}</a>
                </td>

                <td class="versions">
                    {% for v in pad.versions %}<a href="{{ v.url }}">{{ v.type }}</a> {% endfor %}
                </td>

                <!-- WOW -->
                <td class="magicwords">
                    {% for magicword in pad.magicwords | sort %}
                    {% if magicword == "__PUBLISH__" %}
                    <p class="magicwords-publish">{{magicword}}</p>
                    {% else %}
                    <a class="magicwords" href="#{{ magicword }}">{{ magicword }}</a>
                    {% endif %}
                    {% if magicword %}
                    <div style="display:none;">{{ allmagicwords.append(magicword) }}</div>
                    {% endif %}
                    {% endfor %}
                </td>
                <!--<td class="lastedited">{{ pad.lastedited_iso|datetimeformat }}</td>-->
                <!--<td class="revisions">{{ pad.revisions }}</td>-->
                <!--<td class="authors">{{ pad.author_ids|length }}</td>-->
            </tr>
            {% endfor %}

        </tbody>
    </table>

    <div id="magicarea">
        {% for magicword in allmagicwords | unique | sort %}
        {% if magicword != "__PUBLISH__" %}
        <div class="magic" id="{{magicword}}">
            <h2>{{ magicword }}</h2>
            <table>
                <thead>
                    <tr>
                        <th>name</th>
                        <th>versions</th>
                    </tr>
                </thead>
                <tbody>
                    {% for pad in pads %}
                    {% if magicword in pad.magicwords %}
                    <tr>
                        <td class="name">
                            <a href="{{ pad.link }}">{{ pad.padid }}</a>
                        </td>
                        <td class="versions">
                            {% for v in pad.versions %}<a href="{{ v.url }}">{{ v.type }}</a> {% endfor %}
                        </td>
                        <!-- WOW -->
                        <td class="magicwords">
                            {% for magicword in pad.magicwords | sort %}
                            {% if magicword == "__PUBLISH__" %}
                            <p class="magicwords-publish">{{magicword}}</p>
                            {% else %}
                            <a class="magicwords" href="#{{ magicword }}">{{ magicword }}</a>
                            {% endif %}
                            {% endfor %}
                        </td>
                    </tr>
                    {% endif %}
                    {% endfor %}
                </tbody>
            </table>
        </div>
        {% endif %}
        {% endfor %}
    </div>

    <div id="footer">
        <hr>
        <p>
            <small>
                This page is generated using <a href="https://git.vvvvvvaria.org/varia/etherpump">Etherpump</a>.
                It is a command-line utility and Python library that extends the collaborative
                writing and publishing functionalities of Etherpad.
            </small>
            <br><br>
        </p>
        {% block info %}<p class="info">Last updated {{ timestamp }}.</p>{% endblock %}
    </div>
</body>
</html>
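One detail worth noting in the template above: `list.append` returns `None`, which Jinja2 prints as the literal text "None", so the call is tucked inside a `display:none` div; the list created with `{% set allmagicwords = [] %}` still mutates, and that is what feeds the per-magic-word tables in `#magicarea`. A minimal rendering sketch follows, with a made-up pad record (only the field names are read off the template; the values and data model are assumptions, not Etherpump's actual objects). Note that Jinja2 still evaluates expressions inside HTML comments, so the `datetimeformat` filter must be registered and every referenced field supplied:

# Sketch: render templates/index.html with hand-made data. All pad
# values below are hypothetical.
from datetime import datetime
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("templates"))
env.filters["datetimeformat"] = lambda value: value  # stub for the filter the template names

pads = [
    {
        "padid": "notes.md",
        "link": "p/notes.md/",
        "magicwords": ["__PUBLISH__", "notes"],
        "versions": [{"url": "p/notes.raw.txt", "type": "txt"}],
        "lastedited_iso": "2021-03-17T12:00:00",  # referenced inside an HTML comment
        "revisions": 12,
        "author_ids": ["a.1", "a.2"],
    },
]

html = env.get_template("index.html").render(
    language="en",
    title="Published pads",
    pads=pads,
    timestamp=datetime.now().strftime("%Y-%m-%d %H:%M"),
)
print(html)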